Resolved all merge conflicts

Signed-off-by: jetchirag <thechiragaggarwal@gmail.com>
jetchirag 2023-07-29 01:39:21 +05:30
commit 29e718813d
206 changed files with 12901 additions and 5989 deletions

View File

@ -1,56 +1,86 @@
---
kind: pipeline
name: python-3-8-alpine-3-13
services:
- name: postgresql
image: postgres:13.1-alpine
image: docker.io/postgres:13.1-alpine
environment:
POSTGRES_PASSWORD: test
POSTGRES_DB: test
- name: postgresql2
image: docker.io/postgres:13.1-alpine
environment:
POSTGRES_PASSWORD: test2
POSTGRES_DB: test
POSTGRES_USER: postgres2
commands:
- docker-entrypoint.sh -p 5433
- name: mysql
image: mariadb:10.5
image: docker.io/mariadb:10.5
environment:
MYSQL_ROOT_PASSWORD: test
MYSQL_DATABASE: test
- name: mysql2
image: docker.io/mariadb:10.5
environment:
MYSQL_ROOT_PASSWORD: test2
MYSQL_DATABASE: test
commands:
- docker-entrypoint.sh --port=3307
- name: mongodb
image: mongo:5.0.5
image: docker.io/mongo:5.0.5
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: test
- name: mongodb2
image: docker.io/mongo:5.0.5
environment:
MONGO_INITDB_ROOT_USERNAME: root2
MONGO_INITDB_ROOT_PASSWORD: test2
commands:
- docker-entrypoint.sh --port=27018
clone:
skip_verify: true
steps:
- name: build
image: alpine:3.13
image: docker.io/alpine:3.13
environment:
TEST_CONTAINER: true
pull: always
commands:
- scripts/run-full-tests
---
kind: pipeline
name: documentation
type: exec
platform:
os: linux
arch: amd64
clone:
skip_verify: true
steps:
- name: build
image: plugins/docker
settings:
username:
environment:
USERNAME:
from_secret: docker_username
password:
PASSWORD:
from_secret: docker_password
registry: projects.torsion.org
repo: projects.torsion.org/borgmatic-collective/borgmatic
tags: docs
dockerfile: docs/Dockerfile
IMAGE_NAME: projects.torsion.org/borgmatic-collective/borgmatic:docs
commands:
- podman login --username "$USERNAME" --password "$PASSWORD" projects.torsion.org
- podman build --tag "$IMAGE_NAME" --file docs/Dockerfile --storage-opt "overlay.mount_program=/usr/bin/fuse-overlayfs" .
- podman push "$IMAGE_NAME"
trigger:
repo:
- borgmatic-collective/borgmatic
branch:
- master
- main
event:
- push

View File

@ -1,35 +0,0 @@
#### What I'm trying to do and why
#### Steps to reproduce (if a bug)
Include (sanitized) borgmatic configuration files if applicable.
#### Actual behavior (if a bug)
Include (sanitized) `--verbosity 2` output if applicable.
#### Expected behavior (if a bug)
#### Other notes / implementation ideas
#### Environment
**borgmatic version:** [version here]
Use `sudo borgmatic --version` or `sudo pip show borgmatic | grep ^Version`
**borgmatic installation method:** [e.g., Debian package, Docker container, etc.]
**Borg version:** [version here]
Use `sudo borg --version`
**Python version:** [version here]
Use `python3 --version`
**Database version (if applicable):** [version here]
Use `psql --version` or `mysql --version` on client and server.
**operating system and version:** [OS here]

View File

@ -0,0 +1,77 @@
name: "Bug or question/support"
about: "For filing a bug or getting support"
body:
- type: textarea
id: problem
attributes:
label: What I'm trying to do and why
validations:
required: true
- type: textarea
id: repro_steps
attributes:
label: Steps to reproduce
description: Include (sanitized) borgmatic configuration files if applicable.
validations:
required: false
- type: textarea
id: actual_behavior
attributes:
label: Actual behavior
description: Include (sanitized) `--verbosity 2` output if applicable.
validations:
required: false
- type: textarea
id: expected_behavior
attributes:
label: Expected behavior
validations:
required: false
- type: textarea
id: notes
attributes:
label: Other notes / implementation ideas
validations:
required: false
- type: input
id: borgmatic_version
attributes:
label: borgmatic version
description: Use `sudo borgmatic --version` or `sudo pip show borgmatic | grep ^Version`
validations:
required: false
- type: input
id: borgmatic_install_method
attributes:
label: borgmatic installation method
description: e.g., pip install, Debian package, container, etc.
validations:
required: false
- type: input
id: borg_version
attributes:
label: Borg version
description: Use `sudo borg --version`
validations:
required: false
- type: input
id: python_version
attributes:
label: Python version
description: Use `python3 --version`
validations:
required: false
- type: input
id: database_version
attributes:
label: Database version (if applicable)
description: Use `psql --version` / `mysql --version` / `mongodump --version` / `sqlite3 --version`
validations:
required: false
- type: input
id: operating_system_version
attributes:
label: Operating system and version
description: On Linux, use `cat /etc/os-release`
validations:
required: false

View File

@ -0,0 +1 @@
blank_issues_enabled: false

View File

@ -0,0 +1,15 @@
name: "Feature"
about: "For filing a feature request or idea"
body:
- type: textarea
id: request
attributes:
label: What I'd like to do and why
validations:
required: true
- type: textarea
id: notes
attributes:
label: Other notes / implementation ideas
validations:
required: false

NEWS (138 changed lines)
View File

@ -1,4 +1,129 @@
1.7.10.dev0
1.8.1.dev0
* #728: Fix for "prune" action error when using the "keep_exclude_tags" option.
* #730: Fix for Borg's interactive prompt on the "check --repair" action automatically getting
answered "NO" even when the "check_i_know_what_i_am_doing" option isn't set.
1.8.0
* #575: BREAKING: For the "borgmatic borg" action, instead of implicitly injecting
repository/archive into the resulting Borg command-line, pass repository to Borg via an
environment variable and make archive available for explicit use in your commands. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/run-arbitrary-borg-commands/
* #719: Fix an error when running "borg key export" through borgmatic.
* #720: Fix an error when dumping a database and the "exclude_nodump" option is set.
* #724: Add "check_i_know_what_i_am_doing" option to bypass Borg confirmation prompt when running
"check --repair".
* When merging two configuration files, error gracefully if the two files do not adhere to the same
format.
* #721: Remove configuration sections ("location:", "storage:", "hooks:" etc.), while still keeping
deprecated support for them. Now, all options are at the same level, and you don't need to worry
about commenting/uncommenting section headers when you change an option (if you remove your
sections first).
* #721: BREAKING: The retention prefix and the consistency prefix can no longer have different
values (unless one is not set).
* #721: BREAKING: The storage umask and the hooks umask can no longer have different values (unless
one is not set).
* BREAKING: Flags like "--config" that previously took multiple values now need to be given once
per value, e.g. "--config first.yaml --config second.yaml" instead of "--config first.yaml
second.yaml". This prevents argument parsing errors on ambiguous commands.
* BREAKING: Remove the deprecated (and silently ignored) "--successful" flag on the "list" action,
as newer versions of Borg list successful (non-checkpoint) archives by default.
* All deprecated configuration option values now generate warning logs.
* Remove the deprecated (and non-functional) "--excludes" flag in favor of excludes within
configuration.
* Fix an error when logging too-long command output during error handling. Now, long command output
is truncated before logging.
1.7.15
* #326: Add configuration options and command-line flags for backing up a database from one
location while restoring it somewhere else.
* #399: Add a documentation troubleshooting note for MySQL/MariaDB authentication errors.
* #529: Remove upgrade-borgmatic-config command for upgrading borgmatic 1.1.0 INI-style
configuration.
* #529: Deprecate generate-borgmatic-config in favor of new "config generate" action.
* #529: Deprecate validate-borgmatic-config in favor of new "config validate" action.
* #697, #712, #716: Extract borgmatic configuration from backup via new "config bootstrap"
action—even when borgmatic has no configuration yet!
* #669: Add sample systemd user service for running borgmatic as a non-root user.
* #711, #713: Fix an error when "data" check time files are accessed without getting upgraded
first.
1.7.14
* #484: Add a new verbosity level (-2) to disable output entirely (for console, syslog, log file,
or monitoring), so not even errors are shown.
* #688: Tweak archive check probing logic to use the newest timestamp found when multiple exist.
* #659: Add Borg 2 date-based matching flags to various actions for archive selection.
* #703: Fix an error when loading the configuration schema on Fedora Linux.
* #704: Fix "check" action error when repository and archive checks are configured but the archive
check gets skipped due to the configured frequency.
* #706: Fix "--archive latest" on "list" and "info" actions that only worked on the first of
multiple configured repositories.
1.7.13
* #375: Restore particular PostgreSQL schemas from a database dump via "borgmatic restore --schema"
flag. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#restore-particular-schemas
* #678: Fix error from PostgreSQL when dumping a database with a "format" of "plain".
* #678: Fix PostgreSQL hook to support "psql_command" and "pg_restore_command" options containing
commands with arguments.
* #678: Fix calls to psql in PostgreSQL hook to ignore "~/.psqlrc", whose settings can break
database dumping.
* #680: Add support for logging each log line as a JSON object via global "--log-json" flag.
* #682: Fix "source_directories_must_exist" option to expand globs and tildes in source directories.
* #684: Rename "master" development branch to "main" to use more inclusive language. You'll need to
update your development checkouts accordingly.
* #686: Add fish shell completion script so you can tab-complete on the borgmatic command-line. See
the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#shell-completion
* #687: Fix borgmatic error when not finding the configuration schema for certain "pip install
--editable" development installs.
* #688: Fix archive checks being skipped even when particular archives haven't been checked
recently. This occurred when using multiple borgmatic configuration files with different
"archive_name_format"s, for instance.
* #691: Fix error in "borgmatic restore" action when the configured repository path is relative
instead of absolute.
* #694: Run "borgmatic borg" action without capturing output so interactive prompts and flags like
"--progress" still work.
1.7.12
* #413: Add "log_file" context to command hooks so your scripts can consume the borgmatic log file.
See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
* #666, #670: Fix error when running the "info" action with the "--match-archives" or "--archive"
flags. Also fix the "--match-archives"/"--archive" flags to correctly override the
"match_archives" configuration option for the "transfer", "list", "rlist", and "info" actions.
* #668: Fix error when running the "prune" action with both "archive_name_format" and "prefix"
options set.
* #672: Selectively shallow merge certain mappings or sequences when including configuration files.
See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#shallow-merge
* #672: Selectively omit list values when including configuration files. See the documentation for
more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#list-merge
* #673: View the results of configuration file merging via "validate-borgmatic-config --show" flag.
See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#debugging-includes
* Add optional support for running end-to-end tests and building documentation with rootless Podman
instead of Docker.
1.7.11
* #479, #588: BREAKING: Automatically use the "archive_name_format" option to filter which archives
get used for borgmatic actions that operate on multiple archives. Override this behavior with the
new "match_archives" option in the storage section. This change is "breaking" in that it silently
changes which archives get considered for "rlist", "prune", "check", etc. See the documentation
for more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#archive-naming
* #479, #588: The "prefix" options have been deprecated in favor of the new "archive_name_format"
auto-matching behavior and the "match_archives" option.
* #658: Add "--log-file-format" flag for customizing the log message format. See the documentation
for more information:
https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#logging-to-file
* #662: Fix regression in which the "check_repositories" option failed to match repositories.
* #663: Fix regression in which the "transfer" action produced a traceback.
* Add spellchecking of source code during test runs.
1.7.10
* #396: When a database command errors, display and log the error message instead of swallowing it.
* #501: Optionally error if a source directory does not exist via "source_directories_must_exist"
option in borgmatic's location configuration.
* #576: Add support for "file://" paths within "repositories" option.
@ -8,6 +133,9 @@
* #618: Add support for BORG_FILES_CACHE_TTL environment variable via "borg_files_cache_ttl" option
in borgmatic's storage configuration.
* #623: Fix confusing message when an error occurs running actions for a configuration file.
* #635: Add optional repository labels so you can select a repository via "--repository yourlabel"
at the command-line. See the configuration reference for more information:
https://torsion.org/borgmatic/docs/reference/configuration/
* #649: Add documentation on backing up a database running in a container:
https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#containers
* #655: Fix error when databases are configured and a source directory doesn't exist.
@ -321,7 +449,7 @@
* #398: Clarify canonical home of borgmatic in documentation.
* #406: Clarify that spaces in path names should not be backslashed.
* #423: Fix error handling to error loudly when Borg gets killed due to running out of memory!
* Fix build so as not to attempt to build and push documentation for a non-master branch.
* Fix build so as not to attempt to build and push documentation for a non-main branch.
* "Fix" build failure with Alpine Edge by switching from Edge to Alpine 3.13.
* Move #borgmatic IRC channel from Freenode to Libera Chat due to Freenode takeover drama.
IRC connection info: https://torsion.org/borgmatic/#issues
@ -384,7 +512,7 @@
configuration schema descriptions.
1.5.6
* #292: Allow before_backup and similiar hooks to exit with a soft failure without altering the
* #292: Allow before_backup and similar hooks to exit with a soft failure without altering the
monitoring status on Healthchecks or other providers. Support this by waiting to ping monitoring
services with a "start" status until after before_* hooks finish. Failures in before_* hooks
still trigger a monitoring "fail" status.
@ -453,7 +581,7 @@
* For "list" and "info" actions, show repository names even at verbosity 0.
1.4.22
* #276, #285: Disable colored output when "--json" flag is used, so as to produce valid JSON ouput.
* #276, #285: Disable colored output when "--json" flag is used, so as to produce valid JSON output.
* After a backup of a database dump in directory format, properly remove the dump directory.
* In "borgmatic --help", don't expand $HOME in listing of default "--config" paths.
@ -825,7 +953,7 @@
* #77: Skip non-"*.yaml" config filenames in /etc/borgmatic.d/ so as not to parse backup files,
editor swap files, etc.
* #81: Document user-defined hooks run before/after backup, or on error.
* Add code style guidelines to the documention.
* Add code style guidelines to the documentation.
1.2.0
* #61: Support for Borg --list option via borgmatic command-line to list all archives.

View File

@ -11,54 +11,46 @@ borgmatic is simple, configuration-driven backup software for servers and
workstations. Protect your files with client-side encryption. Backup your
databases too. Monitor it all with integrated third-party services.
The canonical home of borgmatic is at <a href="https://torsion.org/borgmatic">https://torsion.org/borgmatic</a>.
The canonical home of borgmatic is at <a href="https://torsion.org/borgmatic">https://torsion.org/borgmatic/</a>
Here's an example configuration file:
```yaml
location:
# List of source directories to backup.
source_directories:
- /home
- /etc
# List of source directories to backup.
source_directories:
- /home
- /etc
# Paths of local or remote repositories to backup to.
repositories:
- ssh://1234@usw-s001.rsync.net/./backups.borg
- ssh://k8pDxu32@k8pDxu32.repo.borgbase.com/./repo
- /var/lib/backups/local.borg
# Paths of local or remote repositories to backup to.
repositories:
- path: ssh://k8pDxu32@k8pDxu32.repo.borgbase.com/./repo
label: borgbase
- path: /var/lib/backups/local.borg
label: local
retention:
# Retention policy for how many backups to keep.
keep_daily: 7
keep_weekly: 4
keep_monthly: 6
# Retention policy for how many backups to keep.
keep_daily: 7
keep_weekly: 4
keep_monthly: 6
consistency:
# List of checks to run to validate your backups.
checks:
- name: repository
- name: archives
frequency: 2 weeks
# List of checks to run to validate your backups.
checks:
- name: repository
- name: archives
frequency: 2 weeks
hooks:
# Custom preparation scripts to run.
before_backup:
- prepare-for-backup.sh
# Custom preparation scripts to run.
before_backup:
- prepare-for-backup.sh
# Databases to dump and include in backups.
postgresql_databases:
- name: users
# Databases to dump and include in backups.
postgresql_databases:
- name: users
# Third-party services to notify you if backups aren't happening.
healthchecks: https://hc-ping.com/be067061-cf96-4412-8eae-62b0c50d6a8c
# Third-party services to notify you if backups aren't happening.
healthchecks: https://hc-ping.com/be067061-cf96-4412-8eae-62b0c50d6a8c
```
Want to see borgmatic in action? Check out the <a
href="https://asciinema.org/a/203761?autoplay=1" target="_blank">screencast</a>.
<a href="https://asciinema.org/a/203761?autoplay=1" target="_blank"><img src="https://asciinema.org/a/203761.png" width="480"></a>
borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
## Integrations
@ -90,16 +82,15 @@ reference guides</a>.
Need somewhere to store your encrypted off-site backups? The following hosting
providers include specific support for Borg/borgmatic—and fund borgmatic
development and hosting when you use these links to sign up. (These are
referral links, but without any tracking scripts or cookies.)
development and hosting when you use these referral links to sign up:
<ul>
<li class="referral"><a href="https://www.borgbase.com/?utm_source=borgmatic">BorgBase</a>: Borg hosting service with support for monitoring, 2FA, and append-only repos</li>
<li class="referral"><a href="https://hetzner.cloud/?ref=v9dOJ98Ic9I8">Hetzner</a>: A "storage box" that includes support for Borg</li>
</ul>
Additionally, [rsync.net](https://www.rsync.net/products/borg.html) and
[Hetzner](https://www.hetzner.com/storage/storage-box) have compatible storage
offerings, but do not currently fund borgmatic development or hosting.
Additionally, rsync.net has a compatible storage offering, but does not fund
borgmatic development or hosting.
## Support and contributing
@ -120,10 +111,7 @@ issues.
### Social
Check out the [Borg subreddit](https://www.reddit.com/r/BorgBackup/) for
general Borg and borgmatic discussion and support.
Also follow [borgmatic on Mastodon](https://fosstodon.org/@borgmatic).
Follow [borgmatic on Mastodon](https://fosstodon.org/@borgmatic).
### Chat
@ -164,5 +152,5 @@ Also, please check out the [borgmatic development
how-to](https://torsion.org/borgmatic/docs/how-to/develop-on-borgmatic/) for
info on cloning source code, running tests, etc.
<a href="https://build.torsion.org/borgmatic-collective/borgmatic" alt="build status">![Build Status](https://build.torsion.org/api/badges/borgmatic-collective/borgmatic/status.svg?ref=refs/heads/master)</a>
<a href="https://build.torsion.org/borgmatic-collective/borgmatic" alt="build status">![Build Status](https://build.torsion.org/api/badges/borgmatic-collective/borgmatic/status.svg?ref=refs/heads/main)</a>

View File

@ -7,8 +7,8 @@ permalink: security-policy/index.html
While we want to hear about security vulnerabilities in all versions of
borgmatic, security fixes are only made to the most recently released version.
It's simply not practical for our small volunteer effort to maintain multiple
release branches and put out separate security patches for each.
It's not practical for our small volunteer effort to maintain multiple release
branches and put out separate security patches for each.
## Reporting a vulnerability

View File

@ -0,0 +1,9 @@
import argparse
def update_arguments(arguments, **updates):
'''
Given an argparse.Namespace instance of command-line arguments and one or more keyword argument
updates to perform, return a copy of the arguments with those updates applied.
'''
return argparse.Namespace(**dict(vars(arguments), **updates))
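
For context, a quick usage sketch of this helper (the flag names and values here are illustrative, not from the commit): it returns an updated copy and leaves the original Namespace untouched, which is why the info and list actions below can safely override the parsed archive flag.

```python
import argparse

# Illustrative parsed flags; not taken from the commit itself.
arguments = argparse.Namespace(archive='latest', json=False)

# Returns a copy with "archive" overridden; the original Namespace is untouched.
updated = update_arguments(arguments, archive='myhost-2023-07-29')

assert arguments.archive == 'latest'
assert updated.archive == 'myhost-2023-07-29'
assert updated.json is False
```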

View File

@ -8,7 +8,13 @@ logger = logging.getLogger(__name__)
def run_borg(
repository, storage, local_borg_version, borg_arguments, local_path, remote_path,
repository,
config,
local_borg_version,
borg_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "borg" action for the given repository.
@ -16,18 +22,21 @@ def run_borg(
if borg_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, borg_arguments.repository
):
logger.info(f'{repository}: Running arbitrary Borg command')
logger.info(
f'{repository.get("label", repository["path"])}: Running arbitrary Borg command'
)
archive_name = borgmatic.borg.rlist.resolve_archive_name(
repository,
repository['path'],
borg_arguments.archive,
storage,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
borgmatic.borg.borg.run_arbitrary_borg(
repository,
storage,
repository['path'],
config,
local_borg_version,
options=borg_arguments.options,
archive=archive_name,

View File

@ -7,7 +7,13 @@ logger = logging.getLogger(__name__)
def run_break_lock(
repository, storage, local_borg_version, break_lock_arguments, local_path, remote_path,
repository,
config,
local_borg_version,
break_lock_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "break-lock" action for the given repository.
@ -15,7 +21,14 @@ def run_break_lock(
if break_lock_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, break_lock_arguments.repository
):
logger.info(f'{repository}: Breaking repository and cache locks')
borgmatic.borg.break_lock.break_lock(
repository, storage, local_borg_version, local_path=local_path, remote_path=remote_path,
logger.info(
f'{repository.get("label", repository["path"])}: Breaking repository and cache locks'
)
borgmatic.borg.break_lock.break_lock(
repository['path'],
config,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
)

View File

@ -10,10 +10,7 @@ logger = logging.getLogger(__name__)
def run_check(
config_filename,
repository,
location,
storage,
consistency,
hooks,
config,
hook_context,
local_borg_version,
check_arguments,
@ -30,20 +27,19 @@ def run_check(
return
borgmatic.hooks.command.execute_hook(
hooks.get('before_check'),
hooks.get('umask'),
config.get('before_check'),
config.get('umask'),
config_filename,
'pre-check',
global_arguments.dry_run,
**hook_context,
)
logger.info(f'{repository}: Running consistency checks')
logger.info(f'{repository.get("label", repository["path"])}: Running consistency checks')
borgmatic.borg.check.check_archives(
repository,
location,
storage,
consistency,
repository['path'],
config,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
progress=check_arguments.progress,
@ -52,8 +48,8 @@ def run_check(
force=check_arguments.force,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_check'),
hooks.get('umask'),
config.get('after_check'),
config.get('umask'),
config_filename,
'post-check',
global_arguments.dry_run,

View File

@ -11,9 +11,7 @@ logger = logging.getLogger(__name__)
def run_compact(
config_filename,
repository,
storage,
retention,
hooks,
config,
hook_context,
local_borg_version,
compact_arguments,
@ -31,20 +29,23 @@ def run_compact(
return
borgmatic.hooks.command.execute_hook(
hooks.get('before_compact'),
hooks.get('umask'),
config.get('before_compact'),
config.get('umask'),
config_filename,
'pre-compact',
global_arguments.dry_run,
**hook_context,
)
if borgmatic.borg.feature.available(borgmatic.borg.feature.Feature.COMPACT, local_borg_version):
logger.info(f'{repository}: Compacting segments{dry_run_label}')
logger.info(
f'{repository.get("label", repository["path"])}: Compacting segments{dry_run_label}'
)
borgmatic.borg.compact.compact_segments(
global_arguments.dry_run,
repository,
storage,
repository['path'],
config,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
progress=compact_arguments.progress,
@ -52,10 +53,12 @@ def run_compact(
threshold=compact_arguments.threshold,
)
else: # pragma: nocover
logger.info(f'{repository}: Skipping compact (only available/needed in Borg 1.2+)')
logger.info(
f'{repository.get("label", repository["path"])}: Skipping compact (only available/needed in Borg 1.2+)'
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_compact'),
hooks.get('umask'),
config.get('after_compact'),
config.get('umask'),
config_filename,
'post-compact',
global_arguments.dry_run,

View File

@ -0,0 +1,103 @@
import json
import logging
import os
import borgmatic.borg.extract
import borgmatic.borg.rlist
import borgmatic.config.validate
import borgmatic.hooks.command
from borgmatic.borg.state import DEFAULT_BORGMATIC_SOURCE_DIRECTORY
logger = logging.getLogger(__name__)
def get_config_paths(bootstrap_arguments, global_arguments, local_borg_version):
'''
Given the bootstrap arguments (which include the repository and archive name, the borgmatic
source directory, the destination directory, and whether to strip components), the global
arguments (which include the dry run flag), and the local Borg version, return the config
paths from the manifest.json file in the borgmatic source directory after extracting it from
the repository.
Raise ValueError if the manifest JSON is missing, can't be decoded, or doesn't contain the
expected configuration path data.
'''
borgmatic_source_directory = (
bootstrap_arguments.borgmatic_source_directory or DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
borgmatic_manifest_path = os.path.expanduser(
os.path.join(borgmatic_source_directory, 'bootstrap', 'manifest.json')
)
extract_process = borgmatic.borg.extract.extract_archive(
global_arguments.dry_run,
bootstrap_arguments.repository,
borgmatic.borg.rlist.resolve_archive_name(
bootstrap_arguments.repository,
bootstrap_arguments.archive,
{},
local_borg_version,
global_arguments,
),
[borgmatic_manifest_path],
{},
local_borg_version,
global_arguments,
extract_to_stdout=True,
)
manifest_json = extract_process.stdout.read()
if not manifest_json:
raise ValueError(
'Cannot read configuration paths from archive due to missing bootstrap manifest'
)
try:
manifest_data = json.loads(manifest_json)
except json.JSONDecodeError as error:
raise ValueError(
f'Cannot read configuration paths from archive due to invalid bootstrap manifest JSON: {error}'
)
try:
return manifest_data['config_paths']
except KeyError:
raise ValueError(
'Cannot read configuration paths from archive due to invalid bootstrap manifest'
)
def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
'''
Run the "bootstrap" action for the given repository.
Raise ValueError if the bootstrap configuration could not be loaded.
Raise CalledProcessError or OSError if Borg could not be run.
'''
manifest_config_paths = get_config_paths(
bootstrap_arguments, global_arguments, local_borg_version
)
logger.info(f"Bootstrapping config paths: {', '.join(manifest_config_paths)}")
borgmatic.borg.extract.extract_archive(
global_arguments.dry_run,
bootstrap_arguments.repository,
borgmatic.borg.rlist.resolve_archive_name(
bootstrap_arguments.repository,
bootstrap_arguments.archive,
{},
local_borg_version,
global_arguments,
),
[config_path.lstrip(os.path.sep) for config_path in manifest_config_paths],
{},
local_borg_version,
global_arguments,
extract_to_stdout=False,
destination_path=bootstrap_arguments.destination,
strip_components=bootstrap_arguments.strip_components,
progress=bootstrap_arguments.progress,
)
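
For reference, a minimal sketch of the bootstrap manifest this code consumes, based on the create_borgmatic_manifest() function added later in this commit; the version and path values are illustrative:

```python
import json

# Shape of <borgmatic_source_directory>/bootstrap/manifest.json as written
# during "create" and read back by "config bootstrap". Values are examples.
manifest = {
    'borgmatic_version': '1.8.0',
    'config_paths': ['/etc/borgmatic/config.yaml'],
}

# Serialize the manifest the same way create_borgmatic_manifest() does.
print(json.dumps(manifest))
```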

View File

@ -0,0 +1,48 @@
import logging
import borgmatic.config.generate
import borgmatic.config.validate
import borgmatic.logger
logger = logging.getLogger(__name__)
def run_generate(generate_arguments, global_arguments):
'''
Given the generate arguments and the global arguments, each as an argparse.Namespace instance,
run the "generate" action.
Raise FileExistsError if a file already exists at the destination path and the generate
arguments do not have overwrite set.
'''
borgmatic.logger.add_custom_log_levels()
dry_run_label = ' (dry run; not actually writing anything)' if global_arguments.dry_run else ''
logger.answer(
f'Generating a configuration file at: {generate_arguments.destination_filename}{dry_run_label}'
)
borgmatic.config.generate.generate_sample_configuration(
global_arguments.dry_run,
generate_arguments.source_filename,
generate_arguments.destination_filename,
borgmatic.config.validate.schema_filename(),
overwrite=generate_arguments.overwrite,
)
if generate_arguments.source_filename:
logger.answer(
f'''
Merged in the contents of configuration file at: {generate_arguments.source_filename}
To review the changes made, run:
diff --unified {generate_arguments.source_filename} {generate_arguments.destination_filename}'''
)
logger.answer(
'''
This includes all available configuration options with example values, the few
required options as indicated. Please edit the file to suit your needs.
If you ever need help: https://torsion.org/borgmatic/#issues'''
)

View File

@ -0,0 +1,25 @@
import logging
import borgmatic.config.generate
import borgmatic.logger
logger = logging.getLogger(__name__)
def run_validate(validate_arguments, configs):
'''
Given the validate arguments as an argparse.Namespace instance and a dict of configuration
filename to corresponding parsed configuration, run the "validate" action.
Most of the validation is actually performed implicitly by the standard borgmatic configuration
loading machinery prior to here, so this function mainly exists to support additional validate
flags like "--show".
'''
borgmatic.logger.add_custom_log_levels()
if validate_arguments.show:
for config_path, config in configs.items():
if len(configs) > 1:
logger.answer('---')
logger.answer(borgmatic.config.generate.render_configuration(config))

View File

@ -1,7 +1,14 @@
import json
import logging
import os
try:
import importlib_metadata
except ModuleNotFoundError: # pragma: nocover
import importlib.metadata as importlib_metadata
import borgmatic.borg.create
import borgmatic.borg.state
import borgmatic.config.validate
import borgmatic.hooks.command
import borgmatic.hooks.dispatch
@ -10,12 +17,39 @@ import borgmatic.hooks.dump
logger = logging.getLogger(__name__)
def create_borgmatic_manifest(config, config_paths, dry_run):
'''
Create a borgmatic manifest file to store the paths to the configuration files used to create
the archive.
'''
if dry_run:
return
borgmatic_source_directory = config.get(
'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
borgmatic_manifest_path = os.path.expanduser(
os.path.join(borgmatic_source_directory, 'bootstrap', 'manifest.json')
)
if not os.path.exists(borgmatic_manifest_path):
os.makedirs(os.path.dirname(borgmatic_manifest_path), exist_ok=True)
with open(borgmatic_manifest_path, 'w') as config_list_file:
json.dump(
{
'borgmatic_version': importlib_metadata.version('borgmatic'),
'config_paths': config_paths,
},
config_list_file,
)
def run_create(
config_filename,
repository,
location,
storage,
hooks,
config,
hook_context,
local_borg_version,
create_arguments,
@ -35,38 +69,37 @@ def run_create(
return
borgmatic.hooks.command.execute_hook(
hooks.get('before_backup'),
hooks.get('umask'),
config.get('before_backup'),
config.get('umask'),
config_filename,
'pre-backup',
global_arguments.dry_run,
**hook_context,
)
logger.info(f'{repository}: Creating archive{dry_run_label}')
logger.info(f'{repository.get("label", repository["path"])}: Creating archive{dry_run_label}')
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
repository,
config,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
active_dumps = borgmatic.hooks.dispatch.call_hooks(
'dump_databases',
hooks,
repository,
config,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
create_borgmatic_manifest(config, global_arguments.used_config_paths, global_arguments.dry_run)
stream_processes = [process for processes in active_dumps.values() for process in processes]
json_output = borgmatic.borg.create.create_archive(
global_arguments.dry_run,
repository,
location,
storage,
repository['path'],
config,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
progress=create_arguments.progress,
@ -80,15 +113,14 @@ def run_create(
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
config,
config_filename,
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_backup'),
hooks.get('umask'),
config.get('after_backup'),
config.get('umask'),
config_filename,
'post-backup',
global_arguments.dry_run,

View File

@ -9,7 +9,7 @@ logger = logging.getLogger(__name__)
def run_export_tar(
repository,
storage,
config,
local_borg_version,
export_tar_arguments,
global_arguments,
@ -22,22 +22,26 @@ def run_export_tar(
if export_tar_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, export_tar_arguments.repository
):
logger.info(f'{repository}: Exporting archive {export_tar_arguments.archive} as tar file')
logger.info(
f'{repository["path"]}: Exporting archive {export_tar_arguments.archive} as tar file'
)
borgmatic.borg.export_tar.export_tar_archive(
global_arguments.dry_run,
repository,
repository['path'],
borgmatic.borg.rlist.resolve_archive_name(
repository,
repository['path'],
export_tar_arguments.archive,
storage,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
),
export_tar_arguments.paths,
export_tar_arguments.destination,
storage,
config,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
tar_filter=export_tar_arguments.tar_filter,

View File

@ -11,9 +11,7 @@ logger = logging.getLogger(__name__)
def run_extract(
config_filename,
repository,
location,
storage,
hooks,
config,
hook_context,
local_borg_version,
extract_arguments,
@ -25,8 +23,8 @@ def run_extract(
Run the "extract" action for the given repository.
'''
borgmatic.hooks.command.execute_hook(
hooks.get('before_extract'),
hooks.get('umask'),
config.get('before_extract'),
config.get('umask'),
config_filename,
'pre-extract',
global_arguments.dry_run,
@ -35,22 +33,25 @@ def run_extract(
if extract_arguments.repository is None or borgmatic.config.validate.repositories_match(
repository, extract_arguments.repository
):
logger.info(f'{repository}: Extracting archive {extract_arguments.archive}')
logger.info(
f'{repository.get("label", repository["path"])}: Extracting archive {extract_arguments.archive}'
)
borgmatic.borg.extract.extract_archive(
global_arguments.dry_run,
repository,
repository['path'],
borgmatic.borg.rlist.resolve_archive_name(
repository,
repository['path'],
extract_arguments.archive,
storage,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
),
extract_arguments.paths,
location,
storage,
config,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
destination_path=extract_arguments.destination,
@ -58,8 +59,8 @@ def run_extract(
progress=extract_arguments.progress,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_extract'),
hooks.get('umask'),
config.get('after_extract'),
config.get('umask'),
config_filename,
'post-extract',
global_arguments.dry_run,

View File

@ -1,6 +1,7 @@
import json
import logging
import borgmatic.actions.arguments
import borgmatic.borg.info
import borgmatic.borg.rlist
import borgmatic.config.validate
@ -9,7 +10,13 @@ logger = logging.getLogger(__name__)
def run_info(
repository, storage, local_borg_version, info_arguments, local_path, remote_path,
repository,
config,
local_borg_version,
info_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "info" action for the given repository and archive.
@ -20,22 +27,26 @@ def run_info(
repository, info_arguments.repository
):
if not info_arguments.json: # pragma: nocover
logger.answer(f'{repository}: Displaying archive summary information')
info_arguments.archive = borgmatic.borg.rlist.resolve_archive_name(
repository,
logger.answer(
f'{repository.get("label", repository["path"])}: Displaying archive summary information'
)
archive_name = borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
info_arguments.archive,
storage,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
json_output = borgmatic.borg.info.display_archives_info(
repository,
storage,
repository['path'],
config,
local_borg_version,
info_arguments=info_arguments,
local_path=local_path,
remote_path=remote_path,
borgmatic.actions.arguments.update_arguments(info_arguments, archive=archive_name),
global_arguments,
local_path,
remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)

View File

@ -1,6 +1,7 @@
import json
import logging
import borgmatic.actions.arguments
import borgmatic.borg.list
import borgmatic.config.validate
@ -8,7 +9,13 @@ logger = logging.getLogger(__name__)
def run_list(
repository, storage, local_borg_version, list_arguments, local_path, remote_path,
repository,
config,
local_borg_version,
list_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "list" action for the given repository and archive.
@ -20,24 +27,27 @@ def run_list(
):
if not list_arguments.json: # pragma: nocover
if list_arguments.find_paths:
logger.answer(f'{repository}: Searching archives')
logger.answer(f'{repository.get("label", repository["path"])}: Searching archives')
elif not list_arguments.archive:
logger.answer(f'{repository}: Listing archives')
list_arguments.archive = borgmatic.borg.rlist.resolve_archive_name(
repository,
logger.answer(f'{repository.get("label", repository["path"])}: Listing archives')
archive_name = borgmatic.borg.rlist.resolve_archive_name(
repository['path'],
list_arguments.archive,
storage,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
json_output = borgmatic.borg.list.list_archive(
repository,
storage,
repository['path'],
config,
local_borg_version,
list_arguments=list_arguments,
local_path=local_path,
remote_path=remote_path,
borgmatic.actions.arguments.update_arguments(list_arguments, archive=archive_name),
global_arguments,
local_path,
remote_path,
)
if json_output: # pragma: nocover
yield json.loads(json_output)

View File

@ -8,7 +8,13 @@ logger = logging.getLogger(__name__)
def run_mount(
repository, storage, local_borg_version, mount_arguments, local_path, remote_path,
repository,
config,
local_borg_version,
mount_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "mount" action for the given repository.
@ -17,26 +23,27 @@ def run_mount(
repository, mount_arguments.repository
):
if mount_arguments.archive:
logger.info(f'{repository}: Mounting archive {mount_arguments.archive}')
logger.info(
f'{repository.get("label", repository["path"])}: Mounting archive {mount_arguments.archive}'
)
else: # pragma: nocover
logger.info(f'{repository}: Mounting repository')
logger.info(f'{repository.get("label", repository["path"])}: Mounting repository')
borgmatic.borg.mount.mount_archive(
repository,
repository['path'],
borgmatic.borg.rlist.resolve_archive_name(
repository,
repository['path'],
mount_arguments.archive,
storage,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
),
mount_arguments.mount_point,
mount_arguments.paths,
mount_arguments.foreground,
mount_arguments.options,
storage,
mount_arguments,
config,
local_borg_version,
global_arguments,
local_path=local_path,
remote_path=remote_path,
)

View File

@ -10,9 +10,7 @@ logger = logging.getLogger(__name__)
def run_prune(
config_filename,
repository,
storage,
retention,
hooks,
config,
hook_context,
local_borg_version,
prune_arguments,
@ -30,28 +28,27 @@ def run_prune(
return
borgmatic.hooks.command.execute_hook(
hooks.get('before_prune'),
hooks.get('umask'),
config.get('before_prune'),
config.get('umask'),
config_filename,
'pre-prune',
global_arguments.dry_run,
**hook_context,
)
logger.info(f'{repository}: Pruning archives{dry_run_label}')
logger.info(f'{repository.get("label", repository["path"])}: Pruning archives{dry_run_label}')
borgmatic.borg.prune.prune_archives(
global_arguments.dry_run,
repository,
storage,
retention,
repository['path'],
config,
local_borg_version,
prune_arguments,
global_arguments,
local_path=local_path,
remote_path=remote_path,
stats=prune_arguments.stats,
list_archives=prune_arguments.list_archives,
)
borgmatic.hooks.command.execute_hook(
hooks.get('after_prune'),
hooks.get('umask'),
config.get('after_prune'),
config.get('umask'),
config_filename,
'post-prune',
global_arguments.dry_run,

View File

@ -8,7 +8,7 @@ logger = logging.getLogger(__name__)
def run_rcreate(
repository,
storage,
config,
local_borg_version,
rcreate_arguments,
global_arguments,
@ -23,12 +23,13 @@ def run_rcreate(
):
return
logger.info(f'{repository}: Creating repository')
logger.info(f'{repository.get("label", repository["path"])}: Creating repository')
borgmatic.borg.rcreate.create_repository(
global_arguments.dry_run,
repository,
storage,
repository['path'],
config,
local_borg_version,
global_arguments,
rcreate_arguments.encryption_mode,
rcreate_arguments.source_repository,
rcreate_arguments.copy_crypt_key,

View File

@ -18,12 +18,12 @@ UNSPECIFIED_HOOK = object()
def get_configured_database(
hooks, archive_database_names, hook_name, database_name, configuration_database_name=None
config, archive_database_names, hook_name, database_name, configuration_database_name=None
):
'''
Find the first database with the given hook name and database name in the configured hooks
dict and the given archive database names dict (from hook name to database names contained in
a particular backup archive). If UNSPECIFIED_HOOK is given as the hook name, search all database
Find the first database with the given hook name and database name in the configuration dict and
the given archive database names dict (from hook name to database names contained in a
particular backup archive). If UNSPECIFIED_HOOK is given as the hook name, search all database
hooks for the named database. If a configuration database name is given, use that instead of the
database name to lookup the database in the given hooks configuration.
@ -33,9 +33,13 @@ def get_configured_database(
configuration_database_name = database_name
if hook_name == UNSPECIFIED_HOOK:
hooks_to_search = hooks
hooks_to_search = {
hook_name: value
for (hook_name, value) in config.items()
if hook_name in borgmatic.hooks.dump.DATABASE_HOOK_NAMES
}
else:
hooks_to_search = {hook_name: hooks[hook_name]}
hooks_to_search = {hook_name: config[hook_name]}
return next(
(
@ -58,9 +62,7 @@ def get_configured_hook_name_and_database(hooks, database_name):
def restore_single_database(
repository,
location,
storage,
hooks,
config,
local_borg_version,
global_arguments,
local_path,
@ -68,31 +70,34 @@ def restore_single_database(
archive_name,
hook_name,
database,
connection_params,
): # pragma: no cover
'''
Given (among other things) an archive name, a database hook name, and a configured database
Given (among other things) an archive name, a database hook name, the hostname,
port, username and password as connection params, and a configured database
configuration dict, restore that database from the archive.
'''
logger.info(f'{repository}: Restoring database {database["name"]}')
logger.info(
f'{repository.get("label", repository["path"])}: Restoring database {database["name"]}'
)
dump_pattern = borgmatic.hooks.dispatch.call_hooks(
'make_database_dump_pattern',
hooks,
repository,
config,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
database['name'],
)[hook_name]
# Kick off a single database extract to stdout.
extract_process = borgmatic.borg.extract.extract_archive(
dry_run=global_arguments.dry_run,
repository=repository,
repository=repository['path'],
archive=archive_name,
paths=borgmatic.hooks.dump.convert_glob_patterns_to_borg_patterns([dump_pattern]),
location_config=location,
storage_config=storage,
config=config,
local_borg_version=local_borg_version,
global_arguments=global_arguments,
local_path=local_path,
remote_path=remote_path,
destination_path='/',
@ -104,26 +109,33 @@ def restore_single_database(
# Run a single database restore, consuming the extract stdout (if any).
borgmatic.hooks.dispatch.call_hooks(
'restore_database_dump',
{hook_name: [database]},
repository,
config,
repository['path'],
database['name'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
extract_process,
connection_params,
)
def collect_archive_database_names(
repository, archive, location, storage, local_borg_version, local_path, remote_path,
repository,
archive,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
):
'''
Given a local or remote repository path, a resolved archive name, a location configuration dict,
a storage configuration dict, the local Borg version, and local and remote Borg paths, query the
archive for the names of databases it contains and return them as a dict from hook name to a
sequence of database names.
Given a local or remote repository path, a resolved archive name, a configuration dict, the
local Borg version, the global arguments as an argparse.Namespace, and local and remote Borg
paths, query the archive for the names of databases it contains and return them as a dict
from hook name to a sequence of database names.
'''
borgmatic_source_directory = os.path.expanduser(
location.get(
config.get(
'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
).lstrip('/')
@ -133,8 +145,9 @@ def collect_archive_database_names(
dump_paths = borgmatic.borg.list.capture_archive_listing(
repository,
archive,
storage,
config,
local_borg_version,
global_arguments,
list_path=parent_dump_path,
local_path=local_path,
remote_path=remote_path,
@ -180,7 +193,7 @@ def find_databases_to_restore(requested_database_names, archive_database_names):
if 'all' in restore_names[UNSPECIFIED_HOOK]:
restore_names[UNSPECIFIED_HOOK].remove('all')
for (hook_name, database_names) in archive_database_names.items():
for hook_name, database_names in archive_database_names.items():
restore_names.setdefault(hook_name, []).extend(database_names)
# If a database is to be restored as part of "all", then remove it from restore names so
@ -235,9 +248,7 @@ def ensure_databases_found(restore_names, remaining_restore_names, found_names):
def run_restore(
repository,
location,
storage,
hooks,
config,
local_borg_version,
restore_arguments,
global_arguments,
@ -255,31 +266,51 @@ def run_restore(
):
return
logger.info(f'{repository}: Restoring databases from archive {restore_arguments.archive}')
logger.info(
f'{repository.get("label", repository["path"])}: Restoring databases from archive {restore_arguments.archive}'
)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
repository,
config,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
archive_name = borgmatic.borg.rlist.resolve_archive_name(
repository, restore_arguments.archive, storage, local_borg_version, local_path, remote_path,
repository['path'],
restore_arguments.archive,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
archive_database_names = collect_archive_database_names(
repository, archive_name, location, storage, local_borg_version, local_path, remote_path,
repository['path'],
archive_name,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
restore_names = find_databases_to_restore(restore_arguments.databases, archive_database_names)
found_names = set()
remaining_restore_names = {}
connection_params = {
'hostname': restore_arguments.hostname,
'port': restore_arguments.port,
'username': restore_arguments.username,
'password': restore_arguments.password,
'restore_path': restore_arguments.restore_path,
}
for hook_name, database_names in restore_names.items():
for database_name in database_names:
found_hook_name, found_database = get_configured_database(
hooks, archive_database_names, hook_name, database_name
config, archive_database_names, hook_name, database_name
)
if not found_database:
@ -291,24 +322,23 @@ def run_restore(
found_names.add(database_name)
restore_single_database(
repository,
location,
storage,
hooks,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
archive_name,
found_hook_name or hook_name,
found_database,
dict(found_database, **{'schemas': restore_arguments.schemas}),
connection_params,
)
# For any database that weren't found via exact matches in the hooks configuration, try to
# fallback to "all" entries.
# For any databases that weren't found via exact matches in the configuration, try to fall
# back to "all" entries.
for hook_name, database_names in remaining_restore_names.items():
for database_name in database_names:
found_hook_name, found_database = get_configured_database(
hooks, archive_database_names, hook_name, database_name, 'all'
config, archive_database_names, hook_name, database_name, 'all'
)
if not found_database:
@ -320,24 +350,22 @@ def run_restore(
restore_single_database(
repository,
location,
storage,
hooks,
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
archive_name,
found_hook_name or hook_name,
database,
dict(database, **{'schemas': restore_arguments.schemas}),
connection_params,
)
borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
'remove_database_dumps',
hooks,
repository,
config,
repository['path'],
borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
location,
global_arguments.dry_run,
)
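
A minimal usage sketch of get_configured_database() from earlier in this file, assuming the elided generator body matches each configured database by name against the archive's database names (the config values are illustrative):

```python
# Illustrative configuration and archive contents.
config = {'postgresql_databases': [{'name': 'users'}]}
archive_database_names = {'postgresql_databases': ['users']}

# Search all database hooks for the named database, as run_restore() does.
hook_name, database = get_configured_database(
    config, archive_database_names, UNSPECIFIED_HOOK, 'users'
)

assert hook_name == 'postgresql_databases'
assert database == {'name': 'users'}
```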

View File

@ -8,7 +8,13 @@ logger = logging.getLogger(__name__)
def run_rinfo(
repository, storage, local_borg_version, rinfo_arguments, local_path, remote_path,
repository,
config,
local_borg_version,
rinfo_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "rinfo" action for the given repository.
@ -19,13 +25,16 @@ def run_rinfo(
repository, rinfo_arguments.repository
):
if not rinfo_arguments.json: # pragma: nocover
logger.answer(f'{repository}: Displaying repository summary information')
logger.answer(
f'{repository.get("label", repository["path"])}: Displaying repository summary information'
)
json_output = borgmatic.borg.rinfo.display_repository_info(
repository,
storage,
repository['path'],
config,
local_borg_version,
rinfo_arguments=rinfo_arguments,
global_arguments=global_arguments,
local_path=local_path,
remote_path=remote_path,
)

View File

@ -8,7 +8,13 @@ logger = logging.getLogger(__name__)
def run_rlist(
repository, storage, local_borg_version, rlist_arguments, local_path, remote_path,
repository,
config,
local_borg_version,
rlist_arguments,
global_arguments,
local_path,
remote_path,
):
'''
Run the "rlist" action for the given repository.
@ -19,13 +25,14 @@ def run_rlist(
repository, rlist_arguments.repository
):
if not rlist_arguments.json: # pragma: nocover
logger.answer(f'{repository}: Listing repository')
logger.answer(f'{repository.get("label", repository["path"])}: Listing repository')
json_output = borgmatic.borg.rlist.list_repository(
repository,
storage,
repository['path'],
config,
local_borg_version,
rlist_arguments=rlist_arguments,
global_arguments=global_arguments,
local_path=local_path,
remote_path=remote_path,
)

View File

@ -7,7 +7,7 @@ logger = logging.getLogger(__name__)
def run_transfer(
repository,
storage,
config,
local_borg_version,
transfer_arguments,
global_arguments,
@ -17,13 +17,16 @@ def run_transfer(
'''
Run the "transfer" action for the given repository.
'''
logger.info(f'{repository}: Transferring archives to repository')
logger.info(
f'{repository.get("label", repository["path"])}: Transferring archives to repository'
)
borgmatic.borg.transfer.transfer_archives(
global_arguments.dry_run,
repository,
storage,
repository['path'],
config,
local_borg_version,
transfer_arguments,
global_arguments,
local_path=local_path,
remote_path=remote_path,
)

View File

@ -1,20 +1,19 @@
import logging
import borgmatic.commands.arguments
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import execute_command
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
REPOSITORYLESS_BORG_COMMANDS = {'serve', None}
BORG_SUBCOMMANDS_WITH_SUBCOMMANDS = {'key', 'debug'}
BORG_SUBCOMMANDS_WITHOUT_REPOSITORY = (('debug', 'info'), ('debug', 'convert-profile'), ())
def run_arbitrary_borg(
repository,
storage_config,
repository_path,
config,
local_borg_version,
options,
archive=None,
@ -22,12 +21,13 @@ def run_arbitrary_borg(
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, a
Given a local or remote repository path, a configuration dict, the local Borg version, a
sequence of arbitrary command-line Borg options, and an optional archive name, run an arbitrary
Borg command on the given repository/archive.
Borg command, passing in REPOSITORY and ARCHIVE environment variables for optional use in the
command.
'''
borgmatic.logger.add_custom_log_levels()
lock_wait = storage_config.get('lock_wait', None)
lock_wait = config.get('lock_wait', None)
try:
options = options[1:] if options[0] == '--' else options
@ -36,33 +36,35 @@ def run_arbitrary_borg(
command_options_start_index = 2 if options[0] in BORG_SUBCOMMANDS_WITH_SUBCOMMANDS else 1
borg_command = tuple(options[:command_options_start_index])
command_options = tuple(options[command_options_start_index:])
if borg_command and borg_command[0] in borgmatic.commands.arguments.ACTION_ALIASES.keys():
logger.warning(
f"Borg's {borg_command[0]} subcommand is supported natively by borgmatic. Try this instead: borgmatic {borg_command[0]}"
)
except IndexError:
borg_command = ()
command_options = ()
if borg_command in BORG_SUBCOMMANDS_WITHOUT_REPOSITORY:
repository_archive_flags = ()
elif archive:
repository_archive_flags = flags.make_repository_archive_flags(
repository, archive, local_borg_version
)
else:
repository_archive_flags = flags.make_repository_flags(repository, local_borg_version)
full_command = (
(local_path,)
+ borg_command
+ repository_archive_flags
+ command_options
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('lock-wait', lock_wait)
+ command_options
)
return execute_command(
full_command,
output_log_level=logging.ANSWER,
output_file=DO_NOT_CAPTURE,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
shell=True,
extra_environment=dict(
(environment.make_environment(config) or {}),
**{
'BORG_REPO': repository_path,
'ARCHIVE': archive if archive else '',
},
),
)
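
To make the new behavior concrete, a small sketch of the environment this function now hands to the Borg subprocess (the passphrase stand-in is illustrative):

```python
# Stand-in for environment.make_environment(config) or {}.
base_environment = {'BORG_PASSPHRASE': 'secret'}
repository_path = '/var/lib/backups/local.borg'
archive = None  # e.g., no --archive flag given

extra_environment = dict(
    base_environment,
    **{
        'BORG_REPO': repository_path,
        'ARCHIVE': archive if archive else '',  # empty string when unset
    },
)

assert extra_environment['BORG_REPO'] == '/var/lib/backups/local.borg'
assert extra_environment['ARCHIVE'] == ''
```

Commands run via "borgmatic borg" can then reference "$BORG_REPO" and "$ARCHIVE" themselves instead of relying on implicit injection, per the #575 changelog entry above.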

View File

@ -7,25 +7,31 @@ logger = logging.getLogger(__name__)
def break_lock(
repository, storage_config, local_borg_version, local_path='borg', remote_path=None,
repository_path,
config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage configuration dict, the local Borg version,
and optional local and remote Borg paths, break any repository and cache locks leftover from Borg
aborting.
Given a local or remote repository path, a configuration dict, the local Borg version, an
argparse.Namespace of global arguments, and optional local and remote Borg paths, break any
repository and cache locks leftover from Borg aborting.
'''
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
full_command = (
(local_path, 'break-lock')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_repository_flags(repository, local_borg_version)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
borg_environment = environment.make_environment(storage_config)
borg_environment = environment.make_environment(config)
execute_command(full_command, borg_local_path=local_path, extra_environment=borg_environment)
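For illustration, a minimal call using the new break_lock signature might look like this (the repository path, Borg version, and config values are hypothetical):

import argparse

from borgmatic.borg import break_lock

break_lock.break_lock(
    repository_path='/mnt/backups/repo.borg',
    config={'umask': '0077', 'lock_wait': 5},
    local_borg_version='1.2.4',
    global_arguments=argparse.Namespace(log_json=False),
)
# Runs roughly: borg break-lock --umask 0077 --lock-wait 5 /mnt/backups/repo.borg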


@ -1,5 +1,7 @@
import argparse
import datetime
import hashlib
import itertools
import json
import logging
import os
@ -12,18 +14,17 @@ DEFAULT_CHECKS = (
{'name': 'repository', 'frequency': '1 month'},
{'name': 'archives', 'frequency': '1 month'},
)
DEFAULT_PREFIX = '{hostname}-' # noqa: FS003
logger = logging.getLogger(__name__)
def parse_checks(consistency_config, only_checks=None):
def parse_checks(config, only_checks=None):
'''
Given a consistency config with a "checks" sequence of dicts and an optional list of override
Given a configuration dict with a "checks" sequence of dicts and an optional list of override
checks, return a tuple of named checks to run.
For example, given a retention config of:
For example, given a config of:
{'checks': ({'name': 'repository'}, {'name': 'archives'})}
@ -35,8 +36,7 @@ def parse_checks(consistency_config, only_checks=None):
has a name of "disabled", return an empty tuple, meaning that no checks should be run.
'''
checks = only_checks or tuple(
check_config['name']
for check_config in (consistency_config.get('checks', None) or DEFAULT_CHECKS)
check_config['name'] for check_config in (config.get('checks', None) or DEFAULT_CHECKS)
)
checks = tuple(check.lower() for check in checks)
if 'disabled' in checks:
@ -89,17 +89,22 @@ def parse_frequency(frequency):
def filter_checks_on_frequency(
location_config, consistency_config, borg_repository_id, checks, force
config,
borg_repository_id,
checks,
force,
archives_check_id=None,
):
'''
Given a location config, a consistency config with a "checks" sequence of dicts, a Borg
repository ID, a sequence of checks, and whether to force checks to run, filter down those
checks based on the configured "frequency" for each check as compared to its check time file.
Given a configuration dict with a "checks" sequence of dicts, a Borg repository ID, a sequence
of checks, whether to force checks to run, and an ID for the archives check potentially being
run (if any), filter down those checks based on the configured "frequency" for each check as
compared to its check time file.
In other words, a check whose check time file's timestamp is too new (based on the configured
frequency) will get cut from the returned sequence of checks. Example:
consistency_config = {
config = {
'checks': [
{
'name': 'archives',
@ -108,9 +113,9 @@ def filter_checks_on_frequency(
]
}
When this function is called with that consistency_config and "archives" in checks, "archives"
will get filtered out of the returned result if its check time file is newer than 2 weeks old,
indicating that it's not yet time to run that check again.
When this function is called with that config and "archives" in checks, "archives" will get
filtered out of the returned result if its check time file is newer than 2 weeks old, indicating
that it's not yet time to run that check again.
Raise ValueError if a frequency cannot be parsed.
'''
@ -119,7 +124,7 @@ def filter_checks_on_frequency(
if force:
return tuple(filtered_checks)
for check_config in consistency_config.get('checks', DEFAULT_CHECKS):
for check_config in config.get('checks', DEFAULT_CHECKS):
check = check_config['name']
if checks and check not in checks:
continue
@ -128,9 +133,7 @@ def filter_checks_on_frequency(
if not frequency_delta:
continue
check_time = read_check_time(
make_check_time_path(location_config, borg_repository_id, check)
)
check_time = probe_for_check_time(config, borg_repository_id, check, archives_check_id)
if not check_time:
continue
@ -139,17 +142,66 @@ def filter_checks_on_frequency(
if datetime.datetime.now() < check_time + frequency_delta:
remaining = check_time + frequency_delta - datetime.datetime.now()
logger.info(
f'Skipping {check} check due to configured frequency; {remaining} until next check'
f'Skipping {check} check due to configured frequency; {remaining} until next check (use --force to check anyway)'
)
filtered_checks.remove(check)
return tuple(filtered_checks)
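As a sketch of the filtering behavior (the repository ID is hypothetical):

from borgmatic.borg import check

remaining_checks = check.filter_checks_on_frequency(
    config={'checks': [{'name': 'archives', 'frequency': '2 weeks'}]},
    borg_repository_id='1234567890',
    checks=('repository', 'archives'),
    force=False,
    archives_check_id=None,
)
# 'archives' is dropped from the result if its check time file is newer than
# two weeks; passing force=True skips the filtering entirely.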
def make_check_flags(local_borg_version, checks, check_last=None, prefix=None):
def make_archive_filter_flags(local_borg_version, config, checks, check_last=None, prefix=None):
'''
Given the local Borg version and a parsed sequence of checks, transform the checks into a tuple of
command-line flags.
Given the local Borg version, a configuration dict, a parsed sequence of checks, the check last
value, and a consistency check prefix, transform the checks into a tuple of command-line flags for
filtering archives in a check command.
If a check_last value is given and "archives" is in checks, then include a "--last" flag. And if
a prefix value is given and "archives" is in checks, then include a "--match-archives" flag.
'''
if 'archives' in checks or 'data' in checks:
return (('--last', str(check_last)) if check_last else ()) + (
(
('--match-archives', f'sh:{prefix}*')
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else ('--glob-archives', f'{prefix}*')
)
if prefix
else (
flags.make_match_archives_flags(
config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
)
)
if check_last:
logger.warning(
'Ignoring check_last option, as "archives" or "data" are not in consistency checks'
)
if prefix:
logger.warning(
'Ignoring consistency prefix option, as "archives" or "data" are not in consistency checks'
)
return ()
def make_archives_check_id(archive_filter_flags):
'''
Given a sequence of flags to filter archives, return a unique hash corresponding to those
particular flags. If there are no flags, return None.
'''
if not archive_filter_flags:
return None
return hashlib.sha256(' '.join(archive_filter_flags).encode()).hexdigest()
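For example, hashing a hypothetical set of filter flags:

import hashlib

archive_filter_flags = ('--match-archives', 'sh:myhost-*')
archives_check_id = hashlib.sha256(' '.join(archive_filter_flags).encode()).hexdigest()
# A stable 64-character hex digest: identical filter flags always produce the
# same ID, so the corresponding check time path stays consistent across runs.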
def make_check_flags(checks, archive_filter_flags):
'''
Given a parsed sequence of checks and a sequence of flags to filter archives, transform the
checks into a tuple of command-line check flags.
For example, given parsed checks of:
@ -161,10 +213,6 @@ def make_check_flags(local_borg_version, checks, check_last=None, prefix=None):
However, if both "repository" and "archives" are in checks, then omit them from the returned
flags because Borg does both checks by default. If "data" is in checks, that implies "archives".
Additionally, if a check_last value is given and "archives" is in checks, then include a
"--last" flag. And if a prefix value is given and "archives" is in checks, then include a
"--match-archives" flag.
'''
if 'data' in checks:
data_flags = ('--verify-data',)
@ -172,25 +220,7 @@ def make_check_flags(local_borg_version, checks, check_last=None, prefix=None):
else:
data_flags = ()
if 'archives' in checks:
last_flags = ('--last', str(check_last)) if check_last else ()
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
match_archives_flags = ('--match-archives', f'sh:{prefix}*') if prefix else ()
else:
match_archives_flags = ('--glob-archives', f'{prefix}*') if prefix else ()
else:
last_flags = ()
match_archives_flags = ()
if check_last:
logger.warning(
'Ignoring check_last option, as "archives" or "data" are not in consistency checks'
)
if prefix:
logger.warning(
'Ignoring consistency prefix option, as "archives" or "data" are not in consistency checks'
)
common_flags = last_flags + match_archives_flags + data_flags
common_flags = (archive_filter_flags if 'archives' in checks else ()) + data_flags
if {'repository', 'archives'}.issubset(set(checks)):
return common_flags
@ -201,18 +231,27 @@ def make_check_flags(local_borg_version, checks, check_last=None, prefix=None):
)
def make_check_time_path(location_config, borg_repository_id, check_type):
def make_check_time_path(config, borg_repository_id, check_type, archives_check_id=None):
'''
Given a location configuration dict, a Borg repository ID, and the name of a check type
("repository", "archives", etc.), return a path for recording that check's time (the time of
that check last occurring).
Given a configuration dict, a Borg repository ID, the name of a check type ("repository",
"archives", etc.), and a unique hash of the archives filter flags, return a path for recording
that check's time (the time of that check last occurring).
'''
borgmatic_source_directory = os.path.expanduser(
config.get('borgmatic_source_directory', state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY)
)
if check_type in ('archives', 'data'):
return os.path.join(
borgmatic_source_directory,
'checks',
borg_repository_id,
check_type,
archives_check_id if archives_check_id else 'all',
)
return os.path.join(
os.path.expanduser(
location_config.get(
'borgmatic_source_directory', state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
)
),
borgmatic_source_directory,
'checks',
borg_repository_id,
check_type,
@ -242,12 +281,79 @@ def read_check_time(path):
return None
def probe_for_check_time(config, borg_repository_id, check, archives_check_id):
'''
Given a configuration dict, a Borg repository ID, the name of a check type ("repository",
"archives", etc.), and a unique hash of the archives filter flags, return a the corresponding
check time or None if such a check time does not exist.
When the check type is "archives" or "data", this function probes two different paths to find
the check time, e.g.:
~/.borgmatic/checks/1234567890/archives/9876543210
~/.borgmatic/checks/1234567890/archives/all
... and returns the maximum modification time of the files found (if any). The first path
represents a more specific archives check time (a check on a subset of archives), and the second
is a fallback to the last "all" archives check.
For other check types, this function reads from a single check time path, e.g.:
~/.borgmatic/checks/1234567890/repository
'''
check_times = (
read_check_time(group[0])
for group in itertools.groupby(
(
make_check_time_path(config, borg_repository_id, check, archives_check_id),
make_check_time_path(config, borg_repository_id, check),
)
)
)
try:
return max(check_time for check_time in check_times if check_time)
except ValueError:
return None
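The itertools.groupby() call above collapses adjacent duplicate paths: when archives_check_id is None, both candidate paths resolve to the same ".../archives/all" file, and deduplicating avoids reading it twice. A minimal sketch with hypothetical paths:

import itertools

paths = (
    '/root/.borgmatic/checks/1234567890/archives/all',
    '/root/.borgmatic/checks/1234567890/archives/all',
)
unique_paths = tuple(group[0] for group in itertools.groupby(paths))
# => ('/root/.borgmatic/checks/1234567890/archives/all',)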
def upgrade_check_times(config, borg_repository_id):
'''
Given a configuration dict and a Borg repository ID, upgrade any corresponding check times on
disk from old-style paths to new-style paths.
Currently, the only upgrade performed is renaming an archive or data check path that looks like:
~/.borgmatic/checks/1234567890/archives
to:
~/.borgmatic/checks/1234567890/archives/all
'''
for check_type in ('archives', 'data'):
new_path = make_check_time_path(config, borg_repository_id, check_type, 'all')
old_path = os.path.dirname(new_path)
temporary_path = f'{old_path}.temp'
if not os.path.isfile(old_path) and not os.path.isfile(temporary_path):
continue
logger.debug(f'Upgrading archives check time from {old_path} to {new_path}')
try:
os.rename(old_path, temporary_path)
except FileNotFoundError:
pass
os.mkdir(old_path)
os.rename(temporary_path, new_path)
def check_archives(
repository,
location_config,
storage_config,
consistency_config,
repository_path,
config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
progress=None,
@ -256,10 +362,9 @@ def check_archives(
force=None,
):
'''
Given a local or remote repository path, a storage config dict, a consistency config dict,
local/remote commands to run, whether to include progress information, whether to attempt a
repair, and an optional list of checks to use instead of configured checks, check the contained
Borg archives for consistency.
Given a local or remote repository path, a configuration dict, local/remote commands to run,
whether to include progress information, whether to attempt a repair, and an optional list of
checks to use instead of configured checks, check the contained Borg archives for consistency.
If there are no consistency checks to run, skip running them.
@ -268,30 +373,40 @@ def check_archives(
try:
borg_repository_id = json.loads(
rinfo.display_repository_info(
repository,
storage_config,
repository_path,
config,
local_borg_version,
argparse.Namespace(json=True),
global_arguments,
local_path,
remote_path,
)
)['repository']['id']
except (json.JSONDecodeError, KeyError):
raise ValueError(f'Cannot determine Borg repository ID for {repository}')
raise ValueError(f'Cannot determine Borg repository ID for {repository_path}')
upgrade_check_times(config, borg_repository_id)
check_last = config.get('check_last', None)
prefix = config.get('prefix')
configured_checks = parse_checks(config, only_checks)
lock_wait = None
extra_borg_options = config.get('extra_borg_options', {}).get('check', '')
archive_filter_flags = make_archive_filter_flags(
local_borg_version, config, configured_checks, check_last, prefix
)
archives_check_id = make_archives_check_id(archive_filter_flags)
checks = filter_checks_on_frequency(
location_config,
consistency_config,
config,
borg_repository_id,
parse_checks(consistency_config, only_checks),
configured_checks,
force,
archives_check_id,
)
check_last = consistency_config.get('check_last', None)
lock_wait = None
extra_borg_options = storage_config.get('extra_borg_options', {}).get('check', '')
if set(checks).intersection({'repository', 'archives', 'data'}):
lock_wait = storage_config.get('lock_wait', None)
lock_wait = config.get('lock_wait')
verbosity_flags = ()
if logger.isEnabledFor(logging.INFO):
@ -299,21 +414,20 @@ def check_archives(
if logger.isEnabledFor(logging.DEBUG):
verbosity_flags = ('--debug', '--show-rc')
prefix = consistency_config.get('prefix', DEFAULT_PREFIX)
full_command = (
(local_path, 'check')
+ (('--repair',) if repair else ())
+ make_check_flags(local_borg_version, checks, check_last, prefix)
+ make_check_flags(checks, archive_filter_flags)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ verbosity_flags
+ (('--progress',) if progress else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository, local_borg_version)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
borg_environment = environment.make_environment(storage_config)
borg_environment = environment.make_environment(config)
# The Borg repair option triggers an interactive prompt, which won't work when output is
# captured. And progress messes with the terminal directly.
@ -325,10 +439,18 @@ def check_archives(
execute_command(full_command, extra_environment=borg_environment)
for check in checks:
write_check_time(make_check_time_path(location_config, borg_repository_id, check))
write_check_time(
make_check_time_path(config, borg_repository_id, check, archives_check_id)
)
if 'extract' in checks:
extract.extract_last_archive_dry_run(
storage_config, local_borg_version, repository, lock_wait, local_path, remote_path
config,
local_borg_version,
global_arguments,
repository_path,
lock_wait,
local_path,
remote_path,
)
write_check_time(make_check_time_path(location_config, borg_repository_id, 'extract'))
write_check_time(make_check_time_path(config, borg_repository_id, 'extract'))
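A minimal invocation of the updated check_archives signature might look like the following (all values hypothetical; keyword names follow the signature shown above):

import argparse

from borgmatic.borg import check

check.check_archives(
    repository_path='/mnt/backups/repo.borg',
    config={'checks': [{'name': 'repository', 'frequency': '1 month'}], 'lock_wait': 5},
    local_borg_version='1.2.4',
    global_arguments=argparse.Namespace(log_json=False),
    force=False,
)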


@ -8,9 +8,10 @@ logger = logging.getLogger(__name__)
def compact_segments(
dry_run,
repository,
storage_config,
repository_path,
config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
progress=False,
@ -18,17 +19,18 @@ def compact_segments(
threshold=None,
):
'''
Given a dry-run flag, a local or remote repository path, a storage config dict, and the local
Borg version, compact the segments in a repository.
Given a dry-run flag, a local or remote repository path, a configuration dict, and the local Borg
version, compact the segments in a repository.
'''
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
extra_borg_options = storage_config.get('extra_borg_options', {}).get('compact', '')
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
extra_borg_options = config.get('extra_borg_options', {}).get('compact', '')
full_command = (
(local_path, 'compact')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--progress',) if progress else ())
+ (('--cleanup-commits',) if cleanup_commits else ())
@ -36,16 +38,16 @@ def compact_segments(
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository, local_borg_version)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if dry_run:
logging.info(f'{repository}: Skipping compact (dry run)')
logging.info(f'{repository_path}: Skipping compact (dry run)')
return
execute_command(
full_command,
output_log_level=logging.INFO,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
extra_environment=environment.make_environment(config),
)
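For illustration, a call with hypothetical values, showing how extra_borg_options feeds the compact command line:

import argparse

from borgmatic.borg import compact

compact.compact_segments(
    dry_run=False,
    repository_path='/mnt/backups/repo.borg',
    config={'extra_borg_options': {'compact': '--threshold 10'}},
    local_borg_version='1.2.4',
    global_arguments=argparse.Namespace(log_json=False),
)
# Runs roughly: borg compact --threshold 10 /mnt/backups/repo.borg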


@ -146,12 +146,12 @@ def ensure_files_readable(*filename_lists):
open(file_object).close()
def make_pattern_flags(location_config, pattern_filename=None):
def make_pattern_flags(config, pattern_filename=None):
'''
Given a location config dict with a potential patterns_from option, and a filename containing
any additional patterns, return the corresponding Borg flags for those files as a tuple.
Given a configuration dict with a potential patterns_from option, and a filename containing any
additional patterns, return the corresponding Borg flags for those files as a tuple.
'''
pattern_filenames = tuple(location_config.get('patterns_from') or ()) + (
pattern_filenames = tuple(config.get('patterns_from') or ()) + (
(pattern_filename,) if pattern_filename else ()
)
@ -162,12 +162,12 @@ def make_pattern_flags(location_config, pattern_filename=None):
)
def make_exclude_flags(location_config, exclude_filename=None):
def make_exclude_flags(config, exclude_filename=None):
'''
Given a location config dict with various exclude options, and a filename containing any exclude
Given a configuration dict with various exclude options, and a filename containing any exclude
patterns, return the corresponding Borg flags as a tuple.
'''
exclude_filenames = tuple(location_config.get('exclude_from') or ()) + (
exclude_filenames = tuple(config.get('exclude_from') or ()) + (
(exclude_filename,) if exclude_filename else ()
)
exclude_from_flags = tuple(
@ -175,17 +175,15 @@ def make_exclude_flags(location_config, exclude_filename=None):
('--exclude-from', exclude_filename) for exclude_filename in exclude_filenames
)
)
caches_flag = ('--exclude-caches',) if location_config.get('exclude_caches') else ()
caches_flag = ('--exclude-caches',) if config.get('exclude_caches') else ()
if_present_flags = tuple(
itertools.chain.from_iterable(
('--exclude-if-present', if_present)
for if_present in location_config.get('exclude_if_present', ())
for if_present in config.get('exclude_if_present', ())
)
)
keep_exclude_tags_flags = (
('--keep-exclude-tags',) if location_config.get('keep_exclude_tags') else ()
)
exclude_nodump_flags = ('--exclude-nodump',) if location_config.get('exclude_nodump') else ()
keep_exclude_tags_flags = ('--keep-exclude-tags',) if config.get('keep_exclude_tags') else ()
exclude_nodump_flags = ('--exclude-nodump',) if config.get('exclude_nodump') else ()
return (
exclude_from_flags
@ -280,17 +278,21 @@ def collect_special_file_paths(
create_command, local_path, working_directory, borg_environment, skip_directories
):
'''
Given a Borg create command as a tuple, a local Borg path, a working directory, and a dict of
Given a Borg create command as a tuple, a local Borg path, a working directory, a dict of
environment variables to pass to Borg, and a sequence of parent directories to skip, collect the
paths for any special files (character devices, block devices, and named pipes / FIFOs) that
Borg would encounter during a create. These are all paths that could cause Borg to hang if its
--read-special flag is used.
'''
# Omit "--exclude-nodump" from the Borg dry run command, because that flag causes Borg to open
# files including any named pipe we've created.
paths_output = execute_command_and_capture_output(
create_command + ('--dry-run', '--list'),
tuple(argument for argument in create_command if argument != '--exclude-nodump')
+ ('--dry-run', '--list'),
capture_stderr=True,
working_directory=working_directory,
extra_environment=borg_environment,
borg_local_path=local_path,
)
paths = tuple(
@ -314,7 +316,7 @@ def check_all_source_directories_exist(source_directories):
missing_directories = [
source_directory
for source_directory in source_directories
if not os.path.exists(source_directory)
if not all([os.path.exists(directory) for directory in expand_directory(source_directory)])
]
if missing_directories:
raise ValueError(f"Source directories do not exist: {', '.join(missing_directories)}")
@ -322,10 +324,10 @@ def check_all_source_directories_exist(source_directories):
def create_archive(
dry_run,
repository,
location_config,
storage_config,
repository_path,
config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
progress=False,
@ -335,70 +337,70 @@ def create_archive(
stream_processes=None,
):
'''
Given verbosity/dry-run flags, a local or remote repository path, a location config dict, and a
storage config dict, create a Borg archive and return Borg's JSON output (if any).
Given verbosity/dry-run flags, a local or remote repository path, and a configuration dict,
create a Borg archive and return Borg's JSON output (if any).
If a sequence of stream processes is given (instances of subprocess.Popen), then execute the
create command while also triggering the given processes to produce output.
'''
borgmatic.logger.add_custom_log_levels()
borgmatic_source_directories = expand_directories(
collect_borgmatic_source_directories(location_config.get('borgmatic_source_directory'))
collect_borgmatic_source_directories(config.get('borgmatic_source_directory'))
)
if location_config.get('source_directories_must_exist', False):
check_all_source_directories_exist(location_config.get('source_directories'))
if config.get('source_directories_must_exist', False):
check_all_source_directories_exist(config.get('source_directories'))
sources = deduplicate_directories(
map_directories_to_devices(
expand_directories(
tuple(location_config.get('source_directories', ())) + borgmatic_source_directories
tuple(config.get('source_directories', ()))
+ borgmatic_source_directories
+ tuple(global_arguments.used_config_paths)
)
),
additional_directory_devices=map_directories_to_devices(
expand_directories(pattern_root_directories(location_config.get('patterns')))
expand_directories(pattern_root_directories(config.get('patterns')))
),
)
ensure_files_readable(location_config.get('patterns_from'), location_config.get('exclude_from'))
ensure_files_readable(config.get('patterns_from'), config.get('exclude_from'))
try:
working_directory = os.path.expanduser(location_config.get('working_directory'))
working_directory = os.path.expanduser(config.get('working_directory'))
except TypeError:
working_directory = None
pattern_file = (
write_pattern_file(location_config.get('patterns'), sources)
if location_config.get('patterns') or location_config.get('patterns_from')
write_pattern_file(config.get('patterns'), sources)
if config.get('patterns') or config.get('patterns_from')
else None
)
exclude_file = write_pattern_file(
expand_home_directories(location_config.get('exclude_patterns'))
)
checkpoint_interval = storage_config.get('checkpoint_interval', None)
checkpoint_volume = storage_config.get('checkpoint_volume', None)
chunker_params = storage_config.get('chunker_params', None)
compression = storage_config.get('compression', None)
upload_rate_limit = storage_config.get('upload_rate_limit', None)
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
exclude_file = write_pattern_file(expand_home_directories(config.get('exclude_patterns')))
checkpoint_interval = config.get('checkpoint_interval', None)
checkpoint_volume = config.get('checkpoint_volume', None)
chunker_params = config.get('chunker_params', None)
compression = config.get('compression', None)
upload_rate_limit = config.get('upload_rate_limit', None)
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
list_filter_flags = make_list_filter_flags(local_borg_version, dry_run)
files_cache = location_config.get('files_cache')
archive_name_format = storage_config.get('archive_name_format', DEFAULT_ARCHIVE_NAME_FORMAT)
extra_borg_options = storage_config.get('extra_borg_options', {}).get('create', '')
files_cache = config.get('files_cache')
archive_name_format = config.get('archive_name_format', DEFAULT_ARCHIVE_NAME_FORMAT)
extra_borg_options = config.get('extra_borg_options', {}).get('create', '')
if feature.available(feature.Feature.ATIME, local_borg_version):
atime_flags = ('--atime',) if location_config.get('atime') is True else ()
atime_flags = ('--atime',) if config.get('atime') is True else ()
else:
atime_flags = ('--noatime',) if location_config.get('atime') is False else ()
atime_flags = ('--noatime',) if config.get('atime') is False else ()
if feature.available(feature.Feature.NOFLAGS, local_borg_version):
noflags_flags = ('--noflags',) if location_config.get('flags') is False else ()
noflags_flags = ('--noflags',) if config.get('flags') is False else ()
else:
noflags_flags = ('--nobsdflags',) if location_config.get('flags') is False else ()
noflags_flags = ('--nobsdflags',) if config.get('flags') is False else ()
if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
numeric_ids_flags = ('--numeric-ids',) if location_config.get('numeric_ids') else ()
numeric_ids_flags = ('--numeric-ids',) if config.get('numeric_ids') else ()
else:
numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_ids') else ()
numeric_ids_flags = ('--numeric-owner',) if config.get('numeric_ids') else ()
if feature.available(feature.Feature.UPLOAD_RATELIMIT, local_borg_version):
upload_ratelimit_flags = (
@ -409,35 +411,32 @@ def create_archive(
('--remote-ratelimit', str(upload_rate_limit)) if upload_rate_limit else ()
)
if stream_processes and location_config.get('read_special') is False:
if stream_processes and config.get('read_special') is False:
logger.warning(
f'{repository}: Ignoring configured "read_special" value of false, as true is needed for database hooks.'
f'{repository_path}: Ignoring configured "read_special" value of false, as true is needed for database hooks.'
)
create_command = (
tuple(local_path.split(' '))
+ ('create',)
+ make_pattern_flags(location_config, pattern_file.name if pattern_file else None)
+ make_exclude_flags(location_config, exclude_file.name if exclude_file else None)
+ make_pattern_flags(config, pattern_file.name if pattern_file else None)
+ make_exclude_flags(config, exclude_file.name if exclude_file else None)
+ (('--checkpoint-interval', str(checkpoint_interval)) if checkpoint_interval else ())
+ (('--checkpoint-volume', str(checkpoint_volume)) if checkpoint_volume else ())
+ (('--chunker-params', chunker_params) if chunker_params else ())
+ (('--compression', compression) if compression else ())
+ upload_ratelimit_flags
+ (
('--one-file-system',)
if location_config.get('one_file_system') or stream_processes
else ()
)
+ (('--one-file-system',) if config.get('one_file_system') or stream_processes else ())
+ numeric_ids_flags
+ atime_flags
+ (('--noctime',) if location_config.get('ctime') is False else ())
+ (('--nobirthtime',) if location_config.get('birthtime') is False else ())
+ (('--read-special',) if location_config.get('read_special') or stream_processes else ())
+ (('--noctime',) if config.get('ctime') is False else ())
+ (('--nobirthtime',) if config.get('birthtime') is False else ())
+ (('--read-special',) if config.get('read_special') or stream_processes else ())
+ noflags_flags
+ (('--files-cache', files_cache) if files_cache else ())
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (
('--list', '--filter', list_filter_flags)
@ -446,7 +445,9 @@ def create_archive(
)
+ (('--dry-run',) if dry_run else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_archive_flags(repository, archive_name_format, local_borg_version)
+ flags.make_repository_archive_flags(
repository_path, archive_name_format, local_borg_version
)
+ (sources if not pattern_file else ())
)
@ -461,12 +462,12 @@ def create_archive(
# the terminal directly.
output_file = DO_NOT_CAPTURE if progress else None
borg_environment = environment.make_environment(storage_config)
borg_environment = environment.make_environment(config)
# If database hooks are enabled (as indicated by streaming processes), exclude files that might
# cause Borg to hang. But skip this if the user has explicitly set the "read_special" to True.
if stream_processes and not location_config.get('read_special'):
logger.debug(f'{repository}: Collecting special file paths')
if stream_processes and not config.get('read_special'):
logger.debug(f'{repository_path}: Collecting special file paths')
special_file_paths = collect_special_file_paths(
create_command,
local_path,
@ -477,15 +478,15 @@ def create_archive(
if special_file_paths:
logger.warning(
f'{repository}: Excluding special files to prevent Borg from hanging: {", ".join(special_file_paths)}'
f'{repository_path}: Excluding special files to prevent Borg from hanging: {", ".join(special_file_paths)}'
)
exclude_file = write_pattern_file(
expand_home_directories(
tuple(location_config.get('exclude_patterns') or ()) + special_file_paths
tuple(config.get('exclude_patterns') or ()) + special_file_paths
),
pattern_file=exclude_file,
)
create_command += make_exclude_flags(location_config, exclude_file.name)
create_command += make_exclude_flags(config, exclude_file.name)
create_command += (
(('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
@ -507,7 +508,10 @@ def create_archive(
)
elif output_log_level is None:
return execute_command_and_capture_output(
create_command, working_directory=working_directory, extra_environment=borg_environment,
create_command,
working_directory=working_directory,
extra_environment=borg_environment,
borg_local_path=local_path,
)
else:
execute_command(


@ -11,21 +11,25 @@ OPTION_TO_ENVIRONMENT_VARIABLE = {
'temporary_directory': 'TMPDIR',
}
DEFAULT_BOOL_OPTION_TO_ENVIRONMENT_VARIABLE = {
DEFAULT_BOOL_OPTION_TO_DOWNCASE_ENVIRONMENT_VARIABLE = {
'relocated_repo_access_is_ok': 'BORG_RELOCATED_REPO_ACCESS_IS_OK',
'unknown_unencrypted_repo_access_is_ok': 'BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK',
}
DEFAULT_BOOL_OPTION_TO_UPPERCASE_ENVIRONMENT_VARIABLE = {
'check_i_know_what_i_am_doing': 'BORG_CHECK_I_KNOW_WHAT_I_AM_DOING',
}
def make_environment(storage_config):
def make_environment(config):
'''
Given a borgmatic storage configuration dict, return its options converted to a Borg environment
Given a borgmatic configuration dict, return its options converted to a Borg environment
variable dict.
'''
environment = {}
for option_name, environment_variable_name in OPTION_TO_ENVIRONMENT_VARIABLE.items():
value = storage_config.get(option_name)
value = config.get(option_name)
if value:
environment[environment_variable_name] = str(value)
@ -33,8 +37,17 @@ def make_environment(storage_config):
for (
option_name,
environment_variable_name,
) in DEFAULT_BOOL_OPTION_TO_ENVIRONMENT_VARIABLE.items():
value = storage_config.get(option_name, False)
environment[environment_variable_name] = 'yes' if value else 'no'
) in DEFAULT_BOOL_OPTION_TO_DOWNCASE_ENVIRONMENT_VARIABLE.items():
value = config.get(option_name)
if value is not None:
environment[environment_variable_name] = 'yes' if value else 'no'
for (
option_name,
environment_variable_name,
) in DEFAULT_BOOL_OPTION_TO_UPPERCASE_ENVIRONMENT_VARIABLE.items():
value = config.get(option_name)
if value is not None:
environment[environment_variable_name] = 'YES' if value else 'NO'
return environment
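Note the behavioral change: unset boolean options are now omitted from the environment rather than defaulting to "no", and the new check_i_know_what_i_am_doing option uses Borg's uppercase YES/NO convention. For example:

from borgmatic.borg.environment import make_environment

make_environment({
    'relocated_repo_access_is_ok': True,
    'check_i_know_what_i_am_doing': True,
})
# => {'BORG_RELOCATED_REPO_ACCESS_IS_OK': 'yes',
#     'BORG_CHECK_I_KNOW_WHAT_I_AM_DOING': 'YES'}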


@ -9,12 +9,13 @@ logger = logging.getLogger(__name__)
def export_tar_archive(
dry_run,
repository,
repository_path,
archive,
paths,
destination_path,
storage_config,
config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
tar_filter=None,
@ -23,21 +24,22 @@ def export_tar_archive(
):
'''
Given a dry-run flag, a local or remote repository path, an archive name, zero or more paths to
export from the archive, a destination path to export to, a storage configuration dict, the
local Borg version, optional local and remote Borg paths, an optional filter program, whether to
include per-file details, and an optional number of path components to strip, export the archive
into the given destination path as a tar-formatted file.
export from the archive, a destination path to export to, a configuration dict, the local Borg
version, optional local and remote Borg paths, an optional filter program, whether to include
per-file details, and an optional number of path components to strip, export the archive into
the given destination path as a tar-formatted file.
If the destination path is "-", then stream the output to stdout instead of to a file.
'''
borgmatic.logger.add_custom_log_levels()
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
full_command = (
(local_path, 'export-tar')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--list',) if list_files else ())
@ -45,7 +47,11 @@ def export_tar_archive(
+ (('--dry-run',) if dry_run else ())
+ (('--tar-filter', tar_filter) if tar_filter else ())
+ (('--strip-components', str(strip_components)) if strip_components else ())
+ flags.make_repository_archive_flags(repository, archive, local_borg_version,)
+ flags.make_repository_archive_flags(
repository_path,
archive,
local_borg_version,
)
+ (destination_path,)
+ (tuple(paths) if paths else ())
)
@ -56,7 +62,7 @@ def export_tar_archive(
output_log_level = logging.INFO
if dry_run:
logging.info(f'{repository}: Skipping export to tar file (dry run)')
logging.info(f'{repository_path}: Skipping export to tar file (dry run)')
return
execute_command(
@ -64,5 +70,5 @@ def export_tar_archive(
output_file=DO_NOT_CAPTURE if destination_path == '-' else None,
output_log_level=output_log_level,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
extra_environment=environment.make_environment(config),
)


@ -2,6 +2,7 @@ import logging
import os
import subprocess
import borgmatic.config.validate
from borgmatic.borg import environment, feature, flags, rlist
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
@ -9,9 +10,10 @@ logger = logging.getLogger(__name__)
def extract_last_archive_dry_run(
storage_config,
config,
local_borg_version,
repository,
global_arguments,
repository_path,
lock_wait=None,
local_path='borg',
remote_path=None,
@ -20,8 +22,6 @@ def extract_last_archive_dry_run(
Perform an extraction dry-run of the most recent archive. If there are no archives, skip the
dry-run.
'''
remote_path_flags = ('--remote-path', remote_path) if remote_path else ()
lock_wait_flags = ('--lock-wait', str(lock_wait)) if lock_wait else ()
verbosity_flags = ()
if logger.isEnabledFor(logging.DEBUG):
verbosity_flags = ('--debug', '--show-rc')
@ -30,21 +30,30 @@ def extract_last_archive_dry_run(
try:
last_archive_name = rlist.resolve_archive_name(
repository, 'latest', storage_config, local_borg_version, local_path, remote_path
repository_path,
'latest',
config,
local_borg_version,
global_arguments,
local_path,
remote_path,
)
except ValueError:
logger.warning('No archives found. Skipping extract consistency check.')
return
list_flag = ('--list',) if logger.isEnabledFor(logging.DEBUG) else ()
borg_environment = environment.make_environment(storage_config)
borg_environment = environment.make_environment(config)
full_extract_command = (
(local_path, 'extract', '--dry-run')
+ remote_path_flags
+ lock_wait_flags
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ verbosity_flags
+ list_flag
+ flags.make_repository_archive_flags(repository, last_archive_name, local_borg_version)
+ flags.make_repository_archive_flags(
repository_path, last_archive_name, local_borg_version
)
)
execute_command(
@ -57,9 +66,9 @@ def extract_archive(
repository,
archive,
paths,
location_config,
storage_config,
config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
destination_path=None,
@ -69,23 +78,23 @@ def extract_archive(
):
'''
Given a dry-run flag, a local or remote repository path, an archive name, zero or more paths to
restore from the archive, the local Borg version string, location/storage configuration dicts,
optional local and remote Borg paths, and an optional destination path to extract to, extract
the archive into the current directory.
restore from the archive, the local Borg version string, an argparse.Namespace of global
arguments, a configuration dict, optional local and remote Borg paths, and an optional
destination path to extract to, extract the archive into the current directory.
If extract to stdout is True, then start the extraction streaming to stdout, and return that
extract process as an instance of subprocess.Popen.
'''
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
if progress and extract_to_stdout:
raise ValueError('progress and extract_to_stdout cannot both be set')
if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
numeric_ids_flags = ('--numeric-ids',) if location_config.get('numeric_ids') else ()
numeric_ids_flags = ('--numeric-ids',) if config.get('numeric_ids') else ()
else:
numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_ids') else ()
numeric_ids_flags = ('--numeric-owner',) if config.get('numeric_ids') else ()
if strip_components == 'all':
if not paths:
@ -99,6 +108,7 @@ def extract_archive(
+ (('--remote-path', remote_path) if remote_path else ())
+ numeric_ids_flags
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--list', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
@ -106,11 +116,17 @@ def extract_archive(
+ (('--strip-components', str(strip_components)) if strip_components else ())
+ (('--progress',) if progress else ())
+ (('--stdout',) if extract_to_stdout else ())
+ flags.make_repository_archive_flags(repository, archive, local_borg_version,)
+ flags.make_repository_archive_flags(
# Make the repository path absolute so the working directory changes below don't
# prevent Borg from finding the repo.
borgmatic.config.validate.normalize_repository_path(repository),
archive,
local_borg_version,
)
+ (tuple(paths) if paths else ())
)
borg_environment = environment.make_environment(storage_config)
borg_environment = environment.make_environment(config)
# The progress output isn't compatible with captured and logged output, as progress messes with
# the terminal directly.
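A minimal call of the updated extract_archive signature might look like this (the leading dry_run parameter is assumed from the docstring and unchanged context rather than shown in this hunk; all values are hypothetical):

import argparse

from borgmatic.borg import extract

extract.extract_archive(
    dry_run=False,
    repository='/mnt/backups/repo.borg',
    archive='myhost-2023-07-28T22:00:00',
    paths=None,
    config={'lock_wait': 5},
    local_borg_version='1.2.4',
    global_arguments=argparse.Namespace(log_json=False),
    destination_path='/tmp/restore',
)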


@ -1,6 +1,6 @@
from enum import Enum
from pkg_resources import parse_version
from packaging.version import parse
class Feature(Enum):
@ -18,17 +18,17 @@ class Feature(Enum):
FEATURE_TO_MINIMUM_BORG_VERSION = {
Feature.COMPACT: parse_version('1.2.0a2'), # borg compact
Feature.ATIME: parse_version('1.2.0a7'), # borg create --atime
Feature.NOFLAGS: parse_version('1.2.0a8'), # borg create --noflags
Feature.NUMERIC_IDS: parse_version('1.2.0b3'), # borg create/extract/mount --numeric-ids
Feature.UPLOAD_RATELIMIT: parse_version('1.2.0b3'), # borg create --upload-ratelimit
Feature.SEPARATE_REPOSITORY_ARCHIVE: parse_version('2.0.0a2'), # --repo with separate archive
Feature.RCREATE: parse_version('2.0.0a2'), # borg rcreate
Feature.RLIST: parse_version('2.0.0a2'), # borg rlist
Feature.RINFO: parse_version('2.0.0a2'), # borg rinfo
Feature.MATCH_ARCHIVES: parse_version('2.0.0b3'), # borg --match-archives
Feature.EXCLUDED_FILES_MINUS: parse_version('2.0.0b5'), # --list --filter uses "-" for excludes
Feature.COMPACT: parse('1.2.0a2'), # borg compact
Feature.ATIME: parse('1.2.0a7'), # borg create --atime
Feature.NOFLAGS: parse('1.2.0a8'), # borg create --noflags
Feature.NUMERIC_IDS: parse('1.2.0b3'), # borg create/extract/mount --numeric-ids
Feature.UPLOAD_RATELIMIT: parse('1.2.0b3'), # borg create --upload-ratelimit
Feature.SEPARATE_REPOSITORY_ARCHIVE: parse('2.0.0a2'), # --repo with separate archive
Feature.RCREATE: parse('2.0.0a2'), # borg rcreate
Feature.RLIST: parse('2.0.0a2'), # borg rlist
Feature.RINFO: parse('2.0.0a2'), # borg rinfo
Feature.MATCH_ARCHIVES: parse('2.0.0b3'), # borg --match-archives
Feature.EXCLUDED_FILES_MINUS: parse('2.0.0b5'), # --list --filter uses "-" for excludes
}
@ -37,4 +37,4 @@ def available(feature, borg_version):
Given a Borg Feature constant and a Borg version string, return whether that feature is
available in that version of Borg.
'''
return FEATURE_TO_MINIMUM_BORG_VERSION[feature] <= parse_version(borg_version)
return FEATURE_TO_MINIMUM_BORG_VERSION[feature] <= parse(borg_version)
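packaging's parse() orders pre-releases before final releases just as pkg_resources did, so the feature gating behaves the same. For example:

from packaging.version import parse

from borgmatic.borg.feature import Feature, available

parse('1.2.0b3') <= parse('1.2.4')             # True: the beta precedes the final release
available(Feature.UPLOAD_RATELIMIT, '1.2.4')   # True (needs 1.2.0b3+)
available(Feature.MATCH_ARCHIVES, '1.2.4')     # False (needs 2.0.0b3+)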


@ -1,4 +1,5 @@
import itertools
import re
from borgmatic.borg import feature
@ -33,7 +34,7 @@ def make_flags_from_arguments(arguments, excludes=()):
)
def make_repository_flags(repository, local_borg_version):
def make_repository_flags(repository_path, local_borg_version):
'''
Given the path of a Borg repository and the local Borg version, return Borg-version-appropriate
command-line flags (as a tuple) for selecting that repository.
@ -42,17 +43,41 @@ def make_repository_flags(repository, local_borg_version):
('--repo',)
if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
else ()
) + (repository,)
) + (repository_path,)
def make_repository_archive_flags(repository, archive, local_borg_version):
def make_repository_archive_flags(repository_path, archive, local_borg_version):
'''
Given the path of a Borg repository, an archive name or pattern, and the local Borg version,
return Borg-version-appropriate command-line flags (as a tuple) for selecting that repository
and archive.
'''
return (
('--repo', repository, archive)
('--repo', repository_path, archive)
if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
else (f'{repository}::{archive}',)
else (f'{repository_path}::{archive}',)
)
def make_match_archives_flags(match_archives, archive_name_format, local_borg_version):
'''
Return match archives flags based on the given match archives value, if any. If it isn't set,
return match archives flags to match archives created with the given archive name format, if
any. This is done by replacing certain archive name format placeholders for ephemeral data (like
"{now}") with globs.
'''
if match_archives:
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
return ('--match-archives', match_archives)
else:
return ('--glob-archives', re.sub(r'^sh:', '', match_archives))
if not archive_name_format:
return ()
derived_match_archives = re.sub(r'\{(now|utcnow|pid)([:%\w\.-]*)\}', '*', archive_name_format)
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
return ('--match-archives', f'sh:{derived_match_archives}')
else:
return ('--glob-archives', f'{derived_match_archives}')
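Some illustrative inputs and outputs for make_match_archives_flags (the Borg version strings are hypothetical):

from borgmatic.borg.flags import make_match_archives_flags

make_match_archives_flags(None, '{hostname}-{now}', '2.0.0b5')
# => ('--match-archives', 'sh:{hostname}-*')

make_match_archives_flags(None, '{hostname}-{now}', '1.2.4')
# => ('--glob-archives', '{hostname}-*')

make_match_archives_flags('sh:myhost-*', None, '1.2.4')
# => ('--glob-archives', 'myhost-*')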


@ -8,20 +8,21 @@ logger = logging.getLogger(__name__)
def display_archives_info(
repository,
storage_config,
repository_path,
config,
local_borg_version,
info_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, and the
arguments to the info action, display summary information for Borg archives in the repository or
return JSON summary information.
Given a local or remote repository path, a configuration dict, the local Borg version, global
arguments as an argparse.Namespace, and the arguments to the info action, display summary
information for Borg archives in the repository or return JSON summary information.
'''
borgmatic.logger.add_custom_log_levels()
lock_wait = storage_config.get('lock_wait', None)
lock_wait = config.get('lock_wait', None)
full_command = (
(local_path, 'info')
@ -36,6 +37,7 @@ def display_archives_info(
else ()
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', lock_wait)
+ (
(
@ -44,27 +46,32 @@ def display_archives_info(
else flags.make_flags('glob-archives', f'{info_arguments.prefix}*')
)
if info_arguments.prefix
else ()
else (
flags.make_match_archives_flags(
info_arguments.match_archives
or info_arguments.archive
or config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
)
)
+ flags.make_flags_from_arguments(
info_arguments, excludes=('repository', 'archive', 'prefix')
)
+ flags.make_repository_flags(repository, local_borg_version)
+ (
flags.make_flags('match-archives', info_arguments.archive)
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else flags.make_flags('glob-archives', info_arguments.archive)
info_arguments, excludes=('repository', 'archive', 'prefix', 'match_archives')
)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if info_arguments.json:
return execute_command_and_capture_output(
full_command, extra_environment=environment.make_environment(storage_config),
full_command,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
)
else:
execute_command(
full_command,
output_log_level=logging.ANSWER,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
extra_environment=environment.make_environment(config),
)


@ -14,26 +14,26 @@ ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST = ('prefix', 'match_archives', 'sort_by', 'f
MAKE_FLAGS_EXCLUDES = (
'repository',
'archive',
'successful',
'paths',
'find_paths',
) + ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST
def make_list_command(
repository,
storage_config,
repository_path,
config,
local_borg_version,
list_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the arguments to the list
action, and local and remote Borg paths, return a command as a tuple to list archives or paths
within an archive.
Given a local or remote repository path, a configuration dict, the arguments to the list action,
and local and remote Borg paths, return a command as a tuple to list archives or paths within an
archive.
'''
lock_wait = storage_config.get('lock_wait', None)
lock_wait = config.get('lock_wait', None)
return (
(local_path, 'list')
@ -48,14 +48,15 @@ def make_list_command(
else ()
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', lock_wait)
+ flags.make_flags_from_arguments(list_arguments, excludes=MAKE_FLAGS_EXCLUDES)
+ (
flags.make_repository_archive_flags(
repository, list_arguments.archive, local_borg_version
repository_path, list_arguments.archive, local_borg_version
)
if list_arguments.archive
else flags.make_repository_flags(repository, local_borg_version)
else flags.make_repository_flags(repository_path, local_borg_version)
)
+ (tuple(list_arguments.paths) if list_arguments.paths else ())
)
@ -86,39 +87,43 @@ def make_find_paths(find_paths):
def capture_archive_listing(
repository,
repository_path,
archive,
storage_config,
config,
local_borg_version,
global_arguments,
list_path=None,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, an archive name, a storage config dict, the local Borg
version, the archive path in which to list files, and local and remote Borg paths, capture the
output of listing that archive and return it as a list of file paths.
Given a local or remote repository path, an archive name, a configuration dict, the local Borg
version, global arguments as an argparse.Namespace, the archive path in which to list files, and
local and remote Borg paths, capture the output of listing that archive and return it as a list
of file paths.
'''
borg_environment = environment.make_environment(storage_config)
borg_environment = environment.make_environment(config)
return tuple(
execute_command_and_capture_output(
make_list_command(
repository,
storage_config,
repository_path,
config,
local_borg_version,
argparse.Namespace(
repository=repository,
repository=repository_path,
archive=archive,
paths=[f'sh:{list_path}'],
find_paths=None,
json=None,
format='{path}{NL}', # noqa: FS003
),
global_arguments,
local_path,
remote_path,
),
extra_environment=borg_environment,
borg_local_path=local_path,
)
.strip('\n')
.split('\n')
@ -126,19 +131,21 @@ def capture_archive_listing(
def list_archive(
repository,
storage_config,
repository_path,
config,
local_borg_version,
list_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, the
arguments to the list action, and local and remote Borg paths, display the output of listing
the files of a Borg archive (or return JSON output). If list_arguments.find_paths are given,
list the files by searching across multiple archives. If neither find_paths nor archive name
are given, instead list the archives in the given repository.
Given a local or remote repository path, a configuration dict, the local Borg version, global
arguments as an argparse.Namespace, the arguments to the list action as an argparse.Namespace,
and local and remote Borg paths, display the output of listing the files of a Borg archive (or
return JSON output). If list_arguments.find_paths are given, list the files by searching across
multiple archives. If neither find_paths nor archive name are given, instead list the archives
in the given repository.
'''
borgmatic.logger.add_custom_log_levels()
@ -149,7 +156,7 @@ def list_archive(
)
rlist_arguments = argparse.Namespace(
repository=repository,
repository=repository_path,
short=list_arguments.short,
format=list_arguments.format,
json=list_arguments.json,
@ -160,7 +167,13 @@ def list_archive(
last=list_arguments.last,
)
return rlist.list_repository(
repository, storage_config, local_borg_version, rlist_arguments, local_path, remote_path
repository_path,
config,
local_borg_version,
rlist_arguments,
global_arguments,
local_path,
remote_path,
)
if list_arguments.archive:
@ -175,13 +188,13 @@ def list_archive(
'The --json flag on the list action is not supported when using the --archive/--find flags.'
)
borg_environment = environment.make_environment(storage_config)
borg_environment = environment.make_environment(config)
# If there are any paths to find (and there's not a single archive already selected), start by
# getting a list of archives to search.
if list_arguments.find_paths and not list_arguments.archive:
rlist_arguments = argparse.Namespace(
repository=repository,
repository=repository_path,
short=True,
format=None,
json=None,
@ -196,14 +209,16 @@ def list_archive(
archive_lines = tuple(
execute_command_and_capture_output(
rlist.make_rlist_command(
repository,
storage_config,
repository_path,
config,
local_borg_version,
rlist_arguments,
global_arguments,
local_path,
remote_path,
),
extra_environment=borg_environment,
borg_local_path=local_path,
)
.strip('\n')
.split('\n')
@ -213,7 +228,7 @@ def list_archive(
# For each archive listed by Borg, run list on the contents of that archive.
for archive in archive_lines:
logger.answer(f'{repository}: Listing archive {archive}')
logger.answer(f'{repository_path}: Listing archive {archive}')
archive_arguments = copy.copy(list_arguments)
archive_arguments.archive = archive
@ -224,10 +239,11 @@ def list_archive(
setattr(archive_arguments, name, None)
main_command = make_list_command(
repository,
storage_config,
repository_path,
config,
local_borg_version,
archive_arguments,
global_arguments,
local_path,
remote_path,
) + make_find_paths(list_arguments.find_paths)


@ -7,38 +7,40 @@ logger = logging.getLogger(__name__)
def mount_archive(
repository,
repository_path,
archive,
mount_point,
paths,
foreground,
options,
storage_config,
mount_arguments,
config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, an optional archive name, a filesystem mount point,
zero or more paths to mount from the archive, extra Borg mount options, a storage configuration
dict, the local Borg version, and optional local and remote Borg paths, mount the archive onto
the mount point.
dict, the local Borg version, global arguments as an argparse.Namespace instance, and optional
local and remote Borg paths, mount the archive onto the mount point.
'''
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
full_command = (
(local_path, 'mount')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--foreground',) if foreground else ())
+ (('-o', options) if options else ())
+ flags.make_flags_from_arguments(
mount_arguments,
excludes=('repository', 'archive', 'mount_point', 'paths', 'options'),
)
+ (('-o', mount_arguments.options) if mount_arguments.options else ())
+ (
(
flags.make_repository_flags(repository, local_borg_version)
flags.make_repository_flags(repository_path, local_borg_version)
+ (
('--match-archives', archive)
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
@ -47,19 +49,19 @@ def mount_archive(
)
if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
else (
flags.make_repository_archive_flags(repository, archive, local_borg_version)
flags.make_repository_archive_flags(repository_path, archive, local_borg_version)
if archive
else flags.make_repository_flags(repository, local_borg_version)
else flags.make_repository_flags(repository_path, local_borg_version)
)
)
+ (mount_point,)
+ (tuple(paths) if paths else ())
+ (mount_arguments.mount_point,)
+ (tuple(mount_arguments.paths) if mount_arguments.paths else ())
)
borg_environment = environment.make_environment(storage_config)
borg_environment = environment.make_environment(config)
# Don't capture the output when foreground mode is used so that ctrl-C can work properly.
if foreground:
if mount_arguments.foreground:
execute_command(
full_command,
output_file=DO_NOT_CAPTURE,
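
make_flags_from_arguments() converts set attributes on an argparse.Namespace into flags. A simplified sketch of that contract (the real helper in borgmatic.borg.flags handles more edge cases, such as booleans):

import argparse

# Simplified sketch of flags.make_flags_from_arguments(): each set,
# non-excluded Namespace attribute becomes a --flag value pair.
def make_flags_from_arguments(arguments, excludes=()):
    return tuple(
        flag
        for name, value in vars(arguments).items()
        if name not in excludes and value is not None
        for flag in ('--' + name.replace('_', '-'), str(value))
    )

namespace = argparse.Namespace(mount_point='/mnt', umask=None, foreground=False)
print(make_flags_from_arguments(namespace, excludes=('foreground',)))
# ('--mount-point', '/mnt')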

View File

@ -7,10 +7,10 @@ from borgmatic.execute import execute_command
logger = logging.getLogger(__name__)
def make_prune_flags(retention_config, local_borg_version):
def make_prune_flags(config, local_borg_version):
'''
Given a retention config dict mapping from option name to value, tranform it into an iterable of
command-line name-value flag pairs.
Given a configuration dict mapping from option name to value, transform it into a sequence of
command-line flags.
For example, given a retention config of:
@ -23,61 +23,70 @@ def make_prune_flags(retention_config, local_borg_version):
('--keep-monthly', '6'),
)
'''
config = retention_config.copy()
prefix = config.pop('prefix', '{hostname}-') # noqa: FS003
flag_pairs = (
('--' + option_name.replace('_', '-'), str(value))
for option_name, value in config.items()
if option_name.startswith('keep_') and option_name != 'keep_exclude_tags'
)
prefix = config.get('prefix')
if prefix:
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
config['match_archives'] = f'sh:{prefix}*'
else:
config['glob_archives'] = f'{prefix}*'
return (
('--' + option_name.replace('_', '-'), str(value)) for option_name, value in config.items()
return tuple(element for pair in flag_pairs for element in pair) + (
(
('--match-archives', f'sh:{prefix}*')
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
else ('--glob-archives', f'{prefix}*')
)
if prefix
else (
flags.make_match_archives_flags(
config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
)
)
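
For a concrete view of the keep_* flattening above, here is the same logic run on a toy configuration dict:

# Sketch of the keep_* flattening: pairs are generated lazily, non-retention
# options are filtered out, and the pairs are flattened into one flag tuple.
config = {'keep_daily': 7, 'keep_monthly': 6, 'compression': 'lz4'}
flag_pairs = (
    ('--' + option_name.replace('_', '-'), str(value))
    for option_name, value in config.items()
    if option_name.startswith('keep_') and option_name != 'keep_exclude_tags'
)
print(tuple(element for pair in flag_pairs for element in pair))
# ('--keep-daily', '7', '--keep-monthly', '6')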
def prune_archives(
dry_run,
repository,
storage_config,
retention_config,
repository_path,
config,
local_borg_version,
prune_arguments,
global_arguments,
local_path='borg',
remote_path=None,
stats=False,
list_archives=False,
):
'''
Given dry-run flag, a local or remote repository path, a storage config dict, and a
retention config dict, prune Borg archives according to the retention policy specified in that
configuration.
Given a dry-run flag, a local or remote repository path, and a configuration dict, prune Borg
archives according to the retention policy specified in that configuration.
'''
borgmatic.logger.add_custom_log_levels()
umask = storage_config.get('umask', None)
lock_wait = storage_config.get('lock_wait', None)
extra_borg_options = storage_config.get('extra_borg_options', {}).get('prune', '')
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
extra_borg_options = config.get('extra_borg_options', {}).get('prune', '')
full_command = (
(local_path, 'prune')
+ tuple(
element
for pair in make_prune_flags(retention_config, local_borg_version)
for element in pair
)
+ make_prune_flags(config, local_borg_version)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--stats',) if stats and not dry_run else ())
+ (('--stats',) if prune_arguments.stats and not dry_run else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--list',) if list_archives else ())
+ flags.make_flags_from_arguments(
prune_arguments,
excludes=('repository', 'stats', 'list_archives'),
)
+ (('--list',) if prune_arguments.list_archives else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--dry-run',) if dry_run else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository, local_borg_version)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if stats or list_archives:
if prune_arguments.stats or prune_arguments.list_archives:
output_log_level = logging.ANSWER
else:
output_log_level = logging.INFO
@ -86,5 +95,5 @@ def prune_archives(
full_command,
output_log_level=output_log_level,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
extra_environment=environment.make_environment(config),
)

View File

@ -13,9 +13,10 @@ RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE = 2
def create_repository(
dry_run,
repository,
storage_config,
repository_path,
config,
local_borg_version,
global_arguments,
encryption_mode,
source_repository=None,
copy_crypt_key=False,
@ -26,27 +27,29 @@ def create_repository(
remote_path=None,
):
'''
Given a dry-run flag, a local or remote repository path, a storage configuration dict, the local
Borg version, a Borg encryption mode, the path to another repo whose key material should be
reused, whether the repository should be append-only, and the storage quota to use, create the
Given a dry-run flag, a local or remote repository path, a configuration dict, the local Borg
version, a Borg encryption mode, the path to another repo whose key material should be reused,
whether the repository should be append-only, and the storage quota to use, create the
repository. If the repository already exists, then log and skip creation.
'''
try:
rinfo.display_repository_info(
repository,
storage_config,
repository_path,
config,
local_borg_version,
argparse.Namespace(json=True),
global_arguments,
local_path,
remote_path,
)
logger.info(f'{repository}: Repository already exists. Skipping creation.')
logger.info(f'{repository_path}: Repository already exists. Skipping creation.')
return
except subprocess.CalledProcessError as error:
if error.returncode != RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE:
raise
extra_borg_options = storage_config.get('extra_borg_options', {}).get('rcreate', '')
lock_wait = config.get('lock_wait')
extra_borg_options = config.get('extra_borg_options', {}).get('rcreate', '')
rcreate_command = (
(local_path,)
@ -63,13 +66,15 @@ def create_repository(
+ (('--make-parent-dirs',) if make_parent_dirs else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--remote-path', remote_path) if remote_path else ())
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+ flags.make_repository_flags(repository, local_borg_version)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
if dry_run:
logging.info(f'{repository}: Skipping repository creation (dry run)')
logging.info(f'{repository_path}: Skipping repository creation (dry run)')
return
# Do not capture output here, so as to support interactive prompts.
@ -77,5 +82,5 @@ def create_repository(
rcreate_command,
output_file=DO_NOT_CAPTURE,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
extra_environment=environment.make_environment(config),
)
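
The existence probe above treats one specific rinfo exit code as "repository not found". A standalone sketch of that pattern; the borg invocation shown here is schematic, not the exact command borgmatic builds:

import subprocess

RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE = 2

# Sketch of the existence probe: any rinfo failure other than "repository not
# found" is re-raised; "not found" means it's safe to proceed with creation.
def repository_exists(repository_path):
    try:
        subprocess.run(
            ('borg', 'rinfo', '--json', '--repo', repository_path),  # schematic
            check=True, capture_output=True,
        )
        return True
    except subprocess.CalledProcessError as error:
        if error.returncode != RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE:
            raise
        return False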

View File

@ -8,20 +8,21 @@ logger = logging.getLogger(__name__)
def display_repository_info(
repository,
storage_config,
repository_path,
config,
local_borg_version,
rinfo_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, and the
arguments to the rinfo action, display summary information for the Borg repository or return
JSON summary information.
Given a local or remote repository path, a configuration dict, the local Borg version, the
arguments to the rinfo action, and global arguments as an argparse.Namespace, display summary
information for the Borg repository or return JSON summary information.
'''
borgmatic.logger.add_custom_log_levels()
lock_wait = storage_config.get('lock_wait', None)
lock_wait = config.get('lock_wait', None)
full_command = (
(local_path,)
@ -41,16 +42,19 @@ def display_repository_info(
else ()
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', lock_wait)
+ (('--json',) if rinfo_arguments.json else ())
+ flags.make_repository_flags(repository, local_borg_version)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
extra_environment = environment.make_environment(storage_config)
extra_environment = environment.make_environment(config)
if rinfo_arguments.json:
return execute_command_and_capture_output(
full_command, extra_environment=extra_environment,
full_command,
extra_environment=extra_environment,
borg_local_path=local_path,
)
else:
execute_command(
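
Calls like flags.make_flags('log-json', ...) collapse to an empty tuple for falsy values, which keeps these concatenations uncluttered. A simplified sketch of that contract, assuming the behavior described above:

# Simplified sketch of flags.make_flags(): falsy values produce no flags at
# all, and True produces a bare flag with no value.
def make_flags(name, value=None):
    if not value:
        return ()
    flag = '--' + name.replace('_', '-')
    if value is True:
        return (flag,)
    return (flag, str(value))

assert make_flags('lock-wait', None) == ()
assert make_flags('lock-wait', 5) == ('--lock-wait', '5')
assert make_flags('log-json', True) == ('--log-json',)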

View File

@ -8,63 +8,70 @@ logger = logging.getLogger(__name__)
def resolve_archive_name(
repository, archive, storage_config, local_borg_version, local_path='borg', remote_path=None
repository_path,
archive,
config,
local_borg_version,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, an archive name, a storage config dict, a local Borg
path, and a remote Borg path, simply return the archive name. But if the archive name is
"latest", then instead introspect the repository for the latest archive and return its name.
Given a local or remote repository path, an archive name, a configuration dict, the local Borg
version, global arguments as an argparse.Namespace, a local Borg path, and a remote Borg path,
return the archive name. But if the archive name is "latest", then instead introspect the
repository for the latest archive and return its name.
Raise ValueError if "latest" is given but there are no archives in the repository.
'''
if archive != 'latest':
return archive
lock_wait = storage_config.get('lock_wait', None)
full_command = (
(
local_path,
'rlist' if feature.available(feature.Feature.RLIST, local_borg_version) else 'list',
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('lock-wait', lock_wait)
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', config.get('lock_wait'))
+ flags.make_flags('last', 1)
+ ('--short',)
+ flags.make_repository_flags(repository, local_borg_version)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
output = execute_command_and_capture_output(
full_command, extra_environment=environment.make_environment(storage_config),
full_command,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
)
try:
latest_archive = output.strip().splitlines()[-1]
except IndexError:
raise ValueError('No archives found in the repository')
logger.debug(f'{repository}: Latest archive is {latest_archive}')
logger.debug(f'{repository_path}: Latest archive is {latest_archive}')
return latest_archive
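
A compact illustration of the "latest" resolution above, using canned rlist output instead of a live repository:

# Sketch of resolving "latest": take the last line of `borg rlist --last 1
# --short` output, raising if the repository has no archives at all.
output = 'host-2023-07-28T01:00:00\n'
try:
    latest_archive = output.strip().splitlines()[-1]
except IndexError:
    raise ValueError('No archives found in the repository')
print(latest_archive)  # host-2023-07-28T01:00:00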
MAKE_FLAGS_EXCLUDES = ('repository', 'prefix')
MAKE_FLAGS_EXCLUDES = ('repository', 'prefix', 'match_archives')
def make_rlist_command(
repository,
storage_config,
repository_path,
config,
local_borg_version,
rlist_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, the
arguments to the rlist action, and local and remote Borg paths, return a command as a tuple to
list archives with a repository.
Given a local or remote repository path, a configuration dict, the local Borg version, the
arguments to the rlist action, global arguments as an argparse.Namespace instance, and local and
remote Borg paths, return a command as a tuple to list archives with a repository.
'''
lock_wait = storage_config.get('lock_wait', None)
return (
(
local_path,
@ -81,7 +88,8 @@ def make_rlist_command(
else ()
)
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('lock-wait', lock_wait)
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', config.get('lock_wait'))
+ (
(
flags.make_flags('match-archives', f'sh:{rlist_arguments.prefix}*')
@ -89,35 +97,51 @@ def make_rlist_command(
else flags.make_flags('glob-archives', f'{rlist_arguments.prefix}*')
)
if rlist_arguments.prefix
else ()
else (
flags.make_match_archives_flags(
rlist_arguments.match_archives or config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
)
)
+ flags.make_flags_from_arguments(rlist_arguments, excludes=MAKE_FLAGS_EXCLUDES)
+ flags.make_repository_flags(repository, local_borg_version)
+ flags.make_repository_flags(repository_path, local_borg_version)
)
def list_repository(
repository,
storage_config,
repository_path,
config,
local_borg_version,
rlist_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a storage config dict, the local Borg version, the
arguments to the list action, and local and remote Borg paths, display the output of listing
Borg archives in the given repository (or return JSON output).
Given a local or remote repository path, a configuration dict, the local Borg version, the
arguments to the list action, global arguments as an argparse.Namespace instance, and local and
remote Borg paths, display the output of listing Borg archives in the given repository (or
return JSON output).
'''
borgmatic.logger.add_custom_log_levels()
borg_environment = environment.make_environment(storage_config)
borg_environment = environment.make_environment(config)
main_command = make_rlist_command(
repository, storage_config, local_borg_version, rlist_arguments, local_path, remote_path
repository_path,
config,
local_borg_version,
rlist_arguments,
global_arguments,
local_path,
remote_path,
)
if rlist_arguments.json:
return execute_command_and_capture_output(main_command, extra_environment=borg_environment)
return execute_command_and_capture_output(
main_command, extra_environment=borg_environment, borg_local_path=local_path
)
else:
execute_command(
main_command,

View File

@ -9,16 +9,18 @@ logger = logging.getLogger(__name__)
def transfer_archives(
dry_run,
repository,
storage_config,
repository_path,
config,
local_borg_version,
transfer_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a dry-run flag, a local or remote repository path, a storage config dict, the local Borg
version, and the arguments to the transfer action, transfer archives to the given repository.
Given a dry-run flag, a local or remote repository path, a configuration dict, the local Borg
version, the arguments to the transfer action, and global arguments as an argparse.Namespace
instance, transfer archives to the given repository.
'''
borgmatic.logger.add_custom_log_levels()
@ -27,18 +29,24 @@ def transfer_archives(
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_flags('remote-path', remote_path)
+ flags.make_flags('lock-wait', storage_config.get('lock_wait', None))
+ (('--progress',) if transfer_arguments.progress else ())
+ flags.make_flags('log-json', global_arguments.log_json)
+ flags.make_flags('lock-wait', config.get('lock_wait', None))
+ (
flags.make_flags(
'match-archives', transfer_arguments.match_archives or transfer_arguments.archive
flags.make_flags_from_arguments(
transfer_arguments,
excludes=('repository', 'source_repository', 'archive', 'match_archives'),
)
or (
flags.make_match_archives_flags(
transfer_arguments.match_archives
or transfer_arguments.archive
or config.get('match_archives'),
config.get('archive_name_format'),
local_borg_version,
)
)
)
+ flags.make_flags_from_arguments(
transfer_arguments,
excludes=('repository', 'source_repository', 'archive', 'match_archives'),
)
+ flags.make_repository_flags(repository, local_borg_version)
+ flags.make_repository_flags(repository_path, local_borg_version)
+ flags.make_flags('other-repo', transfer_arguments.source_repository)
+ flags.make_flags('dry-run', dry_run)
)
@ -48,5 +56,5 @@ def transfer_archives(
output_log_level=logging.ANSWER,
output_file=DO_NOT_CAPTURE if transfer_arguments.progress else None,
borg_local_path=local_path,
extra_environment=environment.make_environment(storage_config),
extra_environment=environment.make_environment(config),
)
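
The transfer flags fall through several sources when picking --match-archives. A sketch of that precedence with hypothetical values:

# Sketch of the --match-archives fallback chain above: command-line
# --match-archives wins, then a positional archive, then the configured
# match_archives option; flags.make_match_archives_flags() would also consult
# config.get('archive_name_format') as a last resort.
def choose_match_archives(arguments_match, archive, config):
    return (
        arguments_match
        or archive
        or config.get('match_archives')
    )

print(choose_match_archives(None, 'my-archive', {'match_archives': 'sh:host-*'}))
# my-archive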

View File

@ -6,9 +6,9 @@ from borgmatic.execute import execute_command_and_capture_output
logger = logging.getLogger(__name__)
def local_borg_version(storage_config, local_path='borg'):
def local_borg_version(config, local_path='borg'):
'''
Given a storage configuration dict and a local Borg binary path, return a version string for it.
Given a configuration dict and a local Borg binary path, return a version string for it.
Raise OSError or CalledProcessError if there is a problem running Borg.
Raise ValueError if the version cannot be parsed.
@ -19,7 +19,9 @@ def local_borg_version(storage_config, local_path='borg'):
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
)
output = execute_command_and_capture_output(
full_command, extra_environment=environment.make_environment(storage_config),
full_command,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
)
try:

File diff suppressed because it is too large

View File

@ -8,12 +8,19 @@ from queue import Queue
from subprocess import CalledProcessError
import colorama
import pkg_resources
try:
import importlib_metadata
except ModuleNotFoundError: # pragma: nocover
import importlib.metadata as importlib_metadata
import borgmatic.actions.borg
import borgmatic.actions.break_lock
import borgmatic.actions.check
import borgmatic.actions.compact
import borgmatic.actions.config.bootstrap
import borgmatic.actions.config.generate
import borgmatic.actions.config.validate
import borgmatic.actions.create
import borgmatic.actions.export_tar
import borgmatic.actions.extract
@ -26,20 +33,19 @@ import borgmatic.actions.restore
import borgmatic.actions.rinfo
import borgmatic.actions.rlist
import borgmatic.actions.transfer
import borgmatic.commands.completion
import borgmatic.commands.completion.bash
import borgmatic.commands.completion.fish
from borgmatic.borg import umount as borg_umount
from borgmatic.borg import version as borg_version
from borgmatic.commands.arguments import parse_arguments
from borgmatic.config import checks, collect, convert, validate
from borgmatic.config import checks, collect, validate
from borgmatic.hooks import command, dispatch, monitor
from borgmatic.logger import add_custom_log_levels, configure_logging, should_do_markup
from borgmatic.logger import DISABLED, add_custom_log_levels, configure_logging, should_do_markup
from borgmatic.signals import configure_signals
from borgmatic.verbosity import verbosity_to_log_level
logger = logging.getLogger(__name__)
LEGACY_CONFIG_PATH = '/etc/borgmatic/config'
def run_configuration(config_filename, config, arguments):
'''
@ -52,16 +58,12 @@ def run_configuration(config_filename, config, arguments):
* JSON output strings from successfully executing any actions that produce JSON
* logging.LogRecord instances containing errors from any actions or backup hooks that fail
'''
(location, storage, retention, consistency, hooks) = (
config.get(section_name, {})
for section_name in ('location', 'storage', 'retention', 'consistency', 'hooks')
)
global_arguments = arguments['global']
local_path = location.get('local_path', 'borg')
remote_path = location.get('remote_path')
retries = storage.get('retries', 0)
retry_wait = storage.get('retry_wait', 0)
local_path = config.get('local_path', 'borg')
remote_path = config.get('remote_path')
retries = config.get('retries', 0)
retry_wait = config.get('retry_wait', 0)
encountered_error = None
error_repository = ''
using_primary_action = {'create', 'prune', 'compact', 'check'}.intersection(arguments)
@ -70,18 +72,19 @@ def run_configuration(config_filename, config, arguments):
action_names = [
action for action in arguments.keys() if action != 'global'
]
monitoring_hooks_are_activated = using_primary_action and monitoring_log_level != DISABLED
try:
local_borg_version = borg_version.local_borg_version(storage, local_path)
local_borg_version = borg_version.local_borg_version(config, local_path)
except (OSError, CalledProcessError, ValueError) as error:
yield from log_error_records(f'{config_filename}: Error getting local Borg version', error)
return
try:
if using_primary_action and not skip_monitoring:
if using_primary_action and not skip_monitoring and monitoring_hooks_are_activated:
dispatch.call_hooks(
'initialize_monitor',
hooks,
config,
config_filename,
monitor.MONITOR_HOOK_NAMES,
monitoring_log_level,
@ -91,7 +94,7 @@ def run_configuration(config_filename, config, arguments):
for action_name in action_names:
dispatch.call_hooks(
'ping_monitor',
hooks,
config,
config_filename,
monitor.MONITOR_HOOK_NAMES,
monitor.State.START,
@ -108,46 +111,47 @@ def run_configuration(config_filename, config, arguments):
if not encountered_error:
repo_queue = Queue()
for repo in location['repositories']:
for repo in config['repositories']:
repo_queue.put(
(repo, 0),
)
while not repo_queue.empty():
repository_path, retry_num = repo_queue.get()
repository, retry_num = repo_queue.get()
logger.debug(
f'{repository.get("label", repository["path"])}: Running actions for repository'
)
timeout = retry_num * retry_wait
if timeout:
logger.warning(f'{config_filename}: Sleeping {timeout}s before next retry')
logger.warning(
f'{repository.get("label", repository["path"])}: Sleeping {timeout}s before next retry'
)
time.sleep(timeout)
try:
yield from run_actions(
arguments=arguments,
config_filename=config_filename,
location=location,
storage=storage,
retention=retention,
consistency=consistency,
hooks=hooks,
config=config,
local_path=local_path,
remote_path=remote_path,
local_borg_version=local_borg_version,
repository_path=repository_path,
repository=repository,
)
except (OSError, CalledProcessError, ValueError) as error:
if retry_num < retries:
repo_queue.put(
(repository_path, retry_num + 1),
(repository, retry_num + 1),
)
tuple( # Consume the generator so as to trigger logging.
log_error_records(
f'{repository_path}: Error running actions for repository',
f'{repository.get("label", repository["path"])}: Error running actions for repository',
error,
levelno=logging.WARNING,
log_command_error_output=True,
)
)
logger.warning(
f'{config_filename}: Retrying... attempt {retry_num + 1}/{retries}'
f'{repository.get("label", repository["path"])}: Retrying... attempt {retry_num + 1}/{retries}'
)
continue
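
The retry handling above re-queues a failing repository with an incremented retry count, and the sleep grows linearly with that count. A runnable sketch of the same pattern, with failures simulated and the sleep replaced by a print:

# Sketch of the linear retry backoff: with retries=2 and retry_wait=5, a
# repeatedly failing repository waits 5s before attempt 1 and 10s before
# attempt 2 before finally giving up.
from queue import Queue

retries, retry_wait = 2, 5
repo_queue = Queue()
repo_queue.put(({'path': 'repo.borg'}, 0))

while not repo_queue.empty():
    repository, retry_num = repo_queue.get()
    timeout = retry_num * retry_wait
    print(f'waiting {timeout}s, attempt {retry_num}')
    failed = True  # pretend every attempt fails
    if failed and retry_num < retries:
        repo_queue.put((repository, retry_num + 1))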
@ -155,18 +159,19 @@ def run_configuration(config_filename, config, arguments):
return
yield from log_error_records(
f'{repository_path}: Error running actions for repository', error
f'{repository.get("label", repository["path"])}: Error running actions for repository',
error,
)
encountered_error = error
error_repository = repository_path
error_repository = repository['path']
try:
if using_primary_action and not skip_monitoring:
if using_primary_action and not skip_monitoring and monitoring_hooks_are_activated:
# send logs irrespective of error
for action_name in action_names:
dispatch.call_hooks(
'ping_monitor',
hooks,
config,
config_filename,
monitor.MONITOR_HOOK_NAMES,
monitor.State.LOG,
@ -179,15 +184,15 @@ def run_configuration(config_filename, config, arguments):
return
encountered_error = error
yield from log_error_records(f'{repository_path}: Error pinging monitor', error)
yield from log_error_records(f'{repository["path"]}: Error pinging monitor', error)
if not encountered_error:
try:
if using_primary_action and not skip_monitoring:
if using_primary_action and not skip_monitoring and monitoring_hooks_are_activated:
for action_name in action_names:
dispatch.call_hooks(
'ping_monitor',
hooks,
config,
config_filename,
monitor.MONITOR_HOOK_NAMES,
monitor.State.FINISH,
@ -197,7 +202,7 @@ def run_configuration(config_filename, config, arguments):
)
dispatch.call_hooks(
'destroy_monitor',
hooks,
config,
config_filename,
monitor.MONITOR_HOOK_NAMES,
monitoring_log_level,
@ -213,8 +218,8 @@ def run_configuration(config_filename, config, arguments):
if encountered_error and using_primary_action:
try:
command.execute_hook(
hooks.get('on_error'),
hooks.get('umask'),
config.get('on_error'),
config.get('umask'),
config_filename,
'on-error',
global_arguments.dry_run,
@ -225,7 +230,7 @@ def run_configuration(config_filename, config, arguments):
for action_name in action_names:
dispatch.call_hooks(
'ping_monitor',
hooks,
config,
config_filename,
monitor.MONITOR_HOOK_NAMES,
monitor.State.FAIL,
@ -235,7 +240,7 @@ def run_configuration(config_filename, config, arguments):
)
dispatch.call_hooks(
'destroy_monitor',
hooks,
config,
config_filename,
monitor.MONITOR_HOOK_NAMES,
monitoring_log_level,
@ -252,15 +257,11 @@ def run_actions(
*,
arguments,
config_filename,
location,
storage,
retention,
consistency,
hooks,
config,
local_path,
remote_path,
local_borg_version,
repository_path,
repository,
):
'''
Given parsed command-line arguments as an argparse.ArgumentParser instance, the configuration
@ -275,29 +276,30 @@ def run_actions(
invalid.
'''
add_custom_log_levels()
repository = os.path.expanduser(repository_path)
repository_path = os.path.expanduser(repository['path'])
global_arguments = arguments['global']
dry_run_label = ' (dry run; not making any changes)' if global_arguments.dry_run else ''
hook_context = {
'repository': repository_path,
# Deprecated: For backwards compatibility with borgmatic < 1.6.0.
'repositories': ','.join(location['repositories']),
'repositories': ','.join([repo['path'] for repo in config['repositories']]),
'log_file': global_arguments.log_file if global_arguments.log_file else '',
}
command.execute_hook(
hooks.get('before_actions'),
hooks.get('umask'),
config.get('before_actions'),
config.get('umask'),
config_filename,
'pre-actions',
global_arguments.dry_run,
**hook_context,
)
for (action_name, action_arguments) in arguments.items():
for action_name, action_arguments in arguments.items():
if action_name == 'rcreate':
borgmatic.actions.rcreate.run_rcreate(
repository,
storage,
config,
local_borg_version,
action_arguments,
global_arguments,
@ -307,7 +309,7 @@ def run_actions(
elif action_name == 'transfer':
borgmatic.actions.transfer.run_transfer(
repository,
storage,
config,
local_borg_version,
action_arguments,
global_arguments,
@ -318,9 +320,7 @@ def run_actions(
yield from borgmatic.actions.create.run_create(
config_filename,
repository,
location,
storage,
hooks,
config,
hook_context,
local_borg_version,
action_arguments,
@ -333,9 +333,7 @@ def run_actions(
borgmatic.actions.prune.run_prune(
config_filename,
repository,
storage,
retention,
hooks,
config,
hook_context,
local_borg_version,
action_arguments,
@ -348,9 +346,7 @@ def run_actions(
borgmatic.actions.compact.run_compact(
config_filename,
repository,
storage,
retention,
hooks,
config,
hook_context,
local_borg_version,
action_arguments,
@ -360,14 +356,11 @@ def run_actions(
remote_path,
)
elif action_name == 'check':
if checks.repository_enabled_for_checks(repository, consistency):
if checks.repository_enabled_for_checks(repository, config):
borgmatic.actions.check.run_check(
config_filename,
repository,
location,
storage,
consistency,
hooks,
config,
hook_context,
local_borg_version,
action_arguments,
@ -379,9 +372,7 @@ def run_actions(
borgmatic.actions.extract.run_extract(
config_filename,
repository,
location,
storage,
hooks,
config,
hook_context,
local_borg_version,
action_arguments,
@ -392,7 +383,7 @@ def run_actions(
elif action_name == 'export-tar':
borgmatic.actions.export_tar.run_export_tar(
repository,
storage,
config,
local_borg_version,
action_arguments,
global_arguments,
@ -402,18 +393,17 @@ def run_actions(
elif action_name == 'mount':
borgmatic.actions.mount.run_mount(
repository,
storage,
config,
local_borg_version,
arguments['mount'],
action_arguments,
global_arguments,
local_path,
remote_path,
)
elif action_name == 'restore':
borgmatic.actions.restore.run_restore(
repository,
location,
storage,
hooks,
config,
local_borg_version,
action_arguments,
global_arguments,
@ -423,61 +413,67 @@ def run_actions(
elif action_name == 'rlist':
yield from borgmatic.actions.rlist.run_rlist(
repository,
storage,
config,
local_borg_version,
action_arguments,
global_arguments,
local_path,
remote_path,
)
elif action_name == 'list':
yield from borgmatic.actions.list.run_list(
repository,
storage,
config,
local_borg_version,
action_arguments,
global_arguments,
local_path,
remote_path,
)
elif action_name == 'rinfo':
yield from borgmatic.actions.rinfo.run_rinfo(
repository,
storage,
config,
local_borg_version,
action_arguments,
global_arguments,
local_path,
remote_path,
)
elif action_name == 'info':
yield from borgmatic.actions.info.run_info(
repository,
storage,
config,
local_borg_version,
action_arguments,
global_arguments,
local_path,
remote_path,
)
elif action_name == 'break-lock':
borgmatic.actions.break_lock.run_break_lock(
repository,
storage,
config,
local_borg_version,
arguments['break-lock'],
action_arguments,
global_arguments,
local_path,
remote_path,
)
elif action_name == 'borg':
borgmatic.actions.borg.run_borg(
repository,
storage,
config,
local_borg_version,
action_arguments,
global_arguments,
local_path,
remote_path,
)
command.execute_hook(
hooks.get('after_actions'),
hooks.get('umask'),
config.get('after_actions'),
config.get('umask'),
config_filename,
'post-actions',
global_arguments.dry_run,
@ -490,6 +486,9 @@ def load_configurations(config_filenames, overrides=None, resolve_env=True):
Given a sequence of configuration filenames, load and validate each configuration file. Return
the results as a tuple of: dict of configuration filename to corresponding parsed configuration,
and sequence of logging.LogRecord instances containing any parse errors.
Log records are returned here instead of being logged directly because logging isn't yet
initialized at this point!
'''
# Dict mapping from config filename to corresponding parsed config dict.
configs = collections.OrderedDict()
@ -497,6 +496,17 @@ def load_configurations(config_filenames, overrides=None, resolve_env=True):
# Parse and load each configuration file.
for config_filename in config_filenames:
logs.extend(
[
logging.makeLogRecord(
dict(
levelno=logging.DEBUG,
levelname='DEBUG',
msg=f'{config_filename}: Loading configuration file',
)
),
]
)
try:
configs[config_filename], parse_logs = validate.parse_configuration(
config_filename, validate.schema_filename(), overrides, resolve_env
@ -546,6 +556,9 @@ def log_record(suppress_log=False, **kwargs):
return record
MAX_CAPTURED_OUTPUT_LENGTH = 1000
def log_error_records(
message, error=None, levelno=logging.CRITICAL, log_command_error_output=False
):
@ -568,12 +581,18 @@ def log_error_records(
except CalledProcessError as error:
yield log_record(levelno=levelno, levelname=level_name, msg=message)
if error.output:
try:
output = error.output.decode('utf-8')
except (UnicodeDecodeError, AttributeError):
output = error.output
# Suppress these logs for now and save full error output for the log summary at the end.
yield log_record(
levelno=levelno,
levelname=level_name,
msg=error.output,
suppress_log=not log_command_error_output,
msg=output[:MAX_CAPTURED_OUTPUT_LENGTH]
+ ' ...' * (len(output) > MAX_CAPTURED_OUTPUT_LENGTH),
suppress_log=True,
)
yield log_record(levelno=levelno, levelname=level_name, msg=error)
except (ValueError, OSError) as error:
@ -590,12 +609,106 @@ def get_local_path(configs):
Arbitrarily return the local path from the first configuration dict. Default to "borg" if not
set.
'''
return next(iter(configs.values())).get('location', {}).get('local_path', 'borg')
return next(iter(configs.values())).get('local_path', 'borg')
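
The output truncation added to log_error_records above leans on string-by-boolean multiplication. A worked example:

# Sketch of the truncation idiom: ' ...' * True is ' ...', while
# ' ...' * False is '', so the suffix only appears when output was cut.
MAX_CAPTURED_OUTPUT_LENGTH = 10
output = 'a' * 25
print(output[:MAX_CAPTURED_OUTPUT_LENGTH] + ' ...' * (len(output) > MAX_CAPTURED_OUTPUT_LENGTH))
# aaaaaaaaaa ...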
def collect_highlander_action_summary_logs(configs, arguments, configuration_parse_errors):
'''
Given a dict of configuration filename to corresponding parsed configuration, parsed
command-line arguments as a dict from subparser name to a parsed namespace of arguments, and
whether any configuration files encountered errors during parsing, run a highlander action
specified in the arguments, if any, and yield a series of logging.LogRecord instances containing
summary information.
A highlander action is an action that cannot coexist with other actions on the borgmatic
command-line, and borgmatic exits after processing such an action.
'''
add_custom_log_levels()
if 'bootstrap' in arguments:
try:
# No configuration file is needed for bootstrap.
local_borg_version = borg_version.local_borg_version({}, 'borg')
except (OSError, CalledProcessError, ValueError) as error:
yield from log_error_records('Error getting local Borg version', error)
return
try:
borgmatic.actions.config.bootstrap.run_bootstrap(
arguments['bootstrap'], arguments['global'], local_borg_version
)
yield logging.makeLogRecord(
dict(
levelno=logging.ANSWER,
levelname='ANSWER',
msg='Bootstrap successful',
)
)
except (
CalledProcessError,
ValueError,
OSError,
) as error:
yield from log_error_records(error)
return
if 'generate' in arguments:
try:
borgmatic.actions.config.generate.run_generate(
arguments['generate'], arguments['global']
)
yield logging.makeLogRecord(
dict(
levelno=logging.ANSWER,
levelname='ANSWER',
msg='Generate successful',
)
)
except (
CalledProcessError,
ValueError,
OSError,
) as error:
yield from log_error_records(error)
return
if 'validate' in arguments:
if configuration_parse_errors:
yield logging.makeLogRecord(
dict(
levelno=logging.CRITICAL,
levelname='CRITICAL',
msg='Configuration validation failed',
)
)
return
try:
borgmatic.actions.config.validate.run_validate(arguments['validate'], configs)
yield logging.makeLogRecord(
dict(
levelno=logging.ANSWER,
levelname='ANSWER',
msg='All configuration files are valid',
)
)
except (
CalledProcessError,
ValueError,
OSError,
) as error:
yield from log_error_records(error)
return
def collect_configuration_run_summary_logs(configs, arguments):
'''
Given a dict of configuration filename to corresponding parsed configuration, and parsed
Given a dict of configuration filename to corresponding parsed configuration and parsed
command-line arguments as a dict from subparser name to a parsed namespace of arguments, run
each configuration file and yield a series of logging.LogRecord instances containing summary
information about each run.
@ -629,10 +742,9 @@ def collect_configuration_run_summary_logs(configs, arguments):
if 'create' in arguments:
try:
for config_filename, config in configs.items():
hooks = config.get('hooks', {})
command.execute_hook(
hooks.get('before_everything'),
hooks.get('umask'),
config.get('before_everything'),
config.get('umask'),
config_filename,
'pre-everything',
arguments['global'].dry_run,
@ -677,10 +789,9 @@ def collect_configuration_run_summary_logs(configs, arguments):
if 'create' in arguments:
try:
for config_filename, config in configs.items():
hooks = config.get('hooks', {})
command.execute_hook(
hooks.get('after_everything'),
hooks.get('umask'),
config.get('after_everything'),
config.get('umask'),
config_filename,
'post-everything',
arguments['global'].dry_run,
@ -698,7 +809,7 @@ def exit_with_help_link(): # pragma: no cover
sys.exit(1)
def main(): # pragma: no cover
def main(extra_summary_logs=[]): # pragma: no cover
configure_signals()
try:
@ -716,16 +827,23 @@ def main(): # pragma: no cover
global_arguments = arguments['global']
if global_arguments.version:
print(pkg_resources.require('borgmatic')[0].version)
print(importlib_metadata.version('borgmatic'))
sys.exit(0)
if global_arguments.bash_completion:
print(borgmatic.commands.completion.bash_completion())
print(borgmatic.commands.completion.bash.bash_completion())
sys.exit(0)
if global_arguments.fish_completion:
print(borgmatic.commands.completion.fish.fish_completion())
sys.exit(0)
config_filenames = tuple(collect.collect_config_filenames(global_arguments.config_paths))
global_arguments.used_config_paths = list(config_filenames)
configs, parse_logs = load_configurations(
config_filenames, global_arguments.overrides, global_arguments.resolve_env
)
configuration_parse_errors = (
(max(log.levelno for log in parse_logs) >= logging.CRITICAL) if parse_logs else False
)
any_json_flags = any(
getattr(sub_arguments, 'json', False) for sub_arguments in arguments.values()
@ -741,16 +859,25 @@ def main(): # pragma: no cover
verbosity_to_log_level(global_arguments.log_file_verbosity),
verbosity_to_log_level(global_arguments.monitoring_verbosity),
global_arguments.log_file,
global_arguments.log_file_format,
)
except (FileNotFoundError, PermissionError) as error:
configure_logging(logging.CRITICAL)
logger.critical(f'Error configuring logging: {error}')
exit_with_help_link()
logger.debug('Ensuring legacy configuration is upgraded')
convert.guard_configuration_upgraded(LEGACY_CONFIG_PATH, config_filenames)
summary_logs = parse_logs + list(collect_configuration_run_summary_logs(configs, arguments))
summary_logs = (
extra_summary_logs
+ parse_logs
+ (
list(
collect_highlander_action_summary_logs(
configs, arguments, configuration_parse_errors
)
)
or list(collect_configuration_run_summary_logs(configs, arguments))
)
)
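
The summary_logs expression uses list(...) or list(...): an empty list is falsy, so per-configuration run summaries are only collected when no highlander action produced any logs. For example:

# Sketch of the `or` fallback above: highlander logs win when present,
# otherwise the normal run summaries are computed and used instead.
def highlander_logs():
    return []  # no highlander action ran

def run_summary_logs():
    return ['ran 2 configs']

print(list(highlander_logs()) or list(run_summary_logs()))
# ['ran 2 configs']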
summary_logs_max_level = max(log.levelno for log in summary_logs)
for message in ('', 'summary:'):

View File

@ -0,0 +1,36 @@
import borgmatic.commands.arguments
def upgrade_message(language: str, upgrade_command: str, completion_file: str):
return f'''
Your {language} completions script is from a different version of borgmatic than is
currently installed. Please upgrade your script so your completions match the
command-line flags in your installed borgmatic! Try this to upgrade:
{upgrade_command}
source {completion_file}
'''
def available_actions(subparsers, current_action=None):
'''
Given subparsers as an argparse._SubParsersAction instance and a current action name (if
any), return the action names that can follow the current action on a command-line.
This takes into account which sub-actions the current action supports. For instance, if
"bootstrap" is a sub-action for "config", then "bootstrap" should be able to follow a current
action of "config" but not "list".
'''
action_to_subactions = borgmatic.commands.arguments.get_subactions_for_actions(
subparsers.choices
)
current_subactions = action_to_subactions.get(current_action)
if current_subactions:
return current_subactions
all_subactions = set(
subaction for subactions in action_to_subactions.values() for subaction in subactions
)
return tuple(action for action in subparsers.choices.keys() if action not in all_subactions)
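
A toy illustration of the available_actions() logic above, using a hypothetical action-to-subaction mapping instead of real parsers:

# Sketch with hypothetical data: after "config", only its sub-actions
# complete; at top level, sub-actions are hidden from the action list.
action_to_subactions = {'config': ('bootstrap', 'generate', 'validate'), 'list': ()}
all_actions = ('config', 'bootstrap', 'generate', 'validate', 'list', 'create')

def available(current_action=None):
    current_subactions = action_to_subactions.get(current_action)
    if current_subactions:
        return current_subactions
    all_subactions = {
        subaction for subactions in action_to_subactions.values() for subaction in subactions
    }
    return tuple(action for action in all_actions if action not in all_subactions)

print(available('config'))  # ('bootstrap', 'generate', 'validate')
print(available())          # ('config', 'list', 'create')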

View File

@ -1,13 +1,5 @@
from borgmatic.commands import arguments
UPGRADE_MESSAGE = '''
Your bash completions script is from a different version of borgmatic than is
currently installed. Please upgrade your script so your completions match the
command-line flags in your installed borgmatic! Try this to upgrade:
sudo sh -c "borgmatic --bash-completion > $BASH_SOURCE"
source $BASH_SOURCE
'''
import borgmatic.commands.arguments
import borgmatic.commands.completion.actions
def parser_flags(parser):
@ -23,9 +15,12 @@ def bash_completion():
Return a bash completion script for the borgmatic command. Produce this by introspecting
borgmatic's command-line argument parsers.
'''
top_level_parser, subparsers = arguments.make_parsers()
global_flags = parser_flags(top_level_parser)
actions = ' '.join(subparsers.choices.keys())
(
unused_global_parser,
action_parsers,
global_plus_action_parser,
) = borgmatic.commands.arguments.make_parsers()
global_flags = parser_flags(global_plus_action_parser)
# Avert your eyes.
return '\n'.join(
@ -34,7 +29,11 @@ def bash_completion():
' local this_script="$(cat "$BASH_SOURCE" 2> /dev/null)"',
' local installed_script="$(borgmatic --bash-completion 2> /dev/null)"',
' if [ "$this_script" != "$installed_script" ] && [ "$installed_script" != "" ];'
f' then cat << EOF\n{UPGRADE_MESSAGE}\nEOF',
f''' then cat << EOF\n{borgmatic.commands.completion.actions.upgrade_message(
'bash',
'sudo sh -c "borgmatic --bash-completion > $BASH_SOURCE"',
'$BASH_SOURCE',
)}\nEOF''',
' fi',
'}',
'complete_borgmatic() {',
@ -44,12 +43,22 @@ def bash_completion():
COMPREPLY=($(compgen -W "%s %s %s" -- "${COMP_WORDS[COMP_CWORD]}"))
return 0
fi'''
% (action, parser_flags(subparser), actions, global_flags)
for action, subparser in subparsers.choices.items()
% (
action,
parser_flags(action_parser),
' '.join(
borgmatic.commands.completion.actions.available_actions(action_parsers, action)
),
global_flags,
)
for action, action_parser in reversed(action_parsers.choices.items())
)
+ (
' COMPREPLY=($(compgen -W "%s %s" -- "${COMP_WORDS[COMP_CWORD]}"))' # noqa: FS003
% (actions, global_flags),
% (
' '.join(borgmatic.commands.completion.actions.available_actions(action_parsers)),
global_flags,
),
' (check_version &)',
'}',
'\ncomplete -o bashdefault -o default -F complete_borgmatic borgmatic',

View File

@ -0,0 +1,176 @@
import shlex
from argparse import Action
from textwrap import dedent
import borgmatic.commands.arguments
import borgmatic.commands.completion.actions
def has_file_options(action: Action):
'''
Given an argparse.Action instance, return True if it takes a file argument.
'''
return action.metavar in (
'FILENAME',
'PATH',
) or action.dest in ('config_paths',)
def has_choice_options(action: Action):
'''
Given an argparse.Action instance, return True if it takes one of a predefined set of arguments.
'''
return action.choices is not None
def has_unknown_required_param_options(action: Action):
'''
A catch-all for options that take a required parameter, but we don't know what the parameter is.
This should be used last. These are actions that take something like a glob, a list of numbers, or a string.
Actions that match this pattern should not show the normal arguments, because those are unlikely to be valid.
'''
return (
action.required is True
or action.nargs
in (
'+',
'*',
)
or action.metavar in ('PATTERN', 'KEYS', 'N')
or (action.type is not None and action.default is None)
)
def has_exact_options(action: Action):
return (
has_file_options(action)
or has_choice_options(action)
or has_unknown_required_param_options(action)
)
def exact_options_completion(action: Action):
'''
Given an argparse.Action instance, return a completion invocation that forces file completions,
completion from a predefined set of choices, or simply requires that some value follow the action,
if the action takes such an argument and was the last action on the command line prior to the cursor.
Otherwise, return an empty string.
'''
if not has_exact_options(action):
return ''
args = ' '.join(action.option_strings)
if has_file_options(action):
return f'''\ncomplete -c borgmatic -Fr -n "__borgmatic_current_arg {args}"'''
if has_choice_options(action):
return f'''\ncomplete -c borgmatic -f -a '{' '.join(map(str, action.choices))}' -n "__borgmatic_current_arg {args}"'''
if has_unknown_required_param_options(action):
return f'''\ncomplete -c borgmatic -x -n "__borgmatic_current_arg {args}"'''
raise ValueError(
f'Unexpected action: {action} passes has_exact_options but has no choices produced'
)
def dedent_strip_as_tuple(string: str):
'''
Dedent a string, then strip it to avoid requiring your first line to have content, then return a tuple of the string.
Makes it easier to write multiline strings for completions when you join them with a tuple.
'''
return (dedent(string).strip('\n'),)
def fish_completion():
'''
Return a fish completion script for the borgmatic command. Produce this by introspecting
borgmatic's command-line argument parsers.
'''
(
unused_global_parser,
action_parsers,
global_plus_action_parser,
) = borgmatic.commands.arguments.make_parsers()
all_action_parsers = ' '.join(action for action in action_parsers.choices.keys())
exact_option_args = tuple(
' '.join(action.option_strings)
for action_parser in action_parsers.choices.values()
for action in action_parser._actions
if has_exact_options(action)
) + tuple(
' '.join(action.option_strings)
for action in global_plus_action_parser._actions
if len(action.option_strings) > 0
if has_exact_options(action)
)
# Avert your eyes.
return '\n'.join(
dedent_strip_as_tuple(
f'''
function __borgmatic_check_version
set -fx this_filename (status current-filename)
fish -c '
if test -f "$this_filename"
set this_script (cat $this_filename 2> /dev/null)
set installed_script (borgmatic --fish-completion 2> /dev/null)
if [ "$this_script" != "$installed_script" ] && [ "$installed_script" != "" ]
echo "{borgmatic.commands.completion.actions.upgrade_message(
'fish',
'borgmatic --fish-completion | sudo tee $this_filename',
'$this_filename',
)}"
end
end
' &
end
__borgmatic_check_version
function __borgmatic_current_arg --description 'Check if any of the given arguments are the last on the command line before the cursor'
set -l all_args (commandline -poc)
# premature optimization to avoid iterating all args if there aren't enough
# to have a last arg beyond borgmatic
if [ (count $all_args) -lt 2 ]
return 1
end
for arg in $argv
if [ "$arg" = "$all_args[-1]" ]
return 0
end
end
return 1
end
set --local action_parser_condition "not __fish_seen_subcommand_from {all_action_parsers}"
set --local exact_option_condition "not __borgmatic_current_arg {' '.join(exact_option_args)}"
'''
)
+ ('\n# action_parser completions',)
+ tuple(
f'''complete -c borgmatic -f -n "$action_parser_condition" -n "$exact_option_condition" -a '{action_name}' -d {shlex.quote(action_parser.description)}'''
for action_name, action_parser in action_parsers.choices.items()
)
+ ('\n# global flags',)
+ tuple(
# -n is checked in order, so put faster / more likely to be true checks first
f'''complete -c borgmatic -f -n "$exact_option_condition" -a '{' '.join(action.option_strings)}' -d {shlex.quote(action.help)}{exact_options_completion(action)}'''
for action in global_plus_action_parser._actions
# ignore the noargs action, as this is an impossible completion for fish
if len(action.option_strings) > 0
if 'Deprecated' not in action.help
)
+ ('\n# action_parser flags',)
+ tuple(
f'''complete -c borgmatic -f -n "$exact_option_condition" -a '{' '.join(action.option_strings)}' -d {shlex.quote(action.help)} -n "__fish_seen_subcommand_from {action_name}"{exact_options_completion(action)}'''
for action_name, action_parser in action_parsers.choices.items()
for action in action_parser._actions
if 'Deprecated' not in (action.help or ())
)
)
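
For a sense of what exact_options_completion() emits, here is the choices branch applied to a hypothetical --verbosity option (the choice values shown are illustrative):

# Sketch of the choices branch: fish is told to complete only these values
# immediately after the flag.
option_strings = ('--verbosity',)
choices = (-2, -1, 0, 1, 2)  # hypothetical values for illustration
args = ' '.join(option_strings)
print(f'''complete -c borgmatic -f -a '{' '.join(map(str, choices))}' -n "__borgmatic_current_arg {args}"''')
# complete -c borgmatic -f -a '-2 -1 0 1 2' -n "__borgmatic_current_arg --verbosity"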

View File

@ -1,102 +0,0 @@
import os
import sys
import textwrap
from argparse import ArgumentParser
from ruamel import yaml
from borgmatic.config import convert, generate, legacy, validate
DEFAULT_SOURCE_CONFIG_FILENAME = '/etc/borgmatic/config'
DEFAULT_SOURCE_EXCLUDES_FILENAME = '/etc/borgmatic/excludes'
DEFAULT_DESTINATION_CONFIG_FILENAME = '/etc/borgmatic/config.yaml'
def parse_arguments(*arguments):
'''
Given command-line arguments with which this script was invoked, parse the arguments and return
them as an ArgumentParser instance.
'''
parser = ArgumentParser(
description='''
Convert legacy INI-style borgmatic configuration and excludes files to a single YAML
configuration file. Note that this replaces any comments from the source files.
'''
)
parser.add_argument(
'-s',
'--source-config',
dest='source_config_filename',
default=DEFAULT_SOURCE_CONFIG_FILENAME,
help=f'Source INI-style configuration filename. Default: {DEFAULT_SOURCE_CONFIG_FILENAME}',
)
parser.add_argument(
'-e',
'--source-excludes',
dest='source_excludes_filename',
default=DEFAULT_SOURCE_EXCLUDES_FILENAME
if os.path.exists(DEFAULT_SOURCE_EXCLUDES_FILENAME)
else None,
help='Excludes filename',
)
parser.add_argument(
'-d',
'--destination-config',
dest='destination_config_filename',
default=DEFAULT_DESTINATION_CONFIG_FILENAME,
help=f'Destination YAML configuration filename. Default: {DEFAULT_DESTINATION_CONFIG_FILENAME}',
)
return parser.parse_args(arguments)
TEXT_WRAP_CHARACTERS = 80
def display_result(args): # pragma: no cover
result_lines = textwrap.wrap(
f'Your borgmatic configuration has been upgraded. Please review the result in {args.destination_config_filename}.',
TEXT_WRAP_CHARACTERS,
)
excludes_phrase = (
f' and {args.source_excludes_filename}' if args.source_excludes_filename else ''
)
delete_lines = textwrap.wrap(
f'Once you are satisfied, you can safely delete {args.source_config_filename}{excludes_phrase}.',
TEXT_WRAP_CHARACTERS,
)
print('\n'.join(result_lines))
print()
print('\n'.join(delete_lines))
def main(): # pragma: no cover
try:
args = parse_arguments(*sys.argv[1:])
schema = yaml.round_trip_load(open(validate.schema_filename()).read())
source_config = legacy.parse_configuration(
args.source_config_filename, legacy.CONFIG_FORMAT
)
source_config_file_mode = os.stat(args.source_config_filename).st_mode
source_excludes = (
open(args.source_excludes_filename).read().splitlines()
if args.source_excludes_filename
else []
)
destination_config = convert.convert_legacy_parsed_config(
source_config, source_excludes, schema
)
generate.write_configuration(
args.destination_config_filename,
generate.render_configuration(destination_config),
mode=source_config_file_mode,
)
display_result(args)
except (ValueError, OSError) as error:
print(error, file=sys.stderr)
sys.exit(1)

View File

@ -1,63 +1,17 @@
import logging
import sys
from argparse import ArgumentParser
from borgmatic.config import generate, validate
DEFAULT_DESTINATION_CONFIG_FILENAME = '/etc/borgmatic/config.yaml'
import borgmatic.commands.borgmatic
def parse_arguments(*arguments):
'''
Given command-line arguments with which this script was invoked, parse the arguments and return
them as an ArgumentParser instance.
'''
parser = ArgumentParser(description='Generate a sample borgmatic YAML configuration file.')
parser.add_argument(
'-s',
'--source',
dest='source_filename',
help='Optional YAML configuration file to merge into the generated configuration, useful for upgrading your configuration',
)
parser.add_argument(
'-d',
'--destination',
dest='destination_filename',
default=DEFAULT_DESTINATION_CONFIG_FILENAME,
help=f'Destination YAML configuration file, default: {DEFAULT_DESTINATION_CONFIG_FILENAME}',
)
parser.add_argument(
'--overwrite',
default=False,
action='store_true',
help='Whether to overwrite any existing destination file, defaults to false',
)
return parser.parse_args(arguments)
def main(): # pragma: no cover
try:
args = parse_arguments(*sys.argv[1:])
generate.generate_sample_configuration(
args.source_filename,
args.destination_filename,
validate.schema_filename(),
overwrite=args.overwrite,
def main():
warning_log = logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg='generate-borgmatic-config is deprecated and will be removed from a future release. Please use "borgmatic config generate" instead.',
)
)
print(f'Generated a sample configuration file at {args.destination_filename}.')
print()
if args.source_filename:
print(f'Merged in the contents of configuration file at {args.source_filename}.')
print('To review the changes made, run:')
print()
print(f' diff --unified {args.source_filename} {args.destination_filename}')
print()
print('This includes all available configuration options with example values. The few')
print('required options are indicated. Please edit the file to suit your needs.')
print()
print('If you ever need help: https://torsion.org/borgmatic/#issues')
except (ValueError, OSError) as error:
print(error, file=sys.stderr)
sys.exit(1)
sys.argv = ['borgmatic', 'config', 'generate'] + sys.argv[1:]
borgmatic.commands.borgmatic.main([warning_log])

View File

@ -1,52 +1,17 @@
import logging
import sys
from argparse import ArgumentParser
from borgmatic.config import collect, validate
logger = logging.getLogger(__name__)
import borgmatic.commands.borgmatic
def parse_arguments(*arguments):
'''
Given command-line arguments with which this script was invoked, parse the arguments and return
them as an ArgumentParser instance.
'''
config_paths = collect.get_default_config_paths()
parser = ArgumentParser(description='Validate borgmatic configuration file(s).')
parser.add_argument(
'-c',
'--config',
nargs='+',
dest='config_paths',
default=config_paths,
help=f'Configuration filenames or directories, defaults to: {config_paths}',
def main():
warning_log = logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg='validate-borgmatic-config is deprecated and will be removed from a future release. Please use "borgmatic config validate" instead.',
)
)
return parser.parse_args(arguments)
def main(): # pragma: no cover
args = parse_arguments(*sys.argv[1:])
logging.basicConfig(level=logging.INFO, format='%(message)s')
config_filenames = tuple(collect.collect_config_filenames(args.config_paths))
if len(config_filenames) == 0:
logger.critical('No files to validate found')
sys.exit(1)
found_issues = False
for config_filename in config_filenames:
try:
validate.parse_configuration(config_filename, validate.schema_filename())
except (ValueError, OSError, validate.Validation_error) as error:
logging.critical(f'{config_filename}: Error parsing configuration file')
logging.critical(error)
found_issues = True
if found_issues:
sys.exit(1)
else:
logger.info(f"All given configuration files are valid: {', '.join(config_filenames)}")
sys.argv = ['borgmatic', 'config', 'validate'] + sys.argv[1:]
borgmatic.commands.borgmatic.main([warning_log])

View File

@ -24,9 +24,9 @@ def get_default_config_paths(expand_home=True):
def collect_config_filenames(config_paths):
'''
Given a sequence of config paths, both filenames and directories, resolve that to an iterable
of files. Accomplish this by listing any given directories looking for contained config files
(ending with the ".yaml" or ".yml" extension). This is non-recursive, so any directories within the given
directories are ignored.
of absolute files. Accomplish this by listing any given directories looking for contained config
files (ending with the ".yaml" or ".yml" extension). This is non-recursive, so any directories
within the given directories are ignored.
Return paths even if they don't exist on disk, so the user can find out about missing
configuration paths. However, skip a default config path if it's missing, so the user doesn't
@ -41,7 +41,7 @@ def collect_config_filenames(config_paths):
continue
if not os.path.isdir(path) or not exists:
yield path
yield os.path.abspath(path)
continue
if not os.access(path, os.R_OK):
@ -51,4 +51,4 @@ def collect_config_filenames(config_paths):
full_filename = os.path.join(path, filename)
matching_filetype = full_filename.endswith('.yaml') or full_filename.endswith('.yml')
if matching_filetype and not os.path.isdir(full_filename):
yield full_filename
yield os.path.abspath(full_filename)
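
A small sketch of the absolute-path behavior introduced here (the relative path is hypothetical):

import os

# Both missing files and discovered .yaml/.yml files now come back as
# absolute paths, so later logging and metadata are unambiguous.
relative_path = 'configs/backup.yaml'
print(os.path.abspath(relative_path))
# e.g. /home/user/configs/backup.yaml, depending on the working directory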

View File

@ -1,95 +0,0 @@
import os
from ruamel import yaml
from borgmatic.config import generate
def _convert_section(source_section_config, section_schema):
'''
Given a legacy Parsed_config instance for a single section, convert it to its corresponding
yaml.comments.CommentedMap representation in preparation for actual serialization to YAML.
Where integer types exist in the given section schema, convert their values to integers.
'''
destination_section_config = yaml.comments.CommentedMap(
[
(
option_name,
int(option_value)
if section_schema['properties'].get(option_name, {}).get('type') == 'integer'
else option_value,
)
for option_name, option_value in source_section_config.items()
]
)
return destination_section_config
def convert_legacy_parsed_config(source_config, source_excludes, schema):
'''
Given a legacy Parsed_config instance loaded from an INI-style config file and a list of exclude
patterns, convert them to a corresponding yaml.comments.CommentedMap representation in
preparation for serialization to a single YAML config file.
Additionally, use the given schema as a source of helpful comments to include within the
returned CommentedMap.
'''
destination_config = yaml.comments.CommentedMap(
[
(section_name, _convert_section(section_config, schema['properties'][section_name]))
for section_name, section_config in source_config._asdict().items()
]
)
# Split space-separated values into actual lists, make "repository" into a list, and merge in
# excludes.
location = destination_config['location']
location['source_directories'] = source_config.location['source_directories'].split(' ')
location['repositories'] = [location.pop('repository')]
location['exclude_patterns'] = source_excludes
if source_config.consistency.get('checks'):
destination_config['consistency']['checks'] = source_config.consistency['checks'].split(' ')
# Add comments to each section, and then add comments to the fields in each section.
generate.add_comments_to_configuration_object(destination_config, schema)
for section_name, section_config in destination_config.items():
generate.add_comments_to_configuration_object(
section_config, schema['properties'][section_name], indent=generate.INDENT
)
return destination_config
class Legacy_configuration_not_upgraded(FileNotFoundError):
def __init__(self):
super(Legacy_configuration_not_upgraded, self).__init__(
'''borgmatic changed its configuration file format in version 1.1.0 from INI-style
to YAML. This better supports validation, and has a more natural way to express
lists of values. To upgrade your existing configuration, run:
sudo upgrade-borgmatic-config
That will generate a new YAML configuration file at /etc/borgmatic/config.yaml
(by default) using the values from both your existing configuration and excludes
files. The new version of borgmatic will consume the YAML configuration file
instead of the old one.'''
)
def guard_configuration_upgraded(source_config_filename, destination_config_filenames):
'''
If legacy source configuration exists but no destination upgraded configs do, raise
Legacy_configuration_not_upgraded.
The idea is that we want to alert the user about upgrading their config if they haven't already.
'''
destination_config_exists = any(
os.path.exists(filename) for filename in destination_config_filenames
)
if os.path.exists(source_config_filename) and not destination_config_exists:
raise Legacy_configuration_not_upgraded()
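To make the removed converter's job concrete, here is a hypothetical round trip under the same assumptions: parse an INI-style config with the standard library and split space-separated values into lists, as _convert_section() and convert_legacy_parsed_config() did above. The sample option values are invented for illustration.

```
from configparser import RawConfigParser

parser = RawConfigParser()
parser.read_string('[location]\nsource_directories = /home /etc\nrepository = user@host:backup.borg\n')
location = dict(parser.items('location'))

# Split the space-separated option into a list and wrap the single
# repository in a list, as the removed converter does above.
converted = {
    'source_directories': location['source_directories'].split(' '),
    'repositories': [location['repository']],
}
print(converted)
```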

View File

@ -11,7 +11,7 @@ INDENT = 4
SEQUENCE_INDENT = 2
def _insert_newline_before_comment(config, field_name):
def insert_newline_before_comment(config, field_name):
'''
Using some ruamel.yaml black magic, insert a blank line in the config right before the given
field and its comments.
@ -21,10 +21,10 @@ def _insert_newline_before_comment(config, field_name):
)
def _schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
def schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
'''
Given a loaded configuration schema, generate and return sample config for it. Include comments
for each section based on the schema "description".
for each option based on the schema "description".
'''
schema_type = schema.get('type')
example = schema.get('example')
@ -33,13 +33,13 @@ def _schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
if schema_type == 'array':
config = yaml.comments.CommentedSeq(
[_schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
[schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
)
add_comments_to_configuration_sequence(config, schema, indent=(level * INDENT))
elif schema_type == 'object':
config = yaml.comments.CommentedMap(
[
(field_name, _schema_to_sample_configuration(sub_schema, level + 1))
(field_name, schema_to_sample_configuration(sub_schema, level + 1))
for field_name, sub_schema in schema['properties'].items()
]
)
@ -53,13 +53,13 @@ def _schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
return config
def _comment_out_line(line):
def comment_out_line(line):
# If it's already commented out (or empty), there's nothing further to do!
stripped_line = line.lstrip()
if not stripped_line or stripped_line.startswith('#'):
return line
# Comment out the names of optional sections, inserting the '#' after any indent for aesthetics.
# Comment out the names of optional options, inserting the '#' after any indent for aesthetics.
matches = re.match(r'(\s*)', line)
indent_spaces = matches.group(0) if matches else ''
count_indent_spaces = len(indent_spaces)
@ -67,7 +67,7 @@ def _comment_out_line(line):
return '# '.join((indent_spaces, line[count_indent_spaces:]))
def _comment_out_optional_configuration(rendered_config):
def comment_out_optional_configuration(rendered_config):
'''
Post-process a rendered configuration string to comment out optional key/values, as determined
by a sentinel in the comment before each key.
@ -92,7 +92,7 @@ def _comment_out_optional_configuration(rendered_config):
if not line.strip():
optional = False
lines.append(_comment_out_line(line) if optional else line)
lines.append(comment_out_line(line) if optional else line)
return '\n'.join(lines)
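The commenting-out rule can be run standalone; this minimal sketch restates the shown logic outside borgmatic:

```
import re

def comment_out_line(line):
    stripped_line = line.lstrip()
    if not stripped_line or stripped_line.startswith('#'):
        return line

    # Insert the '#' after any leading indent so the sample config stays aligned.
    indent_spaces = re.match(r'(\s*)', line).group(0)
    return '# '.join((indent_spaces, line[len(indent_spaces):]))

print(comment_out_line('    keep_daily: 7'))  # -> "    # keep_daily: 7"
```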
@ -165,7 +165,6 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
return
REQUIRED_SECTION_NAMES = {'location', 'retention'}
REQUIRED_KEYS = {'source_directories', 'repositories', 'keep_daily'}
COMMENTED_OUT_SENTINEL = 'COMMENT_OUT'
@ -185,7 +184,7 @@ def add_comments_to_configuration_object(config, schema, indent=0, skip_first=Fa
# If this is an optional key, add an indicator to the comment flagging it to be commented
# out from the sample configuration. This sentinel is consumed by downstream processing that
# does the actual commenting out.
if field_name not in REQUIRED_SECTION_NAMES and field_name not in REQUIRED_KEYS:
if field_name not in REQUIRED_KEYS:
description = (
'\n'.join((description, COMMENTED_OUT_SENTINEL))
if description
@ -199,7 +198,7 @@ def add_comments_to_configuration_object(config, schema, indent=0, skip_first=Fa
config.yaml_set_comment_before_after_key(key=field_name, before=description, indent=indent)
if index > 0:
_insert_newline_before_comment(config, field_name)
insert_newline_before_comment(config, field_name)
RUAMEL_YAML_COMMENTS_INDEX = 1
@ -260,14 +259,14 @@ def merge_source_configuration_into_destination(destination_config, source_confi
)
continue
# This is some sort of scalar. Simply set it into the destination.
# This is some sort of scalar. Set it into the destination.
destination_config[field_name] = source_config[field_name]
return destination_config
def generate_sample_configuration(
source_filename, destination_filename, schema_filename, overwrite=False
dry_run, source_filename, destination_filename, schema_filename, overwrite=False
):
'''
Given an optional source configuration filename, and a required destination configuration
@ -284,11 +283,14 @@ def generate_sample_configuration(
normalize.normalize(source_filename, source_config)
destination_config = merge_source_configuration_into_destination(
_schema_to_sample_configuration(schema), source_config
schema_to_sample_configuration(schema), source_config
)
if dry_run:
return
write_configuration(
destination_filename,
_comment_out_optional_configuration(render_configuration(destination_config)),
comment_out_optional_configuration(render_configuration(destination_config)),
overwrite=overwrite,
)

View File

@ -1,146 +0,0 @@
from collections import OrderedDict, namedtuple
from configparser import RawConfigParser
Section_format = namedtuple('Section_format', ('name', 'options'))
Config_option = namedtuple('Config_option', ('name', 'value_type', 'required'))
def option(name, value_type=str, required=True):
'''
Given a config file option name, an expected type for its value, and whether it's required,
return a Config_option capturing that information.
'''
return Config_option(name, value_type, required)
CONFIG_FORMAT = (
Section_format(
'location',
(
option('source_directories'),
option('one_file_system', value_type=bool, required=False),
option('remote_path', required=False),
option('repository'),
),
),
Section_format(
'storage',
(
option('encryption_passphrase', required=False),
option('compression', required=False),
option('umask', required=False),
),
),
Section_format(
'retention',
(
option('keep_within', required=False),
option('keep_hourly', int, required=False),
option('keep_daily', int, required=False),
option('keep_weekly', int, required=False),
option('keep_monthly', int, required=False),
option('keep_yearly', int, required=False),
option('prefix', required=False),
),
),
Section_format(
'consistency', (option('checks', required=False), option('check_last', required=False))
),
)
def validate_configuration_format(parser, config_format):
'''
Given an open RawConfigParser and an expected config file format, validate that the parsed
configuration file has the expected sections, that any required options are present in those
sections, and that there aren't any unexpected options.
A section is required if any of its contained options are required.
Raise ValueError if anything is awry.
'''
section_names = set(parser.sections())
required_section_names = tuple(
section.name
for section in config_format
if any(option.required for option in section.options)
)
unknown_section_names = section_names - set(
section_format.name for section_format in config_format
)
if unknown_section_names:
raise ValueError(f"Unknown config sections found: {', '.join(unknown_section_names)}")
missing_section_names = set(required_section_names) - section_names
if missing_section_names:
raise ValueError(f"Missing config sections: {', '.join(missing_section_names)}")
for section_format in config_format:
if section_format.name not in section_names:
continue
option_names = parser.options(section_format.name)
expected_options = section_format.options
unexpected_option_names = set(option_names) - set(
option.name for option in expected_options
)
if unexpected_option_names:
raise ValueError(
f"Unexpected options found in config section {section_format.name}: {', '.join(sorted(unexpected_option_names))}",
)
missing_option_names = tuple(
option.name
for option in expected_options
if option.required
if option.name not in option_names
)
if missing_option_names:
raise ValueError(
f"Required options missing from config section {section_format.name}: {', '.join(missing_option_names)}",
)
def parse_section_options(parser, section_format):
'''
Given an open RawConfigParser and an expected section format, return the option values from that
section as a dict mapping from option name to value. Omit those options that are not present in
the parsed options.
Raise ValueError if any option values cannot be coerced to the expected Python data type.
'''
type_getter = {str: parser.get, int: parser.getint, bool: parser.getboolean}
return OrderedDict(
(option.name, type_getter[option.value_type](section_format.name, option.name))
for option in section_format.options
if parser.has_option(section_format.name, option.name)
)
def parse_configuration(config_filename, config_format):
'''
Given a config filename and an expected config file format, return the parsed configuration
as a namedtuple with one attribute for each parsed section.
Raise IOError if the file cannot be read, or ValueError if the format is not as expected.
'''
parser = RawConfigParser()
if not parser.read(config_filename):
raise ValueError(f'Configuration file cannot be opened: {config_filename}')
validate_configuration_format(parser, config_format)
# Describes a parsed configuration, where each attribute is the name of a configuration file
# section and each value is a dict of that section's parsed options.
Parsed_config = namedtuple(
'Parsed_config', (section_format.name for section_format in config_format)
)
return Parsed_config(
*(parse_section_options(parser, section_format) for section_format in config_format)
)
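The namedtuple pattern used by the removed parse_configuration() above, in miniature (section and option values here are invented for illustration):

```
from collections import namedtuple

Parsed_config = namedtuple('Parsed_config', ('location', 'storage'))
parsed = Parsed_config(
    location={'repository': 'test.borg'}, storage={'umask': '0077'}
)
print(parsed.location['repository'])  # -> test.borg
```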

View File

@ -38,6 +38,37 @@ def include_configuration(loader, filename_node, include_directory):
return load_configuration(include_filename)
def raise_retain_node_error(loader, node):
'''
Given a ruamel.yaml.loader.Loader and a YAML node, raise an error about "!retain" usage.
Raise ValueError if a mapping or sequence node is given, as that indicates that "!retain" was
used in a configuration file without a merge. In configuration files with a merge, mapping and
sequence nodes with "!retain" tags are handled by deep_merge_nodes() below.
Also raise ValueError if a scalar node is given, as "!retain" is not supported on scalar nodes.
'''
if isinstance(node, (ruamel.yaml.nodes.MappingNode, ruamel.yaml.nodes.SequenceNode)):
raise ValueError(
'The !retain tag may only be used within a configuration file containing a merged !include tag.'
)
raise ValueError('The !retain tag may only be used on a YAML mapping or sequence.')
def raise_omit_node_error(loader, node):
'''
Given a ruamel.yaml.loader.Loader and a YAML node, raise an error about "!omit" usage.
Raise ValueError unconditionally, as an "!omit" node here indicates it was used in a
configuration file without a merge. In configuration files with a merge, nodes with "!omit"
tags are handled by deep_merge_nodes() below.
'''
raise ValueError(
'The !omit tag may only be used on a scalar (e.g., string) list element within a configuration file containing a merged !include tag.'
)
class Include_constructor(ruamel.yaml.SafeConstructor):
'''
A YAML "constructor" (a ruamel.yaml concept) that supports a custom "!include" tag for including
@ -50,6 +81,8 @@ class Include_constructor(ruamel.yaml.SafeConstructor):
'!include',
functools.partial(include_configuration, include_directory=include_directory),
)
self.add_constructor('!retain', raise_retain_node_error)
self.add_constructor('!omit', raise_omit_node_error)
def flatten_mapping(self, node):
'''
@ -64,8 +97,8 @@ class Include_constructor(ruamel.yaml.SafeConstructor):
```
These includes are deep merged into the current configuration file. For instance, in this
example, any "retention" options in common.yaml will get merged into the "retention" section
in the example configuration file.
example, any "option" with sub-options in common.yaml will get merged into the corresponding
"option" with sub-options in the example configuration file.
'''
representer = ruamel.yaml.representer.SafeRepresenter()
@ -83,11 +116,12 @@ def load_configuration(filename):
'''
Load the given configuration file and return its contents as a data structure of nested dicts
and lists. Also, replace any "{constant}" strings with the value of the "constant" key in the
"constants" section of the configuration file.
"constants" option of the configuration file.
Raise ruamel.yaml.error.YAMLError if something goes wrong parsing the YAML, or RecursionError
if there are too many recursive includes.
'''
# Use an embedded derived class for the include constructor so as to capture the filename
# value. (functools.partial doesn't work for this use case because yaml.Constructor has to be
# an actual class.)
@ -115,6 +149,16 @@ def load_configuration(filename):
return config
def filter_omitted_nodes(nodes):
'''
Given a list of nodes, return a filtered list omitting any nodes with an "!omit" tag or with a
value matching such nodes.
'''
omitted_values = tuple(node.value for node in nodes if node.tag == '!omit')
return [node for node in nodes if node.value not in omitted_values]
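The "!omit" filtering rule above, restated over plain values instead of ruamel.yaml nodes (a sketch of the semantics, not the real node-based code): an omitted value knocks out every matching list element.

```
def filter_omitted(values, omitted_values):
    # Drop every element whose value matches an omitted one.
    return [value for value in values if value not in omitted_values]

print(filter_omitted(['/home', '/etc', '/var'], omitted_values=('/etc',)))
# -> ['/home', '/var']
```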
DELETED_NODE = object()
@ -176,9 +220,13 @@ def deep_merge_nodes(nodes):
),
]
If a mapping or sequence node has a YAML "!retain" tag, then that node is not merged.
The purpose of deep merging like this is to support, for instance, merging one borgmatic
configuration file into another for reuse, such that a configuration section ("retention",
etc.) does not completely replace the corresponding section in a merged file.
configuration file into another for reuse, such that a configuration option with sub-options
does not completely replace the corresponding option in a merged file.
Raise ValueError if a merge is implied using two incompatible types.
'''
# Map from original node key/value to the replacement merged node. DELETED_NODE as a replacement
# node indicates deletion.
@ -193,37 +241,52 @@ def deep_merge_nodes(nodes):
# If the keys match and the values are different, we need to merge these two A and B nodes.
if a_key.tag == b_key.tag and a_key.value == b_key.value and a_value != b_value:
if not type(a_value) is type(b_value):
raise ValueError(
f'Incompatible types found when trying to merge "{a_key.value}:" values across configuration files: {type(a_value).id} and {type(b_value).id}'
)
# Since we're merging into the B node, consider the A node a duplicate and remove it.
replaced_nodes[(a_key, a_value)] = DELETED_NODE
# If we're dealing with MappingNodes, recurse and merge its values as well.
if isinstance(b_value, ruamel.yaml.nodes.MappingNode):
replaced_nodes[(b_key, b_value)] = (
b_key,
ruamel.yaml.nodes.MappingNode(
tag=b_value.tag,
value=deep_merge_nodes(a_value.value + b_value.value),
start_mark=b_value.start_mark,
end_mark=b_value.end_mark,
flow_style=b_value.flow_style,
comment=b_value.comment,
anchor=b_value.anchor,
),
)
# A "!retain" tag says to skip deep merging for this node. Replace the tag so
# downstream schema validation doesn't break on our application-specific tag.
if b_value.tag == '!retain':
b_value.tag = 'tag:yaml.org,2002:map'
else:
replaced_nodes[(b_key, b_value)] = (
b_key,
ruamel.yaml.nodes.MappingNode(
tag=b_value.tag,
value=deep_merge_nodes(a_value.value + b_value.value),
start_mark=b_value.start_mark,
end_mark=b_value.end_mark,
flow_style=b_value.flow_style,
comment=b_value.comment,
anchor=b_value.anchor,
),
)
# If we're dealing with SequenceNodes, merge by appending one sequence to the other.
elif isinstance(b_value, ruamel.yaml.nodes.SequenceNode):
replaced_nodes[(b_key, b_value)] = (
b_key,
ruamel.yaml.nodes.SequenceNode(
tag=b_value.tag,
value=a_value.value + b_value.value,
start_mark=b_value.start_mark,
end_mark=b_value.end_mark,
flow_style=b_value.flow_style,
comment=b_value.comment,
anchor=b_value.anchor,
),
)
# A "!retain" tag says to skip deep merging for this node. Replace the tag so
# downstream schema validation doesn't break on our application-specific tag.
if b_value.tag == '!retain':
b_value.tag = 'tag:yaml.org,2002:seq'
else:
replaced_nodes[(b_key, b_value)] = (
b_key,
ruamel.yaml.nodes.SequenceNode(
tag=b_value.tag,
value=filter_omitted_nodes(a_value.value + b_value.value),
start_mark=b_value.start_mark,
end_mark=b_value.end_mark,
flow_style=b_value.flow_style,
comment=b_value.comment,
anchor=b_value.anchor,
),
)
return [
replaced_nodes.get(node, node) for node in nodes if replaced_nodes.get(node) != DELETED_NODE
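The merge semantics above, sketched with plain Python values rather than ruamel.yaml nodes (the "!retain"/"!omit" handling and the incompatible-type ValueError are elided): mappings deep-merge recursively, sequences concatenate, and scalars are replaced.

```
def deep_merge(first, second):
    if isinstance(first, dict) and isinstance(second, dict):
        merged = dict(first)
        for key, value in second.items():
            merged[key] = deep_merge(merged[key], value) if key in merged else value
        return merged
    if isinstance(first, list) and isinstance(second, list):
        return first + second
    return second

print(deep_merge(
    {'checks': [{'name': 'repository'}], 'keep_daily': 7},
    {'checks': [{'name': 'archives'}], 'keep_daily': 1},
))
# -> {'checks': [{'name': 'repository'}, {'name': 'archives'}], 'keep_daily': 1}
```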

View File

@ -2,31 +2,108 @@ import logging
import os
def normalize_sections(config_filename, config):
'''
Given a configuration filename and a configuration dict of its loaded contents, airlift any
options out of sections ("location:", etc.) to the global scope and delete those sections.
Return any log message warnings produced based on the normalization performed.
Raise ValueError if the "prefix" option is set in both "retention" and "consistency" sections.
'''
retention = config.get('retention') or {}
storage = config.get('storage') or {}
consistency = config.get('consistency') or {}
hooks = config.get('hooks') or {}
if (
retention.get('prefix')
and consistency.get('prefix')
and retention.get('prefix') != consistency.get('prefix')
):
raise ValueError(
'The retention prefix and the consistency prefix cannot have different values (unless one is not set).'
)
if storage.get('umask') and hooks.get('umask') and storage.get('umask') != hooks.get('umask'):
raise ValueError(
'The storage umask and the hooks umask cannot have different values (unless one is not set).'
)
any_section_upgraded = False
# Move any options from deprecated sections into the global scope.
for section_name in ('location', 'storage', 'retention', 'consistency', 'output', 'hooks'):
section_config = config.get(section_name)
if section_config:
any_section_upgraded = True
del config[section_name]
config.update(section_config)
if any_section_upgraded:
return [
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: Configuration sections (like location: and storage:) are deprecated and support will be removed from a future release. To prepare for this, move your options out of sections to the global scope.',
)
)
]
return []
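What the section flattening above does to a config dict, restated as a runnable fragment (sample options invented for illustration):

```
config = {
    'location': {'source_directories': ['/home']},
    'storage': {'compression': 'lz4'},
}

# Move every option from a deprecated section up to the global scope.
for section_name in ('location', 'storage', 'retention', 'consistency', 'output', 'hooks'):
    section_config = config.pop(section_name, None)
    if section_config:
        config.update(section_config)

print(config)  # -> {'source_directories': ['/home'], 'compression': 'lz4'}
```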
def normalize(config_filename, config):
'''
Given a configuration filename and a configuration dict of its loaded contents, apply particular
hard-coded rules to normalize the configuration to adhere to the current schema. Return any log
message warnings produced based on the normalization performed.
Raise ValueError the configuration cannot be normalized.
'''
logs = []
location = config.get('location') or {}
storage = config.get('storage') or {}
consistency = config.get('consistency') or {}
hooks = config.get('hooks') or {}
logs = normalize_sections(config_filename, config)
# Upgrade exclude_if_present from a string to a list.
exclude_if_present = location.get('exclude_if_present')
exclude_if_present = config.get('exclude_if_present')
if isinstance(exclude_if_present, str):
config['location']['exclude_if_present'] = [exclude_if_present]
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The exclude_if_present option now expects a list value. String values for this option are deprecated and support will be removed from a future release.',
)
)
)
config['exclude_if_present'] = [exclude_if_present]
# Upgrade various monitoring hooks from a string to a dict.
healthchecks = hooks.get('healthchecks')
healthchecks = config.get('healthchecks')
if isinstance(healthchecks, str):
config['hooks']['healthchecks'] = {'ping_url': healthchecks}
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The healthchecks hook now expects a mapping value. String values for this option are deprecated and support will be removed from a future release.',
)
)
)
config['healthchecks'] = {'ping_url': healthchecks}
cronitor = hooks.get('cronitor')
cronitor = config.get('cronitor')
if isinstance(cronitor, str):
config['hooks']['cronitor'] = {'ping_url': cronitor}
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The cronitor hook now expects key/value pairs. String values for this option are deprecated and support will be removed from a future release.',
)
)
)
config['cronitor'] = {'ping_url': cronitor}
if isinstance(cronitor, dict) and 'ping_url' in cronitor:
config['hooks']['cronitor'] = {
'create': cronitor['ping_url'],
@ -35,67 +112,158 @@ def normalize(config_filename, config):
'check': cronitor['ping_url'],
}
pagerduty = hooks.get('pagerduty')
pagerduty = config.get('pagerduty')
if isinstance(pagerduty, str):
config['hooks']['pagerduty'] = {'integration_key': pagerduty}
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The pagerduty hook now expects key/value pairs. String values for this option are deprecated and support will be removed from a future release.',
)
)
)
config['pagerduty'] = {'integration_key': pagerduty}
cronhub = hooks.get('cronhub')
cronhub = config.get('cronhub')
if isinstance(cronhub, str):
config['hooks']['cronhub'] = {'ping_url': cronhub}
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The cronhub hook now expects key/value pairs. String values for this option are deprecated and support will be removed from a future release.',
)
)
)
config['cronhub'] = {'ping_url': cronhub}
# Upgrade consistency checks from a list of strings to a list of dicts.
checks = consistency.get('checks')
checks = config.get('checks')
if isinstance(checks, list) and len(checks) and isinstance(checks[0], str):
config['consistency']['checks'] = [{'name': check_type} for check_type in checks]
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The checks option now expects a list of key/value pairs. Lists of strings for this option are deprecated and support will be removed from a future release.',
)
)
)
config['checks'] = [{'name': check_type} for check_type in checks]
# Rename various configuration options.
numeric_owner = location.pop('numeric_owner', None)
numeric_owner = config.pop('numeric_owner', None)
if numeric_owner is not None:
config['location']['numeric_ids'] = numeric_owner
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The numeric_owner option has been renamed to numeric_ids. numeric_owner is deprecated and support will be removed from a future release.',
)
)
)
config['numeric_ids'] = numeric_owner
bsd_flags = location.pop('bsd_flags', None)
bsd_flags = config.pop('bsd_flags', None)
if bsd_flags is not None:
config['location']['flags'] = bsd_flags
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The bsd_flags option has been renamed to flags. bsd_flags is deprecated and support will be removed from a future release.',
)
)
)
config['flags'] = bsd_flags
remote_rate_limit = storage.pop('remote_rate_limit', None)
remote_rate_limit = config.pop('remote_rate_limit', None)
if remote_rate_limit is not None:
config['storage']['upload_rate_limit'] = remote_rate_limit
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The remote_rate_limit option has been renamed to upload_rate_limit. remote_rate_limit is deprecated and support will be removed from a future release.',
)
)
)
config['upload_rate_limit'] = remote_rate_limit
# Upgrade remote repositories to ssh:// syntax, required in Borg 2.
repositories = location.get('repositories')
repositories = config.get('repositories')
if repositories:
config['location']['repositories'] = []
for repository in repositories:
if '~' in repository:
if isinstance(repositories[0], str):
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The repositories option now expects a list of key/value pairs. Lists of strings for this option are deprecated and support will be removed from a future release.',
)
)
)
config['repositories'] = [{'path': repository} for repository in repositories]
repositories = config['repositories']
config['repositories'] = []
for repository_dict in repositories:
repository_path = repository_dict['path']
if '~' in repository_path:
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: Repository paths containing "~" are deprecated in borgmatic and no longer work in Borg 2.x+.',
msg=f'{config_filename}: Repository paths containing "~" are deprecated in borgmatic and support will be removed from a future release.',
)
)
)
if ':' in repository:
if repository.startswith('file://'):
config['location']['repositories'].append(
os.path.abspath(repository.partition('file://')[-1])
if ':' in repository_path:
if repository_path.startswith('file://'):
updated_repository_path = os.path.abspath(
repository_path.partition('file://')[-1]
)
elif repository.startswith('ssh://'):
config['location']['repositories'].append(repository)
config['repositories'].append(
dict(
repository_dict,
path=updated_repository_path,
)
)
elif repository_path.startswith('ssh://'):
config['repositories'].append(repository_dict)
else:
rewritten_repository = f"ssh://{repository.replace(':~', '/~').replace(':/', '/').replace(':', '/./')}"
rewritten_repository_path = f"ssh://{repository_path.replace(':~', '/~').replace(':/', '/').replace(':', '/./')}"
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: Remote repository paths without ssh:// syntax are deprecated. Interpreting "{repository}" as "{rewritten_repository}"',
msg=f'{config_filename}: Remote repository paths without ssh:// syntax are deprecated and support will be removed from a future release. Interpreting "{repository_path}" as "{rewritten_repository_path}"',
)
)
)
config['location']['repositories'].append(rewritten_repository)
config['repositories'].append(
dict(
repository_dict,
path=rewritten_repository_path,
)
)
else:
config['location']['repositories'].append(repository)
config['repositories'].append(repository_dict)
if config.get('prefix'):
logs.append(
logging.makeLogRecord(
dict(
levelno=logging.WARNING,
levelname='WARNING',
msg=f'{config_filename}: The prefix option is deprecated and support will be removed from a future release. Use archive_name_format or match_archives instead.',
)
)
)
return logs
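The ssh:// rewrite rule applied to bare "host:path" repositories above can be shown standalone:

```
def rewrite_repository_path(repository_path):
    # ":~" becomes "/~", ":/" becomes "/", and any remaining ":" becomes
    # "/./" to mark a path relative to the remote working directory.
    return f"ssh://{repository_path.replace(':~', '/~').replace(':/', '/').replace(':', '/./')}"

print(rewrite_repository_path('user@host:backup.borg'))  # -> ssh://user@host/./backup.borg
print(rewrite_repository_path('user@host:/var/backup'))  # -> ssh://user@host/var/backup
```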

View File

@ -32,19 +32,33 @@ def convert_value_type(value):
return ruamel.yaml.YAML(typ='safe').load(io.StringIO(value))
LEGACY_SECTION_NAMES = {'location', 'storage', 'retention', 'consistency', 'output', 'hooks'}
def strip_section_names(parsed_override_key):
'''
Given a parsed override key as a tuple of option and suboption names, strip out any initial
legacy section names, since configuration file normalization also strips them out.
'''
if parsed_override_key[0] in LEGACY_SECTION_NAMES:
return parsed_override_key[1:]
return parsed_override_key
def parse_overrides(raw_overrides):
'''
Given a sequence of configuration file override strings in the form of "section.option=value",
Given a sequence of configuration file override strings in the form of "option.suboption=value",
parse and return a sequence of tuples (keys, values), where keys is a sequence of strings. For
instance, given the following raw overrides:
['section.my_option=value1', 'section.other_option=value2']
['my_option.suboption=value1', 'other_option=value2']
... return this:
(
(('section', 'my_option'), 'value1'),
(('section', 'other_option'), 'value2'),
(('my_option', 'suboption'), 'value1'),
(('other_option',), 'value2'),
)
Raise ValueError if an override can't be parsed.
@ -57,10 +71,15 @@ def parse_overrides(raw_overrides):
for raw_override in raw_overrides:
try:
raw_keys, value = raw_override.split('=', 1)
parsed_overrides.append((tuple(raw_keys.split('.')), convert_value_type(value),))
parsed_overrides.append(
(
strip_section_names(tuple(raw_keys.split('.'))),
convert_value_type(value),
)
)
except ValueError:
raise ValueError(
f"Invalid override '{raw_override}'. Make sure you use the form: SECTION.OPTION=VALUE"
f"Invalid override '{raw_override}'. Make sure you use the form: OPTION=VALUE or OPTION.SUBOPTION=VALUE"
)
except ruamel.yaml.error.YAMLError as error:
raise ValueError(f"Invalid override '{raw_override}': {error.problem}")
@ -71,9 +90,9 @@ def parse_overrides(raw_overrides):
def apply_overrides(config, raw_overrides):
'''
Given a configuration dict and a sequence of configuration file override strings in the form of
"section.option=value", parse each override and set it the configuration dict.
"option.suboption=value", parse each override and set it the configuration dict.
'''
overrides = parse_overrides(raw_overrides)
for (keys, value) in overrides:
for keys, value in overrides:
set_values(config, keys, value)
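How the key splitting behaves, as a minimal sketch without the ruamel-based value conversion shown above:

```
def parse_override(raw_override):
    raw_keys, value = raw_override.split('=', 1)
    return tuple(raw_keys.split('.')), value

print(parse_override('retention.keep_daily=5'))  # -> (('retention', 'keep_daily'), '5')
# strip_section_names() above would then reduce ('retention', 'keep_daily')
# to ('keep_daily',), matching the flattened configuration layout.
```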

File diff suppressed because it is too large

View File

@ -1,9 +1,9 @@
import os
import jsonschema
import pkg_resources
import ruamel.yaml
import borgmatic.config
from borgmatic.config import environment, load, normalize, override
@ -11,8 +11,13 @@ def schema_filename():
'''
Path to the installed YAML configuration schema file, used to validate and parse the
configuration.
Raise FileNotFoundError when the schema path does not exist.
'''
return pkg_resources.resource_filename('borgmatic', 'config/schema.yaml')
schema_path = os.path.join(os.path.dirname(borgmatic.config.__file__), 'schema.yaml')
with open(schema_path):
return schema_path
def format_json_error_path_element(path_element):
@ -66,15 +71,15 @@ def apply_logical_validation(config_filename, parsed_configuration):
below), run through any additional logical validation checks. If there are any such validation
problems, raise a Validation_error.
'''
location_repositories = parsed_configuration.get('location', {}).get('repositories')
check_repositories = parsed_configuration.get('consistency', {}).get('check_repositories', [])
repositories = parsed_configuration.get('repositories')
check_repositories = parsed_configuration.get('check_repositories', [])
for repository in check_repositories:
if repository not in location_repositories:
if not any(
repositories_match(repository, config_repository) for config_repository in repositories
):
raise Validation_error(
config_filename,
(
f'Unknown repository in the "consistency" section\'s "check_repositories": {repository}',
),
(f'Unknown repository in "check_repositories": {repository}',),
)
@ -82,11 +87,15 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
'''
Given the path to a config filename in YAML format, the path to a schema filename in a YAML
rendition of JSON Schema format, a sequence of configuration file override strings in the form
of "section.option=value", return the parsed configuration as a data structure of nested dicts
of "option.suboption=value", return the parsed configuration as a data structure of nested dicts
and lists corresponding to the schema. Example return value:
{'location': {'source_directories': ['/home', '/etc'], 'repository': 'hostname.borg'},
'retention': {'keep_daily': 7}, 'consistency': {'checks': ['repository', 'archives']}}
{
'source_directories': ['/home', '/etc'],
'repository': 'hostname.borg',
'keep_daily': 7,
'checks': ['repository', 'archives'],
}
Also return a sequence of logging.LogRecord instances containing any warnings about the
configuration.
@ -137,9 +146,17 @@ def normalize_repository_path(repository):
def repositories_match(first, second):
'''
Given two repository paths (relative and/or absolute), return whether they match.
Given two repository dicts with keys 'path' (relative and/or absolute),
and 'label', or two repository paths, return whether they match.
'''
return normalize_repository_path(first) == normalize_repository_path(second)
if isinstance(first, str):
first = {'path': first, 'label': first}
if isinstance(second, str):
second = {'path': second, 'label': second}
return (first.get('label') == second.get('label')) or (
normalize_repository_path(first.get('path'))
== normalize_repository_path(second.get('path'))
)
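The matching behavior, sketched with a simplified path normalization (the real normalize_repository_path also handles remote repository syntax): equal labels match, and so do paths that normalize to the same location.

```
import os

def normalize_repository_path(repository_path):
    # Simplified stand-in for the real normalization.
    return os.path.abspath(repository_path)

def repositories_match(first, second):
    if isinstance(first, str):
        first = {'path': first, 'label': first}
    if isinstance(second, str):
        second = {'path': second, 'label': second}
    return (first.get('label') == second.get('label')) or (
        normalize_repository_path(first.get('path'))
        == normalize_repository_path(second.get('path'))
    )

print(repositories_match({'path': 'repo.borg', 'label': 'local'}, './repo.borg'))  # -> True
```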
def guard_configuration_contains_repository(repository, configurations):
@ -158,8 +175,8 @@ def guard_configuration_contains_repository(repository, configurations):
tuple(
config_repository
for config in configurations.values()
for config_repository in config['location']['repositories']
if repositories_match(repository, config_repository)
for config_repository in config['repositories']
if repositories_match(config_repository, repository)
)
)
@ -182,7 +199,7 @@ def guard_single_repository_selected(repository, configurations):
tuple(
config_repository
for config in configurations.values()
for config_repository in config['location']['repositories']
for config_repository in config['repositories']
)
)

View File

@ -43,6 +43,23 @@ def output_buffer_for_process(process, exclude_stdouts):
return process.stderr if process.stdout in exclude_stdouts else process.stdout
def append_last_lines(last_lines, captured_output, line, output_log_level):
'''
Given a rolling list of last lines, a list of captured output, a line to append, and an output
log level, append the line to the last lines, trimming the oldest line if the list is full. If
the output log level is None, append the line to the captured output; otherwise, log the line
at the requested output log level.
'''
last_lines.append(line)
if len(last_lines) > ERROR_OUTPUT_MAX_LINE_COUNT:
last_lines.pop(0)
if output_log_level is None:
captured_output.append(line)
else:
logger.log(output_log_level, line)
def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
'''
Given a sequence of subprocess.Popen() instances for multiple processes, log the output for each
@ -98,15 +115,12 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
# Keep the last few lines of output in case the process errors, and we need the output for
# the exception below.
last_lines = buffer_last_lines[ready_buffer]
last_lines.append(line)
if len(last_lines) > ERROR_OUTPUT_MAX_LINE_COUNT:
last_lines.pop(0)
if output_log_level is None:
captured_outputs[ready_process].append(line)
else:
logger.log(output_log_level, line)
append_last_lines(
buffer_last_lines[ready_buffer],
captured_outputs[ready_process],
line,
output_log_level,
)
if not still_running:
break
@ -125,8 +139,18 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
# If an error occurs, include its output in the raised exception so that we don't
# inadvertently hide error output.
output_buffer = output_buffer_for_process(process, exclude_stdouts)
last_lines = buffer_last_lines[output_buffer] if output_buffer else []
# Collect any straggling output lines that came in since we last gathered output.
while output_buffer: # pragma: no cover
line = output_buffer.readline().rstrip().decode()
if not line:
break
append_last_lines(
last_lines, captured_outputs[process], line, output_log_level=logging.ERROR
)
if len(last_lines) == ERROR_OUTPUT_MAX_LINE_COUNT:
last_lines.insert(0, '...')
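The rolling buffer reduced to its core, so the tail of a failing process's output is available for the raised exception. The constant's value here is assumed for illustration:

```
ERROR_OUTPUT_MAX_LINE_COUNT = 25  # assumed value, for illustration only

last_lines = []
for line in (f'line {number}' for number in range(100)):
    last_lines.append(line)
    if len(last_lines) > ERROR_OUTPUT_MAX_LINE_COUNT:
        last_lines.pop(0)  # Trim the oldest line once the buffer is full.

print(last_lines[0], last_lines[-1])  # -> line 75 line 99
```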
@ -212,14 +236,21 @@ def execute_command(
def execute_command_and_capture_output(
full_command, capture_stderr=False, shell=False, extra_environment=None, working_directory=None,
full_command,
capture_stderr=False,
shell=False,
extra_environment=None,
working_directory=None,
borg_local_path=None,
):
'''
Execute the given command (a sequence of command/argument strings), capturing and returning its
output (stdout). If capture stderr is True, then capture and return stderr in addition to
stdout. If shell is True, execute the command within a shell. If an extra environment dict is
given, then use it to augment the current environment, and pass the result into the command. If
a working directory is given, use that as the present working directory when running the command.
a working directory is given, use that as the present working directory when running the
command. If a Borg local path is given, and the command matches it (regardless of arguments),
treat exit code 1 as a warning instead of an error.
Raise subprocess.CalledProcessError if an error occurs while running the command.
'''
@ -235,12 +266,10 @@ def execute_command_and_capture_output(
env=environment,
cwd=working_directory,
)
logger.warning(f'Command output: {output}')
except subprocess.CalledProcessError as error:
if exit_code_indicates_error(command, error.returncode):
if exit_code_indicates_error(command, error.returncode, borg_local_path):
raise
output = error.output
logger.warning(f'Command output: {output}')
return output.decode() if output is not None else None

View File

@ -14,7 +14,7 @@ MONITOR_STATE_TO_CRONHUB = {
def initialize_monitor(
ping_url, config_filename, monitoring_log_level, dry_run
ping_url, config, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
'''
No initialization is necessary for this monitor.
@ -22,7 +22,7 @@ def initialize_monitor(
pass
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run, action_name):
def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run, action_name):
'''
Ping the configured Cronhub URL, modified with the monitor.State. Use the given configuration
filename in any log entries. If this is a dry run, then don't actually ping anything.
@ -55,7 +55,7 @@ def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_
def destroy_monitor(
ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
ping_url_or_uuid, config, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
'''
No destruction is necessary for this monitor.

View File

@ -14,7 +14,7 @@ MONITOR_STATE_TO_CRONITOR = {
def initialize_monitor(
ping_url, config_filename, monitoring_log_level, dry_run
ping_url, config, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
'''
No initialization is necessary for this monitor.
@ -22,7 +22,7 @@ def initialize_monitor(
pass
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run, action_name):
def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run, action_name):
'''
Ping the configured Cronitor URL, modified with the monitor.State. Use the given configuration
filename in any log entries. If this is a dry run, then don't actually ping anything.
@ -56,7 +56,7 @@ def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_
def destroy_monitor(
ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
ping_url_or_uuid, config, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
'''
No destruction is necessary for this monitor.

View File

@ -27,18 +27,17 @@ HOOK_NAME_TO_MODULE = {
}
def call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs):
def call_hook(function_name, config, log_prefix, hook_name, *args, **kwargs):
'''
Given the hooks configuration dict and a prefix to use in log entries, call the requested
function of the Python module corresponding to the given hook name. Supply that call with the
configuration for this hook (if any), the log prefix, and any given args and kwargs. Return any
return value.
Given a configuration dict and a prefix to use in log entries, call the requested function of
the Python module corresponding to the given hook name. Supply that call with the configuration
for this hook (if any), the log prefix, and any given args and kwargs. Return any return value.
Raise ValueError if the hook name is unknown.
Raise AttributeError if the function name is not found in the module.
Raise anything else that the called function raises.
'''
config = hooks.get(hook_name, {})
hook_config = config.get(hook_name, {})
try:
module = HOOK_NAME_TO_MODULE[hook_name]
@ -46,15 +45,15 @@ def call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs):
raise ValueError(f'Unknown hook name: {hook_name}')
logger.debug(f'{log_prefix}: Calling {hook_name} hook function {function_name}')
return getattr(module, function_name)(config, log_prefix, *args, **kwargs)
return getattr(module, function_name)(hook_config, config, log_prefix, *args, **kwargs)
def call_hooks(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
def call_hooks(function_name, config, log_prefix, hook_names, *args, **kwargs):
'''
Given the hooks configuration dict and a prefix to use in log entries, call the requested
function of the Python module corresponding to each given hook name. Supply each call with the
configuration for that hook, the log prefix, and any given args and kwargs. Collect any return
values into a dict from hook name to return value.
Given a configuration dict and a prefix to use in log entries, call the requested function of
the Python module corresponding to each given hook name. Supply each call with the configuration
for that hook, the log prefix, and any given args and kwargs. Collect any return values into a
dict from hook name to return value.
If the hook name is not present in the hooks configuration, then don't call the function for it
and omit it from the return values.
@ -64,23 +63,23 @@ def call_hooks(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
Raise anything else that a called function raises. An error stops calls to subsequent functions.
'''
return {
hook_name: call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs)
hook_name: call_hook(function_name, config, log_prefix, hook_name, *args, **kwargs)
for hook_name in hook_names
if hooks.get(hook_name)
if config.get(hook_name)
}
def call_hooks_even_if_unconfigured(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
def call_hooks_even_if_unconfigured(function_name, config, log_prefix, hook_names, *args, **kwargs):
'''
Given the hooks configuration dict and a prefix to use in log entries, call the requested
function of the Python module corresponding to each given hook name. Supply each call with the
configuration for that hook, the log prefix, and any given args and kwargs. Collect any return
values into a dict from hook name to return value.
Given a configuration dict and a prefix to use in log entries, call the requested function of
the Python module corresponding to each given hook name. Supply each call with the configuration
for that hook, the log prefix, and any given args and kwargs. Collect any return values into a
dict from hook name to return value.
Raise AttributeError if the function name is not found in the module.
Raise anything else that a called function raises. An error stops calls to subsequent functions.
'''
return {
hook_name: call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs)
hook_name: call_hook(function_name, config, log_prefix, hook_name, *args, **kwargs)
for hook_name in hook_names
}
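The dispatch pattern in miniature: look up a module by hook name and call the named function with that hook's configuration. The "noop" hook module and its ping_monitor signature here are invented stand-ins, not borgmatic's real hook modules.

```
import types

# A stand-in "hook module" exposing the function name to dispatch to.
noop_monitor = types.SimpleNamespace(
    ping_monitor=lambda hook_config, config, log_prefix: (
        f'{log_prefix}: pinged {hook_config["ping_url"]}'
    )
)
HOOK_NAME_TO_MODULE = {'noop': noop_monitor}

def call_hook(function_name, config, log_prefix, hook_name):
    hook_config = config.get(hook_name, {})
    module = HOOK_NAME_TO_MODULE[hook_name]
    return getattr(module, function_name)(hook_config, config, log_prefix)

print(call_hook('ping_monitor', {'noop': {'ping_url': 'https://example.test'}}, 'config.yaml', 'noop'))
```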

View File

@ -70,7 +70,7 @@ def format_buffered_logs_for_payload():
return payload
def initialize_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
def initialize_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
'''
Add a handler to the root logger that stores in memory the most recent logs emitted. That way,
we can send them all to Healthchecks upon a finish or failure state. But skip this if the
@ -90,7 +90,7 @@ def initialize_monitor(hook_config, config_filename, monitoring_log_level, dry_r
)
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run, action_name):
def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run, action_name):
'''
Ping the configured Healthchecks URL or UUID, modified with the monitor.State. Use the given
configuration filename in any log entries, and log to Healthchecks with the given log level.
@ -133,7 +133,7 @@ def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_
logger.warning(f'{config_filename}: Healthchecks error: {error}')
def destroy_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
def destroy_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
'''
Remove the monitor handler that was added to the root logger. This prevents the handler from
getting reused by other instances of this monitor.

View File

@ -6,21 +6,20 @@ from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
def make_dump_path(location_config): # pragma: no cover
def make_dump_path(config): # pragma: no cover
'''
Make the dump path from the given location configuration and the name of this hook.
Make the dump path from the given configuration dict and the name of this hook.
'''
return dump.make_database_dump_path(
location_config.get('borgmatic_source_directory'), 'mongodb_databases'
config.get('borgmatic_source_directory'), 'mongodb_databases'
)
def dump_databases(databases, log_prefix, location_config, dry_run):
def dump_databases(databases, config, log_prefix, dry_run):
'''
Dump the given MongoDB databases to a named pipe. The databases are supplied as a sequence of
dicts, one dict describing each database as per the configuration schema. Use the given log
prefix in any log entries. Use the given location configuration dict to construct the
destination path.
dicts, one dict describing each database as per the configuration schema. Use the configuration
dict to construct the destination path and the given log prefix in any log entries.
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
@ -33,7 +32,7 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
for database in databases:
name = database['name']
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), name, database.get('hostname')
make_dump_path(config), name, database.get('hostname')
)
dump_format = database.get('format', 'archive')
@ -82,47 +81,57 @@ def build_dump_command(database, dump_filename, dump_format):
return command
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
def remove_database_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
'''
Remove all database dump files for this hook regardless of the given databases. Use the log
prefix in any log entries. Use the given location configuration dict to construct the
destination path. If this is a dry run, then don't actually remove anything.
prefix in any log entries. Use the given configuration dict to construct the destination path.
If this is a dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(location_config), 'MongoDB', log_prefix, dry_run)
dump.remove_database_dumps(make_dump_path(config), 'MongoDB', log_prefix, dry_run)
def make_database_dump_pattern(
databases, log_prefix, location_config, name=None
): # pragma: no cover
def make_database_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
'''
Given a sequence of configurations dicts, a prefix to log with, a location configuration dict,
Given a sequence of database configurations dicts, a configuration dict, a prefix to log with,
and a database name to match, return the corresponding glob patterns to match the database dump
in an archive.
'''
return dump.make_database_dump_filename(make_dump_path(location_config), name, hostname='*')
return dump.make_database_dump_filename(make_dump_path(config), name, hostname='*')
def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
def restore_database_dump(
databases_config, config, log_prefix, database_name, dry_run, extract_process, connection_params
):
'''
Restore the given MongoDB database from an extract stream. The database is supplied as a
one-element sequence containing a dict describing the database, as per the configuration schema.
Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
output to consume.
Restore the given MongoDB database from an extract stream. The databases are supplied as a
sequence containing one dict describing each database (as per the configuration schema), but
only the database corresponding to the given database name is restored. Use the configuration
dict to construct the destination path and the given log prefix in any log entries. If this is a
dry run, then don't actually restore anything. Trigger the given active extract process (an
instance of subprocess.Popen) to produce output to consume.
If the extract process is None, then restore the dump from the filesystem rather than from an
extract stream.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
if len(database_config) != 1:
raise ValueError('The database configuration value is invalid')
try:
database = next(
database_config
for database_config in databases_config
if database_config.get('name') == database_name
)
except StopIteration:
raise ValueError(
f'A database named "{database_name}" could not be found in the configuration'
)
database = database_config[0]
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), database['name'], database.get('hostname')
make_dump_path(config), database['name'], database.get('hostname')
)
restore_command = build_restore_command(
extract_process, database, dump_filename, connection_params
)
restore_command = build_restore_command(extract_process, database, dump_filename)
logger.debug(f"{log_prefix}: Restoring MongoDB database {database['name']}{dry_run_label}")
if dry_run:
@ -138,10 +147,21 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
)
def build_restore_command(extract_process, database, dump_filename):
def build_restore_command(extract_process, database, dump_filename, connection_params):
'''
Return the mongorestore command from a single database configuration.
'''
hostname = connection_params['hostname'] or database.get(
'restore_hostname', database.get('hostname')
)
port = str(connection_params['port'] or database.get('restore_port', database.get('port', '')))
username = connection_params['username'] or database.get(
'restore_username', database.get('username')
)
password = connection_params['password'] or database.get(
'restore_password', database.get('password')
)
command = ['mongorestore']
if extract_process:
command.append('--archive')
@ -149,16 +169,19 @@ def build_restore_command(extract_process, database, dump_filename):
command.extend(('--dir', dump_filename))
if database['name'] != 'all':
command.extend(('--drop', '--db', database['name']))
if 'hostname' in database:
command.extend(('--host', database['hostname']))
if 'port' in database:
command.extend(('--port', str(database['port'])))
if 'username' in database:
command.extend(('--username', database['username']))
if 'password' in database:
command.extend(('--password', database['password']))
if hostname:
command.extend(('--host', hostname))
if port:
command.extend(('--port', str(port)))
if username:
command.extend(('--username', username))
if password:
command.extend(('--password', password))
if 'authentication_database' in database:
command.extend(('--authenticationDatabase', database['authentication_database']))
if 'restore_options' in database:
command.extend(database['restore_options'].split(' '))
if database.get('schemas'):
for schema in database['schemas']:
command.extend(('--nsInclude', schema))
return command
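The credential precedence used above, isolated as a sketch: a runtime connection parameter wins, then any restore_* override, then the option used at backup time. The helper name and sample values are invented for illustration.

```
def resolve_connection_option(connection_value, database, option_name):
    # Runtime connection parameter wins, then any restore_* override,
    # then the option used at backup time.
    return connection_value or database.get(f'restore_{option_name}', database.get(option_name))

database = {'hostname': 'backup-host', 'restore_hostname': 'restore-host'}
print(resolve_connection_option(None, database, 'hostname'))        # -> restore-host
print(resolve_connection_option('cli-host', database, 'hostname'))  # -> cli-host
```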

View File

@ -12,13 +12,11 @@ from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
def make_dump_path(location_config): # pragma: no cover
def make_dump_path(config): # pragma: no cover
'''
Make the dump path from the given location configuration and the name of this hook.
Make the dump path from the given configuration dict and the name of this hook.
'''
return dump.make_database_dump_path(
location_config.get('borgmatic_source_directory'), 'mysql_databases'
)
return dump.make_database_dump_path(config.get('borgmatic_source_directory'), 'mysql_databases')
SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
@ -88,9 +86,7 @@ def execute_dump_command(
+ (('--user', database['username']) if 'username' in database else ())
+ ('--databases',)
+ database_names
# Use shell redirection rather than execute_command(output_file=open(...)) to prevent
# the open() call on a named pipe from hanging the main borgmatic process.
+ ('>', dump_filename)
+ ('--result-file', dump_filename)
)
logger.debug(
@ -102,16 +98,17 @@ def execute_dump_command(
dump.create_named_pipe_for_dump(dump_filename)
return execute_command(
dump_command, shell=True, extra_environment=extra_environment, run_to_completion=False,
dump_command,
extra_environment=extra_environment,
run_to_completion=False,
)
def dump_databases(databases, log_prefix, location_config, dry_run):
def dump_databases(databases, config, log_prefix, dry_run):
'''
Dump the given MySQL/MariaDB databases to a named pipe. The databases are supplied as a sequence
of dicts, one dict describing each database as per the configuration schema. Use the given log
prefix in any log entries. Use the given location configuration dict to construct the
destination path.
of dicts, one dict describing each database as per the configuration schema. Use the given
configuration dict to construct the destination path and the given log prefix in any log entries.
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
@ -122,7 +119,7 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
logger.info(f'{log_prefix}: Dumping MySQL databases{dry_run_label}')
for database in databases:
dump_path = make_dump_path(location_config)
dump_path = make_dump_path(config)
extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
dump_database_names = database_names_to_dump(
database, extra_environment, log_prefix, dry_run
@ -165,49 +162,67 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
return [process for process in processes if process]
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
def remove_database_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
'''
Remove all database dump files for this hook regardless of the given databases. Use the log
prefix in any log entries. Use the given location configuration dict to construct the
destination path. If this is a dry run, then don't actually remove anything.
Remove all database dump files for this hook regardless of the given databases. Use the given
configuration dict to construct the destination path and the log prefix in any log entries. If
this is a dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(location_config), 'MySQL', log_prefix, dry_run)
dump.remove_database_dumps(make_dump_path(config), 'MySQL', log_prefix, dry_run)
def make_database_dump_pattern(
databases, log_prefix, location_config, name=None
): # pragma: no cover
def make_database_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
'''
Given a sequence of configurations dicts, a prefix to log with, a location configuration dict,
and a database name to match, return the corresponding glob patterns to match the database dump
in an archive.
Given a sequence of configurations dicts, a configuration dict, a prefix to log with, and a
database name to match, return the corresponding glob patterns to match the database dump in an
archive.
'''
return dump.make_database_dump_filename(make_dump_path(location_config), name, hostname='*')
return dump.make_database_dump_filename(make_dump_path(config), name, hostname='*')
def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
def restore_database_dump(
databases_config, config, log_prefix, database_name, dry_run, extract_process, connection_params
):
'''
Restore the given MySQL/MariaDB database from an extract stream. The database is supplied as a
one-element sequence containing a dict describing the database, as per the configuration schema.
Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
output to consume.
Restore the given MySQL/MariaDB database from an extract stream. The databases are supplied as a
sequence containing one dict describing each database (as per the configuration schema), but
only the database corresponding to the given database name is restored. Use the given log
prefix in any log entries. If this is a dry run, then don't actually restore anything. Trigger
the given active extract process (an instance of subprocess.Popen) to produce output to consume.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
if len(database_config) != 1:
raise ValueError('The database configuration value is invalid')
try:
database = next(
database_config
for database_config in databases_config
if database_config.get('name') == database_name
)
except StopIteration:
raise ValueError(
f'A database named "{database_name}" could not be found in the configuration'
)
hostname = connection_params['hostname'] or database.get(
'restore_hostname', database.get('hostname')
)
port = str(connection_params['port'] or database.get('restore_port', database.get('port', '')))
username = connection_params['username'] or database.get(
'restore_username', database.get('username')
)
password = connection_params['password'] or database.get(
'restore_password', database.get('password')
)
database = database_config[0]
restore_command = (
('mysql', '--batch')
+ (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
+ (('--user', database['username']) if 'username' in database else ())
+ (('--host', hostname) if hostname else ())
+ (('--port', str(port)) if port else ())
+ (('--protocol', 'tcp') if hostname or port else ())
+ (('--user', username) if username else ())
)
extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
extra_environment = {'MYSQL_PWD': password} if password else None
logger.debug(f"{log_prefix}: Restoring MySQL database {database['name']}{dry_run_label}")
if dry_run:
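
To make the precedence in the restore logic above concrete, here's a minimal sketch (the `resolve` helper and the hostnames are hypothetical, not part of borgmatic's API): a command-line connection parameter wins, then the `restore_*` option, then the base option.

```python
def resolve(connection_params, database, key):
    # CLI parameter > restore_<key> option > <key> option.
    return connection_params.get(key) or database.get(f'restore_{key}', database.get(key))


database = {'hostname': 'db.example.org', 'restore_hostname': 'restore.example.org'}

assert resolve({'hostname': None}, database, 'hostname') == 'restore.example.org'
assert resolve({'hostname': 'cli.example.org'}, database, 'hostname') == 'cli.example.org'
```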

View File

@ -6,7 +6,7 @@ logger = logging.getLogger(__name__)
def initialize_monitor(
ping_url, config_filename, monitoring_log_level, dry_run
ping_url, config, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
'''
No initialization is necessary for this monitor.
@ -14,7 +14,7 @@ def initialize_monitor(
pass
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run, action_name):
def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run, action_name):
'''
Ping the configured Ntfy topic. Use the given configuration filename in any log entries.
If this is a dry run, then don't actually ping anything.
@ -28,8 +28,8 @@ def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_
state_config = hook_config.get(
state.name.lower(),
{
'title': f'A Borgmatic {state.name} event happened',
'message': f'A Borgmatic {state.name} event happened',
'title': f'A borgmatic {state.name} event happened',
'message': f'A borgmatic {state.name} event happened',
'priority': 'default',
'tags': 'borgmatic',
},
@ -75,7 +75,7 @@ def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_
def destroy_monitor(
ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
ping_url_or_uuid, config, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
'''
No destruction is necessary for this monitor.

View File

@ -13,7 +13,7 @@ EVENTS_API_URL = 'https://events.pagerduty.com/v2/enqueue'
def initialize_monitor(
integration_key, config_filename, monitoring_log_level, dry_run
integration_key, config, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
'''
No initialization is necessary for this monitor.
@ -21,7 +21,7 @@ def initialize_monitor(
pass
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run, action_name):
def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run, action_name):
'''
If this is an error state, create a PagerDuty event with the configured integration key. Use
the given configuration filename in any log entries. If this is a dry run, then don't actually
@ -75,7 +75,7 @@ def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_
def destroy_monitor(
ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
ping_url_or_uuid, config, config_filename, monitoring_log_level, dry_run
): # pragma: no cover
'''
No destruction is necessary for this monitor.

View File

@ -1,6 +1,8 @@
import csv
import itertools
import logging
import os
import shlex
from borgmatic.execute import (
execute_command,
@ -12,22 +14,32 @@ from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
def make_dump_path(location_config): # pragma: no cover
def make_dump_path(config): # pragma: no cover
'''
Make the dump path from the given location configuration and the name of this hook.
Make the dump path from the given configuration dict and the name of this hook.
'''
return dump.make_database_dump_path(
location_config.get('borgmatic_source_directory'), 'postgresql_databases'
config.get('borgmatic_source_directory'), 'postgresql_databases'
)
def make_extra_environment(database):
def make_extra_environment(database, restore_connection_params=None):
'''
Make the extra_environment dict from the given database configuration.
If restore connection params are given, this is for a restore operation.
'''
extra = dict()
if 'password' in database:
extra['PGPASSWORD'] = database['password']
try:
if restore_connection_params:
extra['PGPASSWORD'] = restore_connection_params.get('password') or database.get(
'restore_password', database['password']
)
else:
extra['PGPASSWORD'] = database['password']
except (AttributeError, KeyError):
pass
extra['PGSSLMODE'] = database.get('ssl_mode', 'disable')
if 'ssl_cert' in database:
extra['PGSSLCERT'] = database['ssl_cert']
@ -59,8 +71,10 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
if dry_run:
return ()
psql_command = shlex.split(database.get('psql_command') or 'psql')
list_command = (
('psql', '--list', '--no-password', '--csv', '--tuples-only')
tuple(psql_command)
+ ('--list', '--no-password', '--no-psqlrc', '--csv', '--tuples-only')
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
@ -78,12 +92,12 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
)
def dump_databases(databases, log_prefix, location_config, dry_run):
def dump_databases(databases, config, log_prefix, dry_run):
'''
Dump the given PostgreSQL databases to a named pipe. The databases are supplied as a sequence of
dicts, one dict describing each database as per the configuration schema. Use the given log
prefix in any log entries. Use the given location configuration dict to construct the
destination path.
dicts, one dict describing each database as per the configuration schema. Use the given
configuration dict to construct the destination path and the given log prefix in any log
entries.
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
@ -97,7 +111,7 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
for database in databases:
extra_environment = make_extra_environment(database)
dump_path = make_dump_path(location_config)
dump_path = make_dump_path(config)
dump_database_names = database_names_to_dump(
database, extra_environment, log_prefix, dry_run
)
@ -122,10 +136,16 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
continue
command = (
(dump_command, '--no-password', '--clean', '--if-exists',)
(
dump_command,
'--no-password',
'--clean',
'--if-exists',
)
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
+ (('--no-owner',) if database.get('no_owner', False) else ())
+ (('--format', dump_format) if dump_format else ())
+ (('--file', dump_filename) if dump_format == 'directory' else ())
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
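
To get a concrete sense of what this tuple concatenation produces, here's a standalone sketch with hypothetical configuration values:

```python
database = {'name': 'users', 'hostname': 'db.example.org', 'port': 5433}

command = (
    ('pg_dump', '--no-password', '--clean', '--if-exists')
    + (('--host', database['hostname']) if 'hostname' in database else ())
    + (('--port', str(database['port'])) if 'port' in database else ())
)

print(' '.join(command))
# pg_dump --no-password --clean --if-exists --host db.example.org --port 5433
```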
@ -145,7 +165,9 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
if dump_format == 'directory':
dump.create_parent_directory_for_dump(dump_filename)
execute_command(
command, shell=True, extra_environment=extra_environment,
command,
shell=True,
extra_environment=extra_environment,
)
else:
dump.create_named_pipe_for_dump(dump_filename)
@ -161,72 +183,100 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
return processes
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
def remove_database_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
'''
Remove all database dump files for this hook regardless of the given databases. Use the log
prefix in any log entries. Use the given location configuration dict to construct the
destination path. If this is a dry run, then don't actually remove anything.
Remove all database dump files for this hook regardless of the given databases. Use the given
configuration dict to construct the destination path and the log prefix in any log entries. If
this is a dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(location_config), 'PostgreSQL', log_prefix, dry_run)
dump.remove_database_dumps(make_dump_path(config), 'PostgreSQL', log_prefix, dry_run)
def make_database_dump_pattern(
databases, log_prefix, location_config, name=None
): # pragma: no cover
def make_database_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
'''
Given a sequence of configurations dicts, a prefix to log with, a location configuration dict,
and a database name to match, return the corresponding glob patterns to match the database dump
in an archive.
Given a sequence of configuration dicts, a configuration dict, a prefix to log with, and a
database name to match, return the corresponding glob patterns to match the database dump in an
archive.
'''
return dump.make_database_dump_filename(make_dump_path(location_config), name, hostname='*')
return dump.make_database_dump_filename(make_dump_path(config), name, hostname='*')
def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
def restore_database_dump(
databases_config, config, log_prefix, database_name, dry_run, extract_process, connection_params
):
'''
Restore the given PostgreSQL database from an extract stream. The database is supplied as a
one-element sequence containing a dict describing the database, as per the configuration schema.
Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
output to consume.
Restore the given PostgreSQL database from an extract stream. The databases are supplied as a
sequence containing one dict describing each database (as per the configuration schema), but
only the database corresponding to the given database name is restored. Use the given
configuration dict to construct the destination path and the given log prefix in any log
entries. If this is a dry run, then don't actually restore anything. Trigger the given active
extract process (an instance of subprocess.Popen) to produce output to consume.
If the extract process is None, then restore the dump from the filesystem rather than from an
extract stream.
Use the given connection parameters to connect to the database. The connection parameters are
hostname, port, username, and password.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
if len(database_config) != 1:
raise ValueError('The database configuration value is invalid')
try:
database = next(
database_config
for database_config in databases_config
if database_config.get('name') == database_name
)
except StopIteration:
raise ValueError(
f'A database named "{database_name}" could not be found in the configuration'
)
hostname = connection_params['hostname'] or database.get(
'restore_hostname', database.get('hostname')
)
port = str(connection_params['port'] or database.get('restore_port', database.get('port', '')))
username = connection_params['username'] or database.get(
'restore_username', database.get('username')
)
database = database_config[0]
all_databases = bool(database['name'] == 'all')
dump_filename = dump.make_database_dump_filename(
make_dump_path(location_config), database['name'], database.get('hostname')
make_dump_path(config), database['name'], database.get('hostname')
)
psql_command = database.get('psql_command') or 'psql'
psql_command = shlex.split(database.get('psql_command') or 'psql')
analyze_command = (
(psql_command, '--no-password', '--quiet')
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
tuple(psql_command)
+ ('--no-password', '--no-psqlrc', '--quiet')
+ (('--host', hostname) if hostname else ())
+ (('--port', port) if port else ())
+ (('--username', username) if username else ())
+ (('--dbname', database['name']) if not all_databases else ())
+ (tuple(database['analyze_options'].split(' ')) if 'analyze_options' in database else ())
+ ('--command', 'ANALYZE')
)
pg_restore_command = database.get('pg_restore_command') or 'pg_restore'
use_psql_command = all_databases or database.get('format') == 'plain'
pg_restore_command = shlex.split(database.get('pg_restore_command') or 'pg_restore')
restore_command = (
(psql_command if all_databases else pg_restore_command, '--no-password')
+ (
('--if-exists', '--exit-on-error', '--clean', '--dbname', database['name'])
if not all_databases
else ()
)
+ (('--host', database['hostname']) if 'hostname' in database else ())
+ (('--port', str(database['port'])) if 'port' in database else ())
+ (('--username', database['username']) if 'username' in database else ())
tuple(psql_command if use_psql_command else pg_restore_command)
+ ('--no-password',)
+ (('--no-psqlrc',) if use_psql_command else ('--if-exists', '--exit-on-error', '--clean'))
+ (('--dbname', database['name']) if not all_databases else ())
+ (('--host', hostname) if hostname else ())
+ (('--port', port) if port else ())
+ (('--username', username) if username else ())
+ (('--no-owner',) if database.get('no_owner', False) else ())
+ (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
+ (() if extract_process else (dump_filename,))
+ tuple(
itertools.chain.from_iterable(('--schema', schema) for schema in database['schemas'])
if database.get('schemas')
else ()
)
)
extra_environment = make_extra_environment(
database, restore_connection_params=connection_params
)
extra_environment = make_extra_environment(database)
logger.debug(f"{log_prefix}: Restoring PostgreSQL database {database['name']}{dry_run_label}")
if dry_run:
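
One detail worth calling out in the changes above: `psql_command` and `pg_restore_command` now go through `shlex.split()`, so a configured command can carry its own arguments. A quick sketch (the container wrapper command is a hypothetical example):

```python
import shlex

# Hypothetical configured value that carries its own arguments:
psql_command = shlex.split('docker exec -i my-postgres-container psql')

print(tuple(psql_command) + ('--no-password', '--no-psqlrc', '--quiet'))
# ('docker', 'exec', '-i', 'my-postgres-container', 'psql',
#  '--no-password', '--no-psqlrc', '--quiet')
```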

View File

@ -7,21 +7,21 @@ from borgmatic.hooks import dump
logger = logging.getLogger(__name__)
def make_dump_path(location_config): # pragma: no cover
def make_dump_path(config): # pragma: no cover
'''
Make the dump path from the given location configuration and the name of this hook.
Make the dump path from the given configuration dict and the name of this hook.
'''
return dump.make_database_dump_path(
location_config.get('borgmatic_source_directory'), 'sqlite_databases'
config.get('borgmatic_source_directory'), 'sqlite_databases'
)
def dump_databases(databases, log_prefix, location_config, dry_run):
def dump_databases(databases, config, log_prefix, dry_run):
'''
Dump the given SQLite3 databases to a file. The databases are supplied as a sequence of
configuration dicts, as per the configuration schema. Use the given log prefix in any log
entries. Use the given location configuration dict to construct the destination path. If this
is a dry run, then don't actually dump anything.
configuration dicts, as per the configuration schema. Use the given configuration dict to
construct the destination path and the given log prefix in any log entries. If this is a dry
run, then don't actually dump anything.
'''
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
processes = []
@ -38,7 +38,7 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
f'{log_prefix}: No SQLite database at {database_path}; An empty database will be created and dumped'
)
dump_path = make_dump_path(location_config)
dump_path = make_dump_path(config)
dump_filename = dump.make_database_dump_filename(dump_path, database['name'])
if os.path.exists(dump_filename):
logger.warning(
@ -65,40 +65,50 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
return processes
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
def remove_database_dumps(databases, config, log_prefix, dry_run): # pragma: no cover
'''
Remove the given SQLite3 database dumps from the filesystem. The databases are supplied as a
sequence of configuration dicts, as per the configuration schema. Use the given log prefix in
any log entries. Use the given location configuration dict to construct the destination path.
If this is a dry run, then don't actually remove anything.
sequence of configuration dicts, as per the configuration schema. Use the given configuration
dict to construct the destination path and the given log prefix in any log entries. If this is a
dry run, then don't actually remove anything.
'''
dump.remove_database_dumps(make_dump_path(location_config), 'SQLite', log_prefix, dry_run)
dump.remove_database_dumps(make_dump_path(config), 'SQLite', log_prefix, dry_run)
def make_database_dump_pattern(
databases, log_prefix, location_config, name=None
): # pragma: no cover
def make_database_dump_pattern(databases, config, log_prefix, name=None): # pragma: no cover
'''
Make a pattern that matches the given SQLite3 databases. The databases are supplied as a
sequence of configuration dicts, as per the configuration schema.
'''
return dump.make_database_dump_filename(make_dump_path(location_config), name)
return dump.make_database_dump_filename(make_dump_path(config), name)
def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
def restore_database_dump(
databases_config, config, log_prefix, database_name, dry_run, extract_process, connection_params
):
'''
Restore the given SQLite3 database from an extract stream. The database is supplied as a
one-element sequence containing a dict describing the database, as per the configuration schema.
Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
output to consume.
Restore the given SQLite3 database from an extract stream. The databases are supplied as a
sequence containing one dict describing each database (as per the configuration schema), but
only the database corresponding to the given database name is restored. Use the given log prefix
in any log entries. If this is a dry run, then don't actually restore anything. Trigger the
given active extract process (an instance of subprocess.Popen) to produce output to consume.
'''
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
if len(database_config) != 1:
raise ValueError('The database configuration value is invalid')
try:
database = next(
database_config
for database_config in databases_config
if database_config.get('name') == database_name
)
except StopIteration:
raise ValueError(
f'A database named "{database_name}" could not be found in the configuration'
)
database_path = database_config[0]['path']
database_path = connection_params['restore_path'] or database.get(
'restore_path', database.get('path')
)
logger.debug(f'{log_prefix}: Restoring SQLite database at {database_path}{dry_run_label}')
if dry_run:

View File

@ -68,7 +68,7 @@ class Multi_stream_handler(logging.Handler):
def emit(self, record):
'''
Dispatch the log record to the approriate stream handler for the record's log level.
Dispatch the log record to the appropriate stream handler for the record's log level.
'''
self.log_level_to_handler[record.levelno].emit(record)
@ -141,6 +141,7 @@ def add_logging_level(level_name, level_number):
ANSWER = logging.WARN - 5
DISABLED = logging.CRITICAL + 10
def add_custom_log_levels(): # pragma: no cover
@ -148,6 +149,7 @@ def add_custom_log_levels(): # pragma: no cover
Add a custom log level between WARN and INFO for user-requested answers.
'''
add_logging_level('ANSWER', ANSWER)
add_logging_level('DISABLED', DISABLED)
def configure_logging(
@ -156,6 +158,7 @@ def configure_logging(
log_file_log_level=None,
monitoring_log_level=None,
log_file=None,
log_file_format=None,
):
'''
Configure logging to go to both the console and (syslog or log file). Use the given log levels,
@ -174,10 +177,12 @@ def configure_logging(
# Log certain log levels to console stderr and others to stdout. This supports use cases like
# grepping (non-error) output.
console_disabled = logging.NullHandler()
console_error_handler = logging.StreamHandler(sys.stderr)
console_standard_handler = logging.StreamHandler(sys.stdout)
console_handler = Multi_stream_handler(
{
logging.DISABLED: console_disabled,
logging.CRITICAL: console_error_handler,
logging.ERROR: console_error_handler,
logging.WARN: console_error_handler,
@ -190,7 +195,7 @@ def configure_logging(
console_handler.setLevel(console_log_level)
syslog_path = None
if log_file is None:
if log_file is None and syslog_log_level != logging.DISABLED:
if os.path.exists('/dev/log'):
syslog_path = '/dev/log'
elif os.path.exists('/var/run/syslog'):
@ -200,12 +205,18 @@ def configure_logging(
if syslog_path and not interactive_console():
syslog_handler = logging.handlers.SysLogHandler(address=syslog_path)
syslog_handler.setFormatter(logging.Formatter('borgmatic: %(levelname)s %(message)s'))
syslog_handler.setFormatter(
logging.Formatter('borgmatic: {levelname} {message}', style='{') # noqa: FS003
)
syslog_handler.setLevel(syslog_log_level)
handlers = (console_handler, syslog_handler)
elif log_file:
elif log_file and log_file_log_level != logging.DISABLED:
file_handler = logging.handlers.WatchedFileHandler(log_file)
file_handler.setFormatter(logging.Formatter('[%(asctime)s] %(levelname)s: %(message)s'))
file_handler.setFormatter(
logging.Formatter(
log_file_format or '[{asctime}] {levelname}: {message}', style='{' # noqa: FS003
)
)
file_handler.setLevel(log_file_log_level)
handlers = (console_handler, file_handler)
else:
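
As a rough sketch of the `DISABLED` idea above, using only the standard library (`add_logging_level` itself is borgmatic's helper): a level above `CRITICAL` means no standard record can ever reach the handler.

```python
import logging

DISABLED = logging.CRITICAL + 10  # matches the custom level defined above
logging.addLevelName(DISABLED, 'DISABLED')

handler = logging.StreamHandler()
handler.setLevel(DISABLED)  # no standard level reaches CRITICAL + 10,
                            # so this handler emits nothing
```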

View File

@ -2,6 +2,7 @@ import logging
import borgmatic.logger
VERBOSITY_DISABLED = -2
VERBOSITY_ERROR = -1
VERBOSITY_ANSWER = 0
VERBOSITY_SOME = 1
@ -15,6 +16,7 @@ def verbosity_to_log_level(verbosity):
borgmatic.logger.add_custom_log_levels()
return {
VERBOSITY_DISABLED: logging.DISABLED,
VERBOSITY_ERROR: logging.ERROR,
VERBOSITY_ANSWER: logging.ANSWER,
VERBOSITY_SOME: logging.INFO,
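
A hedged, standalone approximation of how that verbosity-to-level mapping behaves (the `VERBOSITY_LOTS` name for `2` is assumed here, since the diff truncates before it):

```python
import logging

DISABLED = logging.CRITICAL + 10
ANSWER = logging.WARN - 5

VERBOSITY_TO_LEVEL = {
    -2: DISABLED,       # VERBOSITY_DISABLED: silence output entirely
    -1: logging.ERROR,  # VERBOSITY_ERROR
    0: ANSWER,          # VERBOSITY_ANSWER
    1: logging.INFO,    # VERBOSITY_SOME
    2: logging.DEBUG,   # VERBOSITY_LOTS (assumed name)
}

assert VERBOSITY_TO_LEVEL[-2] > logging.CRITICAL  # nothing can get through
```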

View File

@ -1,14 +1,14 @@
FROM alpine:3.17.1 as borgmatic
FROM docker.io/alpine:3.17.1 as borgmatic
COPY . /app
RUN apk add --no-cache py3-pip py3-ruamel.yaml py3-ruamel.yaml.clib
RUN pip install --no-cache /app && generate-borgmatic-config && chmod +r /etc/borgmatic/config.yaml
RUN borgmatic --help > /command-line.txt \
&& for action in rcreate transfer create prune compact check extract export-tar mount umount restore rlist list rinfo info break-lock borg; do \
&& for action in rcreate transfer create prune compact check extract config "config bootstrap" "config generate" "config validate" export-tar mount umount restore rlist list rinfo info break-lock borg; do \
echo -e "\n--------------------------------------------------------------------------------\n" >> /command-line.txt \
&& borgmatic "$action" --help >> /command-line.txt; done
&& borgmatic $action --help >> /command-line.txt; done
FROM node:19.5.0-alpine as html
FROM docker.io/node:19.5.0-alpine as html
ARG ENVIRONMENT=production
@ -28,7 +28,7 @@ COPY . /source
RUN NODE_ENV=${ENVIRONMENT} npx eleventy --input=/source/docs --output=/output/docs \
&& mv /output/docs/index.html /output/index.html
FROM nginx:1.22.1-alpine
FROM docker.io/nginx:1.22.1-alpine
COPY --from=html /output /usr/share/nginx/html
COPY --from=borgmatic /etc/borgmatic/config.yaml /usr/share/nginx/html/docs/reference/config.yaml

View File

@ -16,4 +16,4 @@ each.
If you find a security vulnerability, please [file a
ticket](https://torsion.org/borgmatic/#issues) or [send email
directly](mailto:witten@torsion.org) as appropriate. You should expect to hear
back within a few days at most, and generally sooner.
back within a few days at most and generally sooner.

5
docs/_data/borgmatic.js Normal file
View File

@ -0,0 +1,5 @@
module.exports = function() {
return {
environment: process.env.NODE_ENV || "development"
};
};

View File

@ -1,5 +1,5 @@
<h2>Improve this documentation</h2>
<p>Have an idea on how to make this documentation even better? Use our <a
href="https://projects.torsion.org/borgmatic-collective/borgmatic/issues">issue tracker</a> to send your
feedback!</p>
href="https://torsion.org/borgmatic/#support-and-contributing">issue
tracker</a> to send your feedback!</p>

View File

@ -94,7 +94,7 @@
display: block;
}
/* Footer catgory navigation */
/* Footer category navigation */
.elv-cat-list-active {
font-weight: 600;
}

View File

@ -3,6 +3,7 @@
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="icon" href="docs/static/borgmatic.png" type="image/x-icon">
<title>{{ subtitle + ' - ' if subtitle}}{{ title }}</title>
{%- set css %}
{% include 'index.css' %}

View File

@ -11,7 +11,7 @@ headerClass: elv-header-default
{% set navPages = collections.all | eleventyNavigation %}
{% macro renderNavListItem(entry) -%}
<li{% if entry.url == page.url %} class="elv-toc-active"{% endif %}>
<a {% if entry.url %}href="https://torsion.org/borgmatic/docs{{ entry.url | url }}"{% endif %}>{{ entry.title }}</a>
<a {% if entry.url %}href="{% if borgmatic.environment == "production" %}https://torsion.org/borgmatic/docs{% else %}http://localhost:8080/docs{% endif %}{{ entry.url | url }}"{% endif %}>{{ entry.title }}</a>
{%- if entry.children.length -%}
<ul>
{%- for child in entry.children %}{{ renderNavListItem(child) }}{% endfor -%}

22
docs/docker-compose.yaml Normal file
View File

@ -0,0 +1,22 @@
version: '3'
services:
docs:
image: borgmatic-docs
container_name: docs
ports:
- 8080:80
build:
dockerfile: docs/Dockerfile
context: ..
args:
ENVIRONMENT: development
message:
image: alpine
container_name: message
command:
- sh
- -c
- |
echo; echo "You can view dev docs at http://localhost:8080"; echo
depends_on:
- docs

View File

@ -17,17 +17,38 @@ feature](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
instead.
You can specify `before_backup` hooks to perform preparation steps before
running backups, and specify `after_backup` hooks to perform cleanup steps
running backups and specify `after_backup` hooks to perform cleanup steps
afterwards. Here's an example:
```yaml
hooks:
before_backup:
- mount /some/filesystem
after_backup:
- umount /some/filesystem
before_backup:
- mount /some/filesystem
after_backup:
- umount /some/filesystem
```
If your command contains a special YAML character such as a colon, you may
need to quote the entire string (or use a [multiline
string](https://yaml-multiline.info/)) to avoid an error:
```yaml
before_backup:
- "echo Backup: start"
```
There are additional hooks that run before/after other actions as well. For
instance, `before_prune` runs before a `prune` action for a repository, while
`after_prune` runs after it.
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
these options in the `hooks:` section of your configuration.
<span class="minilink minilink-addedin">New in version 1.7.0</span> The
`before_actions` and `after_actions` hooks run before/after all the actions
(like `create`, `prune`, etc.) for each repository. These hooks are a good
place to run per-repository steps like mounting/unmounting a remote
filesystem.
<span class="minilink minilink-addedin">New in version 1.6.0</span> The
`before_backup` and `after_backup` hooks each run once per repository in a
configuration file. `before_backup` hooks runs right before the `create`
@ -36,16 +57,6 @@ but not if an error occurs in a previous hook or in the backups themselves.
(Prior to borgmatic 1.6.0, these hooks instead ran once per configuration file
rather than once per repository.)
There are additional hooks that run before/after other actions as well. For
instance, `before_prune` runs before a `prune` action for a repository, while
`after_prune` runs after it.
<span class="minilink minilink-addedin">New in version 1.7.0</span> The
`before_actions` and `after_actions` hooks run before/after all the actions
(like `create`, `prune`, etc.) for each repository. These hooks are a good
place to run per-repository steps like mounting/unmounting a remote
filesystem.
## Variable interpolation
@ -54,11 +65,13 @@ variables into the hook command. Here's an example that assumes you provide a
separate shell script:
```yaml
hooks:
after_prune:
- record-prune.sh "{configuration_filename}" "{repository}"
after_prune:
- record-prune.sh "{configuration_filename}" "{repository}"
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
In this example, when the hook is triggered, borgmatic interpolates runtime
values into the hook command: the borgmatic configuration filename and the
paths of the current Borg repository. Here's the full set of supported
@ -66,6 +79,9 @@ variables you can use here:
* `configuration_filename`: borgmatic configuration filename in which the
hook was defined
* `log_file`
<span class="minilink minilink-addedin">New in version 1.7.12</span>:
path of the borgmatic log file, only set when the `--log-file` flag is used
* `repository`: path of the current repository as configured in the current
borgmatic configuration file
@ -79,13 +95,15 @@ You can also use `before_everything` and `after_everything` hooks to perform
global setup or cleanup:
```yaml
hooks:
before_everything:
- set-up-stuff-globally
after_everything:
- clean-up-stuff-globally
before_everything:
- set-up-stuff-globally
after_everything:
- clean-up-stuff-globally
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
these options in the `hooks:` section of your configuration.
`before_everything` hooks collected from all borgmatic configuration files run
once before all configuration files (prior to all actions), but only if there
is a `create` action. An error encountered during a `before_everything` hook
@ -96,6 +114,7 @@ but only if there is a `create` action. It runs even if an error occurs during
a backup or a backup hook, but not if an error occurs during a
`before_everything` hook.
## Error hooks
borgmatic also runs `on_error` hooks if an error occurs, either when creating
@ -103,6 +122,7 @@ a backup or running a backup hook. See the [monitoring and alerting
documentation](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/)
for more information.
## Hook output
Any output produced by your hooks shows up both at the console and in syslog
@ -110,6 +130,7 @@ Any output produced by your hooks shows up both at the console and in syslog
href="https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/">inspecting
your backups</a>.
## Security
An important security note about hooks: borgmatic executes all hook commands

View File

@ -44,24 +44,31 @@ file](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/),
say at `/etc/borgmatic.d/removable.yaml`:
```yaml
location:
source_directories:
- /home
source_directories:
- /home
repositories:
- /mnt/removable/backup.borg
repositories:
- path: /mnt/removable/backup.borg
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
these options in the `location:` section of your configuration.
<span class="minilink minilink-addedin">Prior to version 1.7.10</span> Omit
the `path:` portion of the `repositories` list.
Then, write a `before_backup` hook in that same configuration file that uses
the external `findmnt` utility to see whether the drive is mounted before
proceeding.
```yaml
hooks:
before_backup:
- findmnt /mnt/removable > /dev/null || exit 75
before_backup:
- findmnt /mnt/removable > /dev/null || exit 75
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put this
option in the `hooks:` section of your configuration.
What this does is check if the `findmnt` command errors when probing for a
particular mount point. If it does error, then it returns exit code 75 to
borgmatic. borgmatic logs the soft failure, skips all further actions in that
@ -74,24 +81,21 @@ optionally using `before_actions` instead.
You can imagine a similar check for the sometimes-online server case:
```yaml
location:
source_directories:
- /home
source_directories:
- /home
repositories:
- ssh://me@buddys-server.org/./backup.borg
repositories:
- path: ssh://me@buddys-server.org/./backup.borg
hooks:
before_backup:
- ping -q -c 1 buddys-server.org > /dev/null || exit 75
before_backup:
- ping -q -c 1 buddys-server.org > /dev/null || exit 75
```
Or to only run backups if the battery level is high enough:
```yaml
hooks:
before_backup:
- is_battery_percent_at_least.sh 25
before_backup:
- is_battery_percent_at_least.sh 25
```
(Writing the battery script is left as an exercise to the reader.)
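
For illustration only, here's one way such a script might look in Python, assuming a Linux sysfs battery at `/sys/class/power_supply/BAT0` (both the path and the script itself are hypothetical):

```python
#!/usr/bin/env python3
# Exit 0 when battery capacity meets the threshold given in argv[1]; otherwise
# exit 75 so borgmatic records a soft failure and skips this configuration.
import sys

threshold = int(sys.argv[1])

with open('/sys/class/power_supply/BAT0/capacity') as capacity_file:
    capacity = int(capacity_file.read())

sys.exit(0 if capacity >= threshold else 75)
```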
@ -110,8 +114,8 @@ There are some caveats you should be aware of with this feature.
* You'll generally want to put a soft failure command in the `before_backup`
hook, so as to gate whether the backup action occurs. While a soft failure is
also supported in the `after_backup` hook, returning a soft failure there
won't prevent any actions from occuring, because they've already occurred!
Similiarly, you can return a soft failure from an `on_error` hook, but at
won't prevent any actions from occurring, because they've already occurred!
Similarly, you can return a soft failure from an `on_error` hook, but at
that point it's too late to prevent the error.
* Returning a soft failure does prevent further commands in the same hook from
executing. So, like a standard error, it is an "early out". Unlike a standard

View File

@ -18,31 +18,32 @@ prior to running backups. For example, here is everything you need to dump and
backup a couple of local PostgreSQL databases and a MySQL/MariaDB database.
```yaml
hooks:
postgresql_databases:
- name: users
- name: orders
mysql_databases:
- name: posts
postgresql_databases:
- name: users
- name: orders
mysql_databases:
- name: posts
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
these and other database options in the `hooks:` section of your
configuration.
<span class="minilink minilink-addedin">New in version 1.5.22</span> You can
also dump MongoDB databases. For example:
```yaml
hooks:
mongodb_databases:
- name: messages
mongodb_databases:
- name: messages
```
<span class="minilink minilink-addedin">New in version 1.7.9</span>
Additionally, you can dump SQLite databases. For example:
```yaml
hooks:
sqlite_databases:
- name: mydb
path: /var/lib/sqlite3/mydb.sqlite
sqlite_databases:
- name: mydb
path: /var/lib/sqlite3/mydb.sqlite
```
As part of each backup, borgmatic streams a database dump for each configured
@ -54,7 +55,7 @@ temporary disk space.)
To support this, borgmatic creates temporary named pipes in `~/.borgmatic` by
default. To customize this path, set the `borgmatic_source_directory` option
in the `location` section of borgmatic's configuration.
in borgmatic's configuration.
Also note that using a database hook implicitly enables both the
`read_special` and `one_file_system` configuration settings (even if they're
@ -64,35 +65,34 @@ See Limitations below for more on this.
Here's a more involved example that connects to remote databases:
```yaml
hooks:
postgresql_databases:
- name: users
hostname: database1.example.org
- name: orders
hostname: database2.example.org
port: 5433
username: postgres
password: trustsome1
format: tar
options: "--role=someone"
mysql_databases:
- name: posts
hostname: database3.example.org
port: 3307
username: root
password: trustsome1
options: "--skip-comments"
mongodb_databases:
- name: messages
hostname: database4.example.org
port: 27018
username: dbuser
password: trustsome1
authentication_database: mongousers
options: "--ssl"
sqlite_databases:
- name: mydb
path: /var/lib/sqlite3/mydb.sqlite
postgresql_databases:
- name: users
hostname: database1.example.org
- name: orders
hostname: database2.example.org
port: 5433
username: postgres
password: trustsome1
format: tar
options: "--role=someone"
mysql_databases:
- name: posts
hostname: database3.example.org
port: 3307
username: root
password: trustsome1
options: "--skip-comments"
mongodb_databases:
- name: messages
hostname: database4.example.org
port: 27018
username: dbuser
password: trustsome1
authentication_database: mongousers
options: "--ssl"
sqlite_databases:
- name: mydb
path: /var/lib/sqlite3/mydb.sqlite
```
See your [borgmatic configuration
@ -106,13 +106,12 @@ listing databases, restoring databases, etc.).
If you want to dump all databases on a host, use `all` for the database name:
```yaml
hooks:
postgresql_databases:
- name: all
mysql_databases:
- name: all
mongodb_databases:
- name: all
postgresql_databases:
- name: all
mysql_databases:
- name: all
mongodb_databases:
- name: all
```
Note that you may need to use a `username` of the `postgres` superuser for
@ -120,6 +119,9 @@ this to work with PostgreSQL.
The SQLite hook in particular does not consider "all" a special database name.
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
these options in the `hooks:` section of your configuration.
<span class="minilink minilink-addedin">New in version 1.7.6</span> With
PostgreSQL and MySQL, you can optionally dump "all" databases to separate
files instead of one combined dump file, allowing more convenient restores of
@ -127,37 +129,38 @@ individual databases. Enable this by specifying your desired database dump
`format`:
```yaml
hooks:
postgresql_databases:
- name: all
format: custom
mysql_databases:
- name: all
format: sql
postgresql_databases:
- name: all
format: custom
mysql_databases:
- name: all
format: sql
```
### Containers
If your database is running within a Docker container and borgmatic is too, no
problem—simply configure borgmatic to connect to the container's name on its
exposed port. For instance:
If your database is running within a container and borgmatic is too, no
problem—configure borgmatic to connect to the container's name on its exposed
port. For instance:
```yaml
hooks:
postgresql_databases:
- name: users
hostname: your-database-container-name
port: 5433
username: postgres
password: trustsome1
postgresql_databases:
- name: users
hostname: your-database-container-name
port: 5433
username: postgres
password: trustsome1
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
these options in the `hooks:` section of your configuration.
But what if borgmatic is running on the host? You can still connect to a
database container if its ports are properly exposed to the host. For
instance, when running the database container with Docker, you can specify
`--publish 127.0.0.1:5433:5432` so that it exposes the container's port 5432
to port 5433 on the host (only reachable on localhost, in this case). Or the
same thing with Docker Compose:
instance, when running the database container, you can specify `--publish
127.0.0.1:5433:5432` so that it exposes the container's port 5432 to port 5433
on the host (only reachable on localhost, in this case). Or the same thing
with Docker Compose:
```yaml
services:
@ -179,8 +182,7 @@ hooks:
password: trustsome1
```
Of course, alter the ports in these examples to suit your particular database
system.
Alter the ports in these examples to suit your particular database system.
### No source directories
@ -189,12 +191,14 @@ system.
would like to backup databases only and not source directories, you can omit
`source_directories` entirely.
In older versions of borgmatic, instead specify an empty `source_directories`
value, as it is a mandatory option prior to version 1.7.1:
<span class="minilink minilink-addedin">Prior to version 1.7.1</span> In older
versions of borgmatic, instead specify an empty `source_directories` value, as
it is a mandatory option there:
```yaml
location:
source_directories: []
hooks:
mysql_databases:
- name: all
@ -277,7 +281,8 @@ If you have a single repository in your borgmatic configuration file(s), no
problem: the `restore` action figures out which repository to use.
But if you have multiple repositories configured, then you'll need to specify
the repository path containing the archive to restore. Here's an example:
the repository to use via the `--repository` flag. This can be done either
with the repository's path or its label as configured in your borgmatic configuration file.
```bash
borgmatic restore --repository repo.borg --archive host-2023-...
@ -290,7 +295,7 @@ restore one of them, use the `--database` flag to select one or more
databases. For instance:
```bash
borgmatic restore --archive host-2023-... --database users
borgmatic restore --archive host-2023-... --database users --database orders
```
<span class="minilink minilink-addedin">New in version 1.7.6</span> You can
@ -323,6 +328,17 @@ includes any combined dump file named "all" and any other individual database
dumps found in the archive.
### Restore particular schemas
<span class="minilink minilink-addedin">New in version 1.7.13</span> With
PostgreSQL and MongoDB, you can limit the restore to a single schema found
within the database dump:
```bash
borgmatic restore --archive latest --database users --schema tenant1
```
### Limitations
There are a few important limitations with borgmatic's current database
@ -336,17 +352,19 @@ borgmatic's own configuration file. So include your configuration file in
backups to avoid getting caught without a way to restore a database.
3. borgmatic does not currently support backing up or restoring multiple
databases that share the exact same name on different hosts.
4. Because database hooks implicitly enable the `read_special` configuration
setting to support dump and restore streaming, you'll need to ensure that any
special files are excluded from backups (named pipes, block devices,
character devices, and sockets) to prevent hanging. Try a command like
`find /your/source/path -type b -or -type c -or -type p -or -type s` to find
such files. Common directories to exclude are `/dev` and `/run`, but that may
not be exhaustive. <span class="minilink minilink-addedin">New in version
4. Because database hooks implicitly enable the `read_special` configuration,
any special files are excluded from backups (named pipes, block devices,
character devices, and sockets) to prevent hanging. Try a command like `find
/your/source/path -type b -or -type c -or -type p -or -type s` to find such
files. Common directories to exclude are `/dev` and `/run`, but that may not
be exhaustive. <span class="minilink minilink-addedin">New in version
1.7.3</span> When database hooks are enabled, borgmatic automatically excludes
special files that may cause Borg to hang, so you no longer need to manually
exclude them. (This includes symlinks with special files as a destination.) You
can override/prevent this behavior by explicitly setting `read_special` to true.
special files (and symlinks to special files) that may cause Borg to hang, so
generally you no longer need to manually exclude them. There are potential
edge cases though in which applications on your system create new special files
*after* borgmatic constructs its exclude list, resulting in Borg hangs. If that
occurs, you can resort to the manual excludes described above. And to opt out
of the auto-exclude feature entirely, explicitly set `read_special` to true.
### Manual restoration
@ -385,9 +403,9 @@ dumps with any database system.
With PostgreSQL and MySQL/MariaDB, if you're getting authentication errors
when borgmatic tries to connect to your database, a natural reaction is to
increase your borgmatic verbosity with `--verbosity 2` and go looking in the
logs. You'll notice however that your database password does not show up in
the logs. This is likely not the cause of the authentication problem unless
you mistyped your password, however; borgmatic passes your password to the
logs. You'll notice, though, that your database password does not show up in
the logs. Unless you mistyped your password, this is likely not the cause of
the authentication problem; borgmatic passes your password to the
database via an environment variable that does not appear in the logs.
The cause of an authentication error is often on the database side—in the
@ -396,6 +414,12 @@ authenticated. For instance, with PostgreSQL, check your
[pg_hba.conf](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html)
file for that configuration.
Additionally, MySQL/MariaDB may be picking up some of your credentials from a
defaults file like `~/.my.cnf`. If that's the case, then it's possible
MySQL/MariaDB ends up using, say, a username from borgmatic's configuration
and a password from `~/.my.cnf`. This may result in authentication errors if
this combination of credentials is not what you intend.
### MySQL table lock errors
@ -406,10 +430,9 @@ You can add any additional flags to the `options:` in your database
configuration. Here's an example:
```yaml
hooks:
mysql_databases:
- name: posts
options: "--single-transaction --quick"
mysql_databases:
- name: posts
options: "--single-transaction --quick"
```
### borgmatic hangs during backup

View File

@ -65,19 +65,20 @@ configure borgmatic to run repository checks only. Configure this in the
`consistency` section of borgmatic configuration:
```yaml
consistency:
checks:
- name: repository
checks:
- name: repository
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `consistency:` section of your configuration.
<span class="minilink minilink-addedin">Prior to version 1.6.2</span> The
`checks` option was a plain list of strings without the `name:` part, and
borgmatic ran each configured check every time checks were run. For example:
```yaml
consistency:
checks:
- repository
checks:
- repository
```
@ -95,6 +96,7 @@ See [Borg's check
documentation](https://borgbackup.readthedocs.io/en/stable/usage/check.html)
for more information.
### Check frequency
<span class="minilink minilink-addedin">New in version 1.6.2</span> You can
@ -102,14 +104,16 @@ optionally configure checks to run on a periodic basis rather than every time
borgmatic runs checks. For instance:
```yaml
consistency:
checks:
- name: repository
frequency: 2 weeks
- name: archives
frequency: 1 month
checks:
- name: repository
frequency: 2 weeks
- name: archives
frequency: 1 month
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `consistency:` section of your configuration.
This tells borgmatic to run the `repository` consistency check at most once
every two weeks for a given repository and the `archives` check at most once a
month. The `frequency` value is a number followed by a unit of time, e.g. "3
@ -136,6 +140,23 @@ If you want to temporarily ignore your configured frequencies, you can invoke
`borgmatic check --force` to run checks unconditionally.
### Running only checks
<span class="minilink minilink-addedin">New in version 1.7.1</span> If you
would like to only run consistency checks without creating backups (for
instance with the `check` action on the command-line), you can omit
the `source_directories` option entirely.
<span class="minilink minilink-addedin">Prior to version 1.7.1</span> In older
versions of borgmatic, instead specify an empty `source_directories` value, as
it is a mandatory option there:
```yaml
location:
source_directories: []
```
### Disabling checks
If that's still too slow, you can disable consistency checks entirely,
@ -144,18 +165,19 @@ either for a single repository or for all repositories.
Disabling all consistency checks looks like this:
```yaml
consistency:
checks:
- name: disabled
checks:
- name: disabled
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `consistency:` section of your configuration.
<span class="minilink minilink-addedin">Prior to version 1.6.2</span> `checks`
was a plain list of strings without the `name:` part. For instance:
```yaml
consistency:
checks:
- disabled
checks:
- disabled
```
If you have multiple repositories in your borgmatic configuration file,
@ -163,12 +185,11 @@ you can keep running consistency checks, but only against a subset of the
repositories:
```yaml
consistency:
check_repositories:
- path/of/repository_to_check.borg
check_repositories:
- path/of/repository_to_check.borg
```
Finally, you can override your configuration file's consistency checks, and
Finally, you can override your configuration file's consistency checks and
run particular checks via the command-line. For instance:
```bash

View File

@ -7,7 +7,7 @@ eleventyNavigation:
---
## Source code
To get set up to hack on borgmatic, first clone master via HTTPS or SSH:
To get set up to develop on borgmatic, first clone it via HTTPS or SSH:
```bash
git clone https://projects.torsion.org/borgmatic-collective/borgmatic.git
@ -21,11 +21,11 @@ git clone ssh://git@projects.torsion.org:3022/borgmatic-collective/borgmatic.git
Then, install borgmatic
"[editable](https://pip.pypa.io/en/stable/cli/pip_install/#editable-installs)"
so that you can run borgmatic commands while you're hacking on them to
make sure your changes work.
so that you can run borgmatic actions during development to make sure your
changes work.
```bash
cd borgmatic/
cd borgmatic
pip3 install --user --editable .
```
@ -51,7 +51,6 @@ pip3 install --user tox
Finally, to actually run tests, run:
```bash
cd borgmatic
tox
```
@ -74,24 +73,58 @@ can ask isort to order your imports for you:
tox -e isort
```
Similarly, if you get errors about spelling mistakes in source code, you can
ask [codespell](https://github.com/codespell-project/codespell) to correct
them:
```bash
tox -e codespell
```
### End-to-end tests
borgmatic additionally includes some end-to-end tests that integration test
with Borg and supported databases for a few representative scenarios. These
tests don't run by default when running `tox`, because they're relatively slow
and depend on Docker containers for runtime dependencies. These tests tests do
run on the continuous integration (CI) server, and running them on your
developer machine is the closest thing to CI test parity.
and depend on containers for runtime dependencies. These tests do run on the
continuous integration (CI) server, and running them on your developer machine
is the closest thing to CI-test parity.
If you would like to run the full test suite, first install Docker and [Docker
Compose](https://docs.docker.com/compose/install/). Then run:
If you would like to run the full test suite, first install Docker (or Podman;
see below) and [Docker Compose](https://docs.docker.com/compose/install/).
Then run:
```bash
scripts/run-end-to-end-dev-tests
```
Note that this scripts assumes you have permission to run Docker. If you
don't, then you may need to run with `sudo`.
This script assumes you have permission to run `docker`. If you don't, then
you may need to run with `sudo`.
#### Podman
<span class="minilink minilink-addedin">New in version 1.7.12</span>
borgmatic's end-to-end tests optionally support using
[rootless](https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md)
[Podman](https://podman.io/) instead of Docker.
Setting up Podman is outside the scope of this documentation, but here are
some key points to double-check:
* Install Podman and your desired networking support.
* Configure `/etc/subuid` and `/etc/subgid` to map users/groups for the
non-root user who will run tests.
* Create a non-root Podman socket for that user:
```bash
systemctl --user enable --now podman.socket
systemctl --user start --now podman.socket
```
Then you'll be able to run end-to-end tests as per normal, and the test script
will automatically use your non-root Podman socket instead of a Docker socket.
## Code style
@ -101,10 +134,10 @@ the following deviations from it:
* For strings, prefer single quotes over double quotes.
* Limit all lines to a maximum of 100 characters.
* Use trailing commas within multiline values or argument lists.
* For multiline constructs, put opening and closing delimeters on lines
* For multiline constructs, put opening and closing delimiters on lines
separate from their contents.
* Within multiline constructs, use standard four-space indentation. Don't align
indentation with an opening delimeter.
indentation with an opening delimiter.
borgmatic code uses the [Black](https://black.readthedocs.io/en/stable/) code
formatter, the [Flake8](http://flake8.pycqa.org/en/latest/) code checker, and
@ -116,8 +149,8 @@ See the Black, Flake8, and isort documentation for more information.
Each pull request triggers a continuous integration build which runs the test
suite. You can view these builds on
[build.torsion.org](https://build.torsion.org/borgmatic-collective/borgmatic), and they're
also linked from the commits list on each pull request.
[build.torsion.org](https://build.torsion.org/borgmatic-collective/borgmatic),
and they're also linked from the commits list on each pull request.
## Documentation development
@ -129,11 +162,12 @@ To build and view a copy of the documentation with your local changes, run the
following from the root of borgmatic's source code:
```bash
sudo scripts/dev-docs
scripts/dev-docs
```
This requires Docker to be installed on your system. You may not need to use
sudo if your non-root user has permissions to run Docker.
This requires Docker (or Podman; see below) to be installed on your system.
This script assumes you have permission to run `docker`. If you don't, then
you may need to run with `sudo`.
After you run the script, you can point your web browser at
http://localhost:8080 to view the documentation with your changes.
@ -141,3 +175,15 @@ http://localhost:8080 to view the documentation with your changes.
To close the documentation server, ctrl-C the script. Note that it does not
currently auto-reload, so you'll need to stop it and re-run it for any
additional documentation changes to take effect.
#### Podman
<span class="minilink minilink-addedin">New in version 1.7.12</span>
borgmatic's developer build for documentation optionally supports using
[rootless](https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md)
[Podman](https://podman.io/) instead of Docker.
Setting up Podman is outside the scope of this documentation. But once you
install and configure Podman, then `scripts/dev-docs` should automatically use
Podman instead of Docker.

View File

@ -51,7 +51,8 @@ If you have a single repository in your borgmatic configuration file(s), no
problem: the `extract` action figures out which repository to use.
But if you have multiple repositories configured, then you'll need to specify
the repository to use via the `--repository` flag. This can be done either
with the repository's path or its label as configured in your borgmatic
configuration file. Here's an example:
```bash
borgmatic extract --repository repo.borg --archive host-2023-...
```
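Or, if that repository were configured with a hypothetical `label: remote` in
your borgmatic configuration file, you could select it by label instead:

```bash
borgmatic extract --repository remote --archive host-2023-...
```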
@ -64,7 +65,7 @@ everything from an archive. To do that, tack on one or more `--path` values.
For instance:
```bash
borgmatic extract --archive latest --path path/1 --path path/2
```
Note that the specified restore paths should not have a leading slash. Like a

View File

@ -1,5 +1,6 @@
---
eleventyNavigation:
    key: How-to guides
    order: 1
permalink: false
---

View File

@ -24,6 +24,15 @@ Or, for even more progress and debug spew:
borgmatic --verbosity 2
```
The full set of verbosity levels is:
* `-2`: disable output entirely <span class="minilink minilink-addedin">New in borgmatic 1.7.14</span>
* `-1`: only show errors
* `0`: default output
* `1`: some additional output (informational level)
* `2`: lots of additional output (debug level)
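For example, to run a backup showing only errors at the console (a quick
illustration):

```bash
borgmatic create --verbosity -1
```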
## Backup summary
If you're less concerned with progress during a backup, and you only want to
@ -51,7 +60,7 @@ with `--format`. Refer to the [borg list --format
documentation](https://borgbackup.readthedocs.io/en/stable/usage/list.html#the-format-specifier-syntax)
for available values.
(No borgmatic `list` or `info` actions? Upgrade borgmatic!)
<span class="minilink minilink-addedin">New in borgmatic version 1.7.0</span>
There are also `rlist` and `rinfo` actions for displaying repository
@ -111,7 +120,7 @@ By default, borgmatic logs to a local syslog-compatible daemon if one is
present and borgmatic is running in a non-interactive console. Where those
logs show up depends on your particular system. If you're using systemd, try
running `journalctl -xe`. Otherwise, try viewing `/var/log/syslog` or
similar.
You can customize the log level used for syslog logging with the
`--syslog-verbosity` flag, and this is independent from the console logging
@ -154,5 +163,39 @@ borgmatic --log-file /path/to/file.log
Note that if you use the `--log-file` flag, you are responsible for rotating
the log file so it doesn't grow too large, for example with
[logrotate](https://wiki.archlinux.org/index.php/Logrotate).

You can use the `--log-file-verbosity` flag to customize the log file's log level:
```bash
borgmatic --log-file /path/to/file.log --log-file-verbosity 2
```
<span class="minilink minilink-addedin">New in version 1.7.11</span> Use the
`--log-file-format` flag to override the default log message format. This
format string can contain a series of named placeholders wrapped in curly
brackets. For instance, the default log format is: `[{asctime}] {levelname}:
{message}`. This means each log message is recorded as the log time (in square
brackets), a logging level name, a colon, and the actual log message.
So if you only want each log message to get logged *without* a timestamp or a
logging level name:
```bash
borgmatic --log-file /path/to/file.log --log-file-format "{message}"
```
Here is a list of available placeholders:
* `{asctime}`: time the log message was created
* `{levelname}`: level of the log message (`INFO`, `DEBUG`, etc.)
* `{lineno}`: line number in the source file where the log message originated
* `{message}`: actual log message
* `{pathname}`: path of the source file where the log message originated
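For example, here's a sketch that prefixes each message with where it
originated in borgmatic's source, using the placeholders above:

```bash
borgmatic --log-file /path/to/file.log --log-file-format "{pathname}:{lineno} {levelname}: {message}"
```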
See the [Python logging
documentation](https://docs.python.org/3/library/logging.html#logrecord-attributes)
for additional placeholders.
Note that this `--log-file-format` flag only applies to the specified
`--log-file` and not to syslog or other logging.

View File

@ -12,19 +12,23 @@ it. borgmatic supports this in its configuration by specifying multiple backup
repositories. Here's an example:
```yaml
# List of source directories to backup.
source_directories:
    - /home
    - /etc

# Paths of local or remote repositories to backup to.
repositories:
    - path: ssh://k8pDxu32@k8pDxu32.repo.borgbase.com/./repo
    - path: /var/lib/backups/local.borg
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
these options in the `location:` section of your configuration.
<span class="minilink minilink-addedin">Prior to version 1.7.10</span> Omit
the `path:` portion of the `repositories` list.
When you run borgmatic with this configuration, it invokes Borg once for each
configured repository in sequence. (So, not in parallel.) That means—in each
repository—borgmatic creates a single new backup archive containing all of
@ -32,9 +36,8 @@ your source directories.
Here's a way of visualizing what borgmatic does with the above configuration:
1. Backup `/home` and `/etc` to `k8pDxu32@k8pDxu32.repo.borgbase.com:repo`
2. Backup `/home` and `/etc` to `/var/lib/backups/local.borg`
This gives you redundancy of your data across repositories and even
potentially across providers.

View File

@ -20,18 +20,22 @@ instance, for applications:
```bash
sudo mkdir /etc/borgmatic.d
sudo borgmatic config generate --destination /etc/borgmatic.d/app1.yaml
sudo borgmatic config generate --destination /etc/borgmatic.d/app2.yaml
```
Or, for repositories:
```bash
sudo mkdir /etc/borgmatic.d
sudo borgmatic config generate --destination /etc/borgmatic.d/repo1.yaml
sudo borgmatic config generate --destination /etc/borgmatic.d/repo2.yaml
```
<span class="minilink minilink-addedin">Prior to version 1.7.15</span> The
command to generate configuration files was `generate-borgmatic-config`
instead of `borgmatic config generate`.
When you set up multiple configuration files like this, borgmatic will run
each one in turn from a single borgmatic invocation. This includes, by
default, the traditional `/etc/borgmatic/config.yaml` as well.
@ -54,31 +58,126 @@ choice](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#autopilot),
each entry using borgmatic's `--config` flag instead of relying on
`/etc/borgmatic.d`.
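For instance, here's a sketch of what one such scheduled command might look
like, using a hypothetical configuration path:

```bash
borgmatic --config /etc/borgmatic.d/app1.yaml --verbosity 1
```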
## Archive naming
If you've got multiple borgmatic configuration files, you might want to create
archives with different naming schemes for each one. This is especially handy
if each configuration file is backing up to the same Borg repository but you
still want to be able to distinguish backup archives for one application from
another.
borgmatic supports this use case with an `archive_name_format` option. The
idea is that you define a string format containing a number of [Borg
placeholders](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-placeholders),
and borgmatic uses that format to name any new archive it creates. For
instance:
```yaml
archive_name_format: home-directories-{now}
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `storage:` section of your configuration.
This example means that when borgmatic creates an archive, its name will start
with the string `home-directories-` and end with a timestamp for its creation
time. If `archive_name_format` is unspecified, the default is
`{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}`, meaning your system hostname plus a
timestamp in a particular format.
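For example, two application-specific configuration files (with hypothetical
names) might each get their own format:

```yaml
# In a hypothetical /etc/borgmatic.d/app1.yaml:
archive_name_format: app1-{now}

# And in a hypothetical /etc/borgmatic.d/app2.yaml:
archive_name_format: app2-{now}
```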
### Archive filtering
<span class="minilink minilink-addedin">New in version 1.7.11</span> borgmatic
uses the `archive_name_format` option to automatically limit which archives
get used for actions operating on multiple archives. This prevents, for
instance, duplicate archives from showing up in `rlist` or `info` results—even
if the same repository appears in multiple borgmatic configuration files. To
take advantage of this feature, use a different `archive_name_format` in each
configuration file.
Under the hood, borgmatic accomplishes this by substituting globs for certain
ephemeral data placeholders in your `archive_name_format`—and using the result
to filter archives when running supported actions.
For instance, let's say that you have this in your configuration:
```yaml
archive_name_format: '{hostname}-user-data-{now}'
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `storage:` section of your configuration.
borgmatic considers `{now}` an ephemeral data placeholder that will probably
change per archive, while `{hostname}` won't. So it turns the example value
into `{hostname}-user-data-*` and applies it to filter down the set of
archives used for actions like `rlist`, `info`, `prune`, `check`, etc.
The end result is that when borgmatic runs the actions for a particular
application-specific configuration file, it only operates on the archives
created for that application. But this doesn't apply to actions like `compact`
that operate on an entire repository.
If this behavior isn't quite smart enough for your needs, you can use the
`match_archives` option to override the pattern that borgmatic uses for
filtering archives. For example:
```yaml
archive_name_format: '{hostname}-user-data-{now}'
match_archives: sh:myhost-user-data-*
```
For Borg 1.x, use a shell pattern for the `match_archives` value and see the
[Borg patterns
documentation](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-help-patterns)
for more information. For Borg 2.x, see the [match archives
documentation](https://borgbackup.readthedocs.io/en/2.0.0b5/usage/help.html#borg-help-match-archives).
Some borgmatic command-line actions also have a `--match-archives` flag that
overrides both the auto-matching behavior and the `match_archives`
configuration option.
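For instance, here's a sketch of overriding the match at the command-line:

```bash
borgmatic rlist --match-archives 'sh:myhost-user-data-*'
```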
<span class="minilink minilink-addedin">Prior to 1.7.11</span> The way to
limit the archives used for the `prune` action was a `prefix` option in the
`retention` section for matching against the start of archive names. And the
option for limiting the archives used for the `check` action was a separate
`prefix` in the `consistency` section. Both of these options are deprecated in
favor of the auto-matching behavior (or `match_archives`/`--match-archives`)
in newer versions of borgmatic.
## Configuration includes
Once you have multiple different configuration files, you might want to share
common configuration options across these files without having to copy and
paste them. To achieve this, you can put fragments of common configuration
options into a file and then include or inline that file into one or more
borgmatic configuration files.
Let's say that you want to include common consistency check configuration across all
of your configuration files. You could do that in each configuration file with
the following:
```yaml
repositories:
    - path: repo.borg

checks:
    !include /etc/borgmatic/common_checks.yaml
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> These
options were organized into sections like `location:` and `consistency:`.

The contents of `common_checks.yaml` could be:
```yaml
- name: repository
  frequency: 3 weeks
- name: archives
  frequency: 2 weeks
```
To prevent borgmatic from trying to load these configuration fragments by
@ -90,18 +189,18 @@ When a configuration include is a relative path, borgmatic loads it from either
the current working directory or from the directory containing the file doing
the including.
Note that this form of include must be a value rather than an option name. For
example, this will not work:
```yaml
repositories:
    - path: repo.borg

# Don't do this. It won't work!
!include /etc/borgmatic/common_checks.yaml
```
But if you do want to merge in an option name *and* its values, keep reading!
## Include merging
@ -109,45 +208,43 @@ But if you do want to merge in a YAML key *and* its values, keep reading!
If you need to get even fancier and merge in common configuration options, you
can perform a YAML merge of included configuration using the YAML `<<` key.
For instance, here's an example of a main configuration file that pulls in
retention and consistency check options via a single include:
```yaml
<<: !include /etc/borgmatic/common.yaml

repositories:
    - path: repo.borg
```
This is what `common.yaml` might look like:
```yaml
keep_hourly: 24
keep_daily: 7

checks:
    - name: repository
      frequency: 3 weeks
    - name: archives
      frequency: 2 weeks
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> These
options were organized into sections like `retention:` and `consistency:`.

Once this include gets merged in, the resulting configuration would have all
of the options from the original configuration file *and* the options from the
include.
Note that this `<<` include merging syntax is only for merging in mappings
(configuration options and their values). But if you'd like to include a
single value directly, please see the section above about standard includes.
Additionally, there is a limitation preventing multiple `<<` include merges
per file or option value. So for instance, that means you can do one `<<`
merge at the global level, another `<<` within each nested option value, etc.
(This is a YAML limitation.)
### Deep merge
@ -158,36 +255,176 @@ at all levels in the two configuration files. This allows you to include
common configuration—up to full borgmatic configuration files—while overriding
only the parts you want to customize.
For instance, here's an example of a main configuration file that pulls in
options via an include and then overrides one of them locally:
```yaml
<<: !include /etc/borgmatic/common.yaml

constants:
    hostname: myhostname

repositories:
    - path: repo.borg
```
This is what `common.yaml` might look like:
```yaml
constants:
    prefix: myprefix
    hostname: otherhost
```
Once this include gets merged in, the resulting configuration would have a
`prefix` value of `myprefix` and an overridden `hostname` value of
`myhostname`.
When there's an option collision between the local file and the merged
include, the local file's option takes precedence.
#### List merge
<span class="minilink minilink-addedin">New in version 1.6.1</span> Colliding
list values are appended together.
<span class="minilink minilink-addedin">New in version 1.7.12</span> If there
is a list value from an include that you *don't* want in your local
configuration file, you can omit it with an `!omit` tag. For instance:
```yaml
<<: !include /etc/borgmatic/common.yaml

source_directories:
    - !omit /home
    - /var
```
And `common.yaml` like this:
```yaml
source_directories:
    - /home
    - /etc
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `location:` section of your configuration.
Once this include gets merged in, the resulting configuration will have a
`source_directories` value of `/etc` and `/var`—with `/home` omitted.
This feature currently only works on scalar (e.g. string or number) list items
and will not work elsewhere in a configuration file. Be sure to put the
`!omit` tag *before* the list item (after the dash). Putting `!omit` after the
list item will not work, as it gets interpreted as part of the string. Here's
an example of some things not to do:
```yaml
<<: !include /etc/borgmatic/common.yaml

source_directories:
    # Do not do this! It will not work. "!omit" belongs before "/home".
    - /home !omit

# Do not do this either! "!omit" only works on scalar list items.
repositories: !omit
    # Also do not do this for the same reason! This is a list item, but it's
    # not a scalar.
    - !omit path: repo.borg
```
Additionally, the `!omit` tag only works in a configuration file that also
performs a merge include with `<<: !include`. It doesn't make sense within,
for instance, an included configuration file itself (unless it in turn
performs its own merge include). That's because `!omit` only applies to the
file doing the include; it doesn't work in reverse or propagate through
includes.
### Shallow merge
Even though deep merging is generally pretty handy for included files,
sometimes you want specific options in the local file to take precedence over
included options—without any merging occurring for them.
<span class="minilink minilink-addedin">New in version 1.7.12</span> That's
where the `!retain` tag comes in. Whenever you're merging an included file
into your configuration file, you can optionally add the `!retain` tag to
particular local mappings or lists to retain the local values and ignore
included values.
For instance, start with this configuration file containing the `!retain` tag
on the `checks` list:
```yaml
<<: !include /etc/borgmatic/common.yaml

repositories:
    - path: repo.borg

checks: !retain
    - name: repository
```
And `common.yaml` like this:
```yaml
repositories:
    - path: common.borg

checks:
    - name: archives
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> These
options were organized into sections like `location:` and `consistency:`.
Once this include gets merged in, the resulting configuration will have a
`checks` value with a name of `repository` and no other values. That's because
the `!retain` tag says to retain the local version of `checks` and ignore any
values coming in from the include. But because the `repositories` list doesn't
have a `!retain` tag, it still gets merged together to contain both
`common.borg` and `repo.borg`.
The `!retain` tag can only be placed on mappings (keys/values) and lists, and
it goes right after the name of the option (and its colon) on the same line.
The effects of `!retain` are recursive, meaning that if you place a `!retain`
tag on a top-level mapping, even deeply nested values within it will not be
merged.
Additionally, the `!retain` tag only works in a configuration file that also
performs a merge include with `<<: !include`. It doesn't make sense within,
for instance, an included configuration file itself (unless it in turn
performs its own merge include). That's because `!retain` only applies to the
file doing the include; it doesn't work in reverse or propagate through
includes.
## Debugging includes
<span class="minilink minilink-addedin">New in version 1.7.15</span> If you'd
like to see what the loaded configuration looks like after includes get merged
in, run the `validate` action on your configuration file:
```bash
sudo borgmatic config validate --show
```
<span class="minilink minilink-addedin">In version 1.7.12 through
1.7.14</span> Use this command instead:
```bash
sudo validate-borgmatic-config --show
```
You'll need to specify your configuration file with `--config` if it's not in
a default location.
This will output the merged configuration as borgmatic sees it, which can be
helpful for understanding how your includes work in practice.
## Configuration overrides
@ -202,43 +439,41 @@ Whatever the reason, you can override borgmatic configuration options at the
command-line via the `--override` flag. Here's an example:
```bash
borgmatic create --override remote_path=/usr/local/bin/borg1
```
What this does is load your configuration files and for each one, disregard
the configured value for the `remote_path` option and use the value of
`/usr/local/bin/borg1` instead.
You can even override nested values or multiple values at once. For instance:
```bash
borgmatic create --override parent_option.option1=value1 --override parent_option.option2=value2
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Don't
forget to specify the section that an option is in. That looks like a prefix
on the option name, e.g. `location.repositories`.
Note that each value is parsed as an actual YAML string, so you can even set
list values by using brackets. For instance:
```bash
borgmatic create --override repositories=[test1.borg,test2.borg]
```
Or even a single list element:
```bash
borgmatic create --override repositories=[/root/test.borg]
```
If your override value contains special YAML characters like colons, then
you'll need quotes for it to parse correctly:
```bash
borgmatic create --override location.repositories="['user@server:test.borg']"
borgmatic create --override repositories="['user@server:test.borg']"
```
There is not currently a way to override a single element of a list without
@ -254,7 +489,9 @@ indentation and a leading dash.)
Be sure to quote your overrides if they contain spaces or other characters
that your shell may interpret.
An alternate to command-line overrides is passing in your values via
[environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
## Constant interpolation
@ -272,45 +509,35 @@ Here's an example usage:
```yaml
constants:
    user: foo
    archive_prefix: bar

source_directories:
    - /home/{user}/.config
    - /home/{user}/.ssh

archive_name_format: '{archive_prefix}-{now}'
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Don't
forget to specify the section (like `location:` or `storage:`) that any option
is in.
In this example, when borgmatic runs, all instances of `{user}` get replaced
with `foo` and all instances of `{archive_prefix}` get replaced with `bar`.
(And in this particular example, `{now}` doesn't get replaced with anything,
but gets passed directly to Borg.) After substitution, the logical result
looks something like this:
```yaml
source_directories:
    - /home/foo/.config
    - /home/foo/.ssh

archive_name_format: 'bar-{now}'
```
An alternate to constants is passing in your values via [environment

View File

@ -73,7 +73,7 @@ from borgmatic for a configured interval.
### Consistency checks
While not strictly part of monitoring, if you want confidence that your
backups are not only running but are restorable as well, you can configure
particular [consistency
checks](https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/#consistency-check-configuration)
@ -89,23 +89,24 @@ notifications or take other actions, so you can get alerted as soon as
something goes wrong. Here's a not-so-useful example:
```yaml
on_error:
    - echo "Error while creating a backup or running a backup hook."
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
The `on_error` hook supports interpolating particular runtime variables into
the hook command. Here's an example that assumes you provide a separate shell
script to handle the alerting:
```yaml
on_error:
    - send-text-message.sh "{configuration_filename}" "{repository}"
```
In this example, when the error occurs, borgmatic interpolates runtime values
into the hook command: the borgmatic configuration filename and the path of
the repository. Here's the full set of supported variables you can use here:
* `configuration_filename`: borgmatic configuration filename in which the
@ -117,7 +118,7 @@ the repository. Here's the full set of supported variables you can use here:
occurred without running a command)
Note that borgmatic runs the `on_error` hooks only for `create`, `prune`,
`compact`, or `check` actions/hooks in which an error occurs and not other
actions. borgmatic does not run `on_error` hooks if an error occurs within a
`before_everything` or `after_everything` hook. For more about hooks, see the
[borgmatic hooks
@ -135,11 +136,13 @@ URL" for your project. Here's an example:
```yaml
healthchecks:
    ping_url: https://hc-ping.com/addffa72-da17-40ae-be9c-ff591afb942a
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
With this hook in place, borgmatic pings your Healthchecks project when a
backup begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
@ -147,7 +150,7 @@ hooks</a> run, borgmatic lets Healthchecks know that it has started if any of
the `create`, `prune`, `compact`, or `check` actions are run.
Then, if the actions complete successfully, borgmatic notifies Healthchecks of
the success after the `after_backup` hooks run and includes borgmatic logs in
the payload data sent to Healthchecks. This means that borgmatic logs show up
in the Healthchecks UI, although be aware that Healthchecks currently has a
10-kilobyte limit for the logs in each ping.
@ -179,11 +182,13 @@ API URL" for your monitor. Here's an example:
```yaml
cronitor:
    ping_url: https://cronitor.link/d3x0c1
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
With this hook in place, borgmatic pings your Cronitor monitor when a backup
begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
@ -208,11 +213,13 @@ URL" for your monitor. Here's an example:
```yaml
cronhub:
    ping_url: https://cronhub.io/start/1f5e3410-254c-11e8-b61d-55875966d031
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
With this hook in place, borgmatic pings your Cronhub monitor when a backup
begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
@ -251,11 +258,13 @@ Here's an example:
```yaml
pagerduty:
    integration_key: a177cad45bd374409f78906a810a3074
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
With this hook in place, borgmatic creates a PagerDuty event for your service
whenever backups fail. Specifically, if an error occurs during a `create`,
`prune`, `compact`, or `check` action, borgmatic sends an event to PagerDuty
@ -291,31 +300,34 @@ An example configuration is shown here, with all the available options, includin
[tags](https://ntfy.sh/docs/publish/#tags-emojis):
```yaml
ntfy:
    topic: my-unique-topic
    server: https://ntfy.my-domain.com
    start:
        title: A borgmatic backup started
        message: Watch this space...
        tags: borgmatic
        priority: min
    finish:
        title: A borgmatic backup completed successfully
        message: Nice!
        tags: borgmatic,+1
        priority: min
    fail:
        title: A borgmatic backup failed
        message: You should probably fix it
        tags: borgmatic,-1,skull
        priority: max
    states:
        - start
        - finish
        - fail
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
the `ntfy:` option in the `hooks:` section of your configuration.
## Scripting borgmatic
To consume the output of borgmatic in other software, you can include an
@ -324,7 +336,7 @@ output formatted as JSON.
Note that when you specify the `--json` flag, Borg's other non-JSON output is
suppressed so as not to interfere with the captured JSON. Also note that JSON
output only shows up at the console and not in syslog.
### Latest backups

View File

@ -20,10 +20,12 @@ pull your repository passphrase, your database passwords, or any other option
values from environment variables. For instance:
```yaml
encryption_passphrase: ${MY_PASSPHRASE}
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `storage:` section of your configuration.
This uses the `MY_PASSPHRASE` environment variable as your encryption
passphrase. Note that the `{` `}` brackets are required. `$MY_PASSPHRASE` by
itself will not work.
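For instance, one way to provide that variable for a single run (just a
sketch; in practice you might set it via your scheduler or service manager
instead):

```bash
MY_PASSPHRASE='your passphrase here' borgmatic create
```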
@ -38,12 +40,14 @@ configuration](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
the same approach applies. For example:
```yaml
postgresql_databases:
    - name: users
      password: ${MY_DATABASE_PASSWORD}
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.
This uses the `MY_DATABASE_PASSWORD` environment variable as your database
password.
@ -53,8 +57,7 @@ password.
If you'd like to set a default for your environment variables, you can do so with the following syntax:
```yaml
encryption_passphrase: ${MY_PASSPHRASE:-defaultpass}
```
Here, "`defaultpass`" is the default passphrase if the `MY_PASSPHRASE`
@ -72,8 +75,7 @@ can escape it with a backslash. For instance, if your password is literally
`${A}@!`:
```yaml
encryption_passphrase: \${A}@!
```
### Related features

View File

@ -7,7 +7,7 @@ eleventyNavigation:
---
## Running Borg with borgmatic
Borg has several commands and options that borgmatic does not currently
support. Sometimes though, as a borgmatic user, you may find yourself wanting
to take advantage of these off-the-beaten-path Borg features. You could of
course drop down to running Borg directly. But then you'd give up all the
@ -17,11 +17,11 @@ request](https://torsion.org/borgmatic/#contributing) to add the feature. But
what if you need it *now*?
That's where borgmatic's support for running "arbitrary" Borg commands comes
in. Running these Borg commands with borgmatic can take advantage of the
following, all based on your borgmatic configuration files or command-line
arguments:
* configured repositories, running your Borg command once for each one
* local and remote Borg binary paths
* SSH settings and Borg environment variables
* lock wait settings
@ -37,32 +37,82 @@ you run Borg with borgmatic is via the `borg` action. Here's a simple example:
borgmatic borg break-lock
```
(No `borg` action in borgmatic? Time to upgrade!)
This runs Borg's `break-lock` command once on each configured borgmatic
repository. Notice how the repository isn't present in the specified Borg
options, as that part is provided by borgmatic.
You can also specify Borg options for relevant commands. For instance:
```bash
borgmatic borg rlist --short
```
This runs Borg's `rlist` command once on each configured borgmatic repository.
(The native `borgmatic rlist` action should be preferred for most uses.)
What if you only want to run Borg on a single configured borgmatic repository
when you've got several configured? Not a problem. The `--repository` argument
lets you specify the repository to use, either by its path or its label:
```bash
borgmatic borg --repository repo.borg break-lock
```
And if you need to specify where the repository goes in the command because
there are positional arguments after it:
```bash
borgmatic borg debug dump-manifest :: root
```
The `::` is a Borg placeholder that means: Substitute the repository passed in
by environment variable here.
<span class="minilink minilink-addedin">Prior to version 1.8.0</span>borgmatic
attempted to inject the repository name directly into your Borg arguments in
the right place (which didn't always work). So your command-line in these
older versions didn't support the `::`
### Specifying an archive
For Borg commands that expect an archive name, you have a few approaches.
Here's one:
```bash
borgmatic borg --archive latest list '::$ARCHIVE'
```
The single quotes are necessary in order to pass in a literal `$ARCHIVE`
string instead of trying to resolve it from borgmatic's shell where it's not
yet set.
Or if you don't need borgmatic to resolve an archive name like `latest`, you
can just do:
```bash
borgmatic borg list ::your-actual-archive-name
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span>borgmatic
provided the archive name implicitly along with the repository, attempting to
inject it into your Borg arguments in the right place (which didn't always
work). So your command-line in these older versions of borgmatic looked more
like:
```bash
borgmatic borg --archive latest list
```
<span class="minilink minilink-addedin">With Borg version 2.x</span> Either of
these will list an archive:
```bash
borgmatic borg --archive latest list '$ARCHIVE'
```
```bash
borgmatic borg list your-actual-archive-name
```
### Limitations
@ -70,14 +120,10 @@ borgmatic borg --archive your-archive-name rlist
borgmatic's `borg` action is not without limitations:
* The Borg command you want to run (`create`, `list`, etc.) *must* come first
after the `borg` action (and any borgmatic-specific arguments). If you have
other Borg options to specify, provide them after. For instance,
`borgmatic borg list --progress ...` will work, but
`borgmatic borg --progress list ...` will not.
* Do not specify any global borgmatic arguments to the right of the `borg`
action. (They will be passed to Borg instead of borgmatic.) If you have
global borgmatic arguments, specify them *before* the `borg` action.
@ -87,9 +133,19 @@ borgmatic's `borg` action is not without limitations:
borgmatic action. In this case, only the Borg command is run.
* Unlike normal borgmatic actions that support JSON, the `borg` action will
not disable certain borgmatic logs to avoid interfering with JSON output.
* <span class="minilink minilink-addedin">Prior to version 1.8.0</span>
borgmatic implicitly injected the repository/archive arguments on the Borg
command-line for you (based on your borgmatic configuration or the
`borgmatic borg --repository`/`--archive` arguments)—which meant you
couldn't specify the repository/archive directly in the Borg command. Also,
in these older versions of borgmatic, the `borg` action didn't work for any
Borg commands like `borg serve` that do not accept a repository/archive
name.
* <span class="minilink minilink-addedin">Prior to version 1.7.13</span> Unlike
other borgmatic actions, the `borg` action captured (and logged) all output,
so interactive prompts and flags like `--progress` did not work as expected.
In new versions, borgmatic runs the `borg` action without capturing output,
so interactive prompts work.
In general, this `borgmatic borg` feature should be considered an escape
valve—a feature of second resort. In the long run, it's preferable to wrap

Some files were not shown because too many files have changed in this diff.