Compare commits

main..env_check_i_know_what_i_am_doing

No commits in common. "main" and "env_check_i_know_what_i_am_doing" have entirely different histories.

200 changed files with 9117 additions and 16543 deletions

.drone.yml (new file, 86 lines added)

@@ -0,0 +1,86 @@
---
kind: pipeline
name: python-3-8-alpine-3-13

services:
  - name: postgresql
    image: docker.io/postgres:13.1-alpine
    environment:
      POSTGRES_PASSWORD: test
      POSTGRES_DB: test
  - name: postgresql2
    image: docker.io/postgres:13.1-alpine
    environment:
      POSTGRES_PASSWORD: test2
      POSTGRES_DB: test
      POSTGRES_USER: postgres2
    commands:
      - docker-entrypoint.sh -p 5433
  - name: mysql
    image: docker.io/mariadb:10.5
    environment:
      MYSQL_ROOT_PASSWORD: test
      MYSQL_DATABASE: test
  - name: mysql2
    image: docker.io/mariadb:10.5
    environment:
      MYSQL_ROOT_PASSWORD: test2
      MYSQL_DATABASE: test
    commands:
      - docker-entrypoint.sh --port=3307
  - name: mongodb
    image: docker.io/mongo:5.0.5
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: test
  - name: mongodb2
    image: docker.io/mongo:5.0.5
    environment:
      MONGO_INITDB_ROOT_USERNAME: root2
      MONGO_INITDB_ROOT_PASSWORD: test2
    commands:
      - docker-entrypoint.sh --port=27018

clone:
  skip_verify: true

steps:
  - name: build
    image: docker.io/alpine:3.13
    environment:
      TEST_CONTAINER: true
    pull: always
    commands:
      - scripts/run-full-tests

---
kind: pipeline
name: documentation
type: exec

platform:
  os: linux
  arch: amd64

clone:
  skip_verify: true

steps:
  - name: build
    environment:
      USERNAME:
        from_secret: docker_username
      PASSWORD:
        from_secret: docker_password
      IMAGE_NAME: projects.torsion.org/borgmatic-collective/borgmatic:docs
    commands:
      - podman login --username "$USERNAME" --password "$PASSWORD" projects.torsion.org
      - podman build --tag "$IMAGE_NAME" --file docs/Dockerfile --storage-opt "overlay.mount_program=/usr/bin/fuse-overlayfs" .
      - podman push "$IMAGE_NAME"

trigger:
  repo:
    - borgmatic-collective/borgmatic
  branch:
    - main
  event:
    - push
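A detail worth noting in the first pipeline: each database gets a second service container that overrides the image's entrypoint to listen on an alternate port (5433, 3307, 27018), so the test suite can exercise two instances of each database side by side. Below is a minimal sketch of how a test could reach both PostgreSQL services, assuming the psycopg2 driver and the Drone convention that service containers are reachable by their service names; borgmatic's actual test harness (scripts/run-full-tests) may do this differently.

```python
import psycopg2  # assumed driver; the real test suite may use other tooling

# The first service listens on PostgreSQL's default port 5432 ...
primary = psycopg2.connect(
    host='postgresql', port=5432, user='postgres', password='test', dbname='test'
)

# ... while the second was started with "docker-entrypoint.sh -p 5433" and a
# different superuser, per the environment block above.
secondary = psycopg2.connect(
    host='postgresql2', port=5433, user='postgres2', password='test2', dbname='test'
)

for connection in (primary, secondary):
    with connection.cursor() as cursor:
        cursor.execute('SELECT version()')
        print(cursor.fetchone()[0])
    connection.close()
```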

.gitea/issue_template.md (new file, 35 lines added)

@@ -0,0 +1,35 @@
#### What I'm trying to do and why

#### Steps to reproduce (if a bug)

Include (sanitized) borgmatic configuration files if applicable.

#### Actual behavior (if a bug)

Include (sanitized) `--verbosity 2` output if applicable.

#### Expected behavior (if a bug)

#### Other notes / implementation ideas

#### Environment

**borgmatic version:** [version here]

Use `sudo borgmatic --version` or `sudo pip show borgmatic | grep ^Version`

**borgmatic installation method:** [e.g., Debian package, Docker container, etc.]

**Borg version:** [version here]

Use `sudo borg --version`

**Python version:** [version here]

Use `python3 --version`

**Database version (if applicable):** [version here]

Use `psql --version` or `mysql --version` on client and server.

**Operating system and version:** [OS here]

(deleted file)

@@ -1,77 +0,0 @@
name: "Bug or question/support"
about: "For filing a bug or getting support"
body:
- type: textarea
id: problem
attributes:
label: What I'm trying to do and why
validations:
required: true
- type: textarea
id: repro_steps
attributes:
label: Steps to reproduce
description: Include (sanitized) borgmatic configuration files if applicable.
validations:
required: false
- type: textarea
id: actual_behavior
attributes:
label: Actual behavior
description: Include (sanitized) `--verbosity 2` output if applicable.
validations:
required: false
- type: textarea
id: expected_behavior
attributes:
label: Expected behavior
validations:
required: false
- type: textarea
id: notes
attributes:
label: Other notes / implementation ideas
validations:
required: false
- type: input
id: borgmatic_version
attributes:
label: borgmatic version
description: Use `sudo borgmatic --version` or `sudo pip show borgmatic | grep ^Version`
validations:
required: false
- type: input
id: borgmatic_install_method
attributes:
label: borgmatic installation method
description: e.g., pip install, Debian package, container, etc.
validations:
required: false
- type: input
id: borg_version
attributes:
label: Borg version
description: Use `sudo borg --version`
validations:
required: false
- type: input
id: python_version
attributes:
label: Python version
description: Use `python3 --version`
validations:
required: false
- type: input
id: database_version
attributes:
label: Database version (if applicable)
description: Use `psql --version` / `mysql --version` / `mongodump --version` / `sqlite3 --version`
validations:
required: false
- type: input
id: operating_system_version
attributes:
label: Operating system and version
description: On Linux, use `cat /etc/os-release`
validations:
required: false

(deleted file)

@@ -1 +0,0 @@
blank_issues_enabled: true

(deleted file)

@@ -1,15 +0,0 @@
name: "Feature"
about: "For filing a feature request or idea"
body:
- type: textarea
id: request
attributes:
label: What I'd like to do and why
validations:
required: true
- type: textarea
id: notes
attributes:
label: Other notes / implementation ideas
validations:
required: false

(deleted file)

@@ -1,28 +0,0 @@
name: build
run-name: ${{ gitea.actor }} is building

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: host
    steps:
      - uses: actions/checkout@v4
      - run: scripts/run-end-to-end-tests
  docs:
    needs: [test]
    runs-on: host
    env:
      IMAGE_NAME: projects.torsion.org/borgmatic-collective/borgmatic:docs
    steps:
      - uses: actions/checkout@v4
      - run: podman login --username "$USERNAME" --password "$PASSWORD" projects.torsion.org
        env:
          USERNAME: "${{ secrets.REGISTRY_USERNAME }}"
          PASSWORD: "${{ secrets.REGISTRY_PASSWORD }}"
      - run: podman build --tag "$IMAGE_NAME" --file docs/Dockerfile --storage-opt "overlay.mount_program=/usr/bin/fuse-overlayfs" .
      - run: podman push "$IMAGE_NAME"

NEWS (199 lines changed)

@@ -1,213 +1,18 @@
-1.8.13.dev0
+1.8.0.dev0
* #886: Fix PagerDuty hook traceback with Python < 3.10.
1.8.12
* #817: Add a "--max-duration" flag to the "check" action and a "max_duration" option to the
repository check configuration. This tells Borg to interrupt a repository check after a certain
duration.
* #860: Fix interaction between environment variable interpolation in constants and shell escaping.
* #863: When color output is disabled (explicitly or implicitly), don't prefix each log line with
the log level.
* #865: Add an "upload_buffer_size" option to set the size of the upload buffer used in "create"
action.
* #866: Fix "Argument list too long" error in the "spot" check when checking hundreds of thousands
of files at once.
* #874: Add the configured repository label as "repository_label" to the interpolated variables
passed to before/after command hooks.
* #881: Fix "Unrecognized argument" error when the same value is used with different command-line
flags.
* In the "spot" check, don't try to hash symlinked directories.
1.8.11
* #815: Add optional Healthchecks auto-provisioning via "create_slug" option.
* #851: Fix lack of file extraction when using "extract --strip-components all" on a path with a
leading slash.
* #854: Fix a traceback when the "data" consistency check is used.
* #857: Fix a traceback with "check --only spot" when the "spot" check is unconfigured.
1.8.10
* #656 (beta): Add a "spot" consistency check that compares file counts and contents between your
source files and the latest archive, ensuring they fall within configured tolerances. This can
catch problems like incorrect excludes, inadvertent deletes, files changed by malware, etc. See
the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/#spot-check
* #779: When "--match-archives *" is used with "check" action, don't skip Borg's orphaned objects
check.
* #842: When a command hook exits with a soft failure, ping the log and finish states for any
configured monitoring hooks.
* #843: Add documentation link to Loki dashboard for borgmatic:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#loki-hook
* #847: Fix "--json" error when Borg includes non-JSON warnings in JSON output.
* #848: SECURITY: Mask the password when logging a MongoDB dump or restore command.
* Fix handling of the NO_COLOR environment variable to ignore an empty value.
* Add documentation about backing up containerized databases by configuring borgmatic to exec into
a container to run a dump command:
https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#containers
1.8.9
* #311: Add custom dump/restore command options for MySQL and MariaDB.
* #811: Add an "access_token" option to the ntfy monitoring hook for authenticating
without username/password.
* #827: When the "--json" flag is given, suppress console escape codes so as not to
interfere with JSON output.
* #829: Fix "--override" values containing deprecated section headers not actually overriding
configuration options under deprecated section headers.
* #835: Add support for the NO_COLOR environment variable. See the documentation for more
information:
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#colored-output
* #839: Add log sending for the Apprise logging hook, enabled by default. See the documentation for
more information:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#apprise-hook
* #839: Document a potentially breaking shell quoting edge case within error hooks:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#error-hooks
* #840: When running the "rcreate" action and the repository already exists but with a different
encryption mode than requested, error.
* Switch from Drone to Gitea Actions for continuous integration.
* Rename scripts/run-end-to-end-dev-tests to scripts/run-end-to-end-tests and use it in both dev
and CI for better dev-CI parity.
* Clarify documentation about restoring a database: borgmatic does not create the database upon
restore.
1.8.8
* #370: For the PostgreSQL hook, pass the "PGSSLMODE" environment variable through to Borg when the
database's configuration omits the "ssl_mode" option.
* #818: Allow the "--repository" flag to match across multiple configuration files.
* #820: Fix broken repository detection in the "rcreate" action with Borg 1.4. The issue did not
occur with other versions of Borg.
* #822: Fix broken escaping logic in the PostgreSQL hook's "pg_dump_command" option.
* SECURITY: Prevent additional shell injection attacks within the PostgreSQL hook.
1.8.7
* #736: Store included configuration files within each backup archive in support of the "config
bootstrap" action. Previously, only top-level configuration files were stored.
* #798: Elevate specific Borg warnings to errors or squash errors to
warnings. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/customize-warnings-and-errors/
* #810: SECURITY: Prevent shell injection attacks within the PostgreSQL hook, the MongoDB hook, the
SQLite hook, the "borgmatic borg" action, and command hook variable/constant interpolation.
* #814: Fix a traceback when providing an invalid "--override" value for a list option.
1.8.6
* #767: Add an "--ssh-command" flag to the "config bootstrap" action for setting a custom SSH
command, as no configuration is available (including the "ssh_command" option) until
bootstrapping completes.
* #794: Fix a traceback when the "repositories" option contains both strings and key/value pairs.
* #800: Add configured repository labels to the JSON output for all actions.
* #802: The "check --force" flag now runs checks even if "check" is in "skip_actions".
* #804: Validate the configured action names in the "skip_actions" option.
* #807: Stream SQLite databases directly to Borg instead of dumping to an intermediate file.
* When logging commands that borgmatic executes, log the environment variables that
borgmatic sets for those commands. (But don't log their values, since they often contain
passwords.)
1.8.5
* #701: Add a "skip_actions" option to skip running particular actions, handy for append-only or
checkless configurations. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#skipping-actions
* #701: Deprecate the "disabled" value for the "checks" option in favor of the new "skip_actions"
option.
* #745: Constants now apply to included configuration, not just the file doing the includes. As a
side effect of this change, constants no longer apply to option names and only substitute into
configuration values.
* #779: Add a "--match-archives" flag to the "check" action for selecting the archives to check,
overriding the existing "archive_name_format" and "match_archives" options in configuration.
* #779: Only parse "--override" values as complex data types when they're for options of those
types.
* #782: Fix environment variable interpolation within configured repository paths.
* #782: Add configuration constant overriding via the existing "--override" flag.
* #783: Upgrade ruamel.yaml dependency to support version 0.18.x.
* #784: Drop support for Python 3.7, which has been end-of-lifed.
1.8.4
* #715: Add a monitoring hook for sending backup status to a variety of monitoring services via the
Apprise library. See the documentation for more information:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#apprise-hook
* #748: When an archive filter causes no matching archives for the "rlist" or "info"
actions, warn the user and suggest how to remove the filter.
* #768: Fix a traceback when an invalid command-line flag or action is used.
* #771: Fix normalization of deprecated sections ("location:", "storage:", "hooks:", etc.) to
support empty sections without erroring.
* #774: Disallow the "--dry-run" flag with the "borg" action, as borgmatic can't guarantee the Borg
command won't have side effects.
1.8.3
* #665: BREAKING: Simplify logging logic as follows: Syslog verbosity is now disabled by
default, but setting the "--syslog-verbosity" flag enables it regardless of whether you're at an
interactive console. Additionally, "--log-file-verbosity" and "--monitoring-verbosity" now
default to 1 (info about steps borgmatic is taking) instead of 0. And both syslog logging and
file logging can be enabled simultaneously.
* #743: Add a monitoring hook for sending backup status and logs to Grafana Loki. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#loki-hook
* #753: When "archive_name_format" is not set, filter archives using the default archive name
format.
* #754: Fix error handling to log command output as one record per line instead of truncating
too-long output and swallowing the end of some Borg error messages.
* #757: Update documentation so "sudo borgmatic" works for pipx borgmatic installations.
* #761: Fix for borgmatic not stopping Borg immediately when the user presses ctrl-C.
* Update documentation to recommend installing/upgrading borgmatic with pipx instead of pip. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#installation
https://torsion.org/borgmatic/docs/how-to/upgrade/#upgrading-borgmatic
1.8.2
* #345: Add "key export" action to export a copy of the repository key for safekeeping in case
the original goes missing or gets damaged.
* #727: Add a MariaDB database hook that uses native MariaDB commands instead of the deprecated
MySQL ones. Be aware though that any existing backups made with the "mysql_databases:" hook are
only restorable with a "mysql_databases:" configuration.
* #738: Fix for potential data loss (data not getting restored) in which the database "restore"
action didn't actually restore anything and indicated success anyway.
* Remove the deprecated use of the MongoDB hook's "--db" flag for database restoration.
* Add source code reference documentation for getting oriented with the borgmatic code as a
developer: https://torsion.org/borgmatic/docs/reference/source-code/
1.8.1
* #326: Add documentation for restoring a database to an alternate host:
https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#restore-to-an-alternate-host
* #697: Add documentation for "bootstrap" action:
https://torsion.org/borgmatic/docs/how-to/extract-a-backup/#extract-the-configuration-files-used-to-create-an-archive
* #725: Add "store_config_files" option for disabling the automatic backup of configuration files
used by the "config bootstrap" action.
* #728: Fix for "prune" action error when using the "keep_exclude_tags" option.
* #730: Fix for Borg's interactive prompt on the "check --repair" action automatically getting
answered "NO" even when the "check_i_know_what_i_am_doing" option isn't set.
* #732: Include multiple configuration files with a single "!include". See the documentation for
more information:
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#multiple-merge-includes
* #734: Omit "--glob-archives" or "--match-archives" Borg flag when its value would be "*" (meaning
all archives).
1.8.0
* #575: BREAKING: For the "borgmatic borg" action, instead of implicitly injecting
repository/archive into the resulting Borg command-line, pass repository to Borg via an
environment variable and make archive available for explicit use in your commands. See the
documentation for more information:
https://torsion.org/borgmatic/docs/how-to/run-arbitrary-borg-commands/
* #719: Fix an error when running "borg key export" through borgmatic.
-* #720: Fix an error when dumping a database and the "exclude_nodump" option is set.
+* #720: Fix an error when dumping a MySQL database and the "exclude_nodump" option is set.
* #724: Add "check_i_know_what_i_am_doing" option to bypass Borg confirmation prompt when running
"check --repair".
* When merging two configuration files, error gracefully if the two files do not adhere to the same
format.
* #721: Remove configuration sections ("location:", "storage:", "hooks:", etc.), while still
keeping deprecated support for them. Now, all options are at the same level, and you don't need
to worry about commenting/uncommenting section headers when you change an option (if you remove
your sections first).
* #721: BREAKING: The retention prefix and the consistency prefix can no longer have different
values (unless one is not set).
* #721: BREAKING: The storage umask and the hooks umask can no longer have different values (unless
one is not set).
* BREAKING: Flags like "--config" that previously took multiple values now need to be given once
per value, e.g. "--config first.yaml --config second.yaml" instead of "--config first.yaml
second.yaml". This prevents argument parsing errors on ambiguous commands.
* BREAKING: Remove the deprecated (and silently ignored) "--successful" flag on the "list" action,
as newer versions of Borg list successful (non-checkpoint) archives by default.
* All deprecated configuration option values now generate warning logs.
* Remove the deprecated (and non-functional) "--excludes" flag in favor of excludes within
configuration.
* Fix an error when logging too-long command output during error handling. Now, long command output
is truncated before logging.
1.7.15
* #326: Add configuration options and command-line flags for backing up a database from one

README.md

@@ -16,59 +16,65 @@ The canonical home of borgmatic is at <a href="https://torsion.org/borgmatic">ht
Here's an example configuration file:

```yaml
-# List of source directories to backup.
-source_directories:
-    - /home
-    - /etc
-
-# Paths of local or remote repositories to backup to.
-repositories:
-    - path: ssh://k8pDxu32@k8pDxu32.repo.borgbase.com/./repo
-      label: borgbase
-    - path: /var/lib/backups/local.borg
-      label: local
-
-# Retention policy for how many backups to keep.
-keep_daily: 7
-keep_weekly: 4
-keep_monthly: 6
-
-# List of checks to run to validate your backups.
-checks:
-    - name: repository
-    - name: archives
-      frequency: 2 weeks
-
-# Custom preparation scripts to run.
-before_backup:
-    - prepare-for-backup.sh
-
-# Databases to dump and include in backups.
-postgresql_databases:
-    - name: users
-
-# Third-party services to notify you if backups aren't happening.
-healthchecks:
-    ping_url: https://hc-ping.com/be067061-cf96-4412-8eae-62b0c50d6a8c
+location:
+    # List of source directories to backup.
+    source_directories:
+        - /home
+        - /etc
+
+    # Paths of local or remote repositories to backup to.
+    repositories:
+        - path: ssh://k8pDxu32@k8pDxu32.repo.borgbase.com/./repo
+          label: borgbase
+        - path: /var/lib/backups/local.borg
+          label: local
+
+retention:
+    # Retention policy for how many backups to keep.
+    keep_daily: 7
+    keep_weekly: 4
+    keep_monthly: 6
+
+consistency:
+    # List of checks to run to validate your backups.
+    checks:
+        - name: repository
+        - name: archives
+          frequency: 2 weeks
+
+hooks:
+    # Custom preparation scripts to run.
+    before_backup:
+        - prepare-for-backup.sh
+
+    # Databases to dump and include in backups.
+    postgresql_databases:
+        - name: users
+
+    # Third-party services to notify you if backups aren't happening.
+    healthchecks: https://hc-ping.com/be067061-cf96-4412-8eae-62b0c50d6a8c
```
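The removed lines above show main's flat configuration format, where options like source_directories and keep_daily sit at the top level; the added lines show this branch's older format, where they still live under location:/retention:/consistency:/hooks: sections. Below is a minimal sketch of reading the flat format, assuming PyYAML purely for illustration; borgmatic itself loads configuration with ruamel.yaml plus schema validation.

```python
import yaml  # PyYAML, assumed here only for illustration

with open('/etc/borgmatic/config.yaml') as config_file:
    config = yaml.safe_load(config_file)

# In the flat format, every option lives at the top level of the document.
print(config['source_directories'])       # ['/home', '/etc']
print(config['repositories'][0]['path'])  # 'ssh://k8pDxu32@k8pDxu32.repo.borgbase.com/./repo'
print(config['keep_daily'])               # 7
```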
Want to see borgmatic in action? Check out the <a
href="https://asciinema.org/a/203761?autoplay=1" target="_blank">screencast</a>.
<a href="https://asciinema.org/a/203761?autoplay=1" target="_blank"><img src="https://asciinema.org/a/203761.png" width="480"></a>
borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).

## Integrations

-<a href="https://www.postgresql.org/"><img src="docs/static/postgresql.png" alt="PostgreSQL" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://www.mysql.com/"><img src="docs/static/mysql.png" alt="MySQL" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://mariadb.com/"><img src="docs/static/mariadb.png" alt="MariaDB" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://www.mongodb.com/"><img src="docs/static/mongodb.png" alt="MongoDB" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://sqlite.org/"><img src="docs/static/sqlite.png" alt="SQLite" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://cronitor.io/"><img src="docs/static/cronitor.png" alt="Cronitor" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://cronhub.io/"><img src="docs/static/cronhub.png" alt="Cronhub" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://www.pagerduty.com/"><img src="docs/static/pagerduty.png" alt="PagerDuty" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://ntfy.sh/"><img src="docs/static/ntfy.png" alt="ntfy" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://grafana.com/oss/loki/"><img src="docs/static/loki.png" alt="Loki" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://github.com/caronc/apprise/wiki"><img src="docs/static/apprise.png" alt="Apprise" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
+<a href="https://www.postgresql.org/"><img src="docs/static/postgresql.png" alt="PostgreSQL" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://www.mysql.com/"><img src="docs/static/mysql.png" alt="MySQL" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://mariadb.com/"><img src="docs/static/mariadb.png" alt="MariaDB" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://www.mongodb.com/"><img src="docs/static/mongodb.png" alt="MongoDB" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://sqlite.org/"><img src="docs/static/sqlite.png" alt="SQLite" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://cronitor.io/"><img src="docs/static/cronitor.png" alt="Cronitor" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://cronhub.io/"><img src="docs/static/cronhub.png" alt="Cronhub" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://www.pagerduty.com/"><img src="docs/static/pagerduty.png" alt="PagerDuty" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://ntfy.sh/"><img src="docs/static/ntfy.png" alt="ntfy" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
+<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px;"></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
## Getting started

@@ -154,3 +160,6 @@ general, contributions are very welcome. We don't bite!

Also, please check out the [borgmatic development
how-to](https://torsion.org/borgmatic/docs/how-to/develop-on-borgmatic/) for
info on cloning source code, running tests, etc.
<a href="https://build.torsion.org/borgmatic-collective/borgmatic" alt="build status">![Build Status](https://build.torsion.org/api/badges/borgmatic-collective/borgmatic/status.svg?ref=refs/heads/main)</a>

borgmatic/actions/borg.py

@@ -9,7 +9,7 @@ logger = logging.getLogger(__name__)

 def run_borg(
     repository,
-    config,
+    storage,
     local_borg_version,
     borg_arguments,
     global_arguments,
@@ -28,7 +28,7 @@ def run_borg(
     archive_name = borgmatic.borg.rlist.resolve_archive_name(
         repository['path'],
         borg_arguments.archive,
-        config,
+        storage,
         local_borg_version,
         global_arguments,
         local_path,
@@ -36,7 +36,7 @@ def run_borg(
     )
     borgmatic.borg.borg.run_arbitrary_borg(
         repository['path'],
-        config,
+        storage,
         local_borg_version,
         options=borg_arguments.options,
         archive=archive_name,

borgmatic/actions/break_lock.py

@@ -8,7 +8,7 @@ logger = logging.getLogger(__name__)

 def run_break_lock(
     repository,
-    config,
+    storage,
     local_borg_version,
     break_lock_arguments,
     global_arguments,
@@ -26,7 +26,7 @@ def run_break_lock(
     )
     borgmatic.borg.break_lock.break_lock(
         repository['path'],
-        config,
+        storage,
         local_borg_version,
         global_arguments,
         local_path=local_path,

borgmatic/actions/check.py

@@ -1,606 +1,19 @@
-import datetime
-import hashlib
-import itertools
 import logging
-import os
-import pathlib
-import random

 import borgmatic.borg.check
-import borgmatic.borg.create
-import borgmatic.borg.environment
-import borgmatic.borg.extract
-import borgmatic.borg.list
-import borgmatic.borg.rlist
-import borgmatic.borg.state
 import borgmatic.config.validate
-import borgmatic.execute
 import borgmatic.hooks.command

-DEFAULT_CHECKS = (
-    {'name': 'repository', 'frequency': '1 month'},
-    {'name': 'archives', 'frequency': '1 month'},
-)

 logger = logging.getLogger(__name__)

def parse_checks(config, only_checks=None):
    '''
    Given a configuration dict with a "checks" sequence of dicts and an optional list of override
    checks, return a tuple of named checks to run.

    For example, given a config of:

        {'checks': ({'name': 'repository'}, {'name': 'archives'})}

    This will be returned as:

        ('repository', 'archives')

    If no "checks" option is present in the config, return the DEFAULT_CHECKS. If a checks value
    has a name of "disabled", return an empty tuple, meaning that no checks should be run.
    '''
    checks = only_checks or tuple(
        check_config['name'] for check_config in (config.get('checks', None) or DEFAULT_CHECKS)
    )
    checks = tuple(check.lower() for check in checks)

    if 'disabled' in checks:
        logger.warning(
            'The "disabled" value for the "checks" option is deprecated and will be removed from a future release; use "skip_actions" instead'
        )
        if len(checks) > 1:
            logger.warning(
                'Multiple checks are configured, but one of them is "disabled"; not running any checks'
            )
        return ()

    return checks

def parse_frequency(frequency):
    '''
    Given a frequency string with a number and a unit of time, return a corresponding
    datetime.timedelta instance or None if the frequency is None or "always".

    For instance, given "3 weeks", return datetime.timedelta(weeks=3)

    Raise ValueError if the given frequency cannot be parsed.
    '''
    if not frequency:
        return None

    frequency = frequency.strip().lower()

    if frequency == 'always':
        return None

    try:
        number, time_unit = frequency.split(' ')
        number = int(number)
    except ValueError:
        raise ValueError(f"Could not parse consistency check frequency '{frequency}'")

    if not time_unit.endswith('s'):
        time_unit += 's'

    if time_unit == 'months':
        number *= 30
        time_unit = 'days'
    elif time_unit == 'years':
        number *= 365
        time_unit = 'days'

    try:
        return datetime.timedelta(**{time_unit: number})
    except TypeError:
        raise ValueError(f"Could not parse consistency check frequency '{frequency}'")

def filter_checks_on_frequency(
    config,
    borg_repository_id,
    checks,
    force,
    archives_check_id=None,
):
    '''
    Given a configuration dict with a "checks" sequence of dicts, a Borg repository ID, a sequence
    of checks, whether to force checks to run, and an ID for the archives check potentially being
    run (if any), filter down those checks based on the configured "frequency" for each check as
    compared to its check time file.

    In other words, a check whose check time file's timestamp is too new (based on the configured
    frequency) will get cut from the returned sequence of checks. Example:

        config = {
            'checks': [
                {
                    'name': 'archives',
                    'frequency': '2 weeks',
                },
            ]
        }

    When this function is called with that config and "archives" in checks, "archives" will get
    filtered out of the returned result if its check time file is newer than 2 weeks old, indicating
    that it's not yet time to run that check again.

    Raise ValueError if a frequency cannot be parsed.
    '''
    if not checks:
        return checks

    filtered_checks = list(checks)

    if force:
        return tuple(filtered_checks)

    for check_config in config.get('checks', DEFAULT_CHECKS):
        check = check_config['name']
        if checks and check not in checks:
            continue

        frequency_delta = parse_frequency(check_config.get('frequency'))
        if not frequency_delta:
            continue

        check_time = probe_for_check_time(config, borg_repository_id, check, archives_check_id)
        if not check_time:
            continue

        # If we've not yet reached the time when the frequency dictates we're ready for another
        # check, skip this check.
        if datetime.datetime.now() < check_time + frequency_delta:
            remaining = check_time + frequency_delta - datetime.datetime.now()
            logger.info(
                f'Skipping {check} check due to configured frequency; {remaining} until next check (use --force to check anyway)'
            )
            filtered_checks.remove(check)

    return tuple(filtered_checks)

def make_archives_check_id(archive_filter_flags):
    '''
    Given a sequence of flags to filter archives, return a unique hash corresponding to those
    particular flags. If there are no flags, return None.
    '''
    if not archive_filter_flags:
        return None

    return hashlib.sha256(' '.join(archive_filter_flags).encode()).hexdigest()

def make_check_time_path(config, borg_repository_id, check_type, archives_check_id=None):
    '''
    Given a configuration dict, a Borg repository ID, the name of a check type ("repository",
    "archives", etc.), and a unique hash of the archives filter flags, return a path for recording
    that check's time (the time of that check last occurring).
    '''
    borgmatic_source_directory = os.path.expanduser(
        config.get(
            'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
        )
    )

    if check_type in ('archives', 'data'):
        return os.path.join(
            borgmatic_source_directory,
            'checks',
            borg_repository_id,
            check_type,
            archives_check_id if archives_check_id else 'all',
        )

    return os.path.join(
        borgmatic_source_directory,
        'checks',
        borg_repository_id,
        check_type,
    )

def write_check_time(path):  # pragma: no cover
    '''
    Record a check time of now as the modification time of the given path.
    '''
    logger.debug(f'Writing check time at {path}')

    os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
    pathlib.Path(path, mode=0o600).touch()

def read_check_time(path):
    '''
    Return the check time based on the modification time of the given path. Return None if the path
    doesn't exist.
    '''
    logger.debug(f'Reading check time from {path}')

    try:
        return datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
    except FileNotFoundError:
        return None

def probe_for_check_time(config, borg_repository_id, check, archives_check_id):
    '''
    Given a configuration dict, a Borg repository ID, the name of a check type ("repository",
    "archives", etc.), and a unique hash of the archives filter flags, return the corresponding
    check time or None if such a check time does not exist.

    When the check type is "archives" or "data", this function probes two different paths to find
    the check time, e.g.:

        ~/.borgmatic/checks/1234567890/archives/9876543210
        ~/.borgmatic/checks/1234567890/archives/all

    ... and returns the maximum modification time of the files found (if any). The first path
    represents a more specific archives check time (a check on a subset of archives), and the second
    is a fallback to the last "all" archives check.

    For other check types, this function reads from a single check time path, e.g.:

        ~/.borgmatic/checks/1234567890/repository
    '''
    check_times = (
        read_check_time(group[0])
        for group in itertools.groupby(
            (
                make_check_time_path(config, borg_repository_id, check, archives_check_id),
                make_check_time_path(config, borg_repository_id, check),
            )
        )
    )

    try:
        return max(check_time for check_time in check_times if check_time)
    except ValueError:
        return None

def upgrade_check_times(config, borg_repository_id):
    '''
    Given a configuration dict and a Borg repository ID, upgrade any corresponding check times on
    disk from old-style paths to new-style paths.

    Currently, the only upgrade performed is renaming an archive or data check path that looks like:

        ~/.borgmatic/checks/1234567890/archives

    to:

        ~/.borgmatic/checks/1234567890/archives/all
    '''
    for check_type in ('archives', 'data'):
        new_path = make_check_time_path(config, borg_repository_id, check_type, 'all')
        old_path = os.path.dirname(new_path)
        temporary_path = f'{old_path}.temp'

        if not os.path.isfile(old_path) and not os.path.isfile(temporary_path):
            continue

        logger.debug(f'Upgrading archives check time from {old_path} to {new_path}')

        try:
            os.rename(old_path, temporary_path)
        except FileNotFoundError:
            pass

        os.mkdir(old_path)
        os.rename(temporary_path, new_path)

def collect_spot_check_source_paths(
    repository, config, local_borg_version, global_arguments, local_path, remote_path
):
    '''
    Given a repository configuration dict, a configuration dict, the local Borg version, global
    arguments as an argparse.Namespace instance, the local Borg path, and the remote Borg path,
    collect the source paths that Borg would use in an actual create (but only include files).
    '''
    stream_processes = any(
        borgmatic.hooks.dispatch.call_hooks(
            'use_streaming',
            config,
            repository['path'],
            borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
        ).values()
    )

    (create_flags, create_positional_arguments, pattern_file, exclude_file) = (
        borgmatic.borg.create.make_base_create_command(
            dry_run=True,
            repository_path=repository['path'],
            config=config,
            config_paths=(),
            local_borg_version=local_borg_version,
            global_arguments=global_arguments,
            borgmatic_source_directories=(),
            local_path=local_path,
            remote_path=remote_path,
            list_files=True,
            stream_processes=stream_processes,
        )
    )
    borg_environment = borgmatic.borg.environment.make_environment(config)

    try:
        working_directory = os.path.expanduser(config.get('working_directory'))
    except TypeError:
        working_directory = None

    paths_output = borgmatic.execute.execute_command_and_capture_output(
        create_flags + create_positional_arguments,
        capture_stderr=True,
        working_directory=working_directory,
        extra_environment=borg_environment,
        borg_local_path=local_path,
        borg_exit_codes=config.get('borg_exit_codes'),
    )

    paths = tuple(
        path_line.split(' ', 1)[1]
        for path_line in paths_output.split('\n')
        if path_line and path_line.startswith('- ') or path_line.startswith('+ ')
    )

    return tuple(path for path in paths if os.path.isfile(path))

BORG_DIRECTORY_FILE_TYPE = 'd'


def collect_spot_check_archive_paths(
    repository, archive, config, local_borg_version, global_arguments, local_path, remote_path
):
    '''
    Given a repository configuration dict, the name of the latest archive, a configuration dict, the
    local Borg version, global arguments as an argparse.Namespace instance, the local Borg path, and
    the remote Borg path, collect the paths from the given archive (but only include files and
    symlinks).
    '''
    borgmatic_source_directory = os.path.expanduser(
        config.get(
            'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
        )
    )

    return tuple(
        path
        for line in borgmatic.borg.list.capture_archive_listing(
            repository['path'],
            archive,
            config,
            local_borg_version,
            global_arguments,
            path_format='{type} /{path}{NL}',  # noqa: FS003
            local_path=local_path,
            remote_path=remote_path,
        )
        for (file_type, path) in (line.split(' ', 1),)
        if file_type != BORG_DIRECTORY_FILE_TYPE
        if pathlib.Path(borgmatic_source_directory) not in pathlib.Path(path).parents
    )

SAMPLE_PATHS_SUBSET_COUNT = 10000


def compare_spot_check_hashes(
    repository,
    archive,
    config,
    local_borg_version,
    global_arguments,
    local_path,
    remote_path,
    log_label,
    source_paths,
):
    '''
    Given a repository configuration dict, the name of the latest archive, a configuration dict, the
    local Borg version, global arguments as an argparse.Namespace instance, the local Borg path, the
    remote Borg path, a log label, and spot check source paths, compare the hashes for a sampling of
    the source paths with hashes from corresponding paths in the given archive. Return a sequence of
    the paths that fail that hash comparison.
    '''
    # Based on the configured sample percentage, come up with a list of random sample files from the
    # source directories.
    spot_check_config = next(check for check in config['checks'] if check['name'] == 'spot')
    sample_count = max(
        int(len(source_paths) * (min(spot_check_config['data_sample_percentage'], 100) / 100)), 1
    )
    source_sample_paths = tuple(random.sample(source_paths, sample_count))
    existing_source_sample_paths = {
        source_path for source_path in source_sample_paths if os.path.exists(source_path)
    }
    logger.debug(
        f'{log_label}: Sampling {sample_count} source paths (~{spot_check_config["data_sample_percentage"]}%) for spot check'
    )

    source_sample_paths_iterator = iter(source_sample_paths)
    source_hashes = {}
    archive_hashes = {}

    # Only hash a few thousand files at a time (a subset of the total paths) to avoid an "Argument
    # list too long" shell error.
    while True:
        # Hash each file in the sample paths (if it exists).
        source_sample_paths_subset = tuple(
            itertools.islice(source_sample_paths_iterator, SAMPLE_PATHS_SUBSET_COUNT)
        )
        if not source_sample_paths_subset:
            break

        hash_output = borgmatic.execute.execute_command_and_capture_output(
            (spot_check_config.get('xxh64sum_command', 'xxh64sum'),)
            + tuple(
                path for path in source_sample_paths_subset if path in existing_source_sample_paths
            )
        )

        source_hashes.update(
            **dict(
                (reversed(line.split(' ', 1)) for line in hash_output.splitlines()),
                # Represent non-existent files as having empty hashes so the comparison below still works.
                **{
                    path: ''
                    for path in source_sample_paths_subset
                    if path not in existing_source_sample_paths
                },
            )
        )

        # Get the hash for each file in the archive.
        archive_hashes.update(
            **dict(
                reversed(line.split(' ', 1))
                for line in borgmatic.borg.list.capture_archive_listing(
                    repository['path'],
                    archive,
                    config,
                    local_borg_version,
                    global_arguments,
                    list_paths=source_sample_paths_subset,
                    path_format='{xxh64} /{path}{NL}',  # noqa: FS003
                    local_path=local_path,
                    remote_path=remote_path,
                )
                if line
            )
        )

    # Compare the source hashes with the archive hashes to see how many match.
    failing_paths = []

    for path, source_hash in source_hashes.items():
        archive_hash = archive_hashes.get(path)

        if archive_hash is not None and archive_hash == source_hash:
            continue

        failing_paths.append(path)

    return tuple(failing_paths)

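
# Illustration of the sampling math above (hypothetical numbers): with 50,000
# source paths and data_sample_percentage: 5, sample_count is
# max(int(50000 * 0.05), 1) == 2500 paths, hashed in subsets of at most
# SAMPLE_PATHS_SUBSET_COUNT (10,000) paths per xxh64sum invocation.
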
def spot_check(
    repository,
    config,
    local_borg_version,
    global_arguments,
    local_path,
    remote_path,
):
    '''
    Given a repository dict, a loaded configuration dict, the local Borg version, global arguments
    as an argparse.Namespace instance, the local Borg path, and the remote Borg path, perform a spot
    check for the latest archive in the given repository.

    A spot check compares file counts and also the hashes for a random sampling of source files on
    disk to those stored in the latest archive. If any differences are beyond configured tolerances,
    then the check fails.
    '''
    log_label = f'{repository.get("label", repository["path"])}'
    logger.debug(f'{log_label}: Running spot check')

    try:
        spot_check_config = next(
            check for check in config.get('checks', ()) if check.get('name') == 'spot'
        )
    except StopIteration:
        raise ValueError('Cannot run spot check because it is unconfigured')

    if spot_check_config['data_tolerance_percentage'] > spot_check_config['data_sample_percentage']:
        raise ValueError(
            'The data_tolerance_percentage must be less than or equal to the data_sample_percentage'
        )

    source_paths = collect_spot_check_source_paths(
        repository,
        config,
        local_borg_version,
        global_arguments,
        local_path,
        remote_path,
    )
    logger.debug(f'{log_label}: {len(source_paths)} total source paths for spot check')

    archive = borgmatic.borg.rlist.resolve_archive_name(
        repository['path'],
        'latest',
        config,
        local_borg_version,
        global_arguments,
        local_path,
        remote_path,
    )
    logger.debug(f'{log_label}: Using archive {archive} for spot check')

    archive_paths = collect_spot_check_archive_paths(
        repository,
        archive,
        config,
        local_borg_version,
        global_arguments,
        local_path,
        remote_path,
    )
    logger.debug(f'{log_label}: {len(archive_paths)} total archive paths for spot check')

    # Calculate the percentage delta between the source paths count and the archive paths count, and
    # compare that delta to the configured count tolerance percentage.
    count_delta_percentage = abs(len(source_paths) - len(archive_paths)) / len(source_paths) * 100

    if count_delta_percentage > spot_check_config['count_tolerance_percentage']:
        logger.debug(
            f'{log_label}: Paths in source paths but not latest archive: {", ".join(set(source_paths) - set(archive_paths)) or "none"}'
        )
        logger.debug(
            f'{log_label}: Paths in latest archive but not source paths: {", ".join(set(archive_paths) - set(source_paths)) or "none"}'
        )
        raise ValueError(
            f'Spot check failed: {count_delta_percentage:.2f}% file count delta between source paths and latest archive (tolerance is {spot_check_config["count_tolerance_percentage"]}%)'
        )

    failing_paths = compare_spot_check_hashes(
        repository,
        archive,
        config,
        local_borg_version,
        global_arguments,
        local_path,
        remote_path,
        log_label,
        source_paths,
    )

    # Error if the percentage of failing hashes exceeds the configured tolerance percentage.
    logger.debug(f'{log_label}: {len(failing_paths)} non-matching spot check hashes')
    data_tolerance_percentage = spot_check_config['data_tolerance_percentage']
    failing_percentage = (len(failing_paths) / len(source_paths)) * 100

    if failing_percentage > data_tolerance_percentage:
        logger.debug(
            f'{log_label}: Source paths with data not matching the latest archive: {", ".join(failing_paths)}'
        )
        raise ValueError(
            f'Spot check failed: {failing_percentage:.2f}% of source paths with data not matching the latest archive (tolerance is {data_tolerance_percentage}%)'
        )

    logger.info(
        f'{log_label}: Spot check passed with a {count_delta_percentage:.2f}% file count delta and a {failing_percentage:.2f}% file data delta'
    )

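
# Worked example of the tolerance checks above (hypothetical numbers): with 1,000
# source paths and 990 archive paths, count_delta_percentage is
# abs(1000 - 990) / 1000 * 100 == 1.0%, which passes a count_tolerance_percentage
# of 10. If 30 of the 1,000 sampled paths then fail the hash comparison, the
# failing percentage is 30 / 1000 * 100 == 3.0%, which fails a
# data_tolerance_percentage of 2.
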
 def run_check(
     config_filename,
     repository,
-    config,
+    location,
+    storage,
+    consistency,
+    hooks,
     hook_context,
     local_borg_version,
     check_arguments,
@@ -610,8 +23,6 @@ def run_check(
 ):
     '''
     Run the "check" action for the given repository.
-
-    Raise ValueError if the Borg repository ID cannot be determined.
     '''
     if check_arguments.repository and not borgmatic.config.validate.repositories_match(
         repository, check_arguments.repository
@@ -619,79 +30,31 @@ def run_check(
         return

     borgmatic.hooks.command.execute_hook(
-        config.get('before_check'),
-        config.get('umask'),
+        hooks.get('before_check'),
+        hooks.get('umask'),
         config_filename,
         'pre-check',
         global_arguments.dry_run,
         **hook_context,
     )

     logger.info(f'{repository.get("label", repository["path"])}: Running consistency checks')
-    repository_id = borgmatic.borg.check.get_repository_id(
+    borgmatic.borg.check.check_archives(
         repository['path'],
-        config,
+        location,
+        storage,
+        consistency,
         local_borg_version,
         global_arguments,
         local_path=local_path,
         remote_path=remote_path,
+        progress=check_arguments.progress,
+        repair=check_arguments.repair,
+        only_checks=check_arguments.only,
+        force=check_arguments.force,
     )
-    upgrade_check_times(config, repository_id)
-    configured_checks = parse_checks(config, check_arguments.only_checks)
-    archive_filter_flags = borgmatic.borg.check.make_archive_filter_flags(
-        local_borg_version, config, configured_checks, check_arguments
-    )
-    archives_check_id = make_archives_check_id(archive_filter_flags)
-    checks = filter_checks_on_frequency(
-        config,
-        repository_id,
-        configured_checks,
-        check_arguments.force,
-        archives_check_id,
-    )
-    borg_specific_checks = set(checks).intersection({'repository', 'archives', 'data'})
-
-    if borg_specific_checks:
-        borgmatic.borg.check.check_archives(
-            repository['path'],
-            config,
-            local_borg_version,
-            check_arguments,
-            global_arguments,
-            borg_specific_checks,
-            archive_filter_flags,
-            local_path=local_path,
-            remote_path=remote_path,
-        )
-        for check in borg_specific_checks:
-            write_check_time(make_check_time_path(config, repository_id, check, archives_check_id))
-
-    if 'extract' in checks:
-        borgmatic.borg.extract.extract_last_archive_dry_run(
-            config,
-            local_borg_version,
-            global_arguments,
-            repository['path'],
-            config.get('lock_wait'),
-            local_path,
-            remote_path,
-        )
-        write_check_time(make_check_time_path(config, repository_id, 'extract'))
-
-    if 'spot' in checks:
-        spot_check(
-            repository,
-            config,
-            local_borg_version,
-            global_arguments,
-            local_path,
-            remote_path,
-        )
-        write_check_time(make_check_time_path(config, repository_id, 'spot'))

     borgmatic.hooks.command.execute_hook(
-        config.get('after_check'),
-        config.get('umask'),
+        hooks.get('after_check'),
+        hooks.get('umask'),
         config_filename,
         'post-check',
         global_arguments.dry_run,

borgmatic/actions/compact.py

@@ -11,7 +11,9 @@ logger = logging.getLogger(__name__)

 def run_compact(
     config_filename,
     repository,
-    config,
+    storage,
+    retention,
+    hooks,
     hook_context,
     local_borg_version,
     compact_arguments,
@@ -29,8 +31,8 @@ def run_compact(
         return

     borgmatic.hooks.command.execute_hook(
-        config.get('before_compact'),
-        config.get('umask'),
+        hooks.get('before_compact'),
+        hooks.get('umask'),
         config_filename,
         'pre-compact',
         global_arguments.dry_run,
@@ -43,7 +45,7 @@ def run_compact(
         borgmatic.borg.compact.compact_segments(
             global_arguments.dry_run,
             repository['path'],
-            config,
+            storage,
             local_borg_version,
             global_arguments,
             local_path=local_path,
@@ -57,8 +59,8 @@ def run_compact(
             f'{repository.get("label", repository["path"])}: Skipping compact (only available/needed in Borg 1.2+)'
         )

     borgmatic.hooks.command.execute_hook(
-        config.get('after_compact'),
-        config.get('umask'),
+        hooks.get('after_compact'),
+        hooks.get('umask'),
         config_filename,
         'post-compact',
         global_arguments.dry_run,

borgmatic/actions/config/bootstrap.py

@@ -13,11 +13,14 @@ logger = logging.getLogger(__name__)

 def get_config_paths(bootstrap_arguments, global_arguments, local_borg_version):
     '''
-    Given the bootstrap arguments as an argparse.Namespace (containing the repository and archive
-    name, borgmatic source directory, destination directory, and whether to strip components), the
-    global arguments as an argparse.Namespace (containing the dry run flag and the local borg
-    version), return the config paths from the manifest.json file in the borgmatic source directory
-    after extracting it from the repository.
+    Given:
+    The bootstrap arguments, which include the repository and archive name, borgmatic source directory,
+    destination directory, and whether to strip components.
+    The global arguments, which include the dry run flag
+    and the local borg version,
+    Return:
+    The config paths from the manifest.json file in the borgmatic source directory after extracting it from the
+    repository.

     Raise ValueError if the manifest JSON is missing, can't be decoded, or doesn't contain the
     expected configuration path data.
@@ -28,26 +31,25 @@ def get_config_paths(bootstrap_arguments, global_arguments, local_borg_version):
     borgmatic_manifest_path = os.path.expanduser(
         os.path.join(borgmatic_source_directory, 'bootstrap', 'manifest.json')
     )
-    config = {'ssh_command': bootstrap_arguments.ssh_command}
     extract_process = borgmatic.borg.extract.extract_archive(
         global_arguments.dry_run,
         bootstrap_arguments.repository,
         borgmatic.borg.rlist.resolve_archive_name(
             bootstrap_arguments.repository,
             bootstrap_arguments.archive,
-            config,
+            {},
             local_borg_version,
             global_arguments,
         ),
         [borgmatic_manifest_path],
-        config,
+        {},
+        {},
         local_borg_version,
         global_arguments,
         extract_to_stdout=True,
     )
-
     manifest_json = extract_process.stdout.read()
+
     if not manifest_json:
         raise ValueError(
             'Cannot read configuration paths from archive due to missing bootstrap manifest'
@@ -78,7 +80,6 @@ def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
     manifest_config_paths = get_config_paths(
         bootstrap_arguments, global_arguments, local_borg_version
     )
-    config = {'ssh_command': bootstrap_arguments.ssh_command}

     logger.info(f"Bootstrapping config paths: {', '.join(manifest_config_paths)}")

@@ -88,12 +89,13 @@ def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
         borgmatic.borg.rlist.resolve_archive_name(
             bootstrap_arguments.repository,
             bootstrap_arguments.archive,
-            config,
+            {},
             local_borg_version,
             global_arguments,
         ),
         [config_path.lstrip(os.path.sep) for config_path in manifest_config_paths],
-        config,
+        {},
+        {},
         local_borg_version,
         global_arguments,
         extract_to_stdout=False,

borgmatic/actions/config/generate.py

@@ -2,7 +2,6 @@ import logging

 import borgmatic.config.generate
 import borgmatic.config.validate
-import borgmatic.logger

 logger = logging.getLogger(__name__)

@@ -15,7 +14,6 @@ def run_generate(generate_arguments, global_arguments):
     Raise FileExistsError if a file already exists at the destination path and the generate
     arguments do not have overwrite set.
     '''
-    borgmatic.logger.add_custom_log_levels()
     dry_run_label = ' (dry run; not actually writing anything)' if global_arguments.dry_run else ''

     logger.answer(

borgmatic/actions/config/validate.py

@@ -1,7 +1,6 @@
 import logging

 import borgmatic.config.generate
-import borgmatic.logger

 logger = logging.getLogger(__name__)

@@ -15,8 +14,6 @@ def run_validate(validate_arguments, configs):
     loading machinery prior to here, so this function mainly exists to support additional validate
     flags like "--show".
     '''
-    borgmatic.logger.add_custom_log_levels()
-
     if validate_arguments.show:
         for config_path, config in configs.items():
             if len(configs) > 1:

borgmatic/actions/create.py

@@ -1,9 +1,12 @@
-import importlib.metadata
 import json
 import logging
 import os

-import borgmatic.actions.json
+try:
+    import importlib_metadata
+except ModuleNotFoundError:  # pragma: nocover
+    import importlib.metadata as importlib_metadata
+
 import borgmatic.borg.create
 import borgmatic.borg.state
 import borgmatic.config.validate
@@ -14,7 +17,7 @@ import borgmatic.hooks.dump
 logger = logging.getLogger(__name__)


-def create_borgmatic_manifest(config, config_paths, dry_run):
+def create_borgmatic_manifest(location, config_paths, dry_run):
     '''
     Create a borgmatic manifest file to store the paths to the configuration files used to create
     the archive.
@@ -22,7 +25,7 @@ def create_borgmatic_manifest(config, config_paths, dry_run):
     if dry_run:
         return

-    borgmatic_source_directory = config.get(
+    borgmatic_source_directory = location.get(
         'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
     )
@@ -36,7 +39,7 @@ def create_borgmatic_manifest(config, config_paths, dry_run):
     with open(borgmatic_manifest_path, 'w') as config_list_file:
         json.dump(
             {
-                'borgmatic_version': importlib.metadata.version('borgmatic'),
+                'borgmatic_version': importlib_metadata.version('borgmatic'),
                 'config_paths': config_paths,
             },
             config_list_file,
@@ -46,8 +49,9 @@ def create_borgmatic_manifest(config, config_paths, dry_run):
 def run_create(
     config_filename,
     repository,
-    config,
-    config_paths,
+    location,
+    storage,
+    hooks,
     hook_context,
     local_borg_version,
     create_arguments,
@@ -67,8 +71,8 @@ def run_create(
         return

     borgmatic.hooks.command.execute_hook(
-        config.get('before_backup'),
-        config.get('umask'),
+        hooks.get('before_backup'),
+        hooks.get('umask'),
         config_filename,
         'pre-backup',
         global_arguments.dry_run,
@@ -76,32 +80,31 @@ def run_create(
     )
     logger.info(f'{repository.get("label", repository["path"])}: Creating archive{dry_run_label}')
     borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
-        'remove_data_source_dumps',
-        config,
+        'remove_database_dumps',
+        hooks,
         repository['path'],
-        borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
+        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
+        location,
         global_arguments.dry_run,
     )
     active_dumps = borgmatic.hooks.dispatch.call_hooks(
-        'dump_data_sources',
-        config,
+        'dump_databases',
+        hooks,
         repository['path'],
-        borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
+        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
+        location,
         global_arguments.dry_run,
     )
-    if config.get('store_config_files', True):
-        create_borgmatic_manifest(
-            config,
-            config_paths,
-            global_arguments.dry_run,
-        )
+    create_borgmatic_manifest(
+        location, global_arguments.used_config_paths, global_arguments.dry_run
+    )
     stream_processes = [process for processes in active_dumps.values() for process in processes]

     json_output = borgmatic.borg.create.create_archive(
         global_arguments.dry_run,
         repository['path'],
-        config,
-        config_paths,
+        location,
+        storage,
         local_borg_version,
         global_arguments,
         local_path=local_path,
@@ -112,19 +115,20 @@ def run_create(
         list_files=create_arguments.list_files,
         stream_processes=stream_processes,
     )
-    if json_output:
-        yield borgmatic.actions.json.parse_json(json_output, repository.get('label'))
+    if json_output:  # pragma: nocover
+        yield json.loads(json_output)

     borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
-        'remove_data_source_dumps',
-        config,
+        'remove_database_dumps',
+        hooks,
         config_filename,
-        borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
+        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
+        location,
         global_arguments.dry_run,
     )
     borgmatic.hooks.command.execute_hook(
-        config.get('after_backup'),
-        config.get('umask'),
+        hooks.get('after_backup'),
+        hooks.get('umask'),
         config_filename,
         'post-backup',
         global_arguments.dry_run,
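For reference, create_borgmatic_manifest on both sides writes a two-key JSON document; reading it back is enough to see the shape. A small sketch, assuming the default ~/.borgmatic/bootstrap/manifest.json location (the borgmatic_manifest_path construction itself falls outside the hunks shown above, so that path is an assumption):

    import json
    import os

    # Default borgmatic_source_directory is ~/.borgmatic; 'bootstrap/manifest.json'
    # is an assumption about the elided borgmatic_manifest_path.
    manifest_path = os.path.expanduser('~/.borgmatic/bootstrap/manifest.json')

    with open(manifest_path) as manifest_file:
        manifest = json.load(manifest_file)

    # The only two keys written by the json.dump() call above.
    print(manifest['borgmatic_version'])
    print(manifest['config_paths'])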
View File
@@ -1,33 +0,0 @@
-import logging
-
-import borgmatic.borg.export_key
-import borgmatic.config.validate
-
-logger = logging.getLogger(__name__)
-
-
-def run_export_key(
-    repository,
-    config,
-    local_borg_version,
-    export_arguments,
-    global_arguments,
-    local_path,
-    remote_path,
-):
-    '''
-    Run the "key export" action for the given repository.
-    '''
-    if export_arguments.repository is None or borgmatic.config.validate.repositories_match(
-        repository, export_arguments.repository
-    ):
-        logger.info(f'{repository.get("label", repository["path"])}: Exporting repository key')
-        borgmatic.borg.export_key.export_key(
-            repository['path'],
-            config,
-            local_borg_version,
-            export_arguments,
-            global_arguments,
-            local_path=local_path,
-            remote_path=remote_path,
-        )
View File
@@ -9,7 +9,7 @@ logger = logging.getLogger(__name__)

 def run_export_tar(
     repository,
-    config,
+    storage,
     local_borg_version,
     export_tar_arguments,
     global_arguments,
@@ -31,7 +31,7 @@ def run_export_tar(
         borgmatic.borg.rlist.resolve_archive_name(
             repository['path'],
             export_tar_arguments.archive,
-            config,
+            storage,
             local_borg_version,
             global_arguments,
             local_path,
@@ -39,7 +39,7 @@ def run_export_tar(
         ),
         export_tar_arguments.paths,
         export_tar_arguments.destination,
-        config,
+        storage,
         local_borg_version,
         global_arguments,
         local_path=local_path,
View File
@@ -11,7 +11,9 @@ logger = logging.getLogger(__name__)

 def run_extract(
     config_filename,
     repository,
-    config,
+    location,
+    storage,
+    hooks,
     hook_context,
     local_borg_version,
     extract_arguments,
@@ -23,8 +25,8 @@ def run_extract(
     Run the "extract" action for the given repository.
     '''
     borgmatic.hooks.command.execute_hook(
-        config.get('before_extract'),
-        config.get('umask'),
+        hooks.get('before_extract'),
+        hooks.get('umask'),
         config_filename,
         'pre-extract',
         global_arguments.dry_run,
@@ -42,14 +44,15 @@ def run_extract(
         borgmatic.borg.rlist.resolve_archive_name(
             repository['path'],
             extract_arguments.archive,
-            config,
+            storage,
             local_borg_version,
             global_arguments,
             local_path,
             remote_path,
         ),
         extract_arguments.paths,
-        config,
+        location,
+        storage,
         local_borg_version,
         global_arguments,
         local_path=local_path,
@@ -59,8 +62,8 @@ def run_extract(
         progress=extract_arguments.progress,
     )
     borgmatic.hooks.command.execute_hook(
-        config.get('after_extract'),
-        config.get('umask'),
+        hooks.get('after_extract'),
+        hooks.get('umask'),
         config_filename,
         'post-extract',
         global_arguments.dry_run,
View File
@@ -1,7 +1,7 @@
+import json
 import logging

 import borgmatic.actions.arguments
-import borgmatic.actions.json
 import borgmatic.borg.info
 import borgmatic.borg.rlist
 import borgmatic.config.validate
@@ -11,7 +11,7 @@ logger = logging.getLogger(__name__)

 def run_info(
     repository,
-    config,
+    storage,
     local_borg_version,
     info_arguments,
     global_arguments,
@@ -26,14 +26,14 @@ def run_info(
     if info_arguments.repository is None or borgmatic.config.validate.repositories_match(
         repository, info_arguments.repository
     ):
-        if not info_arguments.json:
+        if not info_arguments.json:  # pragma: nocover
             logger.answer(
                 f'{repository.get("label", repository["path"])}: Displaying archive summary information'
             )
         archive_name = borgmatic.borg.rlist.resolve_archive_name(
             repository['path'],
             info_arguments.archive,
-            config,
+            storage,
             local_borg_version,
             global_arguments,
             local_path,
@@ -41,12 +41,12 @@ def run_info(
         )
         json_output = borgmatic.borg.info.display_archives_info(
             repository['path'],
-            config,
+            storage,
             local_borg_version,
             borgmatic.actions.arguments.update_arguments(info_arguments, archive=archive_name),
             global_arguments,
             local_path,
             remote_path,
         )
-        if json_output:
-            yield borgmatic.actions.json.parse_json(json_output, repository.get('label'))
+        if json_output:  # pragma: nocover
+            yield json.loads(json_output)
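The swap from parse_json() to plain json.loads() here (and in the list, rinfo, and rlist actions below) means the branch's JSON output carries no injected repository label. A quick stdlib-only illustration:

    import json

    borg_output = '{"repository": {"id": "abc123"}}'
    data = json.loads(borg_output)

    # json.loads() returns the parsed dict as-is; a 'label' key only appears when
    # the parse_json() helper (deleted later in this diff) injects it.
    print('label' in data['repository'])  # False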
View File
@@ -1,30 +0,0 @@
-import json
-import logging
-
-logger = logging.getLogger(__name__)
-
-
-def parse_json(borg_json_output, label):
-    '''
-    Given a Borg JSON output string, parse it as JSON into a dict. Inject the given borgmatic
-    repository label into it and return the dict.
-
-    Raise JSONDecodeError if the JSON output cannot be parsed.
-    '''
-    lines = borg_json_output.splitlines()
-    start_line_index = 0
-
-    # Scan forward to find the first line starting with "{" and assume that's where the JSON starts.
-    for line_index, line in enumerate(lines):
-        if line.startswith('{'):
-            start_line_index = line_index
-            break
-
-    json_data = json.loads('\n'.join(lines[start_line_index:]))
-
-    if 'repository' not in json_data:
-        return json_data
-
-    json_data['repository']['label'] = label or ''
-
-    return json_data
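The two behaviors this deleted helper provided, distilled into a runnable snippet: skipping any non-JSON lines Borg may print before its JSON document, and injecting a repository label (the sample warning line and label are made up):

    import json

    lines = [
        "Remote: Warning: Permanently added 'example.org' to known hosts.",
        '{"repository": {"id": "abc123"}}',
    ]

    # Scan forward to the first line starting with '{', as parse_json did above.
    start = next(index for index, line in enumerate(lines) if line.startswith('{'))
    data = json.loads('\n'.join(lines[start:]))

    data['repository']['label'] = 'myrepo'  # the label-injection step
    print(data['repository']['label'])  # myrepo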
View File
@@ -1,7 +1,7 @@
+import json
 import logging

 import borgmatic.actions.arguments
-import borgmatic.actions.json
 import borgmatic.borg.list
 import borgmatic.config.validate

@@ -10,7 +10,7 @@ logger = logging.getLogger(__name__)

 def run_list(
     repository,
-    config,
+    storage,
     local_borg_version,
     list_arguments,
     global_arguments,
@@ -25,16 +25,16 @@ def run_list(
     if list_arguments.repository is None or borgmatic.config.validate.repositories_match(
         repository, list_arguments.repository
     ):
-        if not list_arguments.json:
-            if list_arguments.find_paths:  # pragma: no cover
+        if not list_arguments.json:  # pragma: nocover
+            if list_arguments.find_paths:
                 logger.answer(f'{repository.get("label", repository["path"])}: Searching archives')
-            elif not list_arguments.archive:  # pragma: no cover
+            elif not list_arguments.archive:
                 logger.answer(f'{repository.get("label", repository["path"])}: Listing archives')
         archive_name = borgmatic.borg.rlist.resolve_archive_name(
             repository['path'],
             list_arguments.archive,
-            config,
+            storage,
             local_borg_version,
             global_arguments,
             local_path,
@@ -42,12 +42,12 @@ def run_list(
         )
         json_output = borgmatic.borg.list.list_archive(
             repository['path'],
-            config,
+            storage,
             local_borg_version,
             borgmatic.actions.arguments.update_arguments(list_arguments, archive=archive_name),
             global_arguments,
             local_path,
             remote_path,
         )
-        if json_output:
-            yield borgmatic.actions.json.parse_json(json_output, repository.get('label'))
+        if json_output:  # pragma: nocover
+            yield json.loads(json_output)
View File
@@ -9,7 +9,7 @@ logger = logging.getLogger(__name__)

 def run_mount(
     repository,
-    config,
+    storage,
     local_borg_version,
     mount_arguments,
     global_arguments,
@@ -34,14 +34,14 @@ def run_mount(
         borgmatic.borg.rlist.resolve_archive_name(
             repository['path'],
             mount_arguments.archive,
-            config,
+            storage,
             local_borg_version,
             global_arguments,
             local_path,
             remote_path,
         ),
         mount_arguments,
-        config,
+        storage,
         local_borg_version,
         global_arguments,
         local_path=local_path,
View File
@@ -10,7 +10,9 @@ logger = logging.getLogger(__name__)

 def run_prune(
     config_filename,
     repository,
-    config,
+    storage,
+    retention,
+    hooks,
     hook_context,
     local_borg_version,
     prune_arguments,
@@ -28,8 +30,8 @@ def run_prune(
         return

     borgmatic.hooks.command.execute_hook(
-        config.get('before_prune'),
-        config.get('umask'),
+        hooks.get('before_prune'),
+        hooks.get('umask'),
         config_filename,
         'pre-prune',
         global_arguments.dry_run,
@@ -39,7 +41,8 @@ def run_prune(
     borgmatic.borg.prune.prune_archives(
         global_arguments.dry_run,
         repository['path'],
-        config,
+        storage,
+        retention,
         local_borg_version,
         prune_arguments,
         global_arguments,
@@ -47,8 +50,8 @@ def run_prune(
         remote_path=remote_path,
     )
     borgmatic.hooks.command.execute_hook(
-        config.get('after_prune'),
-        config.get('umask'),
+        hooks.get('after_prune'),
+        hooks.get('umask'),
         config_filename,
         'post-prune',
         global_arguments.dry_run,
View File
@@ -8,7 +8,7 @@ logger = logging.getLogger(__name__)

 def run_rcreate(
     repository,
-    config,
+    storage,
     local_borg_version,
     rcreate_arguments,
     global_arguments,
@@ -27,7 +27,7 @@ def run_rcreate(
     borgmatic.borg.rcreate.create_repository(
         global_arguments.dry_run,
         repository['path'],
-        config,
+        storage,
         local_borg_version,
         global_arguments,
         rcreate_arguments.encryption_mode,
View File
@@ -17,86 +17,85 @@ logger = logging.getLogger(__name__)
 UNSPECIFIED_HOOK = object()


-def get_configured_data_source(
-    config,
-    archive_data_source_names,
-    hook_name,
-    data_source_name,
-    configuration_data_source_name=None,
+def get_configured_database(
+    hooks, archive_database_names, hook_name, database_name, configuration_database_name=None
 ):
     '''
-    Find the first data source with the given hook name and data source name in the configuration
-    dict and the given archive data source names dict (from hook name to data source names contained
-    in a particular backup archive). If UNSPECIFIED_HOOK is given as the hook name, search all data
-    source hooks for the named data source. If a configuration data source name is given, use that
-    instead of the data source name to lookup the data source in the given hooks configuration.
+    Find the first database with the given hook name and database name in the configured hooks
+    dict and the given archive database names dict (from hook name to database names contained in
+    a particular backup archive). If UNSPECIFIED_HOOK is given as the hook name, search all database
+    hooks for the named database. If a configuration database name is given, use that instead of the
+    database name to lookup the database in the given hooks configuration.

-    Return the found data source as a tuple of (found hook name, data source configuration dict) or
-    (None, None) if not found.
+    Return the found database as a tuple of (found hook name, database configuration dict).
     '''
-    if not configuration_data_source_name:
-        configuration_data_source_name = data_source_name
+    if not configuration_database_name:
+        configuration_database_name = database_name

     if hook_name == UNSPECIFIED_HOOK:
-        hooks_to_search = {
-            hook_name: value
-            for (hook_name, value) in config.items()
-            if hook_name in borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES
-        }
+        hooks_to_search = hooks
     else:
-        try:
-            hooks_to_search = {hook_name: config[hook_name]}
-        except KeyError:
-            return (None, None)
+        hooks_to_search = {hook_name: hooks[hook_name]}

     return next(
         (
-            (name, hook_data_source)
+            (name, hook_database)
             for (name, hook) in hooks_to_search.items()
-            for hook_data_source in hook
-            if hook_data_source['name'] == configuration_data_source_name
-            and data_source_name in archive_data_source_names.get(name, [])
+            for hook_database in hook
+            if hook_database['name'] == configuration_database_name
+            and database_name in archive_database_names.get(name, [])
         ),
         (None, None),
     )


-def restore_single_data_source(
+def get_configured_hook_name_and_database(hooks, database_name):
+    '''
+    Find the hook name and first database dict with the given database name in the configured hooks
+    dict. This searches across all database hooks.
+    '''
+
+
+def restore_single_database(
     repository,
-    config,
+    location,
+    storage,
+    hooks,
     local_borg_version,
     global_arguments,
     local_path,
     remote_path,
     archive_name,
     hook_name,
-    data_source,
+    database,
     connection_params,
 ):  # pragma: no cover
     '''
-    Given (among other things) an archive name, a data source hook name, the hostname, port,
-    username/password as connection params, and a configured data source configuration dict, restore
-    that data source from the archive.
+    Given (among other things) an archive name, a database hook name, the hostname,
+    port, username and password as connection params, and a configured database
+    configuration dict, restore that database from the archive.
     '''
     logger.info(
-        f'{repository.get("label", repository["path"])}: Restoring data source {data_source["name"]}'
+        f'{repository.get("label", repository["path"])}: Restoring database {database["name"]}'
     )

     dump_pattern = borgmatic.hooks.dispatch.call_hooks(
-        'make_data_source_dump_pattern',
-        config,
+        'make_database_dump_pattern',
+        hooks,
         repository['path'],
-        borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
-        data_source['name'],
+        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
+        location,
+        database['name'],
     )[hook_name]

-    # Kick off a single data source extract to stdout.
+    # Kick off a single database extract to stdout.
     extract_process = borgmatic.borg.extract.extract_archive(
         dry_run=global_arguments.dry_run,
         repository=repository['path'],
         archive=archive_name,
         paths=borgmatic.hooks.dump.convert_glob_patterns_to_borg_patterns([dump_pattern]),
-        config=config,
+        location_config=location,
+        storage_config=storage,
         local_borg_version=local_borg_version,
         global_arguments=global_arguments,
         local_path=local_path,
@@ -104,90 +103,89 @@ def restore_single_data_source(
         destination_path='/',
         # A directory format dump isn't a single file, and therefore can't extract
         # to stdout. In this case, the extract_process return value is None.
-        extract_to_stdout=bool(data_source.get('format') != 'directory'),
+        extract_to_stdout=bool(database.get('format') != 'directory'),
     )

-    # Run a single data source restore, consuming the extract stdout (if any).
+    # Run a single database restore, consuming the extract stdout (if any).
     borgmatic.hooks.dispatch.call_hooks(
-        function_name='restore_data_source_dump',
-        config=config,
-        log_prefix=repository['path'],
-        hook_names=[hook_name],
-        data_source=data_source,
-        dry_run=global_arguments.dry_run,
-        extract_process=extract_process,
-        connection_params=connection_params,
+        'restore_database_dump',
+        {hook_name: [database]},
+        repository['path'],
+        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
+        location,
+        global_arguments.dry_run,
+        extract_process,
+        connection_params,
     )


-def collect_archive_data_source_names(
+def collect_archive_database_names(
     repository,
     archive,
-    config,
+    location,
+    storage,
     local_borg_version,
     global_arguments,
     local_path,
     remote_path,
 ):
     '''
-    Given a local or remote repository path, a resolved archive name, a configuration dict, the
-    local Borg version, global_arguments an argparse.Namespace, and local and remote Borg paths,
-    query the archive for the names of data sources it contains as dumps and return them as a dict
-    from hook name to a sequence of data source names.
+    Given a local or remote repository path, a resolved archive name, a location configuration dict,
+    a storage configuration dict, the local Borg version, global_arguments an argparse.Namespace,
+    and local and remote Borg paths, query the archive for the names of databases it contains and
+    return them as a dict from hook name to a sequence of database names.
     '''
     borgmatic_source_directory = os.path.expanduser(
-        config.get(
+        location.get(
             'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
         )
     ).lstrip('/')
+    parent_dump_path = os.path.expanduser(
+        borgmatic.hooks.dump.make_database_dump_path(borgmatic_source_directory, '*_databases/*/*')
+    )

     dump_paths = borgmatic.borg.list.capture_archive_listing(
         repository,
         archive,
-        config,
+        storage,
         local_borg_version,
         global_arguments,
-        list_paths=[
-            os.path.expanduser(
-                borgmatic.hooks.dump.make_data_source_dump_path(borgmatic_source_directory, pattern)
-            )
-            for pattern in ('*_databases/*/*',)
-        ],
+        list_path=parent_dump_path,
         local_path=local_path,
         remote_path=remote_path,
     )

-    # Determine the data source names corresponding to the dumps found in the archive and
+    # Determine the database names corresponding to the dumps found in the archive and
     # add them to restore_names.
-    archive_data_source_names = {}
+    archive_database_names = {}

     for dump_path in dump_paths:
         try:
-            (hook_name, _, data_source_name) = dump_path.split(
+            (hook_name, _, database_name) = dump_path.split(
                 borgmatic_source_directory + os.path.sep, 1
             )[1].split(os.path.sep)[0:3]
         except (ValueError, IndexError):
             logger.warning(
-                f'{repository}: Ignoring invalid data source dump path "{dump_path}" in archive {archive}'
+                f'{repository}: Ignoring invalid database dump path "{dump_path}" in archive {archive}'
             )
         else:
-            if data_source_name not in archive_data_source_names.get(hook_name, []):
-                archive_data_source_names.setdefault(hook_name, []).extend([data_source_name])
+            if database_name not in archive_database_names.get(hook_name, []):
+                archive_database_names.setdefault(hook_name, []).extend([database_name])

-    return archive_data_source_names
+    return archive_database_names


-def find_data_sources_to_restore(requested_data_source_names, archive_data_source_names):
+def find_databases_to_restore(requested_database_names, archive_database_names):
     '''
-    Given a sequence of requested data source names to restore and a dict of hook name to the names
-    of data sources found in an archive, return an expanded sequence of data source names to
-    restore, replacing "all" with actual data source names as appropriate.
+    Given a sequence of requested database names to restore and a dict of hook name to the names of
+    databases found in an archive, return an expanded sequence of database names to restore,
+    replacing "all" with actual database names as appropriate.

-    Raise ValueError if any of the requested data source names cannot be found in the archive.
+    Raise ValueError if any of the requested database names cannot be found in the archive.
     '''
-    # A map from data source hook name to the data source names to restore for that hook.
+    # A map from database hook name to the database names to restore for that hook.
     restore_names = (
-        {UNSPECIFIED_HOOK: requested_data_source_names}
-        if requested_data_source_names
+        {UNSPECIFIED_HOOK: requested_database_names}
+        if requested_database_names
         else {UNSPECIFIED_HOOK: ['all']}
     )

@@ -196,65 +194,64 @@ def find_data_sources_to_restore(requested_data_source_names, archive_data_source_names):
     if 'all' in restore_names[UNSPECIFIED_HOOK]:
         restore_names[UNSPECIFIED_HOOK].remove('all')

-        for hook_name, data_source_names in archive_data_source_names.items():
-            restore_names.setdefault(hook_name, []).extend(data_source_names)
+        for hook_name, database_names in archive_database_names.items():
+            restore_names.setdefault(hook_name, []).extend(database_names)

-            # If a data source is to be restored as part of "all", then remove it from restore names
-            # so it doesn't get restored twice.
-            for data_source_name in data_source_names:
-                if data_source_name in restore_names[UNSPECIFIED_HOOK]:
-                    restore_names[UNSPECIFIED_HOOK].remove(data_source_name)
+            # If a database is to be restored as part of "all", then remove it from restore names so
+            # it doesn't get restored twice.
+            for database_name in database_names:
+                if database_name in restore_names[UNSPECIFIED_HOOK]:
+                    restore_names[UNSPECIFIED_HOOK].remove(database_name)

     if not restore_names[UNSPECIFIED_HOOK]:
         restore_names.pop(UNSPECIFIED_HOOK)

     combined_restore_names = set(
-        name for data_source_names in restore_names.values() for name in data_source_names
+        name for database_names in restore_names.values() for name in database_names
     )
-    combined_archive_data_source_names = set(
-        name
-        for data_source_names in archive_data_source_names.values()
-        for name in data_source_names
+    combined_archive_database_names = set(
+        name for database_names in archive_database_names.values() for name in database_names
     )

-    missing_names = sorted(set(combined_restore_names) - combined_archive_data_source_names)
+    missing_names = sorted(set(combined_restore_names) - combined_archive_database_names)
     if missing_names:
         joined_names = ', '.join(f'"{name}"' for name in missing_names)
         raise ValueError(
-            f"Cannot restore data source{'s' if len(missing_names) > 1 else ''} {joined_names} missing from archive"
+            f"Cannot restore database{'s' if len(missing_names) > 1 else ''} {joined_names} missing from archive"
         )

     return restore_names


-def ensure_data_sources_found(restore_names, remaining_restore_names, found_names):
+def ensure_databases_found(restore_names, remaining_restore_names, found_names):
     '''
-    Given a dict from hook name to data source names to restore, a dict from hook name to remaining
-    data source names to restore, and a sequence of found (actually restored) data source names,
-    raise ValueError if requested data source to restore were missing from the archive and/or
-    configuration.
+    Given a dict from hook name to database names to restore, a dict from hook name to remaining
+    database names to restore, and a sequence of found (actually restored) database names, raise
+    ValueError if requested databases to restore were missing from the archive and/or configuration.
     '''
     combined_restore_names = set(
         name
-        for data_source_names in tuple(restore_names.values())
+        for database_names in tuple(restore_names.values())
         + tuple(remaining_restore_names.values())
-        for name in data_source_names
+        for name in database_names
     )

     if not combined_restore_names and not found_names:
-        raise ValueError('No data sources were found to restore')
+        raise ValueError('No databases were found to restore')

     missing_names = sorted(set(combined_restore_names) - set(found_names))
     if missing_names:
         joined_names = ', '.join(f'"{name}"' for name in missing_names)
         raise ValueError(
-            f"Cannot restore data source{'s' if len(missing_names) > 1 else ''} {joined_names} missing from borgmatic's configuration"
+            f"Cannot restore database{'s' if len(missing_names) > 1 else ''} {joined_names} missing from borgmatic's configuration"
         )


 def run_restore(
     repository,
-    config,
+    location,
+    storage,
+    hooks,
     local_borg_version,
     restore_arguments,
     global_arguments,
@@ -265,7 +262,7 @@ def run_restore(
     Run the "restore" action for the given repository, but only if the repository matches the
     requested repository in restore arguments.

-    Raise ValueError if a configured data source could not be found to restore.
+    Raise ValueError if a configured database could not be found to restore.
     '''
     if restore_arguments.repository and not borgmatic.config.validate.repositories_match(
         repository, restore_arguments.repository
@@ -273,38 +270,38 @@ def run_restore(
         return

     logger.info(
-        f'{repository.get("label", repository["path"])}: Restoring data sources from archive {restore_arguments.archive}'
+        f'{repository.get("label", repository["path"])}: Restoring databases from archive {restore_arguments.archive}'
     )

     borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
-        'remove_data_source_dumps',
-        config,
+        'remove_database_dumps',
+        hooks,
         repository['path'],
-        borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
+        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
+        location,
         global_arguments.dry_run,
     )

     archive_name = borgmatic.borg.rlist.resolve_archive_name(
         repository['path'],
         restore_arguments.archive,
-        config,
+        storage,
         local_borg_version,
         global_arguments,
         local_path,
         remote_path,
     )
-    archive_data_source_names = collect_archive_data_source_names(
+    archive_database_names = collect_archive_database_names(
         repository['path'],
         archive_name,
-        config,
+        location,
+        storage,
         local_borg_version,
         global_arguments,
         local_path,
         remote_path,
     )
-    restore_names = find_data_sources_to_restore(
-        restore_arguments.data_sources, archive_data_source_names
-    )
+    restore_names = find_databases_to_restore(restore_arguments.databases, archive_database_names)
     found_names = set()
     remaining_restore_names = {}
     connection_params = {
@@ -315,66 +312,71 @@ def run_restore(
         'restore_path': restore_arguments.restore_path,
     }

-    for hook_name, data_source_names in restore_names.items():
-        for data_source_name in data_source_names:
-            found_hook_name, found_data_source = get_configured_data_source(
-                config, archive_data_source_names, hook_name, data_source_name
+    for hook_name, database_names in restore_names.items():
+        for database_name in database_names:
+            found_hook_name, found_database = get_configured_database(
+                hooks, archive_database_names, hook_name, database_name
             )

-            if not found_data_source:
+            if not found_database:
                 remaining_restore_names.setdefault(found_hook_name or hook_name, []).append(
-                    data_source_name
+                    database_name
                 )
                 continue

-            found_names.add(data_source_name)
-            restore_single_data_source(
+            found_names.add(database_name)
+            restore_single_database(
                 repository,
-                config,
+                location,
+                storage,
+                hooks,
                 local_borg_version,
                 global_arguments,
                 local_path,
                 remote_path,
                 archive_name,
                 found_hook_name or hook_name,
-                dict(found_data_source, **{'schemas': restore_arguments.schemas}),
+                dict(found_database, **{'schemas': restore_arguments.schemas}),
                 connection_params,
             )

-    # For any data sources that weren't found via exact matches in the configuration, try to
+    # For any database that weren't found via exact matches in the hooks configuration, try to
     # fallback to "all" entries.
-    for hook_name, data_source_names in remaining_restore_names.items():
-        for data_source_name in data_source_names:
-            found_hook_name, found_data_source = get_configured_data_source(
-                config, archive_data_source_names, hook_name, data_source_name, 'all'
+    for hook_name, database_names in remaining_restore_names.items():
+        for database_name in database_names:
+            found_hook_name, found_database = get_configured_database(
+                hooks, archive_database_names, hook_name, database_name, 'all'
             )

-            if not found_data_source:
+            if not found_database:
                 continue

-            found_names.add(data_source_name)
-            data_source = copy.copy(found_data_source)
-            data_source['name'] = data_source_name
+            found_names.add(database_name)
+            database = copy.copy(found_database)
+            database['name'] = database_name

-            restore_single_data_source(
+            restore_single_database(
                 repository,
-                config,
+                location,
+                storage,
+                hooks,
                 local_borg_version,
                 global_arguments,
                 local_path,
                 remote_path,
                 archive_name,
                 found_hook_name or hook_name,
-                dict(data_source, **{'schemas': restore_arguments.schemas}),
+                dict(database, **{'schemas': restore_arguments.schemas}),
                 connection_params,
             )

     borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
-        'remove_data_source_dumps',
-        config,
+        'remove_database_dumps',
+        hooks,
         repository['path'],
-        borgmatic.hooks.dump.DATA_SOURCE_HOOK_NAMES,
+        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
+        location,
         global_arguments.dry_run,
     )

-    ensure_data_sources_found(restore_names, remaining_restore_names, found_names)
+    ensure_databases_found(restore_names, remaining_restore_names, found_names)
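The name-resolution flow above is easiest to see with concrete shapes. A distilled, self-contained sketch of the "all" expansion performed by find_data_sources_to_restore / find_databases_to_restore, ignoring the UNSPECIFIED_HOOK bookkeeping (the hook and database names are hypothetical):

    def expand_requested_names(requested, archive_names):
        # No explicit request (or 'all') restores everything found in the archive,
        # keyed by hook name, as in the real function above.
        if not requested or 'all' in requested:
            return {hook: list(names) for hook, names in archive_names.items()}

        found = {name for names in archive_names.values() for name in names}
        missing = sorted(set(requested) - found)
        if missing:
            raise ValueError(f'Cannot restore {missing} missing from archive')

        return {'requested': list(requested)}

    print(expand_requested_names((), {'postgresql_databases': ['users', 'orders']}))
    # {'postgresql_databases': ['users', 'orders']}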
View File
@@ -1,6 +1,6 @@
+import json
 import logging

-import borgmatic.actions.json
 import borgmatic.borg.rinfo
 import borgmatic.config.validate

@@ -9,7 +9,7 @@ logger = logging.getLogger(__name__)

 def run_rinfo(
     repository,
-    config,
+    storage,
     local_borg_version,
     rinfo_arguments,
     global_arguments,
@@ -24,19 +24,19 @@ def run_rinfo(
     if rinfo_arguments.repository is None or borgmatic.config.validate.repositories_match(
         repository, rinfo_arguments.repository
     ):
-        if not rinfo_arguments.json:
+        if not rinfo_arguments.json:  # pragma: nocover
             logger.answer(
                 f'{repository.get("label", repository["path"])}: Displaying repository summary information'
             )

         json_output = borgmatic.borg.rinfo.display_repository_info(
             repository['path'],
-            config,
+            storage,
             local_borg_version,
             rinfo_arguments=rinfo_arguments,
             global_arguments=global_arguments,
             local_path=local_path,
             remote_path=remote_path,
         )
-        if json_output:
-            yield borgmatic.actions.json.parse_json(json_output, repository.get('label'))
+        if json_output:  # pragma: nocover
+            yield json.loads(json_output)
View File
@@ -1,6 +1,6 @@
+import json
 import logging

-import borgmatic.actions.json
 import borgmatic.borg.rlist
 import borgmatic.config.validate

@@ -9,7 +9,7 @@ logger = logging.getLogger(__name__)

 def run_rlist(
     repository,
-    config,
+    storage,
     local_borg_version,
     rlist_arguments,
     global_arguments,
@@ -24,17 +24,17 @@ def run_rlist(
     if rlist_arguments.repository is None or borgmatic.config.validate.repositories_match(
         repository, rlist_arguments.repository
     ):
-        if not rlist_arguments.json:
+        if not rlist_arguments.json:  # pragma: nocover
             logger.answer(f'{repository.get("label", repository["path"])}: Listing repository')

         json_output = borgmatic.borg.rlist.list_repository(
             repository['path'],
-            config,
+            storage,
             local_borg_version,
             rlist_arguments=rlist_arguments,
             global_arguments=global_arguments,
             local_path=local_path,
             remote_path=remote_path,
         )
-        if json_output:
-            yield borgmatic.actions.json.parse_json(json_output, repository.get('label'))
+        if json_output:  # pragma: nocover
+            yield json.loads(json_output)
View File
@@ -7,7 +7,7 @@ logger = logging.getLogger(__name__)

 def run_transfer(
     repository,
-    config,
+    storage,
     local_borg_version,
     transfer_arguments,
     global_arguments,
@@ -23,7 +23,7 @@ def run_transfer(
     borgmatic.borg.transfer.transfer_archives(
         global_arguments.dry_run,
         repository['path'],
-        config,
+        storage,
         local_borg_version,
         transfer_arguments,
         global_arguments,
View File
@@ -1,5 +1,4 @@
 import logging
-import shlex

 import borgmatic.commands.arguments
 import borgmatic.logger
@@ -14,7 +13,7 @@ BORG_SUBCOMMANDS_WITH_SUBCOMMANDS = {'key', 'debug'}

 def run_arbitrary_borg(
     repository_path,
-    config,
+    storage_config,
     local_borg_version,
     options,
     archive=None,
@@ -22,13 +21,13 @@ def run_arbitrary_borg(
     remote_path=None,
 ):
     '''
-    Given a local or remote repository path, a configuration dict, the local Borg version, a
+    Given a local or remote repository path, a storage config dict, the local Borg version, a
     sequence of arbitrary command-line Borg options, and an optional archive name, run an arbitrary
     Borg command, passing in REPOSITORY and ARCHIVE environment variables for optional use in the
     command.
     '''
     borgmatic.logger.add_custom_log_levels()
-    lock_wait = config.get('lock_wait', None)
+    lock_wait = storage_config.get('lock_wait', None)

     try:
         options = options[1:] if options[0] == '--' else options
@@ -57,16 +56,15 @@ def run_arbitrary_borg(
     )

     return execute_command(
-        tuple(shlex.quote(part) for part in full_command),
+        full_command,
         output_file=DO_NOT_CAPTURE,
+        borg_local_path=local_path,
         shell=True,
         extra_environment=dict(
-            (environment.make_environment(config) or {}),
+            (environment.make_environment(storage_config) or {}),
             **{
                 'BORG_REPO': repository_path,
                 'ARCHIVE': archive if archive else '',
             },
         ),
-        borg_local_path=local_path,
-        borg_exit_codes=config.get('borg_exit_codes'),
     )
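Context for the shlex.quote() change above: with shell=True, each command part is re-parsed by the shell, so main quotes every part to keep archive names or format strings containing spaces and metacharacters intact. A quick demonstration:

    import shlex

    full_command = ('borg', 'list', '--format', '{name} {time}')

    print(' '.join(full_command))
    # borg list --format {name} {time}    <- the format string splits into two words

    print(' '.join(shlex.quote(part) for part in full_command))
    # borg list --format '{name} {time}'  <- survives the shell intact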
View File
@@ -8,19 +8,19 @@ logger = logging.getLogger(__name__)

 def break_lock(
     repository_path,
-    config,
+    storage_config,
     local_borg_version,
     global_arguments,
     local_path='borg',
     remote_path=None,
 ):
     '''
-    Given a local or remote repository path, a configuration dict, the local Borg version, an
-    argparse.Namespace of global arguments, and optional local and remote Borg paths, break any
+    Given a local or remote repository path, a storage configuration dict, the local Borg version,
+    an argparse.Namespace of global arguments, and optional local and remote Borg paths, break any
     repository and cache locks leftover from Borg aborting.
     '''
-    umask = config.get('umask', None)
-    lock_wait = config.get('lock_wait', None)
+    umask = storage_config.get('umask', None)
+    lock_wait = storage_config.get('lock_wait', None)

     full_command = (
         (local_path, 'break-lock')
@@ -33,10 +33,5 @@ def break_lock(
         + flags.make_repository_flags(repository_path, local_borg_version)
     )

-    borg_environment = environment.make_environment(config)
-    execute_command(
-        full_command,
-        extra_environment=borg_environment,
-        borg_local_path=local_path,
-        borg_exit_codes=config.get('borg_exit_codes'),
-    )
+    borg_environment = environment.make_environment(storage_config)
+    execute_command(full_command, borg_local_path=local_path, extra_environment=borg_environment)
View File
@ -1,26 +1,169 @@
import argparse import argparse
import datetime
import hashlib
import itertools
import json import json
import logging import logging
import os
import pathlib
from borgmatic.borg import environment, feature, flags, rinfo from borgmatic.borg import environment, extract, feature, flags, rinfo, state
from borgmatic.execute import DO_NOT_CAPTURE, execute_command from borgmatic.execute import DO_NOT_CAPTURE, execute_command
DEFAULT_CHECKS = (
{'name': 'repository', 'frequency': '1 month'},
{'name': 'archives', 'frequency': '1 month'},
)
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
def make_archive_filter_flags(local_borg_version, config, checks, check_arguments): def parse_checks(consistency_config, only_checks=None):
''' '''
Given the local Borg version, a configuration dict, a parsed sequence of checks, and check Given a consistency config with a "checks" sequence of dicts and an optional list of override
arguments as an argparse.Namespace instance, transform the checks into tuple of command-line checks, return a tuple of named checks to run.
flags for filtering archives in a check command.
If "check_last" is set in the configuration and "archives" is in checks, then include a "--last" For example, given a retention config of:
flag. And if "prefix" is set in configuration and "archives" is in checks, then include a
"--match-archives" flag. {'checks': ({'name': 'repository'}, {'name': 'archives'})}
This will be returned as:
('repository', 'archives')
If no "checks" option is present in the config, return the DEFAULT_CHECKS. If a checks value
has a name of "disabled", return an empty tuple, meaning that no checks should be run.
''' '''
check_last = config.get('check_last', None) checks = only_checks or tuple(
prefix = config.get('prefix') check_config['name']
for check_config in (consistency_config.get('checks', None) or DEFAULT_CHECKS)
)
checks = tuple(check.lower() for check in checks)
if 'disabled' in checks:
if len(checks) > 1:
logger.warning(
'Multiple checks are configured, but one of them is "disabled"; not running any checks'
)
return ()
return checks
def parse_frequency(frequency):
'''
Given a frequency string with a number and a unit of time, return a corresponding
datetime.timedelta instance or None if the frequency is None or "always".
For instance, given "3 weeks", return datetime.timedelta(weeks=3)
Raise ValueError if the given frequency cannot be parsed.
'''
if not frequency:
return None
frequency = frequency.strip().lower()
if frequency == 'always':
return None
try:
number, time_unit = frequency.split(' ')
number = int(number)
except ValueError:
raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
if not time_unit.endswith('s'):
time_unit += 's'
if time_unit == 'months':
number *= 30
time_unit = 'days'
elif time_unit == 'years':
number *= 365
time_unit = 'days'
try:
return datetime.timedelta(**{time_unit: number})
except TypeError:
raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
def filter_checks_on_frequency(
location_config,
consistency_config,
borg_repository_id,
checks,
force,
archives_check_id=None,
):
'''
Given a location config, a consistency config with a "checks" sequence of dicts, a Borg
repository ID, a sequence of checks, whether to force checks to run, and an ID for the archives
check potentially being run (if any), filter down those checks based on the configured
"frequency" for each check as compared to its check time file.
In other words, a check whose check time file's timestamp is too new (based on the configured
frequency) will get cut from the returned sequence of checks. Example:
consistency_config = {
'checks': [
{
'name': 'archives',
'frequency': '2 weeks',
},
]
}
When this function is called with that consistency_config and "archives" in checks, "archives"
will get filtered out of the returned result if its check time file is newer than 2 weeks old,
indicating that it's not yet time to run that check again.
Raise ValueError if a frequency cannot be parsed.
'''
filtered_checks = list(checks)
if force:
return tuple(filtered_checks)
for check_config in consistency_config.get('checks', DEFAULT_CHECKS):
check = check_config['name']
if checks and check not in checks:
continue
frequency_delta = parse_frequency(check_config.get('frequency'))
if not frequency_delta:
continue
check_time = probe_for_check_time(
location_config, borg_repository_id, check, archives_check_id
)
if not check_time:
continue
# If we've not yet reached the time when the frequency dictates we're ready for another
# check, skip this check.
if datetime.datetime.now() < check_time + frequency_delta:
remaining = check_time + frequency_delta - datetime.datetime.now()
logger.info(
f'Skipping {check} check due to configured frequency; {remaining} until next check'
)
filtered_checks.remove(check)
return tuple(filtered_checks)
def make_archive_filter_flags(
local_borg_version, storage_config, checks, check_last=None, prefix=None
):
'''
Given the local Borg version, a storage configuration dict, a parsed sequence of checks, the
check last value, and a consistency check prefix, transform the checks into tuple of
command-line flags for filtering archives in a check command.
If a check_last value is given and "archives" is in checks, then include a "--last" flag. And if
a prefix value is given and "archives" is in checks, then include a "--match-archives" flag.
'''
if 'archives' in checks or 'data' in checks: if 'archives' in checks or 'data' in checks:
return (('--last', str(check_last)) if check_last else ()) + ( return (('--last', str(check_last)) if check_last else ()) + (
( (
@ -31,8 +174,8 @@ def make_archive_filter_flags(local_borg_version, config, checks, check_argument
if prefix if prefix
else ( else (
flags.make_match_archives_flags( flags.make_match_archives_flags(
check_arguments.match_archives or config.get('match_archives'), storage_config.get('match_archives'),
config.get('archive_name_format'), storage_config.get('archive_name_format'),
local_borg_version, local_borg_version,
) )
) )
@ -50,10 +193,21 @@ def make_archive_filter_flags(local_borg_version, config, checks, check_argument
return () return ()
def make_check_name_flags(checks, archive_filter_flags): def make_archives_check_id(archive_filter_flags):
''' '''
Given parsed checks set and a sequence of flags to filter archives, transform the checks into Given a sequence of flags to filter archives, return a unique hash corresponding to those
tuple of command-line check flags. particular flags. If there are no flags, return None.
'''
if not archive_filter_flags:
return None
return hashlib.sha256(' '.join(archive_filter_flags).encode()).hexdigest()
def make_check_flags(checks, archive_filter_flags):
'''
Given a parsed sequence of checks and a sequence of flags to filter archives, transform the
checks into tuple of command-line check flags.
For example, given parsed checks of: For example, given parsed checks of:
@ -68,13 +222,13 @@ def make_check_name_flags(checks, archive_filter_flags):
''' '''
if 'data' in checks: if 'data' in checks:
data_flags = ('--verify-data',) data_flags = ('--verify-data',)
checks.update({'archives'}) checks += ('archives',)
else: else:
data_flags = () data_flags = ()
common_flags = (archive_filter_flags if 'archives' in checks else ()) + data_flags common_flags = (archive_filter_flags if 'archives' in checks else ()) + data_flags
if {'repository', 'archives'}.issubset(checks): if {'repository', 'archives'}.issubset(set(checks)):
return common_flags return common_flags
return ( return (
@ -83,20 +237,153 @@ def make_check_name_flags(checks, archive_filter_flags):
) )
def get_repository_id( def make_check_time_path(location_config, borg_repository_id, check_type, archives_check_id=None):
repository_path, config, local_borg_version, global_arguments, local_path, remote_path '''
Given a location configuration dict, a Borg repository ID, the name of a check type
("repository", "archives", etc.), and a unique hash of the archives filter flags, return a
path for recording that check's time (the time of that check last occurring).
'''
borgmatic_source_directory = os.path.expanduser(
location_config.get('borgmatic_source_directory', state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY)
)
if check_type in ('archives', 'data'):
return os.path.join(
borgmatic_source_directory,
'checks',
borg_repository_id,
check_type,
archives_check_id if archives_check_id else 'all',
)
return os.path.join(
borgmatic_source_directory,
'checks',
borg_repository_id,
check_type,
)
def write_check_time(path): # pragma: no cover
'''
Record a check time of now as the modification time of the given path.
'''
logger.debug(f'Writing check time at {path}')
os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
pathlib.Path(path, mode=0o600).touch()
def read_check_time(path):
'''
Return the check time based on the modification time of the given path. Return None if the path
doesn't exist.
'''
logger.debug(f'Reading check time from {path}')
try:
return datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
except FileNotFoundError:
return None
def probe_for_check_time(location_config, borg_repository_id, check, archives_check_id):
'''
Given a location configuration dict, a Borg repository ID, the name of a check type
("repository", "archives", etc.), and a unique hash of the archives filter flags, return a
the corresponding check time or None if such a check time does not exist.
When the check type is "archives" or "data", this function probes two different paths to find
the check time, e.g.:
~/.borgmatic/checks/1234567890/archives/9876543210
~/.borgmatic/checks/1234567890/archives/all
... and returns the maximum modification time of the files found (if any). The first path
represents a more specific archives check time (a check on a subset of archives), and the second
is a fallback to the last "all" archives check.
For other check types, this function reads from a single check time path, e.g.:
~/.borgmatic/checks/1234567890/repository
'''
check_times = (
read_check_time(group[0])
for group in itertools.groupby(
(
make_check_time_path(location_config, borg_repository_id, check, archives_check_id),
make_check_time_path(location_config, borg_repository_id, check),
)
)
)
try:
return max(check_time for check_time in check_times if check_time)
except ValueError:
return None
def upgrade_check_times(location_config, borg_repository_id):
'''
Given a location configuration dict and a Borg repository ID, upgrade any corresponding check
times on disk from old-style paths to new-style paths.
Currently, the only upgrade performed is renaming an archive or data check path that looks like:
~/.borgmatic/checks/1234567890/archives
to:
~/.borgmatic/checks/1234567890/archives/all
'''
for check_type in ('archives', 'data'):
new_path = make_check_time_path(location_config, borg_repository_id, check_type, 'all')
old_path = os.path.dirname(new_path)
temporary_path = f'{old_path}.temp'
if not os.path.isfile(old_path) and not os.path.isfile(temporary_path):
continue
logger.debug(f'Upgrading archives check time from {old_path} to {new_path}')
try:
os.rename(old_path, temporary_path)
except FileNotFoundError:
pass
os.mkdir(old_path)
os.rename(temporary_path, new_path)
+def check_archives(
+    repository_path,
+    location_config,
+    storage_config,
+    consistency_config,
+    local_borg_version,
+    global_arguments,
+    local_path='borg',
+    remote_path=None,
+    progress=None,
+    repair=None,
+    only_checks=None,
+    force=None,
 ):
     '''
-    Given a local or remote repository path, a configuration dict, the local Borg version, global
-    arguments, and local/remote commands to run, return the corresponding Borg repository ID.
+    Given a local or remote repository path, a storage config dict, a consistency config dict,
+    local/remote commands to run, whether to include progress information, whether to attempt a
+    repair, and an optional list of checks to use instead of configured checks, check the contained
+    Borg archives for consistency.
+
+    If there are no consistency checks to run, skip running them.
 
-    Raise ValueError if the Borg repository ID cannot be determined.
+    Raises ValueError if the Borg repository ID cannot be determined.
     '''
     try:
-        return json.loads(
+        borg_repository_id = json.loads(
             rinfo.display_repository_info(
                 repository_path,
-                config,
+                storage_config,
                 local_borg_version,
                 argparse.Namespace(json=True),
                 global_arguments,
@@ -107,80 +394,73 @@ def get_repository_id(
     except (json.JSONDecodeError, KeyError):
         raise ValueError(f'Cannot determine Borg repository ID for {repository_path}')
 
-def check_archives(
-    repository_path,
-    config,
-    local_borg_version,
-    check_arguments,
-    global_arguments,
-    checks,
-    archive_filter_flags,
-    local_path='borg',
-    remote_path=None,
-):
-    '''
-    Given a local or remote repository path, a configuration dict, the local Borg version, check
-    arguments as an argparse.Namespace instance, global arguments, a set of named Borg checks to
-    run (some combination "repository", "archives", and/or "data"), archive filter flags, and
-    local/remote commands to run, check the contained Borg archives for consistency.
-    '''
-    lock_wait = config.get('lock_wait')
-    extra_borg_options = config.get('extra_borg_options', {}).get('check', '')
-
-    verbosity_flags = ()
-    if logger.isEnabledFor(logging.INFO):
-        verbosity_flags = ('--info',)
-    if logger.isEnabledFor(logging.DEBUG):
-        verbosity_flags = ('--debug', '--show-rc')
-
-    try:
-        repository_check_config = next(
-            check for check in config.get('checks', ()) if check.get('name') == 'repository'
-        )
-    except StopIteration:
-        repository_check_config = {}
-
-    if check_arguments.max_duration and 'archives' in checks:
-        raise ValueError('The archives check cannot run when the --max-duration flag is used')
-    if repository_check_config.get('max_duration') and 'archives' in checks:
-        raise ValueError(
-            'The archives check cannot run when the repository check has the max_duration option set'
-        )
-
-    max_duration = check_arguments.max_duration or repository_check_config.get('max_duration')
-
-    borg_environment = environment.make_environment(config)
-    borg_exit_codes = config.get('borg_exit_codes')
-
-    full_command = (
-        (local_path, 'check')
-        + (('--repair',) if check_arguments.repair else ())
-        + (('--max-duration', str(max_duration)) if max_duration else ())
-        + make_check_name_flags(checks, archive_filter_flags)
-        + (('--remote-path', remote_path) if remote_path else ())
-        + (('--log-json',) if global_arguments.log_json else ())
-        + (('--lock-wait', str(lock_wait)) if lock_wait else ())
-        + verbosity_flags
-        + (('--progress',) if check_arguments.progress else ())
-        + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
-        + flags.make_repository_flags(repository_path, local_borg_version)
-    )
-
-    # The Borg repair option triggers an interactive prompt, which won't work when output is
-    # captured. And progress messes with the terminal directly.
-    if check_arguments.repair or check_arguments.progress:
-        execute_command(
-            full_command,
-            output_file=DO_NOT_CAPTURE,
-            extra_environment=borg_environment,
-            borg_local_path=local_path,
-            borg_exit_codes=borg_exit_codes,
-        )
-    else:
-        execute_command(
-            full_command,
-            extra_environment=borg_environment,
-            borg_local_path=local_path,
-            borg_exit_codes=borg_exit_codes,
-        )
+    upgrade_check_times(location_config, borg_repository_id)
+
+    check_last = consistency_config.get('check_last', None)
+    prefix = consistency_config.get('prefix')
+    configured_checks = parse_checks(consistency_config, only_checks)
+    lock_wait = None
+    extra_borg_options = storage_config.get('extra_borg_options', {}).get('check', '')
+    archive_filter_flags = make_archive_filter_flags(
+        local_borg_version, storage_config, configured_checks, check_last, prefix
+    )
+    archives_check_id = make_archives_check_id(archive_filter_flags)
+
+    checks = filter_checks_on_frequency(
+        location_config,
+        consistency_config,
+        borg_repository_id,
+        configured_checks,
+        force,
+        archives_check_id,
+    )
+
+    if set(checks).intersection({'repository', 'archives', 'data'}):
+        lock_wait = storage_config.get('lock_wait')
+
+        verbosity_flags = ()
+        if logger.isEnabledFor(logging.INFO):
+            verbosity_flags = ('--info',)
+        if logger.isEnabledFor(logging.DEBUG):
+            verbosity_flags = ('--debug', '--show-rc')
+
+        full_command = (
+            (local_path, 'check')
+            + (('--repair',) if repair else ())
+            + make_check_flags(checks, archive_filter_flags)
+            + (('--remote-path', remote_path) if remote_path else ())
+            + (('--log-json',) if global_arguments.log_json else ())
+            + (('--lock-wait', str(lock_wait)) if lock_wait else ())
+            + verbosity_flags
+            + (('--progress',) if progress else ())
+            + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
+            + flags.make_repository_flags(repository_path, local_borg_version)
+        )
+
+        borg_environment = environment.make_environment(storage_config)
+
+        # The Borg repair option triggers an interactive prompt, which won't work when output is
+        # captured. And progress messes with the terminal directly.
+        if repair or progress:
+            execute_command(
+                full_command, output_file=DO_NOT_CAPTURE, extra_environment=borg_environment
+            )
+        else:
+            execute_command(full_command, extra_environment=borg_environment)
+
+        for check in checks:
+            write_check_time(
+                make_check_time_path(location_config, borg_repository_id, check, archives_check_id)
+            )
+
+    if 'extract' in checks:
+        extract.extract_last_archive_dry_run(
+            storage_config,
+            local_borg_version,
+            global_arguments,
+            repository_path,
+            lock_wait,
+            local_path,
+            remote_path,
+        )
+        write_check_time(make_check_time_path(location_config, borg_repository_id, 'extract'))
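To make the path scheme concrete, here is a small sketch of what make_check_time_path() presumably assembles for the calls above; the base directory and hash values are hypothetical, following the examples in the docstrings:

import os

borgmatic_source_directory = '~/.borgmatic'
borg_repository_id = '1234567890'

# An "archives" check scoped to a particular archives check ID:
print(os.path.join(
    os.path.expanduser(borgmatic_source_directory), 'checks', borg_repository_id,
    'archives', '9876543210',
))
# -> ~/.borgmatic/checks/1234567890/archives/9876543210
# The "all" fallback and a simple check type would live at:
# ~/.borgmatic/checks/1234567890/archives/all
# ~/.borgmatic/checks/1234567890/repository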

View File

@@ -9,7 +9,7 @@ logger = logging.getLogger(__name__)
 def compact_segments(
     dry_run,
     repository_path,
-    config,
+    storage_config,
     local_borg_version,
     global_arguments,
     local_path='borg',
@@ -19,12 +19,12 @@ def compact_segments(
     threshold=None,
 ):
     '''
-    Given dry-run flag, a local or remote repository path, a configuration dict, and the local Borg
-    version, compact the segments in a repository.
+    Given dry-run flag, a local or remote repository path, a storage config dict, and the local
+    Borg version, compact the segments in a repository.
     '''
-    umask = config.get('umask', None)
-    lock_wait = config.get('lock_wait', None)
-    extra_borg_options = config.get('extra_borg_options', {}).get('compact', '')
+    umask = storage_config.get('umask', None)
+    lock_wait = storage_config.get('lock_wait', None)
+    extra_borg_options = storage_config.get('extra_borg_options', {}).get('compact', '')
 
     full_command = (
         (local_path, 'compact')
@@ -48,7 +48,6 @@ def compact_segments(
     execute_command(
         full_command,
         output_log_level=logging.INFO,
-        extra_environment=environment.make_environment(config),
         borg_local_path=local_path,
-        borg_exit_codes=config.get('borg_exit_codes'),
+        extra_environment=environment.make_environment(storage_config),
     )
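One detail shared by both sides: extra_borg_options is a plain string that gets split on single spaces before being appended to the command, so several options can be packed into one configured value. A quick illustration (option values invented):

extra_borg_options = '--progress --cleanup-commits'
print(tuple(extra_borg_options.split(' ')))
# ('--progress', '--cleanup-commits')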

View File

@@ -146,12 +146,12 @@ def ensure_files_readable(*filename_lists):
         open(file_object).close()
 
-def make_pattern_flags(config, pattern_filename=None):
+def make_pattern_flags(location_config, pattern_filename=None):
     '''
-    Given a configuration dict with a potential patterns_from option, and a filename containing any
-    additional patterns, return the corresponding Borg flags for those files as a tuple.
+    Given a location config dict with a potential patterns_from option, and a filename containing
+    any additional patterns, return the corresponding Borg flags for those files as a tuple.
     '''
-    pattern_filenames = tuple(config.get('patterns_from') or ()) + (
+    pattern_filenames = tuple(location_config.get('patterns_from') or ()) + (
         (pattern_filename,) if pattern_filename else ()
     )
 
@@ -162,12 +162,12 @@ def make_pattern_flags(config, pattern_filename=None):
     )
 
-def make_exclude_flags(config, exclude_filename=None):
+def make_exclude_flags(location_config, exclude_filename=None):
     '''
-    Given a configuration dict with various exclude options, and a filename containing any exclude
+    Given a location config dict with various exclude options, and a filename containing any exclude
     patterns, return the corresponding Borg flags as a tuple.
     '''
-    exclude_filenames = tuple(config.get('exclude_from') or ()) + (
+    exclude_filenames = tuple(location_config.get('exclude_from') or ()) + (
         (exclude_filename,) if exclude_filename else ()
     )
     exclude_from_flags = tuple(
@@ -175,15 +175,17 @@ def make_exclude_flags(config, exclude_filename=None):
             ('--exclude-from', exclude_filename) for exclude_filename in exclude_filenames
         )
     )
-    caches_flag = ('--exclude-caches',) if config.get('exclude_caches') else ()
+    caches_flag = ('--exclude-caches',) if location_config.get('exclude_caches') else ()
     if_present_flags = tuple(
         itertools.chain.from_iterable(
             ('--exclude-if-present', if_present)
-            for if_present in config.get('exclude_if_present', ())
+            for if_present in location_config.get('exclude_if_present', ())
         )
     )
-    keep_exclude_tags_flags = ('--keep-exclude-tags',) if config.get('keep_exclude_tags') else ()
-    exclude_nodump_flags = ('--exclude-nodump',) if config.get('exclude_nodump') else ()
+    keep_exclude_tags_flags = (
+        ('--keep-exclude-tags',) if location_config.get('keep_exclude_tags') else ()
+    )
+    exclude_nodump_flags = ('--exclude-nodump',) if location_config.get('exclude_nodump') else ()
 
     return (
         exclude_from_flags
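To make the flag construction concrete, here is a sketch of what the branch-side make_exclude_flags() would return for a small location config (the option values are invented, and the concatenation order is assumed from the visible return statement):

location_config = {
    'exclude_caches': True,
    'exclude_if_present': ('.nobackup', '.DS_Store'),
    'keep_exclude_tags': True,
}
# make_exclude_flags(location_config) would yield a flat flags tuple like:
# ('--exclude-caches',
#  '--exclude-if-present', '.nobackup', '--exclude-if-present', '.DS_Store',
#  '--keep-exclude-tags')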
@@ -215,6 +217,9 @@ def make_list_filter_flags(local_borg_version, dry_run):
     return f'{base_flags}-'
 
+DEFAULT_ARCHIVE_NAME_FORMAT = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}'  # noqa: FS003
+
 def collect_borgmatic_source_directories(borgmatic_source_directory):
     '''
     Return a list of borgmatic-specific source directories used for state like database backups.
@@ -272,14 +277,14 @@ def any_parent_directories(path, candidate_parents):
 def collect_special_file_paths(
-    create_command, config, local_path, working_directory, borg_environment, skip_directories
+    create_command, local_path, working_directory, borg_environment, skip_directories
 ):
     '''
-    Given a Borg create command as a tuple, a configuration dict, a local Borg path, a working
-    directory, a dict of environment variables to pass to Borg, and a sequence of parent directories
-    to skip, collect the paths for any special files (character devices, block devices, and named
-    pipes / FIFOs) that Borg would encounter during a create. These are all paths that could cause
-    Borg to hang if its --read-special flag is used.
+    Given a Borg create command as a tuple, a local Borg path, a working directory, a dict of
+    environment variables to pass to Borg, and a sequence of parent directories to skip, collect the
+    paths for any special files (character devices, block devices, and named pipes / FIFOs) that
+    Borg would encounter during a create. These are all paths that could cause Borg to hang if its
+    --read-special flag is used.
     '''
     # Omit "--exclude-nodump" from the Borg dry run command, because that flag causes Borg to open
     # files including any named pipe we've created.
@@ -289,8 +294,6 @@ def collect_special_file_paths(
         capture_stderr=True,
         working_directory=working_directory,
         extra_environment=borg_environment,
-        borg_local_path=local_path,
-        borg_exit_codes=config.get('borg_exit_codes'),
     )
 
     paths = tuple(
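The actual special-file test isn't visible in this hunk, but conceptually it comes down to stat()-ing each candidate path, roughly like this sketch (the helper name here is made up):

import os
import stat


def looks_like_special_file(path):
    # Character devices, block devices, and named pipes (FIFOs) are the file
    # types that can make "borg create --read-special" block indefinitely.
    mode = os.stat(path).st_mode
    return stat.S_ISCHR(mode) or stat.S_ISBLK(mode) or stat.S_ISFIFO(mode)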
@@ -320,79 +323,88 @@ def check_all_source_directories_exist(source_directories):
     raise ValueError(f"Source directories do not exist: {', '.join(missing_directories)}")
 
-def make_base_create_command(
+def create_archive(
     dry_run,
     repository_path,
-    config,
-    config_paths,
+    location_config,
+    storage_config,
     local_borg_version,
     global_arguments,
-    borgmatic_source_directories,
     local_path='borg',
     remote_path=None,
     progress=False,
+    stats=False,
     json=False,
     list_files=False,
     stream_processes=None,
 ):
     '''
-    Given verbosity/dry-run flags, a local or remote repository path, a configuration dict, a
-    sequence of loaded configuration paths, the local Borg version, global arguments as an
-    argparse.Namespace instance, and a sequence of borgmatic source directories, return a tuple of
-    (base Borg create command flags, Borg create command positional arguments, open pattern file
-    handle, open exclude file handle).
-    '''
-    if config.get('source_directories_must_exist', False):
-        check_all_source_directories_exist(config.get('source_directories'))
+    Given verbosity/dry-run flags, a local or remote repository path, a location config dict, and a
+    storage config dict, create a Borg archive and return Borg's JSON output (if any).
+
+    If a sequence of stream processes is given (instances of subprocess.Popen), then execute the
+    create command while also triggering the given processes to produce output.
+    '''
+    borgmatic.logger.add_custom_log_levels()
+    borgmatic_source_directories = expand_directories(
+        collect_borgmatic_source_directories(location_config.get('borgmatic_source_directory'))
+    )
+    if location_config.get('source_directories_must_exist', False):
+        check_all_source_directories_exist(location_config.get('source_directories'))
 
     sources = deduplicate_directories(
         map_directories_to_devices(
             expand_directories(
-                tuple(config.get('source_directories', ()))
+                tuple(location_config.get('source_directories', ()))
                 + borgmatic_source_directories
-                + tuple(config_paths if config.get('store_config_files', True) else ())
+                + tuple(global_arguments.used_config_paths)
             )
         ),
         additional_directory_devices=map_directories_to_devices(
-            expand_directories(pattern_root_directories(config.get('patterns')))
+            expand_directories(pattern_root_directories(location_config.get('patterns')))
         ),
     )
 
-    ensure_files_readable(config.get('patterns_from'), config.get('exclude_from'))
+    ensure_files_readable(location_config.get('patterns_from'), location_config.get('exclude_from'))
+
+    try:
+        working_directory = os.path.expanduser(location_config.get('working_directory'))
+    except TypeError:
+        working_directory = None
 
     pattern_file = (
-        write_pattern_file(config.get('patterns'), sources)
-        if config.get('patterns') or config.get('patterns_from')
+        write_pattern_file(location_config.get('patterns'), sources)
+        if location_config.get('patterns') or location_config.get('patterns_from')
         else None
     )
-    exclude_file = write_pattern_file(expand_home_directories(config.get('exclude_patterns')))
-    checkpoint_interval = config.get('checkpoint_interval', None)
-    checkpoint_volume = config.get('checkpoint_volume', None)
-    chunker_params = config.get('chunker_params', None)
-    compression = config.get('compression', None)
-    upload_rate_limit = config.get('upload_rate_limit', None)
-    upload_buffer_size = config.get('upload_buffer_size', None)
-    umask = config.get('umask', None)
-    lock_wait = config.get('lock_wait', None)
+    exclude_file = write_pattern_file(
+        expand_home_directories(location_config.get('exclude_patterns'))
+    )
+    checkpoint_interval = storage_config.get('checkpoint_interval', None)
+    checkpoint_volume = storage_config.get('checkpoint_volume', None)
+    chunker_params = storage_config.get('chunker_params', None)
+    compression = storage_config.get('compression', None)
+    upload_rate_limit = storage_config.get('upload_rate_limit', None)
+    umask = storage_config.get('umask', None)
+    lock_wait = storage_config.get('lock_wait', None)
     list_filter_flags = make_list_filter_flags(local_borg_version, dry_run)
-    files_cache = config.get('files_cache')
-    archive_name_format = config.get('archive_name_format', flags.DEFAULT_ARCHIVE_NAME_FORMAT)
-    extra_borg_options = config.get('extra_borg_options', {}).get('create', '')
+    files_cache = location_config.get('files_cache')
+    archive_name_format = storage_config.get('archive_name_format', DEFAULT_ARCHIVE_NAME_FORMAT)
+    extra_borg_options = storage_config.get('extra_borg_options', {}).get('create', '')
 
     if feature.available(feature.Feature.ATIME, local_borg_version):
-        atime_flags = ('--atime',) if config.get('atime') is True else ()
+        atime_flags = ('--atime',) if location_config.get('atime') is True else ()
     else:
-        atime_flags = ('--noatime',) if config.get('atime') is False else ()
+        atime_flags = ('--noatime',) if location_config.get('atime') is False else ()
 
     if feature.available(feature.Feature.NOFLAGS, local_borg_version):
-        noflags_flags = ('--noflags',) if config.get('flags') is False else ()
+        noflags_flags = ('--noflags',) if location_config.get('flags') is False else ()
     else:
-        noflags_flags = ('--nobsdflags',) if config.get('flags') is False else ()
+        noflags_flags = ('--nobsdflags',) if location_config.get('flags') is False else ()
 
     if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
-        numeric_ids_flags = ('--numeric-ids',) if config.get('numeric_ids') else ()
+        numeric_ids_flags = ('--numeric-ids',) if location_config.get('numeric_ids') else ()
     else:
-        numeric_ids_flags = ('--numeric-owner',) if config.get('numeric_ids') else ()
+        numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_ids') else ()
 
     if feature.available(feature.Feature.UPLOAD_RATELIMIT, local_borg_version):
         upload_ratelimit_flags = (
@@ -403,23 +415,31 @@ def make_base_create_command(
             ('--remote-ratelimit', str(upload_rate_limit)) if upload_rate_limit else ()
         )
 
-    create_flags = (
+    if stream_processes and location_config.get('read_special') is False:
+        logger.warning(
+            f'{repository_path}: Ignoring configured "read_special" value of false, as true is needed for database hooks.'
+        )
+
+    create_command = (
         tuple(local_path.split(' '))
         + ('create',)
-        + make_pattern_flags(config, pattern_file.name if pattern_file else None)
-        + make_exclude_flags(config, exclude_file.name if exclude_file else None)
+        + make_pattern_flags(location_config, pattern_file.name if pattern_file else None)
+        + make_exclude_flags(location_config, exclude_file.name if exclude_file else None)
        + (('--checkpoint-interval', str(checkpoint_interval)) if checkpoint_interval else ())
        + (('--checkpoint-volume', str(checkpoint_volume)) if checkpoint_volume else ())
        + (('--chunker-params', chunker_params) if chunker_params else ())
        + (('--compression', compression) if compression else ())
        + upload_ratelimit_flags
-        + (('--upload-buffer', str(upload_buffer_size)) if upload_buffer_size else ())
-        + (('--one-file-system',) if config.get('one_file_system') or stream_processes else ())
+        + (
+            ('--one-file-system',)
+            if location_config.get('one_file_system') or stream_processes
+            else ()
+        )
        + numeric_ids_flags
        + atime_flags
-        + (('--noctime',) if config.get('ctime') is False else ())
-        + (('--nobirthtime',) if config.get('birthtime') is False else ())
-        + (('--read-special',) if config.get('read_special') or stream_processes else ())
+        + (('--noctime',) if location_config.get('ctime') is False else ())
+        + (('--nobirthtime',) if location_config.get('birthtime') is False else ())
+        + (('--read-special',) if location_config.get('read_special') or stream_processes else ())
        + noflags_flags
        + (('--files-cache', files_cache) if files_cache else ())
        + (('--remote-path', remote_path) if remote_path else ())
@@ -433,94 +453,10 @@ def make_base_create_command(
        )
        + (('--dry-run',) if dry_run else ())
        + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
-    )
-
-    create_positional_arguments = flags.make_repository_archive_flags(
-        repository_path, archive_name_format, local_borg_version
-    ) + (sources if not pattern_file else ())
-
-    # If database hooks are enabled (as indicated by streaming processes), exclude files that might
-    # cause Borg to hang. But skip this if the user has explicitly set the "read_special" to True.
-    if stream_processes and not config.get('read_special'):
-        logger.warning(
-            f'{repository_path}: Ignoring configured "read_special" value of false, as true is needed for database hooks.'
-        )
-
-        try:
-            working_directory = os.path.expanduser(config.get('working_directory'))
-        except TypeError:
-            working_directory = None
-
-        borg_environment = environment.make_environment(config)
-
-        logger.debug(f'{repository_path}: Collecting special file paths')
-        special_file_paths = collect_special_file_paths(
-            create_flags + create_positional_arguments,
-            config,
-            local_path,
-            working_directory,
-            borg_environment,
-            skip_directories=borgmatic_source_directories,
-        )
-
-        if special_file_paths:
-            logger.warning(
-                f'{repository_path}: Excluding special files to prevent Borg from hanging: {", ".join(special_file_paths)}'
-            )
-            exclude_file = write_pattern_file(
-                expand_home_directories(
-                    tuple(config.get('exclude_patterns') or ()) + special_file_paths
-                ),
-                pattern_file=exclude_file,
-            )
-            create_flags += make_exclude_flags(config, exclude_file.name)
-
-    return (create_flags, create_positional_arguments, pattern_file, exclude_file)
-
-def create_archive(
-    dry_run,
-    repository_path,
-    config,
-    config_paths,
-    local_borg_version,
-    global_arguments,
-    local_path='borg',
-    remote_path=None,
-    progress=False,
-    stats=False,
-    json=False,
-    list_files=False,
-    stream_processes=None,
-):
-    '''
-    Given verbosity/dry-run flags, a local or remote repository path, a configuration dict, a
-    sequence of loaded configuration paths, the local Borg version, and global arguments as an
-    argparse.Namespace instance, create a Borg archive and return Borg's JSON output (if any).
-
-    If a sequence of stream processes is given (instances of subprocess.Popen), then execute the
-    create command while also triggering the given processes to produce output.
-    '''
-    borgmatic.logger.add_custom_log_levels()
-    borgmatic_source_directories = expand_directories(
-        collect_borgmatic_source_directories(config.get('borgmatic_source_directory'))
-    )
-
-    (create_flags, create_positional_arguments, pattern_file, exclude_file) = (
-        make_base_create_command(
-            dry_run,
-            repository_path,
-            config,
-            config_paths,
-            local_borg_version,
-            global_arguments,
-            borgmatic_source_directories,
-            local_path,
-            remote_path,
-            progress,
-            json,
-            list_files,
-            stream_processes,
-        )
-    )
+        + flags.make_repository_archive_flags(
+            repository_path, archive_name_format, local_borg_version
+        )
+        + (sources if not pattern_file else ())
+    )
     if json:
 
@@ -534,48 +470,62 @@ def create_archive(
     # the terminal directly.
     output_file = DO_NOT_CAPTURE if progress else None
 
-    try:
-        working_directory = os.path.expanduser(config.get('working_directory'))
-    except TypeError:
-        working_directory = None
-
-    borg_environment = environment.make_environment(config)
+    borg_environment = environment.make_environment(storage_config)
+
+    # If database hooks are enabled (as indicated by streaming processes), exclude files that might
+    # cause Borg to hang. But skip this if the user has explicitly set the "read_special" to True.
+    if stream_processes and not location_config.get('read_special'):
+        logger.debug(f'{repository_path}: Collecting special file paths')
+        special_file_paths = collect_special_file_paths(
+            create_command,
+            local_path,
+            working_directory,
+            borg_environment,
+            skip_directories=borgmatic_source_directories,
+        )
 
-    create_flags += (
+        if special_file_paths:
+            logger.warning(
+                f'{repository_path}: Excluding special files to prevent Borg from hanging: {", ".join(special_file_paths)}'
+            )
+            exclude_file = write_pattern_file(
+                expand_home_directories(
+                    tuple(location_config.get('exclude_patterns') or ()) + special_file_paths
+                ),
+                pattern_file=exclude_file,
+            )
+            create_command += make_exclude_flags(location_config, exclude_file.name)
+
+    create_command += (
         (('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
         + (('--stats',) if stats and not json and not dry_run else ())
         + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
         + (('--progress',) if progress else ())
         + (('--json',) if json else ())
     )
-    borg_exit_codes = config.get('borg_exit_codes')
 
     if stream_processes:
         return execute_command_with_processes(
-            create_flags + create_positional_arguments,
+            create_command,
             stream_processes,
             output_log_level,
            output_file,
+            borg_local_path=local_path,
            working_directory=working_directory,
            extra_environment=borg_environment,
-            borg_local_path=local_path,
-            borg_exit_codes=borg_exit_codes,
        )
    elif output_log_level is None:
        return execute_command_and_capture_output(
-            create_flags + create_positional_arguments,
+            create_command,
            working_directory=working_directory,
            extra_environment=borg_environment,
-            borg_local_path=local_path,
-            borg_exit_codes=borg_exit_codes,
        )
    else:
        execute_command(
-            create_flags + create_positional_arguments,
+            create_command,
            output_log_level,
            output_file,
+            borg_local_path=local_path,
            working_directory=working_directory,
            extra_environment=borg_environment,
-            borg_local_path=local_path,
-            borg_exit_codes=borg_exit_codes,
        )

View File

@@ -21,15 +21,15 @@ DEFAULT_BOOL_OPTION_TO_UPPERCASE_ENVIRONMENT_VARIABLE = {
 }
 
-def make_environment(config):
+def make_environment(storage_config):
     '''
-    Given a borgmatic configuration dict, return its options converted to a Borg environment
+    Given a borgmatic storage configuration dict, return its options converted to a Borg environment
     variable dict.
     '''
     environment = {}
 
     for option_name, environment_variable_name in OPTION_TO_ENVIRONMENT_VARIABLE.items():
-        value = config.get(option_name)
+        value = storage_config.get(option_name)
 
         if value:
             environment[environment_variable_name] = str(value)
 
@@ -38,20 +38,14 @@ def make_environment(config):
         option_name,
         environment_variable_name,
     ) in DEFAULT_BOOL_OPTION_TO_DOWNCASE_ENVIRONMENT_VARIABLE.items():
-        value = config.get(option_name)
-
-        if value is not None:
-            environment[environment_variable_name] = 'yes' if value else 'no'
+        value = storage_config.get(option_name, False)
+        environment[environment_variable_name] = 'yes' if value else 'no'
 
     for (
         option_name,
         environment_variable_name,
     ) in DEFAULT_BOOL_OPTION_TO_UPPERCASE_ENVIRONMENT_VARIABLE.items():
-        value = config.get(option_name)
-
-        if value is not None:
-            environment[environment_variable_name] = 'YES' if value else 'NO'
-
-    # On Borg 1.4.0a1+, take advantage of more specific exit codes. No effect on
-    # older versions of Borg.
-    environment['BORG_EXIT_CODES'] = 'modern'
+        value = storage_config.get(option_name, False)
+        environment[environment_variable_name] = 'YES' if value else 'NO'
 
     return environment
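As a concrete (invented) example of the branch-side behavior, assuming the option tables map encryption_passphrase to BORG_PASSPHRASE and relocated_repo_access_is_ok to BORG_RELOCATED_REPO_ACCESS_IS_OK:

storage_config = {'encryption_passphrase': 'hunter2', 'relocated_repo_access_is_ok': True}
# make_environment(storage_config) would then include:
# {'BORG_PASSPHRASE': 'hunter2', 'BORG_RELOCATED_REPO_ACCESS_IS_OK': 'yes', ...}
# On the main side, boolean options left unset in the configuration no longer
# get exported at all, rather than defaulting to 'no'/'NO'.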

View File

@@ -1,71 +0,0 @@
import logging
import os
import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
logger = logging.getLogger(__name__)
def export_key(
repository_path,
config,
local_borg_version,
export_arguments,
global_arguments,
local_path='borg',
remote_path=None,
):
'''
Given a local or remote repository path, a configuration dict, the local Borg version, and
optional local and remote Borg paths, export the repository key to the destination path
indicated in the export arguments.
If the destination path is empty or "-", then print the key to stdout instead of to a file.
Raise FileExistsError if a path is given but it already exists on disk.
'''
borgmatic.logger.add_custom_log_levels()
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
if export_arguments.path and export_arguments.path != '-':
if os.path.exists(export_arguments.path):
raise FileExistsError(
f'Destination path {export_arguments.path} already exists. Aborting.'
)
output_file = None
else:
output_file = DO_NOT_CAPTURE
full_command = (
(local_path, 'key', 'export')
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+ flags.make_flags('paper', export_arguments.paper)
+ flags.make_flags('qr-html', export_arguments.qr_html)
+ flags.make_repository_flags(
repository_path,
local_borg_version,
)
+ ((export_arguments.path,) if output_file is None else ())
)
if global_arguments.dry_run:
logger.info(f'{repository_path}: Skipping key export (dry run)')
return
execute_command(
full_command,
output_file=output_file,
output_log_level=logging.ANSWER,
extra_environment=environment.make_environment(config),
borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'),
)
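For a sense of what this deleted function assembled, a paper key export with default options would produce a command tuple roughly like the following sketch (repository path invented, Borg 1.x-style repository flags assumed):

# full_command = ('borg', 'key', 'export', '--paper', '/path/to/repo')
# With no destination path given, output_file is DO_NOT_CAPTURE, so Borg
# prints the key directly to stdout.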

View File

@@ -13,7 +13,7 @@ def export_tar_archive(
     archive,
     paths,
     destination_path,
-    config,
+    storage_config,
     local_borg_version,
     global_arguments,
     local_path='borg',
@@ -24,16 +24,16 @@ def export_tar_archive(
 ):
     '''
     Given a dry-run flag, a local or remote repository path, an archive name, zero or more paths to
-    export from the archive, a destination path to export to, a configuration dict, the local Borg
-    version, optional local and remote Borg paths, an optional filter program, whether to include
-    per-file details, and an optional number of path components to strip, export the archive into
-    the given destination path as a tar-formatted file.
+    export from the archive, a destination path to export to, a storage configuration dict, the
+    local Borg version, optional local and remote Borg paths, an optional filter program, whether to
+    include per-file details, and an optional number of path components to strip, export the archive
+    into the given destination path as a tar-formatted file.
 
     If the destination path is "-", then stream the output to stdout instead of to a file.
     '''
     borgmatic.logger.add_custom_log_levels()
-    umask = config.get('umask', None)
-    lock_wait = config.get('lock_wait', None)
+    umask = storage_config.get('umask', None)
+    lock_wait = storage_config.get('lock_wait', None)
 
     full_command = (
         (local_path, 'export-tar')
@@ -69,7 +69,6 @@ def export_tar_archive(
         full_command,
         output_file=DO_NOT_CAPTURE if destination_path == '-' else None,
         output_log_level=output_log_level,
-        extra_environment=environment.make_environment(config),
         borg_local_path=local_path,
-        borg_exit_codes=config.get('borg_exit_codes'),
+        extra_environment=environment.make_environment(storage_config),
     )

View File

@@ -10,7 +10,7 @@ logger = logging.getLogger(__name__)
 def extract_last_archive_dry_run(
-    config,
+    storage_config,
     local_borg_version,
     global_arguments,
     repository_path,
@@ -32,7 +32,7 @@ def extract_last_archive_dry_run(
     last_archive_name = rlist.resolve_archive_name(
         repository_path,
         'latest',
-        config,
+        storage_config,
         local_borg_version,
         global_arguments,
         local_path,
@@ -43,7 +43,7 @@ def extract_last_archive_dry_run(
         return
 
     list_flag = ('--list',) if logger.isEnabledFor(logging.DEBUG) else ()
-    borg_environment = environment.make_environment(config)
+    borg_environment = environment.make_environment(storage_config)
     full_extract_command = (
         (local_path, 'extract', '--dry-run')
         + (('--remote-path', remote_path) if remote_path else ())
@@ -57,11 +57,7 @@ def extract_last_archive_dry_run(
     )
 
     execute_command(
-        full_extract_command,
-        working_directory=None,
-        extra_environment=borg_environment,
-        borg_local_path=local_path,
-        borg_exit_codes=config.get('borg_exit_codes'),
+        full_extract_command, working_directory=None, extra_environment=borg_environment
     )
@@ -70,7 +66,8 @@ def extract_archive(
     repository,
     archive,
     paths,
-    config,
+    location_config,
+    storage_config,
     local_borg_version,
     global_arguments,
     local_path='borg',
@@ -83,34 +80,29 @@ def extract_archive(
     '''
     Given a dry-run flag, a local or remote repository path, an archive name, zero or more paths to
     restore from the archive, the local Borg version string, an argparse.Namespace of global
-    arguments, a configuration dict, optional local and remote Borg paths, and an optional
-    destination path to extract to, extract the archive into the current directory.
+    arguments, location/storage configuration dicts, optional local and remote Borg paths, and an
+    optional destination path to extract to, extract the archive into the current directory.
 
     If extract to stdout is True, then start the extraction streaming to stdout, and return that
     extract process as an instance of subprocess.Popen.
     '''
-    umask = config.get('umask', None)
-    lock_wait = config.get('lock_wait', None)
+    umask = storage_config.get('umask', None)
+    lock_wait = storage_config.get('lock_wait', None)
 
     if progress and extract_to_stdout:
         raise ValueError('progress and extract_to_stdout cannot both be set')
 
     if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
-        numeric_ids_flags = ('--numeric-ids',) if config.get('numeric_ids') else ()
+        numeric_ids_flags = ('--numeric-ids',) if location_config.get('numeric_ids') else ()
     else:
-        numeric_ids_flags = ('--numeric-owner',) if config.get('numeric_ids') else ()
+        numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_ids') else ()
 
     if strip_components == 'all':
         if not paths:
             raise ValueError('The --strip-components flag with "all" requires at least one --path')
 
-        # Calculate the maximum number of leading path components of the given paths. "if piece"
-        # ignores empty path components, e.g. those resulting from a leading slash. And the "- 1"
-        # is so this doesn't count the final path component, e.g. the filename itself.
-        strip_components = max(
-            0,
-            *(len(tuple(piece for piece in path.split(os.path.sep) if piece)) - 1 for path in paths)
-        )
+        # Calculate the maximum number of leading path components of the given paths.
+        strip_components = max(0, *(len(path.split(os.path.sep)) - 1 for path in paths))
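The two sides compute this differently for absolute paths, which is what the main-side comment is about. A quick comparison with an illustrative value (using '/' as the separator for clarity):

paths = ('/etc/ssl/certs/ca.pem',)

# Branch side: the empty component before the leading slash is counted, giving 4.
print(max(0, *(len(path.split('/')) - 1 for path in paths)))

# Main side: "if piece" drops empty components, so only etc/ssl/certs count, giving 3.
print(max(0, *(len(tuple(piece for piece in path.split('/') if piece)) - 1 for path in paths)))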
     full_command = (
         (local_path, 'extract')
@@ -135,8 +127,7 @@ def extract_archive(
         + (tuple(paths) if paths else ())
     )
 
-    borg_environment = environment.make_environment(config)
-    borg_exit_codes = config.get('borg_exit_codes')
+    borg_environment = environment.make_environment(storage_config)
 
     # The progress output isn't compatible with captured and logged output, as progress messes with
     # the terminal directly.
@@ -146,8 +137,6 @@ def extract_archive(
             output_file=DO_NOT_CAPTURE,
             working_directory=destination_path,
             extra_environment=borg_environment,
-            borg_local_path=local_path,
-            borg_exit_codes=borg_exit_codes,
         )
         return None
 
@@ -158,16 +147,10 @@ def extract_archive(
             working_directory=destination_path,
             run_to_completion=False,
             extra_environment=borg_environment,
-            borg_local_path=local_path,
-            borg_exit_codes=borg_exit_codes,
        )
 
     # Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
     # if the restore paths don't exist in the archive.
     execute_command(
-        full_command,
-        working_directory=destination_path,
-        extra_environment=borg_environment,
-        borg_local_path=local_path,
-        borg_exit_codes=borg_exit_codes,
+        full_command, working_directory=destination_path, extra_environment=borg_environment
     )

View File

@@ -1,12 +1,8 @@
 import itertools
-import json
-import logging
 import re
 
 from borgmatic.borg import feature
 
-logger = logging.getLogger(__name__)
-
 def make_flags(name, value):
     '''
@@ -63,56 +59,25 @@ def make_repository_archive_flags(repository_path, archive, local_borg_version):
     )
 
-DEFAULT_ARCHIVE_NAME_FORMAT = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}'  # noqa: FS003
-
 def make_match_archives_flags(match_archives, archive_name_format, local_borg_version):
     '''
     Return match archives flags based on the given match archives value, if any. If it isn't set,
-    return match archives flags to match archives created with the given (or default) archive name
-    format. This is done by replacing certain archive name format placeholders for ephemeral data
-    (like "{now}") with globs.
+    return match archives flags to match archives created with the given archive name format, if
+    any. This is done by replacing certain archive name format placeholders for ephemeral data (like
+    "{now}") with globs.
     '''
     if match_archives:
-        if match_archives in {'*', 're:.*', 'sh:*'}:
-            return ()
-
         if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
             return ('--match-archives', match_archives)
         else:
             return ('--glob-archives', re.sub(r'^sh:', '', match_archives))
 
-    derived_match_archives = re.sub(
-        r'\{(now|utcnow|pid)([:%\w\.-]*)\}', '*', archive_name_format or DEFAULT_ARCHIVE_NAME_FORMAT
-    )
-
-    if derived_match_archives == '*':
+    if not archive_name_format:
         return ()
 
+    derived_match_archives = re.sub(r'\{(now|utcnow|pid)([:%\w\.-]*)\}', '*', archive_name_format)
+
     if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
         return ('--match-archives', f'sh:{derived_match_archives}')
     else:
         return ('--glob-archives', f'{derived_match_archives}')
 
-def warn_for_aggressive_archive_flags(json_command, json_output):
-    '''
-    Given a JSON archives command and the resulting JSON string output from running it, parse the
-    JSON and warn if the command used an archive flag but the output indicates zero archives were
-    found.
-    '''
-    archive_flags_used = {'--glob-archives', '--match-archives'}.intersection(set(json_command))
-
-    if not archive_flags_used:
-        return
-
-    try:
-        if len(json.loads(json_output)['archives']) == 0:
-            logger.warning('An archive filter was applied, but no matching archives were found.')
-            logger.warning(
-                'Try adding --match-archives "*" or adjusting archive_name_format/match_archives in configuration.'
-            )
-    except json.JSONDecodeError as error:
-        logger.debug(f'Cannot parse JSON output from archive command: {error}')
-    except (TypeError, KeyError):
-        logger.debug('Cannot parse JSON output from archive command: No "archives" key found')
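To illustrate make_match_archives_flags() above when no explicit match_archives value is set: the ephemeral placeholders in the archive name format get replaced with globs, so (for example) the default format is derived like this:

import re

archive_name_format = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}'  # noqa: FS003
derived = re.sub(r'\{(now|utcnow|pid)([:%\w\.-]*)\}', '*', archive_name_format)
print(derived)  # {hostname}-*
# On a Borg supporting --match-archives, the resulting flags would be
# ('--match-archives', 'sh:{hostname}-*'); older Borgs get --glob-archives.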

View File

@@ -1,4 +1,3 @@
-import argparse
 import logging
 
 import borgmatic.logger
@@ -8,21 +7,24 @@ from borgmatic.execute import execute_command, execute_command_and_capture_output
 logger = logging.getLogger(__name__)
 
-def make_info_command(
+def display_archives_info(
     repository_path,
-    config,
+    storage_config,
     local_borg_version,
     info_arguments,
     global_arguments,
-    local_path,
-    remote_path,
+    local_path='borg',
+    remote_path=None,
 ):
     '''
-    Given a local or remote repository path, a configuration dict, the local Borg version, the
-    arguments to the info action as an argparse.Namespace, and global arguments, return a command
-    as a tuple to display summary information for archives in the repository.
+    Given a local or remote repository path, a storage config dict, the local Borg version, global
+    arguments as an argparse.Namespace, and the arguments to the info action, display summary
+    information for Borg archives in the repository or return JSON summary information.
     '''
-    return (
+    borgmatic.logger.add_custom_log_levels()
+    lock_wait = storage_config.get('lock_wait', None)
+
+    full_command = (
         (local_path, 'info')
         + (
             ('--info',)
@@ -36,7 +38,7 @@ def make_info_command(
         )
         + flags.make_flags('remote-path', remote_path)
         + flags.make_flags('log-json', global_arguments.log_json)
-        + flags.make_flags('lock-wait', config.get('lock_wait'))
+        + flags.make_flags('lock-wait', lock_wait)
         + (
             (
                 flags.make_flags('match-archives', f'sh:{info_arguments.prefix}*')
@@ -48,8 +50,8 @@ def make_info_command(
             flags.make_match_archives_flags(
                 info_arguments.match_archives
                 or info_arguments.archive
-                or config.get('match_archives'),
-                config.get('archive_name_format'),
+                or storage_config.get('match_archives'),
+                storage_config.get('archive_name_format'),
                 local_borg_version,
             )
         )
@@ -60,59 +62,15 @@ def make_info_command(
         + flags.make_repository_flags(repository_path, local_borg_version)
     )
-def display_archives_info(
-    repository_path,
-    config,
-    local_borg_version,
-    info_arguments,
-    global_arguments,
-    local_path='borg',
-    remote_path=None,
-):
-    '''
-    Given a local or remote repository path, a configuration dict, the local Borg version, the
-    arguments to the info action as an argparse.Namespace, and global arguments, display summary
-    information for Borg archives in the repository or return JSON summary information.
-    '''
-    borgmatic.logger.add_custom_log_levels()
-    main_command = make_info_command(
-        repository_path,
-        config,
-        local_borg_version,
-        info_arguments,
-        global_arguments,
-        local_path,
-        remote_path,
-    )
-    json_command = make_info_command(
-        repository_path,
-        config,
-        local_borg_version,
-        argparse.Namespace(**dict(info_arguments.__dict__, json=True)),
-        global_arguments,
-        local_path,
-        remote_path,
-    )
-    borg_exit_codes = config.get('borg_exit_codes')
-
-    json_info = execute_command_and_capture_output(
-        json_command,
-        extra_environment=environment.make_environment(config),
-        borg_local_path=local_path,
-        borg_exit_codes=borg_exit_codes,
-    )
-
     if info_arguments.json:
-        return json_info
-
-    flags.warn_for_aggressive_archive_flags(json_command, json_info)
-
-    execute_command(
-        main_command,
-        output_log_level=logging.ANSWER,
-        extra_environment=environment.make_environment(config),
-        borg_local_path=local_path,
-        borg_exit_codes=borg_exit_codes,
-    )
+        return execute_command_and_capture_output(
+            full_command,
+            extra_environment=environment.make_environment(storage_config),
+        )
+    else:
+        execute_command(
+            full_command,
+            output_log_level=logging.ANSWER,
+            borg_local_path=local_path,
+            extra_environment=environment.make_environment(storage_config),
+        )

View File

@@ -21,7 +21,7 @@ MAKE_FLAGS_EXCLUDES = (
 def make_list_command(
     repository_path,
-    config,
+    storage_config,
     local_borg_version,
     list_arguments,
     global_arguments,
@@ -29,11 +29,11 @@ def make_list_command(
     remote_path=None,
 ):
     '''
-    Given a local or remote repository path, a configuration dict, the arguments to the list action,
-    and local and remote Borg paths, return a command as a tuple to list archives or paths within an
-    archive.
+    Given a local or remote repository path, a storage config dict, the arguments to the list
+    action, and local and remote Borg paths, return a command as a tuple to list archives or paths
+    within an archive.
     '''
-    lock_wait = config.get('lock_wait', None)
+    lock_wait = storage_config.get('lock_wait', None)
 
     return (
         (local_path, 'list')
@@ -79,11 +79,9 @@ def make_find_paths(find_paths):
         return ()
 
     return tuple(
-        (
-            find_path
-            if re.compile(r'([-!+RrPp] )|(\w\w:)').match(find_path)
-            else f'sh:**/*{find_path}*/**'
-        )
+        find_path
+        if re.compile(r'([-!+RrPp] )|(\w\w:)').match(find_path)
+        else f'sh:**/*{find_path}*/**'
         for find_path in find_paths
    )
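A quick sketch of what make_find_paths() does with its regular expression: anything that already looks like a Borg pattern (a two-letter style prefix such as "sh:", or a pattern-file-style line) passes through unchanged, while a bare term gets wrapped for a fuzzy recursive search:

import re

for find_path in ('needle', 'sh:etc/**'):
    print(
        find_path
        if re.compile(r'([-!+RrPp] )|(\w\w:)').match(find_path)
        else f'sh:**/*{find_path}*/**'
    )
# sh:**/*needle*/**
# sh:etc/**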
@@ -91,43 +89,40 @@ def make_find_paths(find_paths):
 def capture_archive_listing(
     repository_path,
     archive,
-    config,
+    storage_config,
     local_borg_version,
     global_arguments,
-    list_paths=None,
-    path_format=None,
+    list_path=None,
     local_path='borg',
     remote_path=None,
 ):
     '''
-    Given a local or remote repository path, an archive name, a configuration dict, the local Borg
-    version, global arguments as an argparse.Namespace, the archive paths in which to list files,
-    the Borg path format to use for the output, and local and remote Borg paths, capture the output
-    of listing that archive and return it as a list of file paths.
+    Given a local or remote repository path, an archive name, a storage config dict, the local Borg
+    version, global arguments as an argparse.Namespace, the archive path in which to list files, and
+    local and remote Borg paths, capture the output of listing that archive and return it as a list
+    of file paths.
     '''
-    borg_environment = environment.make_environment(config)
+    borg_environment = environment.make_environment(storage_config)
 
     return tuple(
         execute_command_and_capture_output(
             make_list_command(
                 repository_path,
-                config,
+                storage_config,
                 local_borg_version,
                 argparse.Namespace(
                     repository=repository_path,
                     archive=archive,
-                    paths=[f'sh:{path}' for path in list_paths] if list_paths else None,
+                    paths=[f'sh:{list_path}'],
                     find_paths=None,
                     json=None,
-                    format=path_format or '{path}{NL}',  # noqa: FS003
+                    format='{path}{NL}',  # noqa: FS003
                 ),
                 global_arguments,
                 local_path,
                 remote_path,
             ),
             extra_environment=borg_environment,
-            borg_local_path=local_path,
-            borg_exit_codes=config.get('borg_exit_codes'),
        )
        .strip('\n')
        .split('\n')
@@ -136,7 +131,7 @@ def capture_archive_listing(
 def list_archive(
     repository_path,
-    config,
+    storage_config,
     local_borg_version,
     list_arguments,
     global_arguments,
@@ -144,7 +139,7 @@ def list_archive(
     remote_path=None,
 ):
     '''
-    Given a local or remote repository path, a configuration dict, the local Borg version, global
+    Given a local or remote repository path, a storage config dict, the local Borg version, global
     arguments as an argparse.Namespace, the arguments to the list action as an argparse.Namespace,
     and local and remote Borg paths, display the output of listing the files of a Borg archive (or
     return JSON output). If list_arguments.find_paths are given, list the files by searching across
@@ -172,7 +167,7 @@ def list_archive(
         )
         return rlist.list_repository(
             repository_path,
-            config,
+            storage_config,
             local_borg_version,
             rlist_arguments,
             global_arguments,
@@ -192,8 +187,7 @@ def list_archive(
             'The --json flag on the list action is not supported when using the --archive/--find flags.'
         )
 
-    borg_environment = environment.make_environment(config)
-    borg_exit_codes = config.get('borg_exit_codes')
+    borg_environment = environment.make_environment(storage_config)
 
     # If there are any paths to find (and there's not a single archive already selected), start by
     # getting a list of archives to search.
@@ -215,7 +209,7 @@ def list_archive(
             execute_command_and_capture_output(
                 rlist.make_rlist_command(
                     repository_path,
-                    config,
+                    storage_config,
                     local_borg_version,
                     rlist_arguments,
                     global_arguments,
@@ -223,8 +217,6 @@ def list_archive(
                     remote_path,
                ),
                extra_environment=borg_environment,
-                borg_local_path=local_path,
-                borg_exit_codes=borg_exit_codes,
            )
            .strip('\n')
            .split('\n')
@@ -246,7 +238,7 @@ def list_archive(
        main_command = make_list_command(
            repository_path,
-            config,
+            storage_config,
            local_borg_version,
            archive_arguments,
            global_arguments,
@@ -257,7 +249,6 @@ def list_archive(
        execute_command(
            main_command,
            output_log_level=logging.ANSWER,
-            extra_environment=borg_environment,
            borg_local_path=local_path,
-            borg_exit_codes=borg_exit_codes,
+            extra_environment=borg_environment,
        )

View File

@@ -10,7 +10,7 @@
 def mount_archive(
     repository_path,
     archive,
     mount_arguments,
-    config,
+    storage_config,
     local_borg_version,
     global_arguments,
     local_path='borg',
@@ -22,8 +22,8 @@ def mount_archive(
     dict, the local Borg version, global arguments as an argparse.Namespace instance, and optional
     local and remote Borg paths, mount the archive onto the mount point.
     '''
-    umask = config.get('umask', None)
-    lock_wait = config.get('lock_wait', None)
+    umask = storage_config.get('umask', None)
+    lock_wait = storage_config.get('lock_wait', None)
 
     full_command = (
         (local_path, 'mount')
@@ -58,22 +58,16 @@ def mount_archive(
         + (tuple(mount_arguments.paths) if mount_arguments.paths else ())
     )
 
-    borg_environment = environment.make_environment(config)
+    borg_environment = environment.make_environment(storage_config)
 
     # Don't capture the output when foreground mode is used so that ctrl-C can work properly.
     if mount_arguments.foreground:
         execute_command(
             full_command,
             output_file=DO_NOT_CAPTURE,
-            extra_environment=borg_environment,
             borg_local_path=local_path,
-            borg_exit_codes=config.get('borg_exit_codes'),
+            extra_environment=borg_environment,
         )
         return
 
-    execute_command(
-        full_command,
-        extra_environment=borg_environment,
-        borg_local_path=local_path,
-        borg_exit_codes=config.get('borg_exit_codes'),
-    )
+    execute_command(full_command, borg_local_path=local_path, extra_environment=borg_environment)

View File

@@ -7,9 +7,9 @@ from borgmatic.execute import execute_command
 logger = logging.getLogger(__name__)
 
-def make_prune_flags(config, local_borg_version):
+def make_prune_flags(storage_config, retention_config, local_borg_version):
     '''
-    Given a configuration dict mapping from option name to value, transform it into a sequence of
+    Given a retention config dict mapping from option name to value, transform it into a sequence of
     command-line flags.
 
     For example, given a retention config of:
@@ -23,12 +23,12 @@ def make_prune_flags(config, local_borg_version):
         ('--keep-monthly', '6'),
     )
     '''
+    config = retention_config.copy()
+    prefix = config.pop('prefix', None)
+
     flag_pairs = (
-        ('--' + option_name.replace('_', '-'), str(value))
-        for option_name, value in config.items()
-        if option_name.startswith('keep_') and option_name != 'keep_exclude_tags'
+        ('--' + option_name.replace('_', '-'), str(value)) for option_name, value in config.items()
     )
-    prefix = config.get('prefix')
 
     return tuple(element for pair in flag_pairs for element in pair) + (
         (
@@ -39,8 +39,8 @@ def make_prune_flags(config, local_borg_version):
         if prefix
         else (
             flags.make_match_archives_flags(
-                config.get('match_archives'),
-                config.get('archive_name_format'),
+                storage_config.get('match_archives'),
+                storage_config.get('archive_name_format'),
                 local_borg_version,
            )
        )
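Following the docstring's example, here is a sketch of the flag-pair expansion on the main side, where only keep_* options are considered and everything else (like a prefix) is ignored by the generator:

config = {'keep_daily': 3, 'keep_hourly': 24, 'keep_monthly': 6, 'prefix': 'backup-'}
flag_pairs = (
    ('--' + option_name.replace('_', '-'), str(value))
    for option_name, value in config.items()
    if option_name.startswith('keep_') and option_name != 'keep_exclude_tags'
)
print(tuple(element for pair in flag_pairs for element in pair))
# ('--keep-daily', '3', '--keep-hourly', '24', '--keep-monthly', '6')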
@ -50,7 +50,8 @@ def make_prune_flags(config, local_borg_version):
def prune_archives( def prune_archives(
dry_run, dry_run,
repository_path, repository_path,
config, storage_config,
retention_config,
local_borg_version, local_borg_version,
prune_arguments, prune_arguments,
global_arguments, global_arguments,
@ -58,17 +59,18 @@ def prune_archives(
remote_path=None, remote_path=None,
): ):
''' '''
Given dry-run flag, a local or remote repository path, and a configuration dict, prune Borg Given dry-run flag, a local or remote repository path, a storage config dict, and a
archives according to the retention policy specified in that configuration. retention config dict, prune Borg archives according to the retention policy specified in that
configuration.
''' '''
borgmatic.logger.add_custom_log_levels() borgmatic.logger.add_custom_log_levels()
umask = config.get('umask', None) umask = storage_config.get('umask', None)
lock_wait = config.get('lock_wait', None) lock_wait = storage_config.get('lock_wait', None)
extra_borg_options = config.get('extra_borg_options', {}).get('prune', '') extra_borg_options = storage_config.get('extra_borg_options', {}).get('prune', '')
full_command = ( full_command = (
(local_path, 'prune') (local_path, 'prune')
+ make_prune_flags(config, local_borg_version) + make_prune_flags(storage_config, retention_config, local_borg_version)
+ (('--remote-path', remote_path) if remote_path else ()) + (('--remote-path', remote_path) if remote_path else ())
+ (('--umask', str(umask)) if umask else ()) + (('--umask', str(umask)) if umask else ())
+ (('--log-json',) if global_arguments.log_json else ()) + (('--log-json',) if global_arguments.log_json else ())
@ -94,7 +96,6 @@ def prune_archives(
execute_command( execute_command(
full_command, full_command,
output_log_level=output_log_level, output_log_level=output_log_level,
extra_environment=environment.make_environment(config),
borg_local_path=local_path, borg_local_path=local_path,
borg_exit_codes=config.get('borg_exit_codes'), extra_environment=environment.make_environment(storage_config),
) )
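Both versions of make_prune_flags perform the same core transform: keep_* retention options become Borg --keep-* flags. A minimal standalone sketch of that transform (simplified; not borgmatic's exact code):

    def keep_options_to_flags(retention):
        # Each keep_* option becomes a ('--keep-*', value) pair; the pairs are then flattened.
        pairs = (
            ('--' + name.replace('_', '-'), str(value))
            for name, value in retention.items()
            if name.startswith('keep_')
        )
        return tuple(element for pair in pairs for element in pair)

    assert keep_options_to_flags({'keep_daily': 7, 'keep_monthly': 6}) == (
        '--keep-daily', '7', '--keep-monthly', '6',
    )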

View File

@@ -1,5 +1,4 @@
 import argparse
-import json
 import logging
 import subprocess
@@ -9,13 +8,13 @@ from borgmatic.execute import DO_NOT_CAPTURE, execute_command
 logger = logging.getLogger(__name__)

-RINFO_REPOSITORY_NOT_FOUND_EXIT_CODES = {2, 13}
+RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE = 2

 def create_repository(
     dry_run,
     repository_path,
-    config,
+    storage_config,
     local_borg_version,
     global_arguments,
     encryption_mode,
@@ -28,42 +27,29 @@ def create_repository(
     remote_path=None,
 ):
     '''
-    Given a dry-run flag, a local or remote repository path, a configuration dict, the local Borg
-    version, a Borg encryption mode, the path to another repo whose key material should be reused,
-    whether the repository should be append-only, and the storage quota to use, create the
-    repository. If the repository already exists, then log and skip creation.
-
-    Raise ValueError if the requested encryption mode does not match that of the repository.
-    Raise json.decoder.JSONDecodeError if the "borg info" JSON output cannot be decoded.
-    Raise subprocess.CalledProcessError if "borg info" returns an error exit code.
+    Given a dry-run flag, a local or remote repository path, a storage configuration dict, the local
+    Borg version, a Borg encryption mode, the path to another repo whose key material should be
+    reused, whether the repository should be append-only, and the storage quota to use, create the
+    repository. If the repository already exists, then log and skip creation.
     '''
     try:
-        info_data = json.loads(
-            rinfo.display_repository_info(
-                repository_path,
-                config,
-                local_borg_version,
-                argparse.Namespace(json=True),
-                global_arguments,
-                local_path,
-                remote_path,
-            )
-        )
-        repository_encryption_mode = info_data.get('encryption', {}).get('mode')
-
-        if repository_encryption_mode != encryption_mode:
-            raise ValueError(
-                f'Requested encryption mode "{encryption_mode}" does not match existing repository encryption mode "{repository_encryption_mode}"'
-            )
-
+        rinfo.display_repository_info(
+            repository_path,
+            storage_config,
+            local_borg_version,
+            argparse.Namespace(json=True),
+            global_arguments,
+            local_path,
+            remote_path,
+        )
         logger.info(f'{repository_path}: Repository already exists. Skipping creation.')
         return
     except subprocess.CalledProcessError as error:
-        if error.returncode not in RINFO_REPOSITORY_NOT_FOUND_EXIT_CODES:
+        if error.returncode != RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE:
             raise

-    lock_wait = config.get('lock_wait')
-    extra_borg_options = config.get('extra_borg_options', {}).get('rcreate', '')
+    lock_wait = storage_config.get('lock_wait')
+    extra_borg_options = storage_config.get('extra_borg_options', {}).get('rcreate', '')

     rcreate_command = (
         (local_path,)
@@ -95,7 +81,6 @@ def create_repository(
     execute_command(
         rcreate_command,
         output_file=DO_NOT_CAPTURE,
-        extra_environment=environment.make_environment(config),
         borg_local_path=local_path,
-        borg_exit_codes=config.get('borg_exit_codes'),
+        extra_environment=environment.make_environment(storage_config),
     )
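The main side of this hunk adds an encryption-mode guard: it parses "borg rinfo --json" output and raises if the existing repository's mode differs from the one requested. A sketch of just that comparison, assuming the JSON shape read by the code above (check_encryption_mode is a hypothetical name):

    import json

    def check_encryption_mode(rinfo_json, requested_mode):
        existing_mode = json.loads(rinfo_json).get('encryption', {}).get('mode')

        if existing_mode != requested_mode:
            raise ValueError(
                f'Requested encryption mode "{requested_mode}" does not match '
                f'existing repository encryption mode "{existing_mode}"'
            )

    check_encryption_mode('{"encryption": {"mode": "repokey"}}', 'repokey')  # No error raised.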

View File

@@ -9,7 +9,7 @@ logger = logging.getLogger(__name__)
 def display_repository_info(
     repository_path,
-    config,
+    storage_config,
     local_borg_version,
     rinfo_arguments,
     global_arguments,
@@ -17,12 +17,12 @@ def display_repository_info(
     remote_path=None,
 ):
     '''
-    Given a local or remote repository path, a configuration dict, the local Borg version, the
+    Given a local or remote repository path, a storage config dict, the local Borg version, the
     arguments to the rinfo action, and global arguments as an argparse.Namespace, display summary
     information for the Borg repository or return JSON summary information.
     '''
     borgmatic.logger.add_custom_log_levels()
-    lock_wait = config.get('lock_wait', None)
+    lock_wait = storage_config.get('lock_wait', None)

     full_command = (
         (local_path,)
@@ -48,21 +48,17 @@ def display_repository_info(
         + flags.make_repository_flags(repository_path, local_borg_version)
     )

-    extra_environment = environment.make_environment(config)
-    borg_exit_codes = config.get('borg_exit_codes')
+    extra_environment = environment.make_environment(storage_config)

     if rinfo_arguments.json:
         return execute_command_and_capture_output(
             full_command,
             extra_environment=extra_environment,
-            borg_local_path=local_path,
-            borg_exit_codes=borg_exit_codes,
         )
     else:
         execute_command(
             full_command,
             output_log_level=logging.ANSWER,
-            extra_environment=extra_environment,
             borg_local_path=local_path,
-            borg_exit_codes=borg_exit_codes,
+            extra_environment=extra_environment,
         )

View File

@@ -1,4 +1,3 @@
-import argparse
 import logging

 import borgmatic.logger
@@ -11,14 +10,14 @@ logger = logging.getLogger(__name__)
 def resolve_archive_name(
     repository_path,
     archive,
-    config,
+    storage_config,
     local_borg_version,
     global_arguments,
     local_path='borg',
     remote_path=None,
 ):
     '''
-    Given a local or remote repository path, an archive name, a configuration dict, the local Borg
+    Given a local or remote repository path, an archive name, a storage config dict, the local Borg
     version, global arguments as an argparse.Namespace, a local Borg path, and a remote Borg path,
     return the archive name. But if the archive name is "latest", then instead introspect the
     repository for the latest archive and return its name.
@@ -35,7 +34,7 @@ def resolve_archive_name(
         )
         + flags.make_flags('remote-path', remote_path)
         + flags.make_flags('log-json', global_arguments.log_json)
-        + flags.make_flags('lock-wait', config.get('lock_wait'))
+        + flags.make_flags('lock-wait', storage_config.get('lock_wait'))
         + flags.make_flags('last', 1)
         + ('--short',)
         + flags.make_repository_flags(repository_path, local_borg_version)
@@ -43,9 +42,7 @@ def resolve_archive_name(
     output = execute_command_and_capture_output(
         full_command,
-        extra_environment=environment.make_environment(config),
-        borg_local_path=local_path,
-        borg_exit_codes=config.get('borg_exit_codes'),
+        extra_environment=environment.make_environment(storage_config),
     )
     try:
         latest_archive = output.strip().splitlines()[-1]
@@ -62,7 +59,7 @@ MAKE_FLAGS_EXCLUDES = ('repository', 'prefix', 'match_archives')
 def make_rlist_command(
     repository_path,
-    config,
+    storage_config,
     local_borg_version,
     rlist_arguments,
     global_arguments,
@@ -70,7 +67,7 @@ def make_rlist_command(
     remote_path=None,
 ):
     '''
-    Given a local or remote repository path, a configuration dict, the local Borg version, the
+    Given a local or remote repository path, a storage config dict, the local Borg version, the
     arguments to the rlist action, global arguments as an argparse.Namespace instance, and local and
     remote Borg paths, return a command as a tuple to list archives with a repository.
     '''
@@ -91,7 +88,7 @@ def make_rlist_command(
         )
         + flags.make_flags('remote-path', remote_path)
         + flags.make_flags('log-json', global_arguments.log_json)
-        + flags.make_flags('lock-wait', config.get('lock_wait'))
+        + flags.make_flags('lock-wait', storage_config.get('lock_wait'))
         + (
             (
                 flags.make_flags('match-archives', f'sh:{rlist_arguments.prefix}*')
@@ -101,8 +98,8 @@ def make_rlist_command(
             if rlist_arguments.prefix
             else (
                 flags.make_match_archives_flags(
-                    rlist_arguments.match_archives or config.get('match_archives'),
-                    config.get('archive_name_format'),
+                    rlist_arguments.match_archives or storage_config.get('match_archives'),
+                    storage_config.get('archive_name_format'),
                     local_borg_version,
                 )
             )
@@ -114,7 +111,7 @@ def make_rlist_command(
 def list_repository(
     repository_path,
-    config,
+    storage_config,
     local_borg_version,
     rlist_arguments,
     global_arguments,
@@ -122,50 +119,30 @@ def list_repository(
     remote_path=None,
 ):
     '''
-    Given a local or remote repository path, a configuration dict, the local Borg version, the
+    Given a local or remote repository path, a storage config dict, the local Borg version, the
     arguments to the list action, global arguments as an argparse.Namespace instance, and local and
     remote Borg paths, display the output of listing Borg archives in the given repository (or
     return JSON output).
     '''
     borgmatic.logger.add_custom_log_levels()
-    borg_environment = environment.make_environment(config)
+    borg_environment = environment.make_environment(storage_config)

     main_command = make_rlist_command(
         repository_path,
-        config,
+        storage_config,
         local_borg_version,
         rlist_arguments,
         global_arguments,
         local_path,
         remote_path,
     )
-    json_command = make_rlist_command(
-        repository_path,
-        config,
-        local_borg_version,
-        argparse.Namespace(**dict(rlist_arguments.__dict__, json=True)),
-        global_arguments,
-        local_path,
-        remote_path,
-    )
-    borg_exit_codes = config.get('borg_exit_codes')
-
-    json_listing = execute_command_and_capture_output(
-        json_command,
-        extra_environment=borg_environment,
-        borg_local_path=local_path,
-        borg_exit_codes=borg_exit_codes,
-    )

     if rlist_arguments.json:
-        return json_listing
-
-    flags.warn_for_aggressive_archive_flags(json_command, json_listing)
-
-    execute_command(
-        main_command,
-        output_log_level=logging.ANSWER,
-        extra_environment=borg_environment,
-        borg_local_path=local_path,
-        borg_exit_codes=borg_exit_codes,
-    )
+        return execute_command_and_capture_output(main_command, extra_environment=borg_environment)
+    else:
+        execute_command(
+            main_command,
+            output_log_level=logging.ANSWER,
+            borg_local_path=local_path,
+            extra_environment=borg_environment,
+        )
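One detail worth calling out in main's list_repository above: it re-runs the same rlist arguments with json=True forced on by cloning the argparse.Namespace. That cloning idiom works like this:

    import argparse

    # Copy an argparse.Namespace while overriding one attribute (json), as in json_command above.
    rlist_arguments = argparse.Namespace(json=False, prefix=None)
    json_arguments = argparse.Namespace(**dict(rlist_arguments.__dict__, json=True))

    assert json_arguments.json is True
    assert json_arguments.prefix is None
    assert rlist_arguments.json is False  # The original namespace is left untouched.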

View File

@@ -10,7 +10,7 @@ logger = logging.getLogger(__name__)
 def transfer_archives(
     dry_run,
     repository_path,
-    config,
+    storage_config,
     local_borg_version,
     transfer_arguments,
     global_arguments,
@@ -18,7 +18,7 @@ def transfer_archives(
     remote_path=None,
 ):
     '''
-    Given a dry-run flag, a local or remote repository path, a configuration dict, the local Borg
+    Given a dry-run flag, a local or remote repository path, a storage config dict, the local Borg
     version, the arguments to the transfer action, and global arguments as an argparse.Namespace
     instance, transfer archives to the given repository.
     '''
@@ -30,7 +30,7 @@ def transfer_archives(
         + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
         + flags.make_flags('remote-path', remote_path)
         + flags.make_flags('log-json', global_arguments.log_json)
-        + flags.make_flags('lock-wait', config.get('lock_wait', None))
+        + flags.make_flags('lock-wait', storage_config.get('lock_wait', None))
         + (
             flags.make_flags_from_arguments(
                 transfer_arguments,
@@ -40,8 +40,8 @@ def transfer_archives(
                 flags.make_match_archives_flags(
                     transfer_arguments.match_archives
                     or transfer_arguments.archive
-                    or config.get('match_archives'),
-                    config.get('archive_name_format'),
+                    or storage_config.get('match_archives'),
+                    storage_config.get('archive_name_format'),
                     local_borg_version,
                 )
             )
@@ -56,6 +56,5 @@ def transfer_archives(
         output_log_level=logging.ANSWER,
         output_file=DO_NOT_CAPTURE if transfer_arguments.progress else None,
         borg_local_path=local_path,
-        borg_exit_codes=config.get('borg_exit_codes'),
-        extra_environment=environment.make_environment(config),
+        extra_environment=environment.make_environment(storage_config),
     )

View File

@@ -5,7 +5,7 @@ from borgmatic.execute import execute_command
 logger = logging.getLogger(__name__)

-def unmount_archive(config, mount_point, local_path='borg'):
+def unmount_archive(mount_point, local_path='borg'):
     '''
     Given a mounted filesystem mount point and an optional local Borg path, unmount the filesystem
     from the mount point.
@@ -17,6 +17,4 @@ def unmount_archive(config, mount_point, local_path='borg'):
         + (mount_point,)
     )

-    execute_command(
-        full_command, borg_local_path=local_path, borg_exit_codes=config.get('borg_exit_codes')
-    )
+    execute_command(full_command)

View File

@@ -6,9 +6,9 @@ from borgmatic.execute import execute_command_and_capture_output
 logger = logging.getLogger(__name__)

-def local_borg_version(config, local_path='borg'):
+def local_borg_version(storage_config, local_path='borg'):
     '''
-    Given a configuration dict and a local Borg binary path, return a version string for it.
+    Given a storage configuration dict and a local Borg binary path, return a version string for it.

     Raise OSError or CalledProcessError if there is a problem running Borg.
     Raise ValueError if the version cannot be parsed.
@@ -20,9 +20,7 @@ def local_borg_version(config, local_path='borg'):
     )
     output = execute_command_and_capture_output(
         full_command,
-        extra_environment=environment.make_environment(config),
-        borg_local_path=local_path,
-        borg_exit_codes=config.get('borg_exit_codes'),
+        extra_environment=environment.make_environment(storage_config),
     )
     try:

View File

@ -1,7 +1,7 @@
import collections import collections
import itertools import itertools
import sys import sys
from argparse import ArgumentParser from argparse import Action, ArgumentParser
from borgmatic.config import collect from borgmatic.config import collect
@ -23,7 +23,6 @@ ACTION_ALIASES = {
'info': ['-i'], 'info': ['-i'],
'transfer': [], 'transfer': [],
'break-lock': [], 'break-lock': [],
'key': [],
'borg': [], 'borg': [],
} }
@@ -113,54 +112,6 @@ def parse_and_record_action_arguments(
     return tuple(argument for argument in remaining if argument != action_name)

-def argument_is_flag(argument):
-    '''
-    Return True if the given argument looks like a flag, e.g. '--some-flag', as opposed to a
-    non-flag value.
-    '''
-    return isinstance(argument, str) and argument.startswith('--')
-
-def group_arguments_with_values(arguments):
-    '''
-    Given a sequence of arguments, return a sequence of tuples where each one contains either a
-    single argument (such as for a stand-alone flag) or a flag argument and its corresponding value.
-
-    For instance, given the following arguments sequence as input:
-
-        ('--foo', '--bar', '33', '--baz')
-
-    ... return the following output:
-
-        (('--foo',), ('--bar', '33'), ('--baz',))
-    '''
-    grouped_arguments = []
-    index = 0
-
-    while index < len(arguments):
-        this_argument = arguments[index]
-
-        try:
-            next_argument = arguments[index + 1]
-        except IndexError:
-            grouped_arguments.append((this_argument,))
-            break
-
-        if (
-            argument_is_flag(this_argument)
-            and not argument_is_flag(next_argument)
-            and next_argument not in ACTION_ALIASES
-        ):
-            grouped_arguments.append((this_argument, next_argument))
-            index += 2
-            continue
-
-        grouped_arguments.append((this_argument,))
-        index += 1
-
-    return tuple(grouped_arguments)
-
 def get_unparsable_arguments(remaining_action_arguments):
     '''
     Given a sequence of argument tuples (one per action parser that parsed arguments), determine the
@@ -169,21 +120,12 @@ def get_unparsable_arguments(remaining_action_arguments):
     if not remaining_action_arguments:
         return ()

-    grouped_action_arguments = tuple(
-        group_arguments_with_values(action_arguments)
-        for action_arguments in remaining_action_arguments
-    )
-
     return tuple(
-        itertools.chain.from_iterable(
-            argument_group
-            for argument_group in dict.fromkeys(
-                itertools.chain.from_iterable(grouped_action_arguments)
-            ).keys()
-            if all(
-                argument_group in action_arguments for action_arguments in grouped_action_arguments
-            )
-        )
+        argument
+        for argument in dict.fromkeys(
+            itertools.chain.from_iterable(remaining_action_arguments)
+        ).keys()
+        if all(argument in action_arguments for action_arguments in remaining_action_arguments)
     )
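The helpers main adds here (removed relative to the branch) keep a flag and its value together when computing unparsable arguments. A condensed sketch of the grouping idea, simplified from the code above (it omits the ACTION_ALIASES special case):

    def group_flags(arguments):
        # Pair each '--flag' with a following non-flag value; stand-alone items get 1-tuples.
        grouped, index = [], 0
        while index < len(arguments):
            argument = arguments[index]
            value = arguments[index + 1] if index + 1 < len(arguments) else None
            if argument.startswith('--') and value is not None and not value.startswith('--'):
                grouped.append((argument, value))
                index += 2
            else:
                grouped.append((argument,))
                index += 1
        return tuple(grouped)

    assert group_flags(('--foo', '--bar', '33', '--baz')) == (('--foo',), ('--bar', '33'), ('--baz',))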
@@ -274,12 +216,42 @@ def parse_arguments_for_actions(unparsed_arguments, action_parsers, global_parser):
     arguments['global'], remaining = global_parser.parse_known_args(unparsed_arguments)
     remaining_action_arguments.append(remaining)

+    # Prevent action names and arguments that follow "--config" paths from being considered as
+    # additional paths.
+    for argument_name in arguments.keys():
+        if argument_name == 'global':
+            continue
+
+        for action_name in [argument_name] + ACTION_ALIASES.get(argument_name, []):
+            try:
+                action_name_index = arguments['global'].config_paths.index(action_name)
+                arguments['global'].config_paths = arguments['global'].config_paths[
+                    :action_name_index
+                ]
+                break
+            except ValueError:
+                pass
+
     return (
         arguments,
         tuple(remaining_action_arguments) if arguments else unparsed_arguments,
     )

+class Extend_action(Action):
+    '''
+    An argparse action to support Python 3.8's "extend" action in older versions of Python.
+    '''
+
+    def __call__(self, parser, namespace, values, option_string=None):
+        items = getattr(namespace, self.dest, None)
+
+        if items:
+            items.extend(values)  # pragma: no cover
+        else:
+            setattr(namespace, self.dest, list(values))
+
 def make_parsers():
     '''
     Build a global arguments parser, individual action parsers, and a combined parser containing
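The branch's Extend_action backfills argparse's built-in "extend" action, which first shipped in Python 3.8; on 3.8+ the stock action behaves the same way, as this self-contained check shows:

    import argparse

    parser = argparse.ArgumentParser()
    # With nargs='+' and action='extend', repeated flags accumulate into a single flat list.
    parser.add_argument('--override', nargs='+', dest='overrides', action='extend', default=[])

    arguments = parser.parse_args(['--override', 'a=1', '--override', 'b=2', 'c=3'])
    assert arguments.overrides == ['a=1', 'b=2', 'c=3']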
@@ -291,14 +263,16 @@ def make_parsers():
     unexpanded_config_paths = collect.get_default_config_paths(expand_home=False)

     global_parser = ArgumentParser(add_help=False)
+    global_parser.register('action', 'extend', Extend_action)
     global_group = global_parser.add_argument_group('global arguments')

     global_group.add_argument(
         '-c',
         '--config',
+        nargs='*',
         dest='config_paths',
-        action='append',
-        help=f"Configuration filename or directory, can specify flag multiple times, defaults to: {' '.join(unexpanded_config_paths)}",
+        default=config_paths,
+        help=f"Configuration filenames or directories, defaults to: {' '.join(unexpanded_config_paths)}",
     )
     global_group.add_argument(
         '-n',
@@ -316,28 +290,28 @@ def make_parsers():
         type=int,
         choices=range(-2, 3),
         default=0,
-        help='Display verbose progress to the console: -2 (disabled), -1 (errors only), 0 (responses to actions, the default), 1 (info about steps borgmatic is taking), or 2 (debug)',
+        help='Display verbose progress to the console (disabled, errors only, default, some, or lots: -2, -1, 0, 1, or 2)',
     )
     global_group.add_argument(
         '--syslog-verbosity',
         type=int,
         choices=range(-2, 3),
-        default=-2,
-        help='Log verbose progress to syslog: -2 (disabled, the default), -1 (errors only), 0 (responses to actions), 1 (info about steps borgmatic is taking), or 2 (debug)',
+        default=0,
+        help='Log verbose progress to syslog (disabled, errors only, default, some, or lots: -2, -1, 0, 1, or 2). Ignored when console is interactive or --log-file is given',
     )
     global_group.add_argument(
         '--log-file-verbosity',
         type=int,
         choices=range(-2, 3),
-        default=1,
-        help='When --log-file is given, log verbose progress to file: -2 (disabled), -1 (errors only), 0 (responses to actions), 1 (info about steps borgmatic is taking, the default), or 2 (debug)',
+        default=0,
+        help='Log verbose progress to log file (disabled, errors only, default, some, or lots: -2, -1, 0, 1, or 2). Only used when --log-file is given',
     )
     global_group.add_argument(
         '--monitoring-verbosity',
         type=int,
         choices=range(-2, 3),
-        default=1,
-        help='When a monitoring integration supporting logging is configured, log verbose progress to it: -2 (disabled), -1 (errors only), 0 (responses to actions), 1 (info about steps borgmatic is taking, the default), or 2 (debug)',
+        default=0,
+        help='Log verbose progress to monitoring integrations that support logging (from disabled, errors only, default, some, or lots: -2, -1, 0, 1, or 2)',
     )
     global_group.add_argument(
         '--log-file',
@@ -356,10 +330,11 @@ def make_parsers():
     )
     global_group.add_argument(
         '--override',
-        metavar='OPTION.SUBOPTION=VALUE',
+        metavar='SECTION.OPTION=VALUE',
+        nargs='+',
         dest='overrides',
-        action='append',
-        help='Configuration file option to override with specified value, see documentation for overriding list or key/value options, can specify flag multiple times',
+        action='extend',
+        help='One or more configuration file options to override with specified values',
     )
     global_group.add_argument(
         '--no-environment-interpolation',
@@ -389,8 +364,9 @@ def make_parsers():
     global_plus_action_parser = ArgumentParser(
         description='''
-            Simple, configuration-driven backup software for servers and workstations. If no actions
-            are given, then borgmatic defaults to: create, prune, compact, and check.
+            Simple, configuration-driven backup software for servers and workstations. If none of
+            the action options are given, then borgmatic defaults to: create, prune, compact, and
+            check.
             ''',
         parents=[global_parser],
     )
@@ -524,8 +500,8 @@ def make_parsers():
     prune_parser = action_parsers.add_parser(
         'prune',
         aliases=ACTION_ALIASES['prune'],
-        help='Prune archives according to the retention policy (with Borg 1.2+, you must run compact afterwards to actually free space)',
-        description='Prune archives according to the retention policy (with Borg 1.2+, you must run compact afterwards to actually free space)',
+        help='Prune archives according to the retention policy (with Borg 1.2+, run compact afterwards to actually free space)',
+        description='Prune archives according to the retention policy (with Borg 1.2+, run compact afterwards to actually free space)',
         add_help=False,
     )
     prune_group = prune_parser.add_argument_group('prune arguments')
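The --override hunk above changes the flag's calling convention: on the main side each --override takes exactly one value and the flag repeats, while on the branch side a single flag accepts several values. Both styles, demonstrated with plain argparse:

    import argparse

    # main's style: one value per flag; repeat the flag to add more overrides.
    main_parser = argparse.ArgumentParser()
    main_parser.add_argument('--override', dest='overrides', action='append')
    assert main_parser.parse_args(['--override', 'a=1', '--override', 'b=2']).overrides == ['a=1', 'b=2']

    # The branch's style: several values after a single flag.
    branch_parser = argparse.ArgumentParser()
    branch_parser.add_argument('--override', nargs='+', dest='overrides', action='extend', default=[])
    assert branch_parser.parse_args(['--override', 'a=1', 'b=2']).overrides == ['a=1', 'b=2']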
@@ -661,25 +637,13 @@ def make_parsers():
         action='store_true',
         help='Attempt to repair any inconsistencies found (for interactive use)',
     )
-    check_group.add_argument(
-        '--max-duration',
-        metavar='SECONDS',
-        help='How long to check the repository before interrupting the check, defaults to no interruption',
-    )
-    check_group.add_argument(
-        '-a',
-        '--match-archives',
-        '--glob-archives',
-        metavar='PATTERN',
-        help='Only check archives with names matching this pattern',
-    )
     check_group.add_argument(
         '--only',
         metavar='CHECK',
-        choices=('repository', 'archives', 'data', 'extract', 'spot'),
-        dest='only_checks',
+        choices=('repository', 'archives', 'data', 'extract'),
+        dest='only',
         action='append',
-        help='Run a particular consistency check (repository, archives, data, extract, or spot) instead of configured checks (subject to configured frequency, can specify flag multiple times)',
+        help='Run a particular consistency check (repository, archives, data, or extract) instead of configured checks (subject to configured frequency, can specify flag multiple times)',
     )
     check_group.add_argument(
         '--force',
@@ -708,9 +672,9 @@ def make_parsers():
         '--path',
         '--restore-path',
         metavar='PATH',
+        nargs='+',
         dest='paths',
-        action='append',
-        help='Path to extract from archive, can specify flag multiple times, defaults to the entire archive',
+        help='Paths to extract from archive, defaults to the entire archive',
     )
     extract_group.add_argument(
         '--destination',
@@ -793,11 +757,6 @@ def make_parsers():
         action='store_true',
         help='Display progress for each file as it is extracted',
     )
-    config_bootstrap_group.add_argument(
-        '--ssh-command',
-        metavar='COMMAND',
-        help='Command to use instead of "ssh"',
-    )
     config_bootstrap_group.add_argument(
         '-h', '--help', action='help', help='Show this help message and exit'
     )
@@ -867,9 +826,9 @@ def make_parsers():
     export_tar_group.add_argument(
         '--path',
         metavar='PATH',
+        nargs='+',
         dest='paths',
-        action='append',
-        help='Path to export from archive, can specify flag multiple times, defaults to the entire archive',
+        help='Paths to export from archive, defaults to the entire archive',
     )
     export_tar_group.add_argument(
         '--destination',
@@ -918,9 +877,9 @@ def make_parsers():
     mount_group.add_argument(
         '--path',
         metavar='PATH',
+        nargs='+',
         dest='paths',
-        action='append',
-        help='Path to mount from archive, can specify multiple times, defaults to the entire archive',
+        help='Paths to mount from archive, defaults to the entire archive',
     )
     mount_group.add_argument(
         '--foreground',
@@ -980,8 +939,8 @@ def make_parsers():
     restore_parser = action_parsers.add_parser(
         'restore',
         aliases=ACTION_ALIASES['restore'],
-        help='Restore data source (e.g. database) dumps from a named archive',
-        description='Restore data source (e.g. database) dumps from a named archive. (To extract files instead, use "borgmatic extract".)',
+        help='Restore database dumps from a named archive',
+        description='Restore database dumps from a named archive. (To extract files instead, use "borgmatic extract".)',
         add_help=False,
     )
     restore_group = restore_parser.add_argument_group('restore arguments')
@@ -993,19 +952,18 @@ def make_parsers():
         '--archive', help='Name of archive to restore from (or "latest")', required=True
     )
     restore_group.add_argument(
-        '--data-source',
         '--database',
         metavar='NAME',
-        dest='data_sources',
-        action='append',
-        help="Name of data source (e.g. database) to restore from archive, must be defined in borgmatic's configuration, can specify flag multiple times, defaults to all data sources in the archive",
+        nargs='+',
+        dest='databases',
+        help="Names of databases to restore from archive, defaults to all databases. Note that any databases to restore must be defined in borgmatic's configuration",
     )
     restore_group.add_argument(
         '--schema',
         metavar='NAME',
+        nargs='+',
         dest='schemas',
-        action='append',
-        help='Name of schema to restore from the data source, can specify flag multiple times, defaults to all schemas. Schemas are only supported for PostgreSQL and MongoDB databases',
+        help='Names of schemas to restore from the database, defaults to all schemas. Schemas are only supported for PostgreSQL and MongoDB databases',
     )
     restore_group.add_argument(
         '--hostname',
@@ -1013,7 +971,7 @@ def make_parsers():
     )
     restore_group.add_argument(
         '--port',
-        help='Database port to restore to. Defaults to the "restore_port" option in borgmatic\'s configuration',
+        help='Port to restore to. Defaults to the "restore_port" option in borgmatic\'s configuration',
     )
     restore_group.add_argument(
         '--username',
@@ -1107,16 +1065,16 @@ def make_parsers():
     list_group.add_argument(
         '--path',
         metavar='PATH',
+        nargs='+',
         dest='paths',
-        action='append',
-        help='Path or pattern to list from a single selected archive (via "--archive"), can specify flag multiple times, defaults to listing the entire archive',
+        help='Paths or patterns to list from a single selected archive (via "--archive"), defaults to listing the entire archive',
    )
     list_group.add_argument(
         '--find',
         metavar='PATH',
+        nargs='+',
         dest='find_paths',
-        action='append',
-        help='Partial path or pattern to search for and list across multiple archives, can specify flag multiple times',
+        help='Partial paths or patterns to search for and list across multiple archives',
     )
     list_group.add_argument(
         '--short', default=False, action='store_true', help='Output only path names'
@@ -1252,51 +1210,6 @@ def make_parsers():
         '-h', '--help', action='help', help='Show this help message and exit'
     )

-    key_parser = action_parsers.add_parser(
-        'key',
-        aliases=ACTION_ALIASES['key'],
-        help='Perform repository key related operations',
-        description='Perform repository key related operations',
-        add_help=False,
-    )
-    key_group = key_parser.add_argument_group('key arguments')
-    key_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
-
-    key_parsers = key_parser.add_subparsers(
-        title='key sub-actions',
-    )
-
-    key_export_parser = key_parsers.add_parser(
-        'export',
-        help='Export a copy of the repository key for safekeeping in case the original goes missing or gets damaged',
-        description='Export a copy of the repository key for safekeeping in case the original goes missing or gets damaged',
-        add_help=False,
-    )
-    key_export_group = key_export_parser.add_argument_group('key export arguments')
-    key_export_group.add_argument(
-        '--paper',
-        action='store_true',
-        help='Export the key in a text format suitable for printing and later manual entry',
-    )
-    key_export_group.add_argument(
-        '--qr-html',
-        action='store_true',
-        help='Export the key in an HTML format suitable for printing and later manual entry or QR code scanning',
-    )
-    key_export_group.add_argument(
-        '--repository',
-        help='Path of repository to export the key for, defaults to the configured repository if there is only one',
-    )
-    key_export_group.add_argument(
-        '--path',
-        metavar='PATH',
-        help='Path to export the key to, defaults to stdout (but be careful about dirtying the output with --verbosity)',
-    )
-    key_export_group.add_argument(
-        '-h', '--help', action='help', help='Show this help message and exit'
-    )
-
     borg_parser = action_parsers.add_parser(
         'borg',
         aliases=ACTION_ALIASES['borg'],
@@ -1335,9 +1248,6 @@ def parse_arguments(*unparsed_arguments):
         unparsed_arguments, action_parsers.choices, global_parser
     )

-    if not arguments['global'].config_paths:
-        arguments['global'].config_paths = collect.get_default_config_paths(expand_home=True)
-
     for action_name in ('bootstrap', 'generate', 'validate'):
         if (
             action_name in arguments.keys() and len(arguments.keys()) > 2
@@ -1402,7 +1312,4 @@ def parse_arguments(*unparsed_arguments):
             'With the info action, only one of --archive, --prefix, or --match-archives flags can be used.'
         )

-    if 'borg' in arguments and arguments['global'].dry_run:
-        raise ValueError('With the borg action, --dry-run is not supported.')
-
     return arguments

View File

@@ -1,5 +1,4 @@
 import collections
-import importlib.metadata
 import json
 import logging
 import os
@@ -10,6 +9,11 @@ from subprocess import CalledProcessError
 import colorama

+try:
+    import importlib_metadata
+except ModuleNotFoundError:  # pragma: nocover
+    import importlib.metadata as importlib_metadata
+
 import borgmatic.actions.borg
 import borgmatic.actions.break_lock
 import borgmatic.actions.check
@@ -18,7 +22,6 @@ import borgmatic.actions.config.bootstrap
 import borgmatic.actions.config.generate
 import borgmatic.actions.config.validate
 import borgmatic.actions.create
-import borgmatic.actions.export_key
 import borgmatic.actions.export_tar
 import borgmatic.actions.extract
 import borgmatic.actions.info
@@ -44,52 +47,35 @@ from borgmatic.verbosity import verbosity_to_log_level
 logger = logging.getLogger(__name__)

-def get_skip_actions(config, arguments):
-    '''
-    Given a configuration dict and command-line arguments as an argparse.Namespace, return a list of
-    the configured action names to skip. Omit "check" from this list though if "check --force" is
-    part of the command-line arguments.
-    '''
-    skip_actions = config.get('skip_actions', [])
-
-    if 'check' in arguments and arguments['check'].force:
-        return [action for action in skip_actions if action != 'check']
-
-    return skip_actions
-
-def run_configuration(config_filename, config, config_paths, arguments):
+def run_configuration(config_filename, config, arguments):
     '''
-    Given a config filename, the corresponding parsed config dict, a sequence of loaded
-    configuration paths, and command-line arguments as a dict from subparser name to a namespace of
-    parsed arguments, execute the defined create, prune, compact, check, and/or other actions.
+    Given a config filename, the corresponding parsed config dict, and command-line arguments as a
+    dict from subparser name to a namespace of parsed arguments, execute the defined create, prune,
+    compact, check, and/or other actions.

     Yield a combination of:

       * JSON output strings from successfully executing any actions that produce JSON
       * logging.LogRecord instances containing errors from any actions or backup hooks that fail
     '''
+    (location, storage, retention, consistency, hooks) = (
+        config.get(section_name, {})
+        for section_name in ('location', 'storage', 'retention', 'consistency', 'hooks')
+    )
     global_arguments = arguments['global']
-    local_path = config.get('local_path', 'borg')
-    remote_path = config.get('remote_path')
-    retries = config.get('retries', 0)
-    retry_wait = config.get('retry_wait', 0)
+    local_path = location.get('local_path', 'borg')
+    remote_path = location.get('remote_path')
+    retries = storage.get('retries', 0)
+    retry_wait = storage.get('retry_wait', 0)
     encountered_error = None
     error_repository = ''
     using_primary_action = {'create', 'prune', 'compact', 'check'}.intersection(arguments)
     monitoring_log_level = verbosity_to_log_level(global_arguments.monitoring_verbosity)
     monitoring_hooks_are_activated = using_primary_action and monitoring_log_level != DISABLED
-    skip_actions = get_skip_actions(config, arguments)
-
-    if skip_actions:
-        logger.debug(
-            f"{config_filename}: Skipping {'/'.join(skip_actions)} action{'s' if len(skip_actions) > 1 else ''} due to configured skip_actions"
-        )

     try:
-        local_borg_version = borg_version.local_borg_version(config, local_path)
-        logger.debug(f'{config_filename}: Borg {local_borg_version}')
+        local_borg_version = borg_version.local_borg_version(storage, local_path)
     except (OSError, CalledProcessError, ValueError) as error:
         yield from log_error_records(f'{config_filename}: Error getting local Borg version', error)
         return
@@ -98,7 +84,7 @@ def run_configuration(config_filename, config, config_paths, arguments):
     if monitoring_hooks_are_activated:
         dispatch.call_hooks(
             'initialize_monitor',
-            config,
+            hooks,
             config_filename,
             monitor.MONITOR_HOOK_NAMES,
             monitoring_log_level,
@@ -107,7 +93,7 @@ def run_configuration(config_filename, config, config_paths, arguments):
         dispatch.call_hooks(
             'ping_monitor',
-            config,
+            hooks,
             config_filename,
             monitor.MONITOR_HOOK_NAMES,
             monitor.State.START,
@@ -123,7 +109,7 @@ def run_configuration(config_filename, config, config_paths, arguments):
     if not encountered_error:
         repo_queue = Queue()
-        for repo in config['repositories']:
+        for repo in location['repositories']:
             repo_queue.put(
                 (repo, 0),
             )
@@ -143,8 +129,11 @@ def run_configuration(config_filename, config, config_paths, arguments):
                 yield from run_actions(
                     arguments=arguments,
                     config_filename=config_filename,
-                    config=config,
-                    config_paths=config_paths,
+                    location=location,
+                    storage=storage,
+                    retention=retention,
+                    consistency=consistency,
+                    hooks=hooks,
                     local_path=local_path,
                     remote_path=remote_path,
                     local_borg_version=local_borg_version,
@@ -169,7 +158,7 @@ def run_configuration(config_filename, config, config_paths, arguments):
                 continue

             if command.considered_soft_failure(config_filename, error):
-                break
+                return

             yield from log_error_records(
                 f'{repository.get("label", repository["path"])}: Error running actions for repository',
@@ -180,10 +169,10 @@ def run_configuration(config_filename, config, config_paths, arguments):
     try:
         if monitoring_hooks_are_activated:
-            # Send logs irrespective of error.
+            # send logs irrespective of error
             dispatch.call_hooks(
                 'ping_monitor',
-                config,
+                hooks,
                 config_filename,
                 monitor.MONITOR_HOOK_NAMES,
                 monitor.State.LOG,
@@ -191,16 +180,18 @@ def run_configuration(config_filename, config, config_paths, arguments):
                 global_arguments.dry_run,
             )
     except (OSError, CalledProcessError) as error:
-        if not command.considered_soft_failure(config_filename, error):
-            encountered_error = error
-            yield from log_error_records(f'{repository["path"]}: Error pinging monitor', error)
+        if command.considered_soft_failure(config_filename, error):
+            return
+
+        encountered_error = error
+        yield from log_error_records(f'{repository["path"]}: Error pinging monitor', error)

     if not encountered_error:
         try:
             if monitoring_hooks_are_activated:
                 dispatch.call_hooks(
                     'ping_monitor',
-                    config,
+                    hooks,
                     config_filename,
                     monitor.MONITOR_HOOK_NAMES,
                     monitor.State.FINISH,
@@ -209,7 +200,7 @@ def run_configuration(config_filename, config, config_paths, arguments):
                 )
                 dispatch.call_hooks(
                     'destroy_monitor',
-                    config,
+                    hooks,
                     config_filename,
                     monitor.MONITOR_HOOK_NAMES,
                     monitoring_log_level,
@@ -225,8 +216,8 @@ def run_configuration(config_filename, config, config_paths, arguments):
     if encountered_error and using_primary_action:
         try:
             command.execute_hook(
-                config.get('on_error'),
-                config.get('umask'),
+                hooks.get('on_error'),
+                hooks.get('umask'),
                 config_filename,
                 'on-error',
                 global_arguments.dry_run,
@@ -236,7 +227,7 @@ def run_configuration(config_filename, config, config_paths, arguments):
             )
             dispatch.call_hooks(
                 'ping_monitor',
-                config,
+                hooks,
                 config_filename,
                 monitor.MONITOR_HOOK_NAMES,
                 monitor.State.FAIL,
@@ -245,7 +236,7 @@ def run_configuration(config_filename, config, config_paths, arguments):
             )
             dispatch.call_hooks(
                 'destroy_monitor',
-                config,
+                hooks,
                 config_filename,
                 monitor.MONITOR_HOOK_NAMES,
                 monitoring_log_level,
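command.considered_soft_failure, used throughout this hunk, reflects borgmatic's documented "soft failure" convention: a hook exiting with code 75 (EX_TEMPFAIL) skips the rest of the run for that configuration file without raising an error. A sketch of only the exit-code test (not borgmatic's actual implementation):

    import os
    import subprocess

    SOFT_FAIL_EXIT_CODE = os.EX_TEMPFAIL  # 75 on POSIX systems.

    def is_soft_failure(error):
        return (
            isinstance(error, subprocess.CalledProcessError)
            and error.returncode == SOFT_FAIL_EXIT_CODE
        )

    assert is_soft_failure(subprocess.CalledProcessError(returncode=75, cmd=['before-backup-hook']))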
@@ -262,8 +253,11 @@ def run_actions(
     *,
     arguments,
     config_filename,
-    config,
-    config_paths,
+    location,
+    storage,
+    retention,
+    consistency,
+    hooks,
     local_path,
     remote_path,
     local_borg_version,
@@ -271,9 +265,9 @@ def run_actions(
 ):
     '''
     Given parsed command-line arguments as an argparse.ArgumentParser instance, the configuration
-    filename, a configuration dict, a sequence of loaded configuration paths, local and remote paths
-    to Borg, a local Borg version string, and a repository name, run all actions from the
-    command-line arguments on the given repository.
+    filename, several different configuration dicts, local and remote paths to Borg, a local Borg
+    version string, and a repository name, run all actions from the command-line arguments on the
+    given repository.

     Yield JSON output strings from executing any actions that produce JSON.

@@ -286,17 +280,15 @@ def run_actions(
     global_arguments = arguments['global']
     dry_run_label = ' (dry run; not making any changes)' if global_arguments.dry_run else ''
     hook_context = {
-        'repository_label': repository.get('label', ''),
-        'log_file': global_arguments.log_file if global_arguments.log_file else '',
-        # Deprecated: For backwards compatibility with borgmatic < 1.6.0.
-        'repositories': ','.join([repo['path'] for repo in config['repositories']]),
         'repository': repository_path,
+        # Deprecated: For backwards compatibility with borgmatic < 1.6.0.
+        'repositories': ','.join([repo['path'] for repo in location['repositories']]),
+        'log_file': global_arguments.log_file if global_arguments.log_file else '',
     }
-    skip_actions = set(get_skip_actions(config, arguments))

     command.execute_hook(
-        config.get('before_actions'),
-        config.get('umask'),
+        hooks.get('before_actions'),
+        hooks.get('umask'),
         config_filename,
         'pre-actions',
         global_arguments.dry_run,
@@ -304,32 +296,33 @@ def run_actions(
     )

     for action_name, action_arguments in arguments.items():
-        if action_name == 'rcreate' and action_name not in skip_actions:
+        if action_name == 'rcreate':
             borgmatic.actions.rcreate.run_rcreate(
                 repository,
-                config,
+                storage,
                 local_borg_version,
                 action_arguments,
                 global_arguments,
                 local_path,
                 remote_path,
             )
-        elif action_name == 'transfer' and action_name not in skip_actions:
+        elif action_name == 'transfer':
             borgmatic.actions.transfer.run_transfer(
                 repository,
-                config,
+                storage,
                 local_borg_version,
                 action_arguments,
                 global_arguments,
                 local_path,
                 remote_path,
             )
-        elif action_name == 'create' and action_name not in skip_actions:
+        elif action_name == 'create':
             yield from borgmatic.actions.create.run_create(
                 config_filename,
                 repository,
-                config,
-                config_paths,
+                location,
+                storage,
+                hooks,
                 hook_context,
                 local_borg_version,
                 action_arguments,
@@ -338,11 +331,13 @@ def run_actions(
                 local_path,
                 remote_path,
             )
-        elif action_name == 'prune' and action_name not in skip_actions:
+        elif action_name == 'prune':
             borgmatic.actions.prune.run_prune(
                 config_filename,
                 repository,
-                config,
+                storage,
+                retention,
+                hooks,
                 hook_context,
                 local_borg_version,
                 action_arguments,
@@ -351,11 +346,13 @@ def run_actions(
                 local_path,
                 remote_path,
             )
-        elif action_name == 'compact' and action_name not in skip_actions:
+        elif action_name == 'compact':
             borgmatic.actions.compact.run_compact(
                 config_filename,
                 repository,
-                config,
+                storage,
+                retention,
+                hooks,
                 hook_context,
                 local_borg_version,
                 action_arguments,
@@ -364,12 +361,15 @@ def run_actions(
                 local_path,
                 remote_path,
             )
-        elif action_name == 'check' and action_name not in skip_actions:
-            if checks.repository_enabled_for_checks(repository, config):
+        elif action_name == 'check':
+            if checks.repository_enabled_for_checks(repository, consistency):
                 borgmatic.actions.check.run_check(
                     config_filename,
                     repository,
-                    config,
+                    location,
+                    storage,
+                    consistency,
+                    hooks,
                     hook_context,
                     local_borg_version,
                     action_arguments,
@@ -377,11 +377,13 @@ def run_actions(
                     local_path,
                     remote_path,
                 )
-        elif action_name == 'extract' and action_name not in skip_actions:
+        elif action_name == 'extract':
             borgmatic.actions.extract.run_extract(
                 config_filename,
                 repository,
-                config,
+                location,
+                storage,
+                hooks,
                 hook_context,
                 local_borg_version,
                 action_arguments,
@@ -389,100 +391,92 @@ def run_actions(
                 local_path,
                 remote_path,
             )
-        elif action_name == 'export-tar' and action_name not in skip_actions:
+        elif action_name == 'export-tar':
             borgmatic.actions.export_tar.run_export_tar(
                 repository,
-                config,
+                storage,
                 local_borg_version,
                 action_arguments,
                 global_arguments,
                 local_path,
                 remote_path,
             )
-        elif action_name == 'mount' and action_name not in skip_actions:
+        elif action_name == 'mount':
             borgmatic.actions.mount.run_mount(
                 repository,
-                config,
+                storage,
                 local_borg_version,
                 action_arguments,
                 global_arguments,
                 local_path,
                 remote_path,
             )
-        elif action_name == 'restore' and action_name not in skip_actions:
+        elif action_name == 'restore':
             borgmatic.actions.restore.run_restore(
                 repository,
-                config,
+                location,
+                storage,
+                hooks,
                 local_borg_version,
                 action_arguments,
                 global_arguments,
                 local_path,
                 remote_path,
             )
-        elif action_name == 'rlist' and action_name not in skip_actions:
+        elif action_name == 'rlist':
             yield from borgmatic.actions.rlist.run_rlist(
                 repository,
-                config,
+                storage,
                 local_borg_version,
                 action_arguments,
                 global_arguments,
                 local_path,
                 remote_path,
             )
-        elif action_name == 'list' and action_name not in skip_actions:
+        elif action_name == 'list':
             yield from borgmatic.actions.list.run_list(
                 repository,
-                config,
+                storage,
                 local_borg_version,
                 action_arguments,
                 global_arguments,
                 local_path,
                 remote_path,
             )
-        elif action_name == 'rinfo' and action_name not in skip_actions:
+        elif action_name == 'rinfo':
             yield from borgmatic.actions.rinfo.run_rinfo(
                 repository,
-                config,
+                storage,
                 local_borg_version,
                 action_arguments,
                 global_arguments,
                 local_path,
                 remote_path,
             )
-        elif action_name == 'info' and action_name not in skip_actions:
+        elif action_name == 'info':
             yield from borgmatic.actions.info.run_info(
                 repository,
-                config,
+                storage,
                 local_borg_version,
                 action_arguments,
                 global_arguments,
                 local_path,
                 remote_path,
             )
-        elif action_name == 'break-lock' and action_name not in skip_actions:
+        elif action_name == 'break-lock':
             borgmatic.actions.break_lock.run_break_lock(
                 repository,
-                config,
+                storage,
                 local_borg_version,
                 action_arguments,
                 global_arguments,
                 local_path,
                 remote_path,
             )
-        elif action_name == 'export' and action_name not in skip_actions:
-            borgmatic.actions.export_key.run_export_key(
-                repository,
-                config,
-                local_borg_version,
-                action_arguments,
-                global_arguments,
-                local_path,
-                remote_path,
-            )
-        elif action_name == 'borg' and action_name not in skip_actions:
+        elif action_name == 'borg':
             borgmatic.actions.borg.run_borg(
                 repository,
-                config,
+                storage,
                 local_borg_version,
                 action_arguments,
                 global_arguments,
@@ -491,8 +485,8 @@ def run_actions(
             )

     command.execute_hook(
-        config.get('after_actions'),
-        config.get('umask'),
+        hooks.get('after_actions'),
+        hooks.get('umask'),
         config_filename,
         'post-actions',
         global_arguments.dry_run,
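Every branch of the dispatch above differs in the same two ways: main passes the flat config through and also consults skip_actions. The skip logic itself appears earlier in this file's diff; restated standalone for reference:

    # Main's get_skip_actions logic: configured skip_actions apply, except that "check"
    # is not skipped when the check action was forced on the command line.
    def get_skip_actions(config, check_forced):
        skip_actions = config.get('skip_actions', [])
        if check_forced:
            return [action for action in skip_actions if action != 'check']
        return skip_actions

    assert get_skip_actions({'skip_actions': ['compact', 'check']}, check_forced=True) == ['compact']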
@@ -504,15 +498,13 @@ def load_configurations(config_filenames, overrides=None, resolve_env=True):
     '''
     Given a sequence of configuration filenames, load and validate each configuration file. Return
     the results as a tuple of: dict of configuration filename to corresponding parsed configuration,
-    a sequence of paths for all loaded configuration files (including includes), and a sequence of
-    logging.LogRecord instances containing any parse errors.
+    and a sequence of logging.LogRecord instances containing any parse errors.

     Log records are returned here instead of being logged directly because logging isn't yet
     initialized at this point!
     '''
     # Dict mapping from config filename to corresponding parsed config dict.
     configs = collections.OrderedDict()
-    config_paths = set()
     logs = []

     # Parse and load each configuration file.
@@ -529,10 +521,9 @@ def load_configurations(config_filenames, overrides=None, resolve_env=True):
             ]
         )
         try:
-            configs[config_filename], paths, parse_logs = validate.parse_configuration(
+            configs[config_filename], parse_logs = validate.parse_configuration(
                 config_filename, validate.schema_filename(), overrides, resolve_env
             )
-            config_paths.update(paths)
             logs.extend(parse_logs)
         except PermissionError:
             logs.extend(
@@ -562,7 +553,7 @@ def load_configurations(config_filenames, overrides=None, resolve_env=True):
             ]
         )

-    return (configs, sorted(config_paths), logs)
+    return (configs, logs)
def log_record(suppress_log=False, **kwargs): def log_record(suppress_log=False, **kwargs):
@@ -599,24 +590,14 @@ def log_error_records(
            raise error
    except CalledProcessError as error:
        yield log_record(levelno=levelno, levelname=level_name, msg=message)
        if error.output:
-           try:
-               output = error.output.decode('utf-8')
-           except (UnicodeDecodeError, AttributeError):
-               output = error.output
-
-           # Suppress these logs for now and save the error output for the log summary at the end.
-           # Log a separate record per line, as some errors can be really verbose and overflow the
-           # per-record size limits imposed by some logging backends.
-           for output_line in output.splitlines():
-               yield log_record(
-                   levelno=levelno,
-                   levelname=level_name,
-                   msg=output_line,
-                   suppress_log=True,
-               )
+           # Suppress these logs for now and save full error output for the log summary at the end.
+           yield log_record(
+               levelno=levelno,
+               levelname=level_name,
+               msg=error.output,
+               suppress_log=not log_command_error_output,
+           )
        yield log_record(levelno=levelno, levelname=level_name, msg=error)
    except (ValueError, OSError) as error:
        yield log_record(levelno=levelno, levelname=level_name, msg=message)
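The main-branch side of this hunk splits a failed command's output into one log record per line so that verbose errors don't overflow per-record size limits in some logging backends. A minimal, self-contained sketch of that pattern; the failing command and the `log_record` helper here are illustrative stand-ins, not borgmatic's actual code:

```python
import logging
import subprocess
import sys

def log_record(suppress_log=False, **kwargs):
    # Build a logging.LogRecord without emitting it, mirroring the helper in this diff.
    return logging.makeLogRecord(kwargs)

try:
    # A stand-in command that reliably fails after writing two lines of output.
    subprocess.check_output(
        [sys.executable, '-c', 'print("line 1"); print("line 2"); raise SystemExit(2)']
    )
except subprocess.CalledProcessError as error:
    output = error.output.decode('utf-8') if error.output else ''
    # One record per output line keeps each record comfortably small.
    records = [
        log_record(levelno=logging.CRITICAL, levelname='CRITICAL', msg=line, suppress_log=True)
        for line in output.splitlines()
    ]
    print(records)
```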
@@ -632,7 +613,7 @@ def get_local_path(configs):
    Arbitrarily return the local path from the first configuration dict. Default to "borg" if not
    set.
    '''
-   return next(iter(configs.values())).get('local_path', 'borg')
+   return next(iter(configs.values())).get('location', {}).get('local_path', 'borg')


def collect_highlander_action_summary_logs(configs, arguments, configuration_parse_errors):
@@ -646,8 +627,6 @@ def collect_highlander_action_summary_logs(configs, arguments, configuration_parse_errors):
    A highlander action is an action that cannot coexist with other actions on the borgmatic
    command-line, and borgmatic exits after processing such an action.
    '''
-   add_custom_log_levels()
-
    if 'bootstrap' in arguments:
        try:
            # No configuration file is needed for bootstrap.
@@ -729,12 +708,12 @@ def collect_highlander_action_summary_logs(configs, arguments, configuration_parse_errors):
    return


-def collect_configuration_run_summary_logs(configs, config_paths, arguments):
+def collect_configuration_run_summary_logs(configs, arguments):
    '''
-   Given a dict of configuration filename to corresponding parsed configuration, a sequence of
-   loaded configuration paths, and parsed command-line arguments as a dict from subparser name to a
-   parsed namespace of arguments, run each configuration file and yield a series of
-   logging.LogRecord instances containing summary information about each run.
+   Given a dict of configuration filename to corresponding parsed configuration and parsed
+   command-line arguments as a dict from subparser name to a parsed namespace of arguments, run
+   each configuration file and yield a series of logging.LogRecord instances containing summary
+   information about each run.

    As a side effect of running through these configuration files, output their JSON results, if
    any, to stdout.
@@ -765,9 +744,10 @@ def collect_configuration_run_summary_logs(configs, arguments):
    if 'create' in arguments:
        try:
            for config_filename, config in configs.items():
+               hooks = config.get('hooks', {})
                command.execute_hook(
-                   config.get('before_everything'),
-                   config.get('umask'),
+                   hooks.get('before_everything'),
+                   hooks.get('umask'),
                    config_filename,
                    'pre-everything',
                    arguments['global'].dry_run,
@@ -779,7 +759,7 @@ def collect_configuration_run_summary_logs(configs, arguments):
    # Execute the actions corresponding to each configuration file.
    json_results = []
    for config_filename, config in configs.items():
-       results = list(run_configuration(config_filename, config, config_paths, arguments))
+       results = list(run_configuration(config_filename, config, arguments))
        error_logs = tuple(result for result in results if isinstance(result, logging.LogRecord))

        if error_logs:
@@ -800,7 +780,6 @@ def collect_configuration_run_summary_logs(configs, arguments):
        logger.info(f"Unmounting mount point {arguments['umount'].mount_point}")
        try:
            borg_umount.unmount_archive(
-               config,
                mount_point=arguments['umount'].mount_point,
                local_path=get_local_path(configs),
            )
@@ -813,9 +792,10 @@ def collect_configuration_run_summary_logs(configs, arguments):
    if 'create' in arguments:
        try:
            for config_filename, config in configs.items():
+               hooks = config.get('hooks', {})
                command.execute_hook(
-                   config.get('after_everything'),
-                   config.get('umask'),
+                   hooks.get('after_everything'),
+                   hooks.get('umask'),
                    config_filename,
                    'post-everything',
                    arguments['global'].dry_run,
@@ -851,7 +831,7 @@ def main(extra_summary_logs=[]):  # pragma: no cover
    global_arguments = arguments['global']
    if global_arguments.version:
-       print(importlib.metadata.version('borgmatic'))
+       print(importlib_metadata.version('borgmatic'))
        sys.exit(0)
    if global_arguments.bash_completion:
        print(borgmatic.commands.completion.bash.bash_completion())
@@ -861,7 +841,8 @@ def main(extra_summary_logs=[]):  # pragma: no cover
        sys.exit(0)

    config_filenames = tuple(collect.collect_config_filenames(global_arguments.config_paths))
-   configs, config_paths, parse_logs = load_configurations(
+   global_arguments.used_config_paths = list(config_filenames)
+   configs, parse_logs = load_configurations(
        config_filenames, global_arguments.overrides, global_arguments.resolve_env
    )
    configuration_parse_errors = (
@@ -871,8 +852,10 @@ def main(extra_summary_logs=[]):  # pragma: no cover
    any_json_flags = any(
        getattr(sub_arguments, 'json', False) for sub_arguments in arguments.values()
    )
-   color_enabled = should_do_markup(global_arguments.no_color or any_json_flags, configs)
-   colorama.init(autoreset=color_enabled, strip=not color_enabled)
+   colorama.init(
+       autoreset=True,
+       strip=not should_do_markup(global_arguments.no_color or any_json_flags, configs),
+   )

    try:
        configure_logging(
            verbosity_to_log_level(global_arguments.verbosity),
@@ -881,7 +864,6 @@ def main(extra_summary_logs=[]):  # pragma: no cover
            verbosity_to_log_level(global_arguments.monitoring_verbosity),
            global_arguments.log_file,
            global_arguments.log_file_format,
-           color_enabled=color_enabled,
        )
    except (FileNotFoundError, PermissionError) as error:
        configure_logging(logging.CRITICAL)
@@ -897,7 +879,7 @@ def main(extra_summary_logs=[]):  # pragma: no cover
                configs, arguments, configuration_parse_errors
            )
        )
-       or list(collect_configuration_run_summary_logs(configs, config_paths, arguments))
+       or list(collect_configuration_run_summary_logs(configs, arguments))
    )
)
summary_logs_max_level = max(log.levelno for log in summary_logs)
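Most changes in this file come down to where options live: the env_check branch (the `+` lines) nests them under sections like "hooks:", while main (the `-` lines) reads the same options from a flattened configuration. A small illustration of the two access patterns, with made-up option values:

```python
# Sectioned layout (the `+` side of these hunks): options nest under a section.
sectioned_config = {'hooks': {'before_everything': ['echo start'], 'umask': '0077'}}
hooks = sectioned_config.get('hooks', {})
assert hooks.get('umask') == '0077'

# Flattened layout (the `-` side): the same options live at the top level.
flat_config = {'before_everything': ['echo start'], 'umask': '0077'}
assert flat_config.get('umask') == '0077'
```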
View File
@@ -1,9 +1,9 @@
-def repository_enabled_for_checks(repository, config):
+def repository_enabled_for_checks(repository, consistency):
    '''
-   Given a repository name and a configuration dict, return whether the
-   repository is enabled to have consistency checks run.
+   Given a repository name and a consistency configuration dict, return whether the repository
+   is enabled to have consistency checks run.
    '''
-   if not config.get('check_repositories'):
+   if not consistency.get('check_repositories'):
        return True

-   return repository in config['check_repositories']
+   return repository in consistency['check_repositories']
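Restating this small function outside the diff to show its filter semantics; the repository names are hypothetical:

```python
def repository_enabled_for_checks(repository, consistency):
    # An absent or empty check_repositories option enables checks for every repository.
    if not consistency.get('check_repositories'):
        return True

    return repository in consistency['check_repositories']

assert repository_enabled_for_checks('repo.borg', {})
assert repository_enabled_for_checks('repo.borg', {'check_repositories': ['repo.borg']})
assert not repository_enabled_for_checks('repo.borg', {'check_repositories': ['other.borg']})
```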
View File
@@ -1,64 +0,0 @@
-import shlex
-
-
-def coerce_scalar(value):
-    '''
-    Given a configuration value, coerce it to an integer or a boolean as appropriate and return the
-    result.
-    '''
-    try:
-        return int(value)
-    except (TypeError, ValueError):
-        pass
-
-    if value == 'true' or value == 'True':
-        return True
-
-    if value == 'false' or value == 'False':
-        return False
-
-    return value
-
-
-def apply_constants(value, constants, shell_escape=False):
-    '''
-    Given a configuration value (bool, dict, int, list, or string) and a dict of named constants,
-    replace any configuration string values of the form "{constant}" (or containing it) with the
-    value of the correspondingly named key from the constants. Recurse as necessary into nested
-    configuration to find values to replace.
-
-    For instance, if a configuration value contains "{foo}", replace it with the value of the "foo"
-    key found within the configuration's "constants".
-
-    If shell escape is True, then escape the constant's value before applying it.
-
-    Return the configuration value and modify the original.
-    '''
-    if not value or not constants:
-        return value
-
-    if isinstance(value, str):
-        for constant_name, constant_value in constants.items():
-            value = value.replace(
-                '{' + constant_name + '}',
-                shlex.quote(str(constant_value)) if shell_escape else str(constant_value),
-            )
-
-        # Support constants within non-string scalars by coercing the value to its appropriate type.
-        value = coerce_scalar(value)
-    elif isinstance(value, list):
-        for index, list_value in enumerate(value):
-            value[index] = apply_constants(list_value, constants, shell_escape)
-    elif isinstance(value, dict):
-        for option_name, option_value in value.items():
-            value[option_name] = apply_constants(
-                option_value,
-                constants,
-                shell_escape=(
-                    shell_escape
-                    or option_name.startswith('before_')
-                    or option_name.startswith('after_')
-                    or option_name == 'on_error'
-                ),
-            )
-
-    return value
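This module (present only on main; the whole file is removed on the env_check side) substitutes "{name}" placeholders from a constants mapping, shell-escaping values destined for command hooks. A string-only sketch of that substitution, with invented inputs:

```python
import shlex

def apply_constants_to_string(value, constants, shell_escape=False):
    # Replace each "{name}" placeholder, optionally shell-escaping the value,
    # the way the full apply_constants() above does for string values.
    for name, constant in constants.items():
        replacement = shlex.quote(str(constant)) if shell_escape else str(constant)
        value = value.replace('{' + name + '}', replacement)

    return value

assert apply_constants_to_string('{prefix}-files', {'prefix': 'app'}) == 'app-files'
assert apply_constants_to_string('echo {msg}', {'msg': 'a b'}, shell_escape=True) == "echo 'a b'"
```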
View File
@@ -1,22 +1,21 @@
import os
import re

-VARIABLE_PATTERN = re.compile(
+_VARIABLE_PATTERN = re.compile(
    r'(?P<escape>\\)?(?P<variable>\$\{(?P<name>[A-Za-z0-9_]+)((:?-)(?P<default>[^}]+))?\})'
)


-def resolve_string(matcher):
+def _resolve_string(matcher):
    '''
-   Given a matcher containing a name and an optional default value, get the value from environment.
-
-   Raise ValueError if the variable is not defined in environment and no default value is provided.
+   Get the value from environment given a matcher containing a name and an optional default value.
+   If the variable is not defined in environment and no default value is provided, an Error is raised.
    '''
    if matcher.group('escape') is not None:
-       # In the case of an escaped environment variable, unescape it.
+       # in case of escaped envvar, unescape it
        return matcher.group('variable')

-   # Resolve the environment variable.
+   # resolve the env var
    name, default = matcher.group('name'), matcher.group('default')
    out = os.getenv(name, default=default)
@@ -28,24 +27,19 @@ def resolve_string(matcher):
def resolve_env_variables(item):
    '''
-   Resolves variables like or ${FOO} from given configuration with values from process environment.
+   Resolves variables like or ${FOO} from given configuration with values from process environment

    Supported formats:
-   * ${FOO} will return FOO env variable
-   * ${FOO-bar} or ${FOO:-bar} will return FOO env variable if it exists, else "bar"
-
-   Raise if any variable is missing in environment and no default value is provided.
+   - ${FOO} will return FOO env variable
+   - ${FOO-bar} or ${FOO:-bar} will return FOO env variable if it exists, else "bar"
+   If any variable is missing in environment and no default value is provided, an Error is raised.
    '''
    if isinstance(item, str):
-       return VARIABLE_PATTERN.sub(resolve_string, item)
+       return _VARIABLE_PATTERN.sub(_resolve_string, item)
    if isinstance(item, list):
-       for index, subitem in enumerate(item):
-           item[index] = resolve_env_variables(subitem)
+       for i, subitem in enumerate(item):
+           item[i] = resolve_env_variables(subitem)
    if isinstance(item, dict):
        for key, value in item.items():
            item[key] = resolve_env_variables(value)

    return item
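Both sides implement the same ${VAR} and ${VAR:-default} resolution described in the docstring. A string-only sketch of the pattern, reusing the regex from this hunk; the environment variable names are invented:

```python
import os
import re

VARIABLE_PATTERN = re.compile(
    r'(?P<escape>\\)?(?P<variable>\$\{(?P<name>[A-Za-z0-9_]+)((:?-)(?P<default>[^}]+))?\})'
)

def resolve(value):
    # String-only version of resolve_env_variables() from the hunk above.
    def replace(matcher):
        if matcher.group('escape') is not None:
            return matcher.group('variable')  # leave an escaped \${FOO} untouched
        out = os.getenv(matcher.group('name'), default=matcher.group('default'))
        if out is None:
            raise ValueError(f"Cannot find variable {matcher.group('name')} in environment")
        return out

    return VARIABLE_PATTERN.sub(replace, value)

os.environ['BORGMATIC_EXAMPLE'] = 'example-value'
assert resolve('${BORGMATIC_EXAMPLE}') == 'example-value'
assert resolve('${UNSET_OPTION:-fallback}') == 'fallback'
```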
View File
@@ -3,7 +3,7 @@ import io
import os
import re

-import ruamel.yaml
+from ruamel import yaml

from borgmatic.config import load, normalize
@@ -11,33 +11,20 @@ INDENT = 4
SEQUENCE_INDENT = 2


-def insert_newline_before_comment(config, field_name):
+def _insert_newline_before_comment(config, field_name):
    '''
    Using some ruamel.yaml black magic, insert a blank line in the config right before the given
    field and its comments.
    '''
    config.ca.items[field_name][1].insert(
-       0, ruamel.yaml.tokens.CommentToken('\n', ruamel.yaml.error.CommentMark(0), None)
+       0, yaml.tokens.CommentToken('\n', yaml.error.CommentMark(0), None)
    )


-def get_properties(schema):
-    '''
-    Given a schema dict, return its properties. But if it's got sub-schemas with multiple different
-    potential properties, returned their merged properties instead.
-    '''
-    if 'oneOf' in schema:
-        return dict(
-            collections.ChainMap(*[sub_schema['properties'] for sub_schema in schema['oneOf']])
-        )
-
-    return schema['properties']
-
-
-def schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
+def _schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
    '''
    Given a loaded configuration schema, generate and return sample config for it. Include comments
-   for each option based on the schema "description".
+   for each section based on the schema "description".
    '''
    schema_type = schema.get('type')
    example = schema.get('example')
@@ -45,15 +32,15 @@ def schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
        return example

    if schema_type == 'array':
-       config = ruamel.yaml.comments.CommentedSeq(
-           [schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
+       config = yaml.comments.CommentedSeq(
+           [_schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
        )
        add_comments_to_configuration_sequence(config, schema, indent=(level * INDENT))
    elif schema_type == 'object':
-       config = ruamel.yaml.comments.CommentedMap(
+       config = yaml.comments.CommentedMap(
            [
-               (field_name, schema_to_sample_configuration(sub_schema, level + 1))
-               for field_name, sub_schema in get_properties(schema).items()
+               (field_name, _schema_to_sample_configuration(sub_schema, level + 1))
+               for field_name, sub_schema in schema['properties'].items()
            ]
        )
        indent = (level * INDENT) + (SEQUENCE_INDENT if parent_is_sequence else 0)
@@ -66,13 +53,13 @@ def schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
    return config


-def comment_out_line(line):
+def _comment_out_line(line):
    # If it's already is commented out (or empty), there's nothing further to do!
    stripped_line = line.lstrip()
    if not stripped_line or stripped_line.startswith('#'):
        return line

-   # Comment out the names of optional options, inserting the '#' after any indent for aesthetics.
+   # Comment out the names of optional sections, inserting the '#' after any indent for aesthetics.
    matches = re.match(r'(\s*)', line)
    indent_spaces = matches.group(0) if matches else ''
    count_indent_spaces = len(indent_spaces)
@@ -80,7 +67,7 @@ def comment_out_line(line):
    return '# '.join((indent_spaces, line[count_indent_spaces:]))


-def comment_out_optional_configuration(rendered_config):
+def _comment_out_optional_configuration(rendered_config):
    '''
    Post-process a rendered configuration string to comment out optional key/values, as determined
    by a sentinel in the comment before each key.
@@ -105,7 +92,7 @@ def comment_out_optional_configuration(rendered_config):
        if not line.strip():
            optional = False

-       lines.append(comment_out_line(line) if optional else line)
+       lines.append(_comment_out_line(line) if optional else line)

    return '\n'.join(lines)
@@ -114,7 +101,7 @@ def render_configuration(config):
    '''
    Given a config data structure of nested OrderedDicts, render the config as YAML and return it.
    '''
-   dumper = ruamel.yaml.YAML(typ='rt')
+   dumper = yaml.YAML()
    dumper.indent(mapping=INDENT, sequence=INDENT + SEQUENCE_INDENT, offset=INDENT)
    rendered = io.StringIO()
    dumper.dump(config, rendered)
@@ -164,7 +151,7 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
        return

    for field_name in config[0].keys():
-       field_schema = get_properties(schema['items']).get(field_name, {})
+       field_schema = schema['items']['properties'].get(field_name, {})
        description = field_schema.get('description')

        # No description to use? Skip it.
@@ -178,6 +165,7 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
    return


+REQUIRED_SECTION_NAMES = {'location', 'retention'}
REQUIRED_KEYS = {'source_directories', 'repositories', 'keep_daily'}
COMMENTED_OUT_SENTINEL = 'COMMENT_OUT'
@@ -191,13 +179,13 @@ def add_comments_to_configuration_object(config, schema, indent=0, skip_first=False):
        if skip_first and index == 0:
            continue

-       field_schema = get_properties(schema).get(field_name, {})
+       field_schema = schema['properties'].get(field_name, {})
        description = field_schema.get('description', '').strip()

        # If this is an optional key, add an indicator to the comment flagging it to be commented
        # out from the sample configuration. This sentinel is consumed by downstream processing that
        # does the actual commenting out.
-       if field_name not in REQUIRED_KEYS:
+       if field_name not in REQUIRED_SECTION_NAMES and field_name not in REQUIRED_KEYS:
            description = (
                '\n'.join((description, COMMENTED_OUT_SENTINEL))
                if description
@@ -211,7 +199,7 @@ def add_comments_to_configuration_object(config, schema, indent=0, skip_first=False):
        config.yaml_set_comment_before_after_key(key=field_name, before=description, indent=indent)

        if index > 0:
-           insert_newline_before_comment(config, field_name)
+           _insert_newline_before_comment(config, field_name)


RUAMEL_YAML_COMMENTS_INDEX = 1
@@ -238,7 +226,8 @@ def merge_source_configuration_into_destination(destination_config, source_config):
    favoring values from the source when there are collisions.

    The purpose of this is to upgrade configuration files from old versions of borgmatic by adding
-   new configuration keys and comments.
+   new
+   configuration keys and comments.
    '''
    if not source_config:
        return destination_config
@@ -248,9 +237,7 @@ def merge_source_configuration_into_destination(destination_config, source_config):
    for field_name, source_value in source_config.items():
        # Since this key/value is from the source configuration, leave it uncommented and remove any
        # sentinel that would cause it to get commented out.
-       remove_commented_out_sentinel(
-           ruamel.yaml.comments.CommentedMap(destination_config), field_name
-       )
+       remove_commented_out_sentinel(destination_config, field_name)

        # This is a mapping. Recurse for this key/value.
        if isinstance(source_value, collections.abc.Mapping):
@@ -262,7 +249,7 @@ def merge_source_configuration_into_destination(destination_config, source_config):
        # This is a sequence. Recurse for each item in it.
        if isinstance(source_value, collections.abc.Sequence) and not isinstance(source_value, str):
            destination_value = destination_config[field_name]
-           destination_config[field_name] = ruamel.yaml.comments.CommentedSeq(
+           destination_config[field_name] = yaml.comments.CommentedSeq(
                [
                    merge_source_configuration_into_destination(
                        destination_value[index] if index < len(destination_value) else None,
@@ -289,7 +276,7 @@ def generate_sample_configuration(
    schema. If a source filename is provided, merge the parsed contents of that configuration into
    the generated configuration.
    '''
-   schema = ruamel.yaml.YAML(typ='safe').load(open(schema_filename))
+   schema = yaml.round_trip_load(open(schema_filename))
    source_config = None

    if source_filename:
@@ -297,7 +284,7 @@ def generate_sample_configuration(
        normalize.normalize(source_filename, source_config)

    destination_config = merge_source_configuration_into_destination(
-       schema_to_sample_configuration(schema), source_config
+       _schema_to_sample_configuration(schema), source_config
    )

    if dry_run:
@@ -305,6 +292,6 @@ def generate_sample_configuration(
    write_configuration(
        destination_filename,
-       comment_out_optional_configuration(render_configuration(destination_config)),
+       _comment_out_optional_configuration(render_configuration(destination_config)),
        overwrite=overwrite,
    )
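The sentinel mechanism in this file works in two passes: schema descriptions for non-required options get a COMMENT_OUT marker appended, and a later pass over the rendered YAML comments out the flagged lines. A simplified sketch of that second pass (the real logic also accounts for indentation, and the sample YAML here is invented):

```python
COMMENTED_OUT_SENTINEL = 'COMMENT_OUT'

def comment_out_optional(rendered_config):
    lines = []
    optional = False

    for line in rendered_config.splitlines():
        # The sentinel comment flags the lines after it as optional.
        if line.strip() == f'# {COMMENTED_OUT_SENTINEL}':
            optional = True
            continue

        # A blank line ends the optional run.
        if not line.strip():
            optional = False

        lines.append(f'# {line}' if optional else line)

    return '\n'.join(lines)

rendered = '# COMMENT_OUT\nexclude_caches: true\n\nsource_directories:\n    - /home'
assert comment_out_optional(rendered).splitlines()[0] == '# exclude_caches: true'
```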
View File
@@ -1,7 +1,6 @@
import functools
-import itertools
+import json
import logging
-import operator
import os

import ruamel.yaml
@@ -9,68 +8,34 @@ import ruamel.yaml
logger = logging.getLogger(__name__)


-def probe_and_include_file(filename, include_directories, config_paths):
-    '''
-    Given a filename to include, a list of include directories to search for matching files, and a
-    set of configuration paths, probe for the file, load it, and return the loaded configuration as
-    a data structure of nested dicts, lists, etc. Add the filename to the given configuration paths.
-
-    Raise FileNotFoundError if the included file was not found.
-    '''
-    expanded_filename = os.path.expanduser(filename)
-
-    if os.path.isabs(expanded_filename):
-        return load_configuration(expanded_filename, config_paths)
-
-    candidate_filenames = {
-        os.path.join(directory, expanded_filename) for directory in include_directories
-    }
-
-    for candidate_filename in candidate_filenames:
-        if os.path.exists(candidate_filename):
-            return load_configuration(candidate_filename, config_paths)
-
-    raise FileNotFoundError(
-        f'Could not find include {filename} at {" or ".join(candidate_filenames)}'
-    )
-
-
-def include_configuration(loader, filename_node, include_directory, config_paths):
+def include_configuration(loader, filename_node, include_directory):
    '''
-   Given a ruamel.yaml.loader.Loader, a ruamel.yaml.nodes.ScalarNode containing the included
-   filename (or a list containing multiple such filenames), an include directory path to search for
-   matching files, and a set of configuration paths, load the given YAML filenames (ignoring the
-   given loader so we can use our own) and return their contents as data structure of nested dicts,
-   lists, etc. Add the names of included files to the given configuration paths. If the given
-   filename node's value is a scalar string, then the return value will be a single value. But if
-   the given node value is a list, then the return value will be a list of values, one per loaded
-   configuration file.
-
-   If a filename is relative, probe for it within: 1. the current working directory and 2. the
-   given include directory.
+   Given a ruamel.yaml.loader.Loader, a ruamel.yaml.serializer.ScalarNode containing the included
+   filename, and an include directory path to search for matching files, load the given YAML
+   filename (ignoring the given loader so we can use our own) and return its contents as a data
+   structure of nested dicts and lists. If the filename is relative, probe for it within 1. the
+   current working directory and 2. the given include directory.

    Raise FileNotFoundError if an included file was not found.
    '''
    include_directories = [os.getcwd(), os.path.abspath(include_directory)]
+   include_filename = os.path.expanduser(filename_node.value)

-   if isinstance(filename_node.value, str):
-       return probe_and_include_file(filename_node.value, include_directories, config_paths)
-
-   if (
-       isinstance(filename_node.value, list)
-       and len(filename_node.value)
-       and isinstance(filename_node.value[0], ruamel.yaml.nodes.ScalarNode)
-   ):
-       # Reversing the values ensures the correct ordering if these includes are subsequently
-       # merged together.
-       return [
-           probe_and_include_file(node.value, include_directories, config_paths)
-           for node in reversed(filename_node.value)
-       ]
-
-   raise ValueError(
-       '!include value is not supported; use a single filename or a list of filenames'
-   )
+   if not os.path.isabs(include_filename):
+       candidate_filenames = [
+           os.path.join(directory, include_filename) for directory in include_directories
+       ]
+
+       for candidate_filename in candidate_filenames:
+           if os.path.exists(candidate_filename):
+               include_filename = candidate_filename
+               break
+       else:
+           raise FileNotFoundError(
+               f'Could not find include {filename_node.value} at {" or ".join(candidate_filenames)}'
+           )
+
+   return load_configuration(include_filename)


def raise_retain_node_error(loader, node):
@@ -88,7 +53,7 @@ def raise_retain_node_error(loader, node):
        'The !retain tag may only be used within a configuration file containing a merged !include tag.'
    )

-   raise ValueError('The !retain tag may only be used on a mapping or list.')
+   raise ValueError('The !retain tag may only be used on a YAML mapping or sequence.')


def raise_omit_node_error(loader, node):
@@ -100,31 +65,22 @@ def raise_omit_node_error(loader, node):
    tags are handled by deep_merge_nodes() below.
    '''
    raise ValueError(
-       'The !omit tag may only be used on a scalar (e.g., string) or list element within a configuration file containing a merged !include tag.'
+       'The !omit tag may only be used on a scalar (e.g., string) list element within a configuration file containing a merged !include tag.'
    )


class Include_constructor(ruamel.yaml.SafeConstructor):
    '''
    A YAML "constructor" (a ruamel.yaml concept) that supports a custom "!include" tag for including
-   separate YAML configuration files. Example syntax: `option: !include common.yaml`
+   separate YAML configuration files. Example syntax: `retention: !include common.yaml`
    '''

-   def __init__(
-       self, preserve_quotes=None, loader=None, include_directory=None, config_paths=None
-   ):
+   def __init__(self, preserve_quotes=None, loader=None, include_directory=None):
        super(Include_constructor, self).__init__(preserve_quotes, loader)
        self.add_constructor(
            '!include',
-           functools.partial(
-               include_configuration,
-               include_directory=include_directory,
-               config_paths=config_paths,
-           ),
+           functools.partial(include_configuration, include_directory=include_directory),
        )
-
-       # These are catch-all error handlers for tags that don't get applied and removed by
-       # deep_merge_nodes() below.
        self.add_constructor('!retain', raise_retain_node_error)
        self.add_constructor('!omit', raise_omit_node_error)
@@ -134,152 +90,112 @@ class Include_constructor(ruamel.yaml.SafeConstructor):
        using the YAML '<<' merge key. Example syntax:

        ```
-       option:
-           sub_option: 1
+       retention:
+           keep_daily: 1

        <<: !include common.yaml
        ```

        These includes are deep merged into the current configuration file. For instance, in this
-       example, any "option" with sub-options in common.yaml will get merged into the corresponding
-       "option" with sub-options in the example configuration file.
+       example, any "retention" options in common.yaml will get merged into the "retention" section
+       in the example configuration file.
        '''
        representer = ruamel.yaml.representer.SafeRepresenter()

        for index, (key_node, value_node) in enumerate(node.value):
            if key_node.tag == u'tag:yaml.org,2002:merge' and value_node.tag == '!include':
-               # Replace the merge include with a sequence of included configuration nodes ready
-               # for merging. The construct_object() call here triggers include_configuration()
-               # among other constructors.
-               node.value[index] = (
-                   key_node,
-                   representer.represent_data(self.construct_object(value_node)),
-               )
+               included_value = representer.represent_data(self.construct_object(value_node))
+               node.value[index] = (key_node, included_value)

-       # This super().flatten_mapping() call actually performs "<<" merges.
        super(Include_constructor, self).flatten_mapping(node)

        node.value = deep_merge_nodes(node.value)


-def load_configuration(filename, config_paths=None):
+def load_configuration(filename):
    '''
    Load the given configuration file and return its contents as a data structure of nested dicts
-   and lists. Add the filename to the given configuration paths set, and also add any included
-   configuration filenames.
+   and lists. Also, replace any "{constant}" strings with the value of the "constant" key in the
+   "constants" section of the configuration file.

    Raise ruamel.yaml.error.YAMLError if something goes wrong parsing the YAML, or RecursionError
    if there are too many recursive includes.
    '''
-   if config_paths is None:
-       config_paths = set()
-
-   # Use an embedded derived class for the include constructor so as to capture the include
-   # directory and configuration paths values. (functools.partial doesn't work for this use case
-   # because yaml.Constructor has to be an actual class.)
-   class Include_constructor_with_extras(Include_constructor):
+   # Use an embedded derived class for the include constructor so as to capture the filename
+   # value. (functools.partial doesn't work for this use case because yaml.Constructor has to be
+   # an actual class.)
+   class Include_constructor_with_include_directory(Include_constructor):
        def __init__(self, preserve_quotes=None, loader=None):
-           super(Include_constructor_with_extras, self).__init__(
-               preserve_quotes,
-               loader,
-               include_directory=os.path.dirname(filename),
-               config_paths=config_paths,
-           )
+           super(Include_constructor_with_include_directory, self).__init__(
+               preserve_quotes, loader, include_directory=os.path.dirname(filename)
+           )

    yaml = ruamel.yaml.YAML(typ='safe')
-   yaml.Constructor = Include_constructor_with_extras
-   config_paths.add(filename)
+   yaml.Constructor = Include_constructor_with_include_directory

    with open(filename) as file:
-       return yaml.load(file.read())
+       file_contents = file.read()
+       config = yaml.load(file_contents)
+
+       if config and 'constants' in config:
+           for key, value in config['constants'].items():
+               value = json.dumps(value)
+               file_contents = file_contents.replace(f'{{{key}}}', value.strip('"'))
+
+           config = yaml.load(file_contents)
+           del config['constants']
+
+       return config


-def filter_omitted_nodes(nodes, values):
+def filter_omitted_nodes(nodes):
    '''
-   Given a nested borgmatic configuration data structure as a list of tuples in the form of:
-
-       [
-           (
-               ruamel.yaml.nodes.ScalarNode as a key,
-               ruamel.yaml.nodes.MappingNode or other Node as a value,
-           ),
-           ...
-       ]
-
-   ... and a combined list of all values for those nodes, return a filtered list of the values,
-   omitting any that have an "!omit" tag (or with a value matching such nodes).
-
-   But if only a single node is given, bail and return the given values unfiltered, as "!omit" only
-   applies when there are merge includes (and therefore multiple nodes).
+   Given a list of nodes, return a filtered list omitting any nodes with an "!omit" tag or with a
+   value matching such nodes.
    '''
-   if len(nodes) <= 1:
-       return values
-
-   omitted_values = tuple(node.value for node in values if node.tag == '!omit')
+   omitted_values = tuple(node.value for node in nodes if node.tag == '!omit')

-   return [node for node in values if node.value not in omitted_values]
+   return [node for node in nodes if node.value not in omitted_values]


-def merge_values(nodes):
-    '''
-    Given a nested borgmatic configuration data structure as a list of tuples in the form of:
-
-        [
-            (
-                ruamel.yaml.nodes.ScalarNode as a key,
-                ruamel.yaml.nodes.MappingNode or other Node as a value,
-            ),
-            ...
-        ]
-
-    ... merge its sequence or mapping node values and return the result. For sequence nodes, this
-    means appending together its contained lists. For mapping nodes, it means merging its contained
-    dicts.
-    '''
-    return functools.reduce(operator.add, (value.value for key, value in nodes))
+DELETED_NODE = object()


def deep_merge_nodes(nodes):
    '''
    Given a nested borgmatic configuration data structure as a list of tuples in the form of:

-       [
        (
            ruamel.yaml.nodes.ScalarNode as a key,
            ruamel.yaml.nodes.MappingNode or other Node as a value,
        ),
-       ...
-       ]

-   ... deep merge any node values corresponding to duplicate keys and return the result. The
-   purpose of merging like this is to support, for instance, merging one borgmatic configuration
-   file into another for reuse, such that a configuration option with sub-options does not
-   completely replace the corresponding option in a merged file.
-
-   If there are colliding keys with scalar values (e.g., integers or strings), the last of the
-   values wins.
+   ... deep merge any node values corresponding to duplicate keys and return the result. If
+   there are colliding keys with non-MappingNode values (e.g., integers or strings), the last
+   of the values wins.

    For instance, given node values of:

        [
            (
-               ScalarNode(tag='tag:yaml.org,2002:str', value='option'),
+               ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
                MappingNode(tag='tag:yaml.org,2002:map', value=[
                    (
-                       ScalarNode(tag='tag:yaml.org,2002:str', value='sub_option1'),
-                       ScalarNode(tag='tag:yaml.org,2002:int', value='1')
+                       ScalarNode(tag='tag:yaml.org,2002:str', value='keep_hourly'),
+                       ScalarNode(tag='tag:yaml.org,2002:int', value='24')
                    ),
                    (
-                       ScalarNode(tag='tag:yaml.org,2002:str', value='sub_option2'),
-                       ScalarNode(tag='tag:yaml.org,2002:int', value='2')
+                       ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
+                       ScalarNode(tag='tag:yaml.org,2002:int', value='7')
                    ),
                ]),
            ),
            (
-               ScalarNode(tag='tag:yaml.org,2002:str', value='option'),
+               ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
                MappingNode(tag='tag:yaml.org,2002:map', value=[
                    (
-                       ScalarNode(tag='tag:yaml.org,2002:str', value='sub_option2'),
+                       ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
                        ScalarNode(tag='tag:yaml.org,2002:int', value='5')
                    ),
                ]),
            ),
@@ -290,95 +206,88 @@ def deep_merge_nodes(nodes):
        [
            (
-               ScalarNode(tag='tag:yaml.org,2002:str', value='option'),
+               ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
                MappingNode(tag='tag:yaml.org,2002:map', value=[
                    (
-                       ScalarNode(tag='tag:yaml.org,2002:str', value='sub_option1'),
-                       ScalarNode(tag='tag:yaml.org,2002:int', value='1')
+                       ScalarNode(tag='tag:yaml.org,2002:str', value='keep_hourly'),
+                       ScalarNode(tag='tag:yaml.org,2002:int', value='24')
                    ),
                    (
-                       ScalarNode(tag='tag:yaml.org,2002:str', value='sub_option2'),
+                       ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
                        ScalarNode(tag='tag:yaml.org,2002:int', value='5')
                    ),
                ]),
            ),
        ]

-   This function supports multi-way merging, meaning that if the same option name exists three or
-   more times (at the same scope level), all of those instances get merged together.
-
    If a mapping or sequence node has a YAML "!retain" tag, then that node is not merged.

-   Raise ValueError if a merge is implied using multiple incompatible types.
+   The purpose of deep merging like this is to support, for instance, merging one borgmatic
+   configuration file into another for reuse, such that a configuration section ("retention",
+   etc.) does not completely replace the corresponding section in a merged file.
+
+   Raise ValueError if a merge is implied using two incompatible types.
    '''
-   merged_nodes = []
+   # Map from original node key/value to the replacement merged node. DELETED_NODE as a replacement
+   # node indications deletion.
+   replaced_nodes = {}

-   def get_node_key_name(node):
-       return node[0].value
-
-   # Bucket the nodes by their keys. Then merge all of the values sharing the same key.
-   for key_name, grouped_nodes in itertools.groupby(
-       sorted(nodes, key=get_node_key_name), get_node_key_name
-   ):
-       grouped_nodes = list(grouped_nodes)
-
-       # The merged node inherits its attributes from the final node in the group.
-       (last_node_key, last_node_value) = grouped_nodes[-1]
-       value_types = set(type(value) for (_, value) in grouped_nodes)
-
-       if len(value_types) > 1:
-           raise ValueError(
-               f'Incompatible types found when trying to merge "{key_name}:" values across configuration files: {", ".join(value_type.id for value_type in value_types)}'
-           )
-
-       # If we're dealing with MappingNodes, recurse and merge its values as well.
-       if ruamel.yaml.nodes.MappingNode in value_types:
-           # A "!retain" tag says to skip deep merging for this node. Replace the tag so
-           # downstream schema validation doesn't break on our application-specific tag.
-           if last_node_value.tag == '!retain' and len(grouped_nodes) > 1:
-               last_node_value.tag = 'tag:yaml.org,2002:map'
-               merged_nodes.append((last_node_key, last_node_value))
-           else:
-               merged_nodes.append(
-                   (
-                       last_node_key,
-                       ruamel.yaml.nodes.MappingNode(
-                           tag=last_node_value.tag,
-                           value=deep_merge_nodes(merge_values(grouped_nodes)),
-                           start_mark=last_node_value.start_mark,
-                           end_mark=last_node_value.end_mark,
-                           flow_style=last_node_value.flow_style,
-                           comment=last_node_value.comment,
-                           anchor=last_node_value.anchor,
-                       ),
-                   )
-               )
-
-           continue
-
-       # If we're dealing with SequenceNodes, merge by appending sequences together.
-       if ruamel.yaml.nodes.SequenceNode in value_types:
-           if last_node_value.tag == '!retain' and len(grouped_nodes) > 1:
-               last_node_value.tag = 'tag:yaml.org,2002:seq'
-               merged_nodes.append((last_node_key, last_node_value))
-           else:
-               merged_nodes.append(
-                   (
-                       last_node_key,
-                       ruamel.yaml.nodes.SequenceNode(
-                           tag=last_node_value.tag,
-                           value=filter_omitted_nodes(grouped_nodes, merge_values(grouped_nodes)),
-                           start_mark=last_node_value.start_mark,
-                           end_mark=last_node_value.end_mark,
-                           flow_style=last_node_value.flow_style,
-                           comment=last_node_value.comment,
-                           anchor=last_node_value.anchor,
-                       ),
-                   )
-               )
-
-           continue
-
-       merged_nodes.append((last_node_key, last_node_value))
-
-   return merged_nodes
+   # To find nodes that require merging, compare each node with each other node.
+   for a_key, a_value in nodes:
+       for b_key, b_value in nodes:
+           # If we've already considered one of the nodes for merging, skip it.
+           if (a_key, a_value) in replaced_nodes or (b_key, b_value) in replaced_nodes:
+               continue
+
+           # If the keys match and the values are different, we need to merge these two A and B nodes.
+           if a_key.tag == b_key.tag and a_key.value == b_key.value and a_value != b_value:
+               if not type(a_value) is type(b_value):
+                   raise ValueError(
+                       f'Incompatible types found when trying to merge "{a_key.value}:" values across configuration files: {type(a_value).id} and {type(b_value).id}'
+                   )
+
+               # Since we're merging into the B node, consider the A node a duplicate and remove it.
+               replaced_nodes[(a_key, a_value)] = DELETED_NODE
+
+               # If we're dealing with MappingNodes, recurse and merge its values as well.
+               if isinstance(b_value, ruamel.yaml.nodes.MappingNode):
+                   # A "!retain" tag says to skip deep merging for this node. Replace the tag so
+                   # downstream schema validation doesn't break on our application-specific tag.
+                   if b_value.tag == '!retain':
+                       b_value.tag = 'tag:yaml.org,2002:map'
+                   else:
+                       replaced_nodes[(b_key, b_value)] = (
+                           b_key,
+                           ruamel.yaml.nodes.MappingNode(
+                               tag=b_value.tag,
+                               value=deep_merge_nodes(a_value.value + b_value.value),
+                               start_mark=b_value.start_mark,
+                               end_mark=b_value.end_mark,
+                               flow_style=b_value.flow_style,
+                               comment=b_value.comment,
+                               anchor=b_value.anchor,
+                           ),
+                       )
+               # If we're dealing with SequenceNodes, merge by appending one sequence to the other.
+               elif isinstance(b_value, ruamel.yaml.nodes.SequenceNode):
+                   # A "!retain" tag says to skip deep merging for this node. Replace the tag so
+                   # downstream schema validation doesn't break on our application-specific tag.
+                   if b_value.tag == '!retain':
+                       b_value.tag = 'tag:yaml.org,2002:seq'
+                   else:
+                       replaced_nodes[(b_key, b_value)] = (
+                           b_key,
+                           ruamel.yaml.nodes.SequenceNode(
+                               tag=b_value.tag,
+                               value=filter_omitted_nodes(a_value.value + b_value.value),
+                               start_mark=b_value.start_mark,
+                               end_mark=b_value.end_mark,
+                               flow_style=b_value.flow_style,
+                               comment=b_value.comment,
+                               anchor=b_value.anchor,
+                           ),
+                       )

+   return [
+       replaced_nodes.get(node, node) for node in nodes if replaced_nodes.get(node) != DELETED_NODE
+   ]
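A rough, data-level illustration of the merge semantics both implementations aim for. The real code operates on ruamel.yaml nodes and additionally honors !retain and !omit tags; this sketch uses plain dicts and lists, with values borrowed from the docstring example:

```python
def deep_merge(base, override):
    # Recursively merge two configuration-like structures.
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # mappings merge recursively
        elif isinstance(value, list) and isinstance(merged.get(key), list):
            merged[key] = merged[key] + value  # sequences append together
        else:
            merged[key] = value  # scalars: the last value wins
    return merged

common = {'retention': {'keep_hourly': 24, 'keep_daily': 7}}
local = {'retention': {'keep_daily': 5}}
assert deep_merge(common, local) == {'retention': {'keep_hourly': 24, 'keep_daily': 5}}
```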
View File
@@ -2,74 +2,21 @@ import logging
import os


-def normalize_sections(config_filename, config):
-    '''
-    Given a configuration filename and a configuration dict of its loaded contents, airlift any
-    options out of sections ("location:", etc.) to the global scope and delete those sections.
-    Return any log message warnings produced based on the normalization performed.
-
-    Raise ValueError if the "prefix" option is set in both "location" and "consistency" sections.
-    '''
-    try:
-        location = config.get('location') or {}
-    except AttributeError:
-        raise ValueError('Configuration does not contain any options')
-
-    storage = config.get('storage') or {}
-    consistency = config.get('consistency') or {}
-    hooks = config.get('hooks') or {}
-
-    if (
-        location.get('prefix')
-        and consistency.get('prefix')
-        and location.get('prefix') != consistency.get('prefix')
-    ):
-        raise ValueError(
-            'The retention prefix and the consistency prefix cannot have different values (unless one is not set).'
-        )
-
-    if storage.get('umask') and hooks.get('umask') and storage.get('umask') != hooks.get('umask'):
-        raise ValueError(
-            'The storage umask and the hooks umask cannot have different values (unless one is not set).'
-        )
-
-    any_section_upgraded = False
-
-    # Move any options from deprecated sections into the global scope.
-    for section_name in ('location', 'storage', 'retention', 'consistency', 'output', 'hooks'):
-        section_config = config.get(section_name)
-
-        if section_config is not None:
-            any_section_upgraded = True
-            del config[section_name]
-            config.update(section_config)
-
-    if any_section_upgraded:
-        return [
-            logging.makeLogRecord(
-                dict(
-                    levelno=logging.WARNING,
-                    levelname='WARNING',
-                    msg=f'{config_filename}: Configuration sections (like location:, storage:, retention:, consistency:, and hooks:) are deprecated and support will be removed from a future release. To prepare for this, move your options out of sections to the global scope.',
-                )
-            )
-        ]
-
-    return []
-
-
def normalize(config_filename, config):
    '''
    Given a configuration filename and a configuration dict of its loaded contents, apply particular
    hard-coded rules to normalize the configuration to adhere to the current schema. Return any log
    message warnings produced based on the normalization performed.
-
-   Raise ValueError the configuration cannot be normalized.
    '''
-   logs = normalize_sections(config_filename, config)
+   logs = []
+   location = config.get('location') or {}
+   storage = config.get('storage') or {}
+   consistency = config.get('consistency') or {}
+   retention = config.get('retention') or {}
+   hooks = config.get('hooks') or {}

    # Upgrade exclude_if_present from a string to a list.
-   exclude_if_present = config.get('exclude_if_present')
+   exclude_if_present = location.get('exclude_if_present')
    if isinstance(exclude_if_present, str):
        logs.append(
            logging.makeLogRecord(
@@ -80,23 +27,23 @@ def normalize(config_filename, config):
                )
            )
        )
-       config['exclude_if_present'] = [exclude_if_present]
+       config['location']['exclude_if_present'] = [exclude_if_present]

    # Upgrade various monitoring hooks from a string to a dict.
-   healthchecks = config.get('healthchecks')
+   healthchecks = hooks.get('healthchecks')
    if isinstance(healthchecks, str):
        logs.append(
            logging.makeLogRecord(
                dict(
                    levelno=logging.WARNING,
                    levelname='WARNING',
-                   msg=f'{config_filename}: The healthchecks hook now expects a key/value pair with "ping_url" as a key. String values for this option are deprecated and support will be removed from a future release.',
+                   msg=f'{config_filename}: The healthchecks hook now expects a mapping value. String values for this option are deprecated and support will be removed from a future release.',
                )
            )
        )
-       config['healthchecks'] = {'ping_url': healthchecks}
+       config['hooks']['healthchecks'] = {'ping_url': healthchecks}

-   cronitor = config.get('cronitor')
+   cronitor = hooks.get('cronitor')
    if isinstance(cronitor, str):
        logs.append(
            logging.makeLogRecord(
@@ -107,9 +54,9 @@ def normalize(config_filename, config):
                )
            )
        )
-       config['cronitor'] = {'ping_url': cronitor}
+       config['hooks']['cronitor'] = {'ping_url': cronitor}

-   pagerduty = config.get('pagerduty')
+   pagerduty = hooks.get('pagerduty')
    if isinstance(pagerduty, str):
        logs.append(
            logging.makeLogRecord(
@@ -120,9 +67,9 @@ def normalize(config_filename, config):
                )
            )
        )
-       config['pagerduty'] = {'integration_key': pagerduty}
+       config['hooks']['pagerduty'] = {'integration_key': pagerduty}

-   cronhub = config.get('cronhub')
+   cronhub = hooks.get('cronhub')
    if isinstance(cronhub, str):
        logs.append(
            logging.makeLogRecord(
@@ -133,10 +80,10 @@ def normalize(config_filename, config):
                )
            )
        )
-       config['cronhub'] = {'ping_url': cronhub}
+       config['hooks']['cronhub'] = {'ping_url': cronhub}

    # Upgrade consistency checks from a list of strings to a list of dicts.
-   checks = config.get('checks')
+   checks = consistency.get('checks')
    if isinstance(checks, list) and len(checks) and isinstance(checks[0], str):
        logs.append(
            logging.makeLogRecord(
@@ -147,10 +94,10 @@ def normalize(config_filename, config):
                )
            )
        )
-       config['checks'] = [{'name': check_type} for check_type in checks]
+       config['consistency']['checks'] = [{'name': check_type} for check_type in checks]

    # Rename various configuration options.
-   numeric_owner = config.pop('numeric_owner', None)
+   numeric_owner = location.pop('numeric_owner', None)
    if numeric_owner is not None:
        logs.append(
            logging.makeLogRecord(
@@ -161,9 +108,9 @@ def normalize(config_filename, config):
                )
            )
        )
-       config['numeric_ids'] = numeric_owner
+       config['location']['numeric_ids'] = numeric_owner

-   bsd_flags = config.pop('bsd_flags', None)
+   bsd_flags = location.pop('bsd_flags', None)
    if bsd_flags is not None:
        logs.append(
            logging.makeLogRecord(
@@ -174,9 +121,9 @@ def normalize(config_filename, config):
                )
            )
        )
-       config['flags'] = bsd_flags
+       config['location']['flags'] = bsd_flags

-   remote_rate_limit = config.pop('remote_rate_limit', None)
+   remote_rate_limit = storage.pop('remote_rate_limit', None)
    if remote_rate_limit is not None:
        logs.append(
            logging.makeLogRecord(
@@ -187,12 +134,12 @@ def normalize(config_filename, config):
                )
            )
        )
-       config['upload_rate_limit'] = remote_rate_limit
+       config['storage']['upload_rate_limit'] = remote_rate_limit

    # Upgrade remote repositories to ssh:// syntax, required in Borg 2.
-   repositories = config.get('repositories')
+   repositories = location.get('repositories')
    if repositories:
-       if any(isinstance(repository, str) for repository in repositories):
+       if isinstance(repositories[0], str):
            logs.append(
                logging.makeLogRecord(
                    dict(
@@ -202,14 +149,11 @@ def normalize(config_filename, config):
                    )
                )
            )
-           config['repositories'] = [
-               {'path': repository} if isinstance(repository, str) else repository
-               for repository in repositories
+           config['location']['repositories'] = [
+               {'path': repository} for repository in repositories
            ]
-           repositories = config['repositories']
+           repositories = config['location']['repositories']

-       config['repositories'] = []
+       config['location']['repositories'] = []

        for repository_dict in repositories:
            repository_path = repository_dict['path']
            if '~' in repository_path:
@@ -227,14 +171,14 @@ def normalize(config_filename, config):
                updated_repository_path = os.path.abspath(
                    repository_path.partition('file://')[-1]
                )
-               config['repositories'].append(
+               config['location']['repositories'].append(
                    dict(
                        repository_dict,
                        path=updated_repository_path,
                    )
                )
            elif repository_path.startswith('ssh://'):
-               config['repositories'].append(repository_dict)
+               config['location']['repositories'].append(repository_dict)
            else:
                rewritten_repository_path = f"ssh://{repository_path.replace(':~', '/~').replace(':/', '/').replace(':', '/./')}"
                logs.append(
@@ -246,16 +190,16 @@ def normalize(config_filename, config):
                        )
                    )
                )
-               config['repositories'].append(
+               config['location']['repositories'].append(
                    dict(
                        repository_dict,
                        path=rewritten_repository_path,
                    )
                )
        else:
-           config['repositories'].append(repository_dict)
+           config['location']['repositories'].append(repository_dict)

-   if config.get('prefix'):
+   if consistency.get('prefix') or retention.get('prefix'):
        logs.append(
            logging.makeLogRecord(
                dict(

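The subtle part of the hunk above is the repository rewrite on the main side: string repositories become dicts, and colon-style remote paths gain an ssh:// scheme. Here's a minimal standalone sketch of that rewrite rule, using the same replace() chain shown in the diff; the function name is illustrative, not borgmatic's API:

```python
import os


def rewrite_repository_path(repository_path):
    # file:// paths become absolute local paths.
    if repository_path.startswith('file://'):
        return os.path.abspath(repository_path.partition('file://')[-1])

    # Paths that already use the ssh:// scheme pass through untouched.
    if repository_path.startswith('ssh://'):
        return repository_path

    # Colon-style remote paths get upgraded to ssh:// syntax for Borg 2.
    if ':' in repository_path:
        return f"ssh://{repository_path.replace(':~', '/~').replace(':/', '/').replace(':', '/./')}"

    return repository_path


print(rewrite_repository_path('user@host:backup.borg'))
# ssh://user@host/./backup.borg
print(rewrite_repository_path('user@host:/srv/backup.borg'))
# ssh://user@host/srv/backup.borg
```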

@@ -13,11 +13,6 @@ def set_values(config, keys, value):
     first_key = keys[0]
     if len(keys) == 1:
-        if isinstance(config, list):
-            raise ValueError(
-                'When overriding a list option, the value must use list syntax (e.g., "[foo, bar]" or "[{key: value}]" as appropriate)'
-            )
         config[first_key] = value
         return
@@ -27,70 +22,29 @@ def set_values(config, keys, value):
     set_values(config[first_key], keys[1:], value)


-def convert_value_type(value, option_type):
+def convert_value_type(value):
     '''
-    Given a string value and its schema type as a string, determine its logical type (string,
-    boolean, integer, etc.), and return it converted to that type.
-
-    If the option type is a string, leave the value as a string so that special characters in it
-    don't get interpreted as YAML during conversion.
+    Given a string value, determine its logical type (string, boolean, integer, etc.), and return it
+    converted to that type.

     Raise ruamel.yaml.error.YAMLError if there's a parse issue with the YAML.
     '''
-    if option_type == 'string':
-        return value
-
     return ruamel.yaml.YAML(typ='safe').load(io.StringIO(value))


-LEGACY_SECTION_NAMES = {'location', 'storage', 'retention', 'consistency', 'output', 'hooks'}
-
-
-def strip_section_names(parsed_override_key):
-    '''
-    Given a parsed override key as a tuple of option and suboption names, strip out any initial
-    legacy section names, since configuration file normalization also strips them out.
-    '''
-    if parsed_override_key[0] in LEGACY_SECTION_NAMES:
-        return parsed_override_key[1:]
-
-    return parsed_override_key
-
-
-def type_for_option(schema, option_keys):
-    '''
-    Given a configuration schema and a sequence of keys identifying an option, e.g.
-    ('extra_borg_options', 'init'), return the schema type of that option as a string.
-
-    Return None if the option or its type cannot be found in the schema.
-    '''
-    option_schema = schema
-
-    for key in option_keys:
-        try:
-            option_schema = option_schema['properties'][key]
-        except KeyError:
-            return None
-
-    try:
-        return option_schema['type']
-    except KeyError:
-        return None
-
-
-def parse_overrides(raw_overrides, schema):
+def parse_overrides(raw_overrides):
     '''
-    Given a sequence of configuration file override strings in the form of "option.suboption=value"
-    and a configuration schema dict, parse and return a sequence of tuples (keys, values), where
-    keys is a sequence of strings. For instance, given the following raw overrides:
+    Given a sequence of configuration file override strings in the form of "section.option=value",
+    parse and return a sequence of tuples (keys, values), where keys is a sequence of strings. For
+    instance, given the following raw overrides:

-        ['my_option.suboption=value1', 'other_option=value2']
+        ['section.my_option=value1', 'section.other_option=value2']

     ... return this:

     (
-        (('my_option', 'suboption'), 'value1'),
-        (('other_option'), 'value2'),
+        (('section', 'my_option'), 'value1'),
+        (('section', 'other_option'), 'value2'),
     )

     Raise ValueError if an override can't be parsed.
@@ -103,18 +57,15 @@ def parse_overrides(raw_overrides, schema):
     for raw_override in raw_overrides:
         try:
             raw_keys, value = raw_override.split('=', 1)
-            keys = tuple(raw_keys.split('.'))
-            option_type = type_for_option(schema, keys)
-
             parsed_overrides.append(
                 (
-                    keys,
-                    convert_value_type(value, option_type),
+                    tuple(raw_keys.split('.')),
+                    convert_value_type(value),
                 )
             )
         except ValueError:
             raise ValueError(
-                f"Invalid override '{raw_override}'. Make sure you use the form: OPTION=VALUE or OPTION.SUBOPTION=VALUE"
+                f"Invalid override '{raw_override}'. Make sure you use the form: SECTION.OPTION=VALUE"
             )
         except ruamel.yaml.error.YAMLError as error:
             raise ValueError(f"Invalid override '{raw_override}': {error.problem}")
@@ -122,18 +73,12 @@ def parse_overrides(raw_overrides, schema):
     return tuple(parsed_overrides)


-def apply_overrides(config, schema, raw_overrides):
+def apply_overrides(config, raw_overrides):
     '''
-    Given a configuration dict, a corresponding configuration schema dict, and a sequence of
-    configuration file override strings in the form of "option.suboption=value", parse each override
-    and set it into the configuration dict.
-
-    Set the overrides into the configuration both with and without deprecated section names (if
-    used), so that the overrides work regardless of whether the configuration is also using
-    deprecated section names.
+    Given a configuration dict and a sequence of configuration file override strings in the form of
+    "section.option=value", parse each override and set it into the configuration dict.
     '''
-    overrides = parse_overrides(raw_overrides, schema)
+    overrides = parse_overrides(raw_overrides)

     for keys, value in overrides:
         set_values(config, keys, value)
-        set_values(config, strip_section_names(keys), value)

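To make the shared parsing behavior concrete, here's a hedged sketch of what both sides do with a raw override string, assuming ruamel.yaml is installed; the helper name here is illustrative:

```python
import io

import ruamel.yaml


def convert(value):
    # Both sides parse override values as YAML, so '7' becomes an int and
    # '[repository, archives]' becomes a list of strings.
    return ruamel.yaml.YAML(typ='safe').load(io.StringIO(value))


raw = 'retention.keep_daily=7'
raw_keys, value = raw.split('=', 1)
print(tuple(raw_keys.split('.')), convert(value))
# (('retention', 'keep_daily'), 7)
```

The main side additionally consults the schema so that string-typed options skip YAML conversion, which keeps special characters like colons from being reinterpreted.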
(File diff suppressed because it is too large.)


@@ -4,7 +4,7 @@ import jsonschema
 import ruamel.yaml

 import borgmatic.config
-from borgmatic.config import constants, environment, load, normalize, override
+from borgmatic.config import environment, load, normalize, override


 def schema_filename():
@@ -71,15 +71,18 @@ def apply_logical_validation(config_filename, parsed_configuration):
     below), run through any additional logical validation checks. If there are any such validation
     problems, raise a Validation_error.
     '''
-    repositories = parsed_configuration.get('repositories')
-    check_repositories = parsed_configuration.get('check_repositories', [])
+    location_repositories = parsed_configuration.get('location', {}).get('repositories')
+    check_repositories = parsed_configuration.get('consistency', {}).get('check_repositories', [])
     for repository in check_repositories:
         if not any(
-            repositories_match(repository, config_repository) for config_repository in repositories
+            repositories_match(repository, config_repository)
+            for config_repository in location_repositories
         ):
             raise Validation_error(
                 config_filename,
-                (f'Unknown repository in "check_repositories": {repository}',),
+                (
+                    f'Unknown repository in the "consistency" section\'s "check_repositories": {repository}',
+                ),
             )


@@ -87,38 +90,29 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolve_env=True):
     '''
     Given the path to a config filename in YAML format, the path to a schema filename in a YAML
     rendition of JSON Schema format, a sequence of configuration file override strings in the form
-    of "option.suboption=value", return the parsed configuration as a data structure of nested dicts
+    of "section.option=value", return the parsed configuration as a data structure of nested dicts
     and lists corresponding to the schema. Example return value:

-        {
-            'source_directories': ['/home', '/etc'],
-            'repository': 'hostname.borg',
-            'keep_daily': 7,
-            'checks': ['repository', 'archives'],
-        }
+        {'location': {'source_directories': ['/home', '/etc'], 'repository': 'hostname.borg'},
+         'retention': {'keep_daily': 7}, 'consistency': {'checks': ['repository', 'archives']}}

-    Also return a set of loaded configuration paths and a sequence of logging.LogRecord instances
-    containing any warnings about the configuration.
+    Also return a sequence of logging.LogRecord instances containing any warnings about the
+    configuration.

     Raise FileNotFoundError if the file does not exist, PermissionError if the user does not
     have permissions to read the file, or Validation_error if the config does not match the schema.
     '''
-    config_paths = set()
-
     try:
-        config = load.load_configuration(config_filename, config_paths)
+        config = load.load_configuration(config_filename)
         schema = load.load_configuration(schema_filename)
     except (ruamel.yaml.error.YAMLError, RecursionError) as error:
         raise Validation_error(config_filename, (str(error),))

-    override.apply_overrides(config, schema, overrides)
-    constants.apply_constants(config, config.get('constants') if config else {})
+    override.apply_overrides(config, overrides)
+    logs = normalize.normalize(config_filename, config)

     if resolve_env:
         environment.resolve_env_variables(config)

-    logs = normalize.normalize(config_filename, config)
-
     try:
         validator = jsonschema.Draft7Validator(schema)
     except AttributeError:  # pragma: no cover
@@ -132,7 +126,7 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolve_env=True):
     apply_logical_validation(config_filename, config)

-    return config, config_paths, logs
+    return config, logs


 def normalize_repository_path(repository):
@@ -167,10 +161,11 @@ def repositories_match(first, second):
 def guard_configuration_contains_repository(repository, configurations):
     '''
     Given a repository path and a dict mapping from config filename to corresponding parsed config
-    dict, ensure that the repository is declared at least once in all of the configurations. If no
+    dict, ensure that the repository is declared exactly once in all of the configurations. If no
     repository is given, skip this check.

-    Raise ValueError if the repository is not found in any configurations.
+    Raise ValueError if the repository is not found in a configuration, or is declared multiple
+    times.
     '''
     if not repository:
         return
@@ -179,13 +174,15 @@ def guard_configuration_contains_repository(repository, configurations):
         tuple(
             config_repository
             for config in configurations.values()
-            for config_repository in config['repositories']
+            for config_repository in config['location']['repositories']
             if repositories_match(config_repository, repository)
         )
     )

     if count == 0:
-        raise ValueError(f'Repository "{repository}" not found in configuration files')
+        raise ValueError(f'Repository {repository} not found in configuration files')
+    if count > 1:
+        raise ValueError(f'Repository {repository} found in multiple configuration files')


 def guard_single_repository_selected(repository, configurations):
@@ -201,7 +198,7 @@ def guard_single_repository_selected(repository, configurations):
         tuple(
             config_repository
             for config in configurations.values()
-            for config_repository in config['repositories']
+            for config_repository in config['location']['repositories']
         )
     )

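The guard change above flips the rule from "declared exactly once" on the branch side to "declared at least once" on the main side. A toy illustration of the branch-side counting, with a simplified equality check standing in for the real repositories_match():

```python
configurations = {
    'config1.yaml': {'location': {'repositories': ['repo.borg']}},
    'config2.yaml': {'location': {'repositories': ['repo.borg', 'other.borg']}},
}

count = len(
    tuple(
        config_repository
        for config in configurations.values()
        for config_repository in config['location']['repositories']
        if config_repository == 'repo.borg'  # stand-in for repositories_match()
    )
)

# On the branch side, count == 2 raises ValueError ("found in multiple
# configuration files"); on the main side, any count >= 1 is accepted.
print(count)  # 2
```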

@@ -1,70 +1,29 @@
 import collections
-import enum
 import logging
 import os
 import select
 import subprocess
-import textwrap

 logger = logging.getLogger(__name__)


 ERROR_OUTPUT_MAX_LINE_COUNT = 25
-BORG_ERROR_EXIT_CODE_START = 2
-BORG_ERROR_EXIT_CODE_END = 99
+BORG_ERROR_EXIT_CODE = 2


-class Exit_status(enum.Enum):
-    STILL_RUNNING = 1
-    SUCCESS = 2
-    WARNING = 3
-    ERROR = 4
-
-
-def interpret_exit_code(command, exit_code, borg_local_path=None, borg_exit_codes=None):
+def exit_code_indicates_error(command, exit_code, borg_local_path=None):
     '''
-    Return an Exit_status value (e.g. SUCCESS, ERROR, or WARNING) based on interpreting the given
-    exit code. If a Borg local path is given and matches the process' command, then interpret the
-    exit code based on Borg's documented exit code semantics. And if Borg exit codes are given as a
-    sequence of exit code configuration dicts, then take those configured preferences into account.
+    Return True if the given exit code from running a command corresponds to an error. If a Borg
+    local path is given and matches the process' command, then treat exit code 1 as a warning
+    instead of an error.
     '''
     if exit_code is None:
-        return Exit_status.STILL_RUNNING
-
-    if exit_code == 0:
-        return Exit_status.SUCCESS
+        return False

     if borg_local_path and command[0] == borg_local_path:
-        # First try looking for the exit code in the borg_exit_codes configuration.
-        for entry in borg_exit_codes or ():
-            if entry.get('code') == exit_code:
-                treat_as = entry.get('treat_as')
-
-                if treat_as == 'error':
-                    logger.error(
-                        f'Treating exit code {exit_code} as an error, as per configuration'
-                    )
-                    return Exit_status.ERROR
-                elif treat_as == 'warning':
-                    logger.warning(
-                        f'Treating exit code {exit_code} as a warning, as per configuration'
-                    )
-                    return Exit_status.WARNING
-
-        # If the exit code doesn't have explicit configuration, then fall back to the default Borg
-        # behavior.
-        return (
-            Exit_status.ERROR
-            if (
-                exit_code < 0
-                or (
-                    exit_code >= BORG_ERROR_EXIT_CODE_START
-                    and exit_code <= BORG_ERROR_EXIT_CODE_END
-                )
-            )
-            else Exit_status.WARNING
-        )
+        return bool(exit_code < 0 or exit_code >= BORG_ERROR_EXIT_CODE)

-    return Exit_status.ERROR
+    return bool(exit_code != 0)


 def command_for_process(process):
@@ -101,7 +60,7 @@ def append_last_lines(last_lines, captured_output, line, output_log_level):
         logger.log(output_log_level, line)


-def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path, borg_exit_codes):
+def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
     '''
     Given a sequence of subprocess.Popen() instances for multiple processes, log the output for each
     process with the requested log level. Additionally, raise a CalledProcessError if a process
@@ -109,8 +68,7 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path, borg_exit_codes):
     path).

     If output log level is None, then instead of logging, capture output for each process and return
-    it as a dict from the process to its output. Use the given Borg local path and exit code
-    configuration to decide what's an error and what's a warning.
+    it as a dict from the process to its output.

     For simplicity, it's assumed that the output buffer for each process is its stdout. But if any
     stdouts are given to exclude, then for any matching processes, log from their stderr instead.
@@ -174,13 +132,10 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path, borg_exit_codes):
             if exit_code is None:
                 still_running = True
-                command = process.args.split(' ') if isinstance(process.args, str) else process.args
-                continue

             command = process.args.split(' ') if isinstance(process.args, str) else process.args
-            exit_status = interpret_exit_code(command, exit_code, borg_local_path, borg_exit_codes)

-            if exit_status in (Exit_status.ERROR, Exit_status.WARNING):
+            # If any process errors, then raise accordingly.
+            if exit_code_indicates_error(command, exit_code, borg_local_path):
                 # If an error occurs, include its output in the raised exception so that we don't
                 # inadvertently hide error output.
                 output_buffer = output_buffer_for_process(process, exclude_stdouts)
@@ -206,13 +161,9 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path, borg_exit_codes):
                     other_process.stdout.read(0)
                     other_process.kill()

-                if exit_status == Exit_status.ERROR:
-                    raise subprocess.CalledProcessError(
-                        exit_code, command_for_process(process), '\n'.join(last_lines)
-                    )
-
-                still_running = False
-                break
+                raise subprocess.CalledProcessError(
+                    exit_code, command_for_process(process), '\n'.join(last_lines)
+                )

     if captured_outputs:
         return {
@@ -220,47 +171,19 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path, borg_exit_codes):
         }


-SECRET_COMMAND_FLAG_NAMES = {'--password'}
-
-
-def mask_command_secrets(full_command):
-    '''
-    Given a command as a sequence, mask secret values for flags like "--password" in preparation for
-    logging.
-    '''
-    masked_command = []
-    previous_piece = None
-
-    for piece in full_command:
-        masked_command.append('***' if previous_piece in SECRET_COMMAND_FLAG_NAMES else piece)
-        previous_piece = piece
-
-    return tuple(masked_command)
-
-
-MAX_LOGGED_COMMAND_LENGTH = 1000
-
-
-def log_command(full_command, input_file=None, output_file=None, environment=None):
+def log_command(full_command, input_file=None, output_file=None):
     '''
     Log the given command (a sequence of command/argument strings), along with its input/output file
-    paths and extra environment variables (with omitted values in case they contain passwords).
+    paths.
     '''
     logger.debug(
-        textwrap.shorten(
-            ' '.join(
-                tuple(f'{key}=***' for key in (environment or {}).keys())
-                + mask_command_secrets(full_command)
-            ),
-            width=MAX_LOGGED_COMMAND_LENGTH,
-            placeholder=' ...',
-        )
+        ' '.join(full_command)
        + (f" < {getattr(input_file, 'name', '')}" if input_file else '')
        + (f" > {getattr(output_file, 'name', '')}" if output_file else '')
    )


-# A sentinel passed as an output file to execute_command() to indicate that the command's output
+# An sentinel passed as an output file to execute_command() to indicate that the command's output
 # should be allowed to flow through to stdout without being captured for logging. Useful for
 # commands with interactive prompts or those that mess directly with the console.
 DO_NOT_CAPTURE = object()
@@ -275,7 +198,6 @@ def execute_command(
     extra_environment=None,
     working_directory=None,
     borg_local_path=None,
-    borg_exit_codes=None,
     run_to_completion=True,
 ):
     '''
@@ -286,13 +208,12 @@ def execute_command(
     augment the current environment, and pass the result into the command. If a working directory is
     given, use that as the present working directory when running the command. If a Borg local path
     is given, and the command matches it (regardless of arguments), treat exit code 1 as a warning
-    instead of an error. But if Borg exit codes are given as a sequence of exit code configuration
-    dicts, then use that configuration to decide what's an error and what's a warning. If run to
-    completion is False, then return the process for the command without executing it to completion.
+    instead of an error. If run to completion is False, then return the process for the command
+    without executing it to completion.

     Raise subprocess.CalledProcessError if an error occurs while running the command.
     '''
-    log_command(full_command, input_file, output_file, extra_environment)
+    log_command(full_command, input_file, output_file)
     environment = {**os.environ, **extra_environment} if extra_environment else None
     do_not_capture = bool(output_file is DO_NOT_CAPTURE)
     command = ' '.join(full_command) if shell else full_command
@@ -310,11 +231,7 @@ def execute_command(
         return process

     log_outputs(
-        (process,),
-        (input_file, output_file),
-        output_log_level,
-        borg_local_path,
-        borg_exit_codes,
+        (process,), (input_file, output_file), output_log_level, borg_local_path=borg_local_path
     )


@@ -324,23 +241,17 @@ def execute_command_and_capture_output(
     shell=False,
     extra_environment=None,
     working_directory=None,
-    borg_local_path=None,
-    borg_exit_codes=None,
 ):
     '''
     Execute the given command (a sequence of command/argument strings), capturing and returning its
     output (stdout). If capture stderr is True, then capture and return stderr in addition to
     stdout. If shell is True, execute the command within a shell. If an extra environment dict is
     given, then use it to augment the current environment, and pass the result into the command. If
-    a working directory is given, use that as the present working directory when running the
-    command. If a Borg local path is given, and the command matches it (regardless of arguments),
-    treat exit code 1 as a warning instead of an error. But if Borg exit codes are given as a
-    sequence of exit code configuration dicts, then use that configuration to decide what's an error
-    and what's a warning.
+    a working directory is given, use that as the present working directory when running the command.

     Raise subprocess.CalledProcessError if an error occurs while running the command.
     '''
-    log_command(full_command, environment=extra_environment)
+    log_command(full_command)
     environment = {**os.environ, **extra_environment} if extra_environment else None
     command = ' '.join(full_command) if shell else full_command

@@ -353,10 +264,7 @@ def execute_command_and_capture_output(
             cwd=working_directory,
         )
     except subprocess.CalledProcessError as error:
-        if (
-            interpret_exit_code(command, error.returncode, borg_local_path, borg_exit_codes)
-            == Exit_status.ERROR
-        ):
+        if exit_code_indicates_error(command, error.returncode):
             raise
         output = error.output

@@ -373,7 +281,6 @@ def execute_command_with_processes(
     extra_environment=None,
     working_directory=None,
     borg_local_path=None,
-    borg_exit_codes=None,
 ):
     '''
     Execute the given command (a sequence of command/argument strings) and log its output at the
@@ -388,14 +295,12 @@ def execute_command_with_processes(
     use it to augment the current environment, and pass the result into the command. If a working
     directory is given, use that as the present working directory when running the command. If a
     Borg local path is given, then for any matching command or process (regardless of arguments),
-    treat exit code 1 as a warning instead of an error. But if Borg exit codes are given as a
-    sequence of exit code configuration dicts, then use that configuration to decide what's an error
-    and what's a warning.
+    treat exit code 1 as a warning instead of an error.

     Raise subprocess.CalledProcessError if an error occurs while running the command or in the
     upstream process.
     '''
-    log_command(full_command, input_file, output_file, extra_environment)
+    log_command(full_command, input_file, output_file)
     environment = {**os.environ, **extra_environment} if extra_environment else None
     do_not_capture = bool(output_file is DO_NOT_CAPTURE)
     command = ' '.join(full_command) if shell else full_command
@@ -405,9 +310,9 @@ def execute_command_with_processes(
             command,
             stdin=input_file,
             stdout=None if do_not_capture else (output_file or subprocess.PIPE),
-            stderr=(
-                None if do_not_capture else (subprocess.PIPE if output_file else subprocess.STDOUT)
-            ),
+            stderr=None
+            if do_not_capture
+            else (subprocess.PIPE if output_file else subprocess.STDOUT),
             shell=shell,
             env=environment,
             cwd=working_directory,
@@ -425,8 +330,7 @@ def execute_command_with_processes(
         tuple(processes) + (command_process,),
         (input_file, output_file),
         output_log_level,
-        borg_local_path,
-        borg_exit_codes,
+        borg_local_path=borg_local_path,
    )

     if output_log_level is None:

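The main-side interpret_exit_code() consults a borg_exit_codes option before falling back to Borg's documented semantics: anything in 2 through 99 or negative is an error, while other nonzero codes count as warnings. A hedged sketch of that lookup order, with purely illustrative configuration entries rather than a real configuration file:

```python
# Illustrative only: mirrors the lookup order in interpret_exit_code() above.
borg_exit_codes = [
    {'code': 64, 'treat_as': 'warning'},  # demote a normally fatal code
    {'code': 1, 'treat_as': 'error'},     # promote Borg warnings to errors
]


def classify(exit_code):
    # Explicit configuration wins first.
    for entry in borg_exit_codes:
        if entry.get('code') == exit_code:
            return entry.get('treat_as')

    # Otherwise fall back to Borg's default exit code semantics.
    if exit_code == 0:
        return 'success'
    return 'error' if exit_code < 0 or 2 <= exit_code <= 99 else 'warning'


print(classify(64), classify(1), classify(3))  # warning error error
```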

@@ -1,109 +0,0 @@
-import logging
-import operator
-
-import borgmatic.hooks.logs
-import borgmatic.hooks.monitor
-
-logger = logging.getLogger(__name__)
-
-DEFAULT_LOGS_SIZE_LIMIT_BYTES = 100000
-HANDLER_IDENTIFIER = 'apprise'
-
-
-def initialize_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
-    '''
-    Add a handler to the root logger that stores in memory the most recent logs emitted. That way,
-    we can send them all to an Apprise notification service upon a finish or failure state. But skip
-    this if the "send_logs" option is false.
-    '''
-    if hook_config.get('send_logs') is False:
-        return
-
-    logs_size_limit = max(
-        hook_config.get('logs_size_limit', DEFAULT_LOGS_SIZE_LIMIT_BYTES)
-        - len(borgmatic.hooks.logs.PAYLOAD_TRUNCATION_INDICATOR),
-        0,
-    )
-
-    borgmatic.hooks.logs.add_handler(
-        borgmatic.hooks.logs.Forgetful_buffering_handler(
-            HANDLER_IDENTIFIER, logs_size_limit, monitoring_log_level
-        )
-    )
-
-
-def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
-    '''
-    Ping the configured Apprise service URLs. Use the given configuration filename in any log
-    entries. If this is a dry run, then don't actually ping anything.
-    '''
-    try:
-        import apprise
-        from apprise import NotifyFormat, NotifyType
-    except ImportError:  # pragma: no cover
-        logger.warning('Unable to import Apprise in monitoring hook')
-        return
-
-    state_to_notify_type = {
-        'start': NotifyType.INFO,
-        'finish': NotifyType.SUCCESS,
-        'fail': NotifyType.FAILURE,
-        'log': NotifyType.INFO,
-    }
-
-    run_states = hook_config.get('states', ['fail'])
-
-    if state.name.lower() not in run_states:
-        return
-
-    state_config = hook_config.get(
-        state.name.lower(),
-        {
-            'title': f'A borgmatic {state.name} event happened',
-            'body': f'A borgmatic {state.name} event happened',
-        },
-    )
-
-    if not hook_config.get('services'):
-        logger.info(f'{config_filename}: No Apprise services to ping')
-        return
-
-    dry_run_string = ' (dry run; not actually pinging)' if dry_run else ''
-    labels_string = ', '.join(map(operator.itemgetter('label'), hook_config.get('services')))
-    logger.info(f'{config_filename}: Pinging Apprise services: {labels_string}{dry_run_string}')
-
-    apprise_object = apprise.Apprise()
-    apprise_object.add(list(map(operator.itemgetter('url'), hook_config.get('services'))))
-
-    if dry_run:
-        return
-
-    body = state_config.get('body')
-
-    if state in (
-        borgmatic.hooks.monitor.State.FINISH,
-        borgmatic.hooks.monitor.State.FAIL,
-        borgmatic.hooks.monitor.State.LOG,
-    ):
-        formatted_logs = borgmatic.hooks.logs.format_buffered_logs_for_payload(HANDLER_IDENTIFIER)
-        if formatted_logs:
-            body += f'\n\n{formatted_logs}'
-
-    result = apprise_object.notify(
-        title=state_config.get('title', ''),
-        body=body,
-        body_format=NotifyFormat.TEXT,
-        notify_type=state_to_notify_type[state.name.lower()],
-    )
-
-    if result is False:
-        logger.warning(f'{config_filename}: Error sending some Apprise notifications')
-
-
-def destroy_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
-    '''
-    Remove the monitor handler that was added to the root logger. This prevents the handler from
-    getting reused by other instances of this monitor.
-    '''
-    borgmatic.hooks.logs.remove_handler(HANDLER_IDENTIFIER)

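The deleted hook's state handling is mostly dict lookups. A small sketch of that filtering and mapping logic, with apprise's NotifyType values stubbed as strings so it runs without the apprise package:

```python
# Stubbed stand-ins for apprise.NotifyType values, so this runs without apprise.
state_to_notify_type = {
    'start': 'info',
    'finish': 'success',
    'fail': 'failure',
    'log': 'info',
}

hook_config = {'states': ['fail', 'finish']}  # illustrative configuration


def should_ping(state_name):
    # Only states listed in the hook's "states" option trigger a notification;
    # per ping_monitor() above, the default is ['fail'].
    return state_name.lower() in hook_config.get('states', ['fail'])


print(should_ping('FINISH'), state_to_notify_type['finish'])  # True success
```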

@@ -1,7 +1,6 @@
 import logging
 import os
 import re
-import shlex

 from borgmatic import execute

@@ -17,7 +16,7 @@ def interpolate_context(config_filename, hook_description, command, context):
     names/values, interpolate the values by "{name}" into the command and return the result.
     '''
     for name, value in context.items():
-        command = command.replace(f'{{{name}}}', shlex.quote(str(value)))
+        command = command.replace(f'{{{name}}}', str(value))

     for unsupported_variable in re.findall(r'{\w+}', command):
         logger.warning(
@@ -68,9 +67,9 @@ def execute_hook(commands, umask, config_filename, description, dry_run, **context):
         if not dry_run:
             execute.execute_command(
                 [command],
-                output_log_level=(
-                    logging.ERROR if description == 'on-error' else logging.WARNING
-                ),
+                output_log_level=logging.ERROR
+                if description == 'on-error'
+                else logging.WARNING,
                 shell=True,
             )
     finally:

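The one-line change in interpolate_context() is security-relevant: the main side shell-quotes interpolated values before they reach a shell hook command. A quick illustration with Python's standard shlex:

```python
import shlex

command = 'send-notification {repository}'
value = 'repo.borg; rm -rf /'  # hostile value, for illustration only

unquoted = command.replace('{repository}', str(value))
quoted = command.replace('{repository}', shlex.quote(str(value)))

print(unquoted)  # send-notification repo.borg; rm -rf /
print(quoted)    # send-notification 'repo.borg; rm -rf /'
```

Because hooks run with shell=True, the unquoted form would execute the injected `rm` as a second command, while the quoted form passes it through as a single inert argument.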

@@ -14,7 +14,7 @@ MONITOR_STATE_TO_CRONHUB = {


 def initialize_monitor(
-    ping_url, config, config_filename, monitoring_log_level, dry_run
+    ping_url, config_filename, monitoring_log_level, dry_run
 ):  # pragma: no cover
     '''
     No initialization is necessary for this monitor.
@@ -22,7 +22,7 @@ def initialize_monitor(
     pass


-def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
+def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
     '''
     Ping the configured Cronhub URL, modified with the monitor.State. Use the given configuration
     filename in any log entries. If this is a dry run, then don't actually ping anything.
@@ -55,7 +55,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):


 def destroy_monitor(
-    ping_url_or_uuid, config, config_filename, monitoring_log_level, dry_run
+    ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
 ):  # pragma: no cover
     '''
     No destruction is necessary for this monitor.


@@ -14,7 +14,7 @@ MONITOR_STATE_TO_CRONITOR = {


 def initialize_monitor(
-    ping_url, config, config_filename, monitoring_log_level, dry_run
+    ping_url, config_filename, monitoring_log_level, dry_run
 ):  # pragma: no cover
     '''
     No initialization is necessary for this monitor.
@@ -22,7 +22,7 @@ def initialize_monitor(
     pass


-def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
+def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
     '''
     Ping the configured Cronitor URL, modified with the monitor.State. Use the given configuration
     filename in any log entries. If this is a dry run, then don't actually ping anything.
@@ -50,7 +50,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):


 def destroy_monitor(
-    ping_url_or_uuid, config, config_filename, monitoring_log_level, dry_run
+    ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
 ):  # pragma: no cover
     '''
     No destruction is necessary for this monitor.


@@ -1,12 +1,9 @@
 import logging

 from borgmatic.hooks import (
-    apprise,
     cronhub,
     cronitor,
     healthchecks,
-    loki,
-    mariadb,
     mongodb,
     mysql,
     ntfy,
@@ -18,32 +15,30 @@ from borgmatic.hooks import (
 logger = logging.getLogger(__name__)

 HOOK_NAME_TO_MODULE = {
-    'apprise': apprise,
     'cronhub': cronhub,
     'cronitor': cronitor,
     'healthchecks': healthchecks,
-    'mariadb_databases': mariadb,
     'mongodb_databases': mongodb,
     'mysql_databases': mysql,
     'ntfy': ntfy,
     'pagerduty': pagerduty,
     'postgresql_databases': postgresql,
     'sqlite_databases': sqlite,
-    'loki': loki,
 }


-def call_hook(function_name, config, log_prefix, hook_name, *args, **kwargs):
+def call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs):
     '''
-    Given a configuration dict and a prefix to use in log entries, call the requested function of
-    the Python module corresponding to the given hook name. Supply that call with the configuration
-    for this hook (if any), the log prefix, and any given args and kwargs. Return any return value.
+    Given the hooks configuration dict and a prefix to use in log entries, call the requested
+    function of the Python module corresponding to the given hook name. Supply that call with the
+    configuration for this hook (if any), the log prefix, and any given args and kwargs. Return any
+    return value.

     Raise ValueError if the hook name is unknown.
     Raise AttributeError if the function name is not found in the module.
     Raise anything else that the called function raises.
     '''
-    hook_config = config.get(hook_name, {})
+    config = hooks.get(hook_name, {})

     try:
         module = HOOK_NAME_TO_MODULE[hook_name]
@@ -51,15 +46,15 @@ def call_hook(function_name, config, log_prefix, hook_name, *args, **kwargs):
         raise ValueError(f'Unknown hook name: {hook_name}')

     logger.debug(f'{log_prefix}: Calling {hook_name} hook function {function_name}')
-    return getattr(module, function_name)(hook_config, config, log_prefix, *args, **kwargs)
+    return getattr(module, function_name)(config, log_prefix, *args, **kwargs)


-def call_hooks(function_name, config, log_prefix, hook_names, *args, **kwargs):
+def call_hooks(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
     '''
-    Given a configuration dict and a prefix to use in log entries, call the requested function of
-    the Python module corresponding to each given hook name. Supply each call with the configuration
-    for that hook, the log prefix, and any given args and kwargs. Collect any return values into a
-    dict from hook name to return value.
+    Given the hooks configuration dict and a prefix to use in log entries, call the requested
+    function of the Python module corresponding to each given hook name. Supply each call with the
+    configuration for that hook, the log prefix, and any given args and kwargs. Collect any return
+    values into a dict from hook name to return value.

     If the hook name is not present in the hooks configuration, then don't call the function for it
     and omit it from the return values.
@@ -69,23 +64,23 @@ def call_hooks(function_name, config, log_prefix, hook_names, *args, **kwargs):
     Raise anything else that a called function raises. An error stops calls to subsequent functions.
     '''
     return {
-        hook_name: call_hook(function_name, config, log_prefix, hook_name, *args, **kwargs)
+        hook_name: call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs)
         for hook_name in hook_names
-        if config.get(hook_name)
+        if hooks.get(hook_name)
     }


-def call_hooks_even_if_unconfigured(function_name, config, log_prefix, hook_names, *args, **kwargs):
+def call_hooks_even_if_unconfigured(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
     '''
-    Given a configuration dict and a prefix to use in log entries, call the requested function of
-    the Python module corresponding to each given hook name. Supply each call with the configuration
-    for that hook, the log prefix, and any given args and kwargs. Collect any return values into a
-    dict from hook name to return value.
+    Given the hooks configuration dict and a prefix to use in log entries, call the requested
+    function of the Python module corresponding to each given hook name. Supply each call with the
+    configuration for that hook, the log prefix, and any given args and kwargs. Collect any return
+    values into a dict from hook name to return value.

     Raise AttributeError if the function name is not found in the module.
     Raise anything else that a called function raises. An error stops calls to subsequent functions.
     '''
     return {
-        hook_name: call_hook(function_name, config, log_prefix, hook_name, *args, **kwargs)
+        hook_name: call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs)
         for hook_name in hook_names
     }

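Both sides of call_hook() resolve a module from HOOK_NAME_TO_MODULE and invoke a function on it by name; they differ only in whether the whole configuration or just the hooks section gets passed along. A minimal sketch of that getattr-based dispatch pattern with a dummy module standing in for a real hook:

```python
import types

# A dummy hook module standing in for something like borgmatic.hooks.cronhub.
dummy = types.SimpleNamespace(
    ping_monitor=lambda hook_config, log_prefix: f'{log_prefix}: pinged with {hook_config}'
)

HOOK_NAME_TO_MODULE = {'dummy': dummy}
config = {'dummy': {'ping_url': 'https://example.test/ping'}}  # illustrative


def call_hook(function_name, config, log_prefix, hook_name):
    hook_config = config.get(hook_name, {})
    module = HOOK_NAME_TO_MODULE[hook_name]  # KeyError for unknown hook names
    return getattr(module, function_name)(hook_config, log_prefix)


print(call_hook('ping_monitor', config, 'test.yaml', 'dummy'))
```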

@@ -6,35 +6,34 @@ from borgmatic.borg.state import DEFAULT_BORGMATIC_SOURCE_DIRECTORY

 logger = logging.getLogger(__name__)

-DATA_SOURCE_HOOK_NAMES = (
-    'mariadb_databases',
+DATABASE_HOOK_NAMES = (
+    'postgresql_databases',
     'mysql_databases',
     'mongodb_databases',
-    'postgresql_databases',
     'sqlite_databases',
 )


-def make_data_source_dump_path(borgmatic_source_directory, data_source_hook_name):
+def make_database_dump_path(borgmatic_source_directory, database_hook_name):
     '''
-    Given a borgmatic source directory (or None) and a data source hook name, construct a data
-    source dump path.
+    Given a borgmatic source directory (or None) and a database hook name, construct a database dump
+    path.
     '''
     if not borgmatic_source_directory:
         borgmatic_source_directory = DEFAULT_BORGMATIC_SOURCE_DIRECTORY

-    return os.path.join(borgmatic_source_directory, data_source_hook_name)
+    return os.path.join(borgmatic_source_directory, database_hook_name)


-def make_data_source_dump_filename(dump_path, name, hostname=None):
+def make_database_dump_filename(dump_path, name, hostname=None):
     '''
-    Based on the given dump directory path, data source name, and hostname, return a filename to use
-    for the data source dump. The hostname defaults to localhost.
+    Based on the given dump directory path, database name, and hostname, return a filename to use
+    for the database dump. The hostname defaults to localhost.

-    Raise ValueError if the data source name is invalid.
+    Raise ValueError if the database name is invalid.
     '''
     if os.path.sep in name:
-        raise ValueError(f'Invalid data source name {name}')
+        raise ValueError(f'Invalid database name {name}')

     return os.path.join(os.path.expanduser(dump_path), hostname or 'localhost', name)

@@ -54,14 +53,14 @@ def create_named_pipe_for_dump(dump_path):
     os.mkfifo(dump_path, mode=0o600)


-def remove_data_source_dumps(dump_path, data_source_type_name, log_prefix, dry_run):
+def remove_database_dumps(dump_path, database_type_name, log_prefix, dry_run):
     '''
-    Remove all data source dumps in the given dump directory path (including the directory itself).
-    If this is a dry run, then don't actually remove anything.
+    Remove all database dumps in the given dump directory path (including the directory itself). If
+    this is a dry run, then don't actually remove anything.
     '''
     dry_run_label = ' (dry run; not actually removing anything)' if dry_run else ''

-    logger.debug(f'{log_prefix}: Removing {data_source_type_name} data source dumps{dry_run_label}')
+    logger.debug(f'{log_prefix}: Removing {database_type_name} database dumps{dry_run_label}')

     expanded_path = os.path.expanduser(dump_path)

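Either naming scheme builds dump paths the same way, so a worked example may help; the paths and names below are illustrative:

```python
import os


def make_dump_filename(dump_path, name, hostname=None):
    # Same layout as the function above: <dump path>/<hostname>/<dump name>.
    if os.path.sep in name:
        raise ValueError(f'Invalid name {name}')
    return os.path.join(os.path.expanduser(dump_path), hostname or 'localhost', name)


print(make_dump_filename('~/.borgmatic/postgresql_databases', 'users', 'db.example.test'))
# e.g. /home/user/.borgmatic/postgresql_databases/db.example.test/users
```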

@@ -1,9 +1,7 @@
 import logging
-import re

 import requests

-import borgmatic.hooks.logs
 from borgmatic.hooks import monitor

 logger = logging.getLogger(__name__)
@@ -15,11 +13,64 @@ MONITOR_STATE_TO_HEALTHCHECKS = {
     monitor.State.LOG: 'log',
 }

-DEFAULT_PING_BODY_LIMIT_BYTES = 1500
-HANDLER_IDENTIFIER = 'healthchecks'
+PAYLOAD_TRUNCATION_INDICATOR = '...\n'
+DEFAULT_PING_BODY_LIMIT_BYTES = 100000


-def initialize_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
+class Forgetful_buffering_handler(logging.Handler):
+    '''
+    A buffering log handler that stores log messages in memory, and throws away messages (oldest
+    first) once a particular capacity in bytes is reached. But if the given byte capacity is zero,
+    don't throw away any messages.
+    '''
+
+    def __init__(self, byte_capacity, log_level):
+        super().__init__()
+
+        self.byte_capacity = byte_capacity
+        self.byte_count = 0
+        self.buffer = []
+        self.forgot = False
+        self.setLevel(log_level)
+
+    def emit(self, record):
+        message = record.getMessage() + '\n'
+        self.byte_count += len(message)
+        self.buffer.append(message)
+
+        if not self.byte_capacity:
+            return
+
+        while self.byte_count > self.byte_capacity and self.buffer:
+            self.byte_count -= len(self.buffer[0])
+            self.buffer.pop(0)
+            self.forgot = True
+
+
+def format_buffered_logs_for_payload():
+    '''
+    Get the handler previously added to the root logger, and slurp buffered logs out of it to
+    send to Healthchecks.
+    '''
+    try:
+        buffering_handler = next(
+            handler
+            for handler in logging.getLogger().handlers
+            if isinstance(handler, Forgetful_buffering_handler)
+        )
+    except StopIteration:
+        # No handler means no payload.
+        return ''
+
+    payload = ''.join(message for message in buffering_handler.buffer)
+
+    if buffering_handler.forgot:
+        return PAYLOAD_TRUNCATION_INDICATOR + payload
+
+    return payload
+
+
+def initialize_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
     '''
     Add a handler to the root logger that stores in memory the most recent logs emitted. That way,
     we can send them all to Healthchecks upon a finish or failure state. But skip this if the
@@ -30,18 +81,16 @@ def initialize_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
     ping_body_limit = max(
         hook_config.get('ping_body_limit', DEFAULT_PING_BODY_LIMIT_BYTES)
-        - len(borgmatic.hooks.logs.PAYLOAD_TRUNCATION_INDICATOR),
+        - len(PAYLOAD_TRUNCATION_INDICATOR),
         0,
     )

-    borgmatic.hooks.logs.add_handler(
-        borgmatic.hooks.logs.Forgetful_buffering_handler(
-            HANDLER_IDENTIFIER, ping_body_limit, monitoring_log_level
-        )
+    logging.getLogger().addHandler(
+        Forgetful_buffering_handler(ping_body_limit, monitoring_log_level)
     )


-def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
+def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
     '''
     Ping the configured Healthchecks URL or UUID, modified with the monitor.State. Use the given
     configuration filename in any log entries, and log to Healthchecks with the given log level.
@@ -60,25 +109,15 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
         )
         return

-    ping_url_is_uuid = re.search(r'\w{8}-\w{4}-\w{4}-\w{4}-\w{12}$', ping_url)
-
     healthchecks_state = MONITOR_STATE_TO_HEALTHCHECKS.get(state)

     if healthchecks_state:
         ping_url = f'{ping_url}/{healthchecks_state}'

-    if hook_config.get('create_slug'):
-        if ping_url_is_uuid:
-            logger.warning(
-                f'{config_filename}: Healthchecks UUIDs do not support auto provisioning; ignoring'
-            )
-        else:
-            ping_url = f'{ping_url}?create=1'
-
     logger.info(f'{config_filename}: Pinging Healthchecks {state.name.lower()}{dry_run_label}')
     logger.debug(f'{config_filename}: Using Healthchecks ping URL {ping_url}')

     if state in (monitor.State.FINISH, monitor.State.FAIL, monitor.State.LOG):
-        payload = borgmatic.hooks.logs.format_buffered_logs_for_payload(HANDLER_IDENTIFIER)
+        payload = format_buffered_logs_for_payload()
     else:
         payload = ''

@@ -94,9 +133,13 @@ def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
         logger.warning(f'{config_filename}: Healthchecks error: {error}')


-def destroy_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
+def destroy_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
     '''
     Remove the monitor handler that was added to the root logger. This prevents the handler from
     getting reused by other instances of this monitor.
     '''
-    borgmatic.hooks.logs.remove_handler(HANDLER_IDENTIFIER)
+    logger = logging.getLogger()
+
+    for handler in tuple(logger.handlers):
+        if isinstance(handler, Forgetful_buffering_handler):
+            logger.removeHandler(handler)

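The buffering handler above is self-contained enough to demonstrate directly. This snippet shows the oldest-first eviction and the forgot flag that triggers the truncation indicator, with an artificially small capacity; it assumes the branch-side Forgetful_buffering_handler class (the variant without an identifier argument) is defined:

```python
import logging

# Assumes the branch-side Forgetful_buffering_handler above is in scope.
handler = Forgetful_buffering_handler(byte_capacity=40, log_level=logging.INFO)
logger = logging.getLogger('demo')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

for index in range(5):
    logger.info(f'message number {index}')

# Oldest messages were evicted to stay under 40 bytes, and forgot is True,
# so the payload sent to Healthchecks would start with the '...\n' indicator.
print(handler.forgot, handler.buffer)
```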

@@ -1,91 +0,0 @@
-import logging
-
-PAYLOAD_TRUNCATION_INDICATOR = '...\n'
-
-
-class Forgetful_buffering_handler(logging.Handler):
-    '''
-    A buffering log handler that stores log messages in memory, and throws away messages (oldest
-    first) once a particular capacity in bytes is reached. But if the given byte capacity is zero,
-    don't throw away any messages.
-
-    The given identifier is used to distinguish the instance of this handler used for one monitoring
-    hook from those instances used for other monitoring hooks.
-    '''
-
-    def __init__(self, identifier, byte_capacity, log_level):
-        super().__init__()
-
-        self.identifier = identifier
-        self.byte_capacity = byte_capacity
-        self.byte_count = 0
-        self.buffer = []
-        self.forgot = False
-        self.setLevel(log_level)
-
-    def emit(self, record):
-        message = record.getMessage() + '\n'
-        self.byte_count += len(message)
-        self.buffer.append(message)
-
-        if not self.byte_capacity:
-            return
-
-        while self.byte_count > self.byte_capacity and self.buffer:
-            self.byte_count -= len(self.buffer[0])
-            self.buffer.pop(0)
-            self.forgot = True
-
-
-def add_handler(handler):  # pragma: no cover
-    '''
-    Add the given handler to the global logger.
-    '''
-    logging.getLogger().addHandler(handler)
-
-
-def get_handler(identifier):
-    '''
-    Given the identifier for an existing Forgetful_buffering_handler instance, return the handler.
-
-    Raise ValueError if the handler isn't found.
-    '''
-    try:
-        return next(
-            handler
-            for handler in logging.getLogger().handlers
-            if isinstance(handler, Forgetful_buffering_handler) and handler.identifier == identifier
-        )
-    except StopIteration:
-        raise ValueError(f'A buffering handler for {identifier} was not found')
-
-
-def format_buffered_logs_for_payload(identifier):
-    '''
-    Get the handler previously added to the root logger, and slurp buffered logs out of it to
-    send to Healthchecks.
-    '''
-    try:
-        buffering_handler = get_handler(identifier)
-    except ValueError:
-        # No handler means no payload.
-        return ''
-
-    payload = ''.join(message for message in buffering_handler.buffer)
-
-    if buffering_handler.forgot:
-        return PAYLOAD_TRUNCATION_INDICATOR + payload
-
-    return payload
-
-
-def remove_handler(identifier):
-    '''
-    Given the identifier for an existing Forgetful_buffering_handler instance, remove it.
-    '''
-    logger = logging.getLogger()
-
-    try:
-        logger.removeHandler(get_handler(identifier))
-    except ValueError:
-        pass

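The main-side module keys each buffering handler by identifier so that several monitoring hooks can accumulate logs independently on the same root logger. A compressed sketch of that lookup, assuming the identifier-taking class and get_handler() from the module above are in scope:

```python
import logging

# Assumes Forgetful_buffering_handler and get_handler from the module above.
root = logging.getLogger()
root.addHandler(Forgetful_buffering_handler('healthchecks', 1000, logging.INFO))
root.addHandler(Forgetful_buffering_handler('apprise', 1000, logging.INFO))

# get_handler() scans the root logger's handlers for a matching identifier,
# so each hook reads back only its own buffer.
assert get_handler('apprise').identifier == 'apprise'
```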

@@ -1,154 +0,0 @@
-import json
-import logging
-import os
-import platform
-import time
-
-import requests
-
-from borgmatic.hooks import monitor
-
-logger = logging.getLogger(__name__)
-
-MONITOR_STATE_TO_LOKI = {
-    monitor.State.START: 'Started',
-    monitor.State.FINISH: 'Finished',
-    monitor.State.FAIL: 'Failed',
-}
-
-# Threshold at which logs get flushed to loki
-MAX_BUFFER_LINES = 100
-
-
-class Loki_log_buffer:
-    '''
-    A log buffer that allows outputting the logs as loki requests in json. Allows adding labels to
-    the log stream and takes care of communication with loki.
-    '''
-
-    def __init__(self, url, dry_run):
-        self.url = url
-        self.dry_run = dry_run
-        self.root = {'streams': [{'stream': {}, 'values': []}]}
-
-    def add_value(self, value):
-        '''
-        Add a log entry to the stream.
-        '''
-        timestamp = str(time.time_ns())
-        self.root['streams'][0]['values'].append((timestamp, value))
-
-    def add_label(self, label, value):
-        '''
-        Add a label to the logging stream.
-        '''
-        self.root['streams'][0]['stream'][label] = value
-
-    def to_request(self):
-        return json.dumps(self.root)
-
-    def __len__(self):
-        '''
-        Get the number of lines currently in the buffer.
-        '''
-        return len(self.root['streams'][0]['values'])
-
-    def flush(self):
-        if self.dry_run:
-            # Just empty the buffer and skip.
-            self.root['streams'][0]['values'] = []
-            logger.info('Skipped uploading logs to loki due to dry run')
-            return
-
-        if len(self) == 0:
-            # Skip, as there are no logs to send yet.
-            return
-
-        request_body = self.to_request()
-        self.root['streams'][0]['values'] = []
-        request_header = {'Content-Type': 'application/json'}
-
-        try:
-            result = requests.post(self.url, headers=request_header, data=request_body, timeout=5)
-            result.raise_for_status()
-        except requests.RequestException:
-            logger.warning('Failed to upload logs to loki')
-
-
-class Loki_log_handler(logging.Handler):
-    '''
-    A log handler that sends logs to loki.
-    '''
-
-    def __init__(self, url, dry_run):
-        super().__init__()
-        self.buffer = Loki_log_buffer(url, dry_run)
-
-    def emit(self, record):
-        '''
-        Add a log record from the logging module to the stream.
-        '''
-        self.raw(record.getMessage())
-
-    def add_label(self, key, value):
-        '''
-        Add a label to the logging stream.
-        '''
-        self.buffer.add_label(key, value)
-
-    def raw(self, msg):
-        '''
-        Add an arbitrary string as a log entry to the stream.
-        '''
-        self.buffer.add_value(msg)
-
-        if len(self.buffer) > MAX_BUFFER_LINES:
-            self.buffer.flush()
-
-    def flush(self):
-        '''
-        Send the logs to loki and empty the buffer.
-        '''
-        self.buffer.flush()
-
-
-def initialize_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
-    '''
-    Add a handler to the root logger to regularly send the logs to loki.
-    '''
-    url = hook_config.get('url')
-    loki = Loki_log_handler(url, dry_run)
-
-    for key, value in hook_config.get('labels').items():
-        if value == '__hostname':
-            loki.add_label(key, platform.node())
-        elif value == '__config':
-            loki.add_label(key, os.path.basename(config_filename))
-        elif value == '__config_path':
-            loki.add_label(key, config_filename)
-        else:
-            loki.add_label(key, value)
-
-    logging.getLogger().addHandler(loki)
-
-
-def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
-    '''
-    Add an entry to the loki logger with the current state.
-    '''
-    for handler in tuple(logging.getLogger().handlers):
-        if isinstance(handler, Loki_log_handler):
-            if state in MONITOR_STATE_TO_LOKI.keys():
-                handler.raw(f'{config_filename}: {MONITOR_STATE_TO_LOKI[state]} backup')
-
-
-def destroy_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
-    '''
-    Remove the monitor handler that was added to the root logger.
-    '''
-    logger = logging.getLogger()
-
-    for handler in tuple(logger.handlers):
-        if isinstance(handler, Loki_log_handler):
-            handler.flush()
-            logger.removeHandler(handler)

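The buffer's to_request() output is simply the JSON shape that Loki's push API expects: one stream object carrying labels plus (timestamp, line) value pairs. A tiny demonstration of that structure with an illustrative label:

```python
import json
import time

root = {'streams': [{'stream': {'hostname': 'backup-host'}, 'values': []}]}
root['streams'][0]['values'].append((str(time.time_ns()), 'config.yaml: Started backup'))

# Same structure that Loki_log_buffer.to_request() above serializes and that
# flush() posts to the configured url.
print(json.dumps(root))
```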

@ -1,257 +0,0 @@
import copy
import logging
import os
import shlex

from borgmatic.execute import (
    execute_command,
    execute_command_and_capture_output,
    execute_command_with_processes,
)
from borgmatic.hooks import dump

logger = logging.getLogger(__name__)


def make_dump_path(config):  # pragma: no cover
    '''
    Make the dump path from the given configuration dict and the name of this hook.
    '''
    return dump.make_data_source_dump_path(
        config.get('borgmatic_source_directory'), 'mariadb_databases'
    )


SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')


def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
    '''
    Given a requested database config, return the corresponding sequence of database names to dump.
    In the case of "all", query for the names of databases on the configured host and return them,
    excluding any system databases that will cause problems during restore.
    '''
    if database['name'] != 'all':
        return (database['name'],)
    if dry_run:
        return ()

    mariadb_show_command = tuple(
        shlex.quote(part) for part in shlex.split(database.get('mariadb_command') or 'mariadb')
    )
    show_command = (
        mariadb_show_command
        + (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
        + (('--host', database['hostname']) if 'hostname' in database else ())
        + (('--port', str(database['port'])) if 'port' in database else ())
        + (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
        + (('--user', database['username']) if 'username' in database else ())
        + ('--skip-column-names', '--batch')
        + ('--execute', 'show schemas')
    )
    logger.debug(f'{log_prefix}: Querying for "all" MariaDB databases to dump')
    show_output = execute_command_and_capture_output(
        show_command, extra_environment=extra_environment
    )

    return tuple(
        show_name
        for show_name in show_output.strip().splitlines()
        if show_name not in SYSTEM_DATABASE_NAMES
    )
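
To make the "all" handling concrete: for an invented config entry of {'name': 'all', 'hostname': 'db.example.com'}, the function above builds and runs roughly the following command, then filters the SYSTEM_DATABASE_NAMES entries out of its output:

show_command = (
    'mariadb',
    '--host', 'db.example.com',
    '--protocol', 'tcp',
    '--skip-column-names', '--batch',
    '--execute', 'show schemas',
)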
def execute_dump_command(
    database, log_prefix, dump_path, database_names, extra_environment, dry_run, dry_run_label
):
    '''
    Kick off a dump for the given MariaDB database (provided as a configuration dict) to a named
    pipe constructed from the given dump path and database name. Use the given log prefix in any
    log entries.

    Return a subprocess.Popen instance for the dump process ready to spew to a named pipe. But if
    this is a dry run, then don't actually dump anything and return None.
    '''
    database_name = database['name']
    dump_filename = dump.make_data_source_dump_filename(
        dump_path, database['name'], database.get('hostname')
    )

    if os.path.exists(dump_filename):
        logger.warning(
            f'{log_prefix}: Skipping duplicate dump of MariaDB database "{database_name}" to {dump_filename}'
        )
        return None

    mariadb_dump_command = tuple(
        shlex.quote(part)
        for part in shlex.split(database.get('mariadb_dump_command') or 'mariadb-dump')
    )
    dump_command = (
        mariadb_dump_command
        + (tuple(database['options'].split(' ')) if 'options' in database else ())
        + (('--add-drop-database',) if database.get('add_drop_database', True) else ())
        + (('--host', database['hostname']) if 'hostname' in database else ())
        + (('--port', str(database['port'])) if 'port' in database else ())
        + (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
        + (('--user', database['username']) if 'username' in database else ())
        + ('--databases',)
        + database_names
        + ('--result-file', dump_filename)
    )

    logger.debug(
        f'{log_prefix}: Dumping MariaDB database "{database_name}" to {dump_filename}{dry_run_label}'
    )
    if dry_run:
        return None

    dump.create_named_pipe_for_dump(dump_filename)

    return execute_command(
        dump_command,
        extra_environment=extra_environment,
        run_to_completion=False,
    )


def use_streaming(databases, config, log_prefix):
    '''
    Given a sequence of MariaDB database configuration dicts, a configuration dict (ignored), and a
    log prefix (ignored), return whether streaming will be used during dumps.
    '''
    return any(databases)


def dump_data_sources(databases, config, log_prefix, dry_run):
    '''
    Dump the given MariaDB databases to a named pipe. The databases are supplied as a sequence of
    dicts, one dict describing each database as per the configuration schema. Use the given
    configuration dict to construct the destination path and the given log prefix in any log
    entries.

    Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
    pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
    '''
    dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
    processes = []

    logger.info(f'{log_prefix}: Dumping MariaDB databases{dry_run_label}')

    for database in databases:
        dump_path = make_dump_path(config)
        extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
        dump_database_names = database_names_to_dump(
            database, extra_environment, log_prefix, dry_run
        )

        if not dump_database_names:
            if dry_run:
                continue

            raise ValueError('Cannot find any MariaDB databases to dump.')

        if database['name'] == 'all' and database.get('format'):
            for dump_name in dump_database_names:
                renamed_database = copy.copy(database)
                renamed_database['name'] = dump_name
                processes.append(
                    execute_dump_command(
                        renamed_database,
                        log_prefix,
                        dump_path,
                        (dump_name,),
                        extra_environment,
                        dry_run,
                        dry_run_label,
                    )
                )
        else:
            processes.append(
                execute_dump_command(
                    database,
                    log_prefix,
                    dump_path,
                    dump_database_names,
                    extra_environment,
                    dry_run,
                    dry_run_label,
                )
            )

    return [process for process in processes if process]


def remove_data_source_dumps(databases, config, log_prefix, dry_run):  # pragma: no cover
    '''
    Remove all database dump files for this hook regardless of the given databases. Use the given
    configuration dict to construct the destination path and the log prefix in any log entries. If
    this is a dry run, then don't actually remove anything.
    '''
    dump.remove_data_source_dumps(make_dump_path(config), 'MariaDB', log_prefix, dry_run)


def make_data_source_dump_pattern(databases, config, log_prefix, name=None):  # pragma: no cover
    '''
    Given a sequence of configuration dicts, a configuration dict, a prefix to log with, and a
    database name to match, return the corresponding glob patterns to match the database dump in an
    archive.
    '''
    return dump.make_data_source_dump_filename(make_dump_path(config), name, hostname='*')


def restore_data_source_dump(
    hook_config, config, log_prefix, data_source, dry_run, extract_process, connection_params
):
    '''
    Restore a database from the given extract stream. The database is supplied as a data source
    configuration dict, but the given hook configuration is ignored. The given configuration dict is
    used to construct the destination path, and the given log prefix is used for any log entries. If
    this is a dry run, then don't actually restore anything. Trigger the given active extract
    process (an instance of subprocess.Popen) to produce output to consume.
    '''
    dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
    hostname = connection_params['hostname'] or data_source.get(
        'restore_hostname', data_source.get('hostname')
    )
    port = str(
        connection_params['port'] or data_source.get('restore_port', data_source.get('port', ''))
    )
    username = connection_params['username'] or data_source.get(
        'restore_username', data_source.get('username')
    )
    password = connection_params['password'] or data_source.get(
        'restore_password', data_source.get('password')
    )

    mariadb_restore_command = tuple(
        shlex.quote(part) for part in shlex.split(data_source.get('mariadb_command') or 'mariadb')
    )
    restore_command = (
        mariadb_restore_command
        + ('--batch',)
        + (
            tuple(data_source['restore_options'].split(' '))
            if 'restore_options' in data_source
            else ()
        )
        + (('--host', hostname) if hostname else ())
        + (('--port', str(port)) if port else ())
        + (('--protocol', 'tcp') if hostname or port else ())
        + (('--user', username) if username else ())
    )
    extra_environment = {'MYSQL_PWD': password} if password else None

    logger.debug(f"{log_prefix}: Restoring MariaDB database {data_source['name']}{dry_run_label}")
    if dry_run:
        return

    # Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
    # if the restore paths don't exist in the archive.
    execute_command_with_processes(
        restore_command,
        [extract_process],
        output_log_level=logging.DEBUG,
        input_file=extract_process.stdout,
        extra_environment=extra_environment,
    )
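
A pattern worth noting in restore_data_source_dump() above: each connection parameter resolves in three tiers, with CLI-supplied connection_params winning over restore_*-prefixed config options, which in turn win over the plain dump-time options. With invented values:

data_source = {'name': 'test', 'hostname': 'db1.example.com', 'restore_hostname': 'db2.example.com'}
connection_params = {'hostname': None, 'port': None, 'username': None, 'password': None}

hostname = connection_params['hostname'] or data_source.get(
    'restore_hostname', data_source.get('hostname')
)
assert hostname == 'db2.example.com'  # a non-None CLI hostname would win instead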

View File

@ -1,5 +1,4 @@
 import logging
-import shlex

 from borgmatic.execute import execute_command, execute_command_with_processes
 from borgmatic.hooks import dump
@ -7,28 +6,21 @@ from borgmatic.hooks import dump
 logger = logging.getLogger(__name__)

-def make_dump_path(config):  # pragma: no cover
+def make_dump_path(location_config):  # pragma: no cover
     '''
-    Make the dump path from the given configuration dict and the name of this hook.
+    Make the dump path from the given location configuration and the name of this hook.
     '''
-    return dump.make_data_source_dump_path(
-        config.get('borgmatic_source_directory'), 'mongodb_databases'
+    return dump.make_database_dump_path(
+        location_config.get('borgmatic_source_directory'), 'mongodb_databases'
     )

-def use_streaming(databases, config, log_prefix):
-    '''
-    Given a sequence of MongoDB database configuration dicts, a configuration dict (ignored), and a
-    log prefix (ignored), return whether streaming will be used during dumps.
-    '''
-    return any(database.get('format') != 'directory' for database in databases)
-
-def dump_data_sources(databases, config, log_prefix, dry_run):
+def dump_databases(databases, log_prefix, location_config, dry_run):
     '''
     Dump the given MongoDB databases to a named pipe. The databases are supplied as a sequence of
-    dicts, one dict describing each database as per the configuration schema. Use the configuration
-    dict to construct the destination path and the given log prefix in any log entries.
+    dicts, one dict describing each database as per the configuration schema. Use the given log
+    prefix in any log entries. Use the given location configuration dict to construct the
+    destination path.

     Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
     pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
@ -40,8 +32,8 @@ def dump_data_sources(databases, config, log_prefix, dry_run):
     processes = []
     for database in databases:
         name = database['name']
-        dump_filename = dump.make_data_source_dump_filename(
-            make_dump_path(config), name, database.get('hostname')
+        dump_filename = dump.make_database_dump_filename(
+            make_dump_path(location_config), name, database.get('hostname')
         )
         dump_format = database.get('format', 'archive')
@ -68,69 +60,75 @@ def build_dump_command(database, dump_filename, dump_format):
     Return the mongodump command from a single database configuration.
     '''
     all_databases = database['name'] == 'all'

-    return (
-        ('mongodump',)
-        + (('--out', shlex.quote(dump_filename)) if dump_format == 'directory' else ())
-        + (('--host', shlex.quote(database['hostname'])) if 'hostname' in database else ())
-        + (('--port', shlex.quote(str(database['port']))) if 'port' in database else ())
-        + (('--username', shlex.quote(database['username'])) if 'username' in database else ())
-        + (('--password', shlex.quote(database['password'])) if 'password' in database else ())
-        + (
-            ('--authenticationDatabase', shlex.quote(database['authentication_database']))
-            if 'authentication_database' in database
-            else ()
-        )
-        + (('--db', shlex.quote(database['name'])) if not all_databases else ())
-        + (
-            tuple(shlex.quote(option) for option in database['options'].split(' '))
-            if 'options' in database
-            else ()
-        )
-        + (('--archive', '>', shlex.quote(dump_filename)) if dump_format != 'directory' else ())
-    )
+    command = ['mongodump']
+    if dump_format == 'directory':
+        command.extend(('--out', dump_filename))
+    if 'hostname' in database:
+        command.extend(('--host', database['hostname']))
+    if 'port' in database:
+        command.extend(('--port', str(database['port'])))
+    if 'username' in database:
+        command.extend(('--username', database['username']))
+    if 'password' in database:
+        command.extend(('--password', database['password']))
+    if 'authentication_database' in database:
+        command.extend(('--authenticationDatabase', database['authentication_database']))
+    if not all_databases:
+        command.extend(('--db', database['name']))
+    if 'options' in database:
+        command.extend(database['options'].split(' '))
+    if dump_format != 'directory':
+        command.extend(('--archive', '>', dump_filename))
+
+    return command

-def remove_data_source_dumps(databases, config, log_prefix, dry_run):  # pragma: no cover
+def remove_database_dumps(databases, log_prefix, location_config, dry_run):  # pragma: no cover
     '''
     Remove all database dump files for this hook regardless of the given databases. Use the log
-    prefix in any log entries. Use the given configuration dict to construct the destination path.
-    If this is a dry run, then don't actually remove anything.
+    prefix in any log entries. Use the given location configuration dict to construct the
+    destination path. If this is a dry run, then don't actually remove anything.
     '''
-    dump.remove_data_source_dumps(make_dump_path(config), 'MongoDB', log_prefix, dry_run)
+    dump.remove_database_dumps(make_dump_path(location_config), 'MongoDB', log_prefix, dry_run)

-def make_data_source_dump_pattern(databases, config, log_prefix, name=None):  # pragma: no cover
+def make_database_dump_pattern(
+    databases, log_prefix, location_config, name=None
+):  # pragma: no cover
     '''
-    Given a sequence of database configuration dicts, a configuration dict, a prefix to log with,
-    and a database name to match, return the corresponding glob patterns to match the database dump
-    in an archive.
+    Given a sequence of configuration dicts, a prefix to log with, a location configuration dict,
+    and a database name to match, return the corresponding glob patterns to match the database dump
+    in an archive.
     '''
-    return dump.make_data_source_dump_filename(make_dump_path(config), name, hostname='*')
+    return dump.make_database_dump_filename(make_dump_path(location_config), name, hostname='*')

-def restore_data_source_dump(
-    hook_config, config, log_prefix, data_source, dry_run, extract_process, connection_params
+def restore_database_dump(
+    database_config, log_prefix, location_config, dry_run, extract_process, connection_params
 ):
     '''
-    Restore a database from the given extract stream. The database is supplied as a data source
-    configuration dict, but the given hook configuration is ignored. The given configuration dict is
-    used to construct the destination path, and the given log prefix is used for any log entries. If
-    this is a dry run, then don't actually restore anything. Trigger the given active extract
-    process (an instance of subprocess.Popen) to produce output to consume.
+    Restore the given MongoDB database from an extract stream. The database is supplied as a
+    one-element sequence containing a dict describing the database, as per the configuration
+    schema. Use the given log prefix in any log entries. If this is a dry run, then don't actually
+    restore anything. Trigger the given active extract process (an instance of subprocess.Popen)
+    to produce output to consume.

     If the extract process is None, then restore the dump from the filesystem rather than from an
     extract stream.
     '''
     dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''

-    dump_filename = dump.make_data_source_dump_filename(
-        make_dump_path(config), data_source['name'], data_source.get('hostname')
+    if len(database_config) != 1:
+        raise ValueError('The database configuration value is invalid')
+
+    database = database_config[0]
+    dump_filename = dump.make_database_dump_filename(
+        make_dump_path(location_config), database['name'], database.get('hostname')
     )
     restore_command = build_restore_command(
-        extract_process, data_source, dump_filename, connection_params
+        extract_process, database, dump_filename, connection_params
     )

-    logger.debug(f"{log_prefix}: Restoring MongoDB database {data_source['name']}{dry_run_label}")
+    logger.debug(f"{log_prefix}: Restoring MongoDB database {database['name']}{dry_run_label}")
     if dry_run:
         return
@ -165,7 +163,7 @@ def build_restore_command(extract_process, database, dump_filename, connection_params):
     else:
         command.extend(('--dir', dump_filename))
     if database['name'] != 'all':
-        command.extend(('--drop',))
+        command.extend(('--drop', '--db', database['name']))
     if hostname:
         command.extend(('--host', hostname))
     if port:
@ -178,8 +176,7 @@ def build_restore_command(extract_process, database, dump_filename, connection_params):
         command.extend(('--authenticationDatabase', database['authentication_database']))
     if 'restore_options' in database:
         command.extend(database['restore_options'].split(' '))
-    if database.get('schemas'):
+    if database['schemas']:
         for schema in database['schemas']:
             command.extend(('--nsInclude', schema))

     return command
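
An aside on the quoting that the removed (main-branch) tuple-building version applies: because the dump command can end with '--archive', '>', dump_filename and therefore runs through a shell, every user-controlled value is wrapped in shlex.quote() so it stays a single shell word. A minimal sketch with an invented, deliberately hostile path:

import shlex

dump_filename = '/tmp/dump; rm -rf ~'  # invented hostile value
print(' '.join(('mongodump', '--archive', '>', shlex.quote(dump_filename))))
# prints: mongodump --archive > '/tmp/dump; rm -rf ~'  (the whole value stays one word)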

View File

@ -1,6 +1,6 @@
 from enum import Enum

-MONITOR_HOOK_NAMES = ('apprise', 'healthchecks', 'cronitor', 'cronhub', 'pagerduty', 'ntfy', 'loki')
+MONITOR_HOOK_NAMES = ('healthchecks', 'cronitor', 'cronhub', 'pagerduty', 'ntfy')

 class State(Enum):

View File

@ -1,7 +1,6 @@
 import copy
 import logging
 import os
-import shlex

 from borgmatic.execute import (
     execute_command,
@ -13,12 +12,12 @@ from borgmatic.hooks import dump
 logger = logging.getLogger(__name__)

-def make_dump_path(config):  # pragma: no cover
+def make_dump_path(location_config):  # pragma: no cover
     '''
-    Make the dump path from the given configuration dict and the name of this hook.
+    Make the dump path from the given location configuration and the name of this hook.
     '''
-    return dump.make_data_source_dump_path(
-        config.get('borgmatic_source_directory'), 'mysql_databases'
+    return dump.make_database_dump_path(
+        location_config.get('borgmatic_source_directory'), 'mysql_databases'
     )
@ -36,11 +35,8 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
     if dry_run:
         return ()

-    mysql_show_command = tuple(
-        shlex.quote(part) for part in shlex.split(database.get('mysql_command') or 'mysql')
-    )
     show_command = (
-        mysql_show_command
+        ('mysql',)
         + (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
         + (('--host', database['hostname']) if 'hostname' in database else ())
         + (('--port', str(database['port'])) if 'port' in database else ())
@ -66,28 +62,24 @@ def execute_dump_command(
 ):
     '''
     Kick off a dump for the given MySQL/MariaDB database (provided as a configuration dict) to a
-    named pipe constructed from the given dump path and database name. Use the given log prefix in
+    named pipe constructed from the given dump path and database names. Use the given log prefix in
     any log entries.

     Return a subprocess.Popen instance for the dump process ready to spew to a named pipe. But if
     this is a dry run, then don't actually dump anything and return None.
     '''
     database_name = database['name']
-    dump_filename = dump.make_data_source_dump_filename(
+    dump_filename = dump.make_database_dump_filename(
         dump_path, database['name'], database.get('hostname')
     )

     if os.path.exists(dump_filename):
         logger.warning(
             f'{log_prefix}: Skipping duplicate dump of MySQL database "{database_name}" to {dump_filename}'
         )
         return None

-    mysql_dump_command = tuple(
-        shlex.quote(part) for part in shlex.split(database.get('mysql_dump_command') or 'mysqldump')
-    )
     dump_command = (
-        mysql_dump_command
+        ('mysqldump',)
         + (tuple(database['options'].split(' ')) if 'options' in database else ())
         + (('--add-drop-database',) if database.get('add_drop_database', True) else ())
         + (('--host', database['hostname']) if 'hostname' in database else ())
@ -114,19 +106,12 @@ def execute_dump_command(
     )

-def use_streaming(databases, config, log_prefix):
-    '''
-    Given a sequence of MySQL database configuration dicts, a configuration dict (ignored), and a
-    log prefix (ignored), return whether streaming will be used during dumps.
-    '''
-    return any(databases)
-
-def dump_data_sources(databases, config, log_prefix, dry_run):
+def dump_databases(databases, log_prefix, location_config, dry_run):
     '''
     Dump the given MySQL/MariaDB databases to a named pipe. The databases are supplied as a sequence
-    of dicts, one dict describing each database as per the configuration schema. Use the given
-    configuration dict to construct the destination path and the given log prefix in any log entries.
+    of dicts, one dict describing each database as per the configuration schema. Use the given log
+    prefix in any log entries. Use the given location configuration dict to construct the
+    destination path.

     Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
     pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
@ -137,7 +122,7 @@ def dump_data_sources(databases, config, log_prefix, dry_run):
     logger.info(f'{log_prefix}: Dumping MySQL databases{dry_run_label}')

     for database in databases:
-        dump_path = make_dump_path(config)
+        dump_path = make_dump_path(location_config)
         extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
         dump_database_names = database_names_to_dump(
             database, extra_environment, log_prefix, dry_run
@ -180,59 +165,57 @@ def dump_data_sources(databases, config, log_prefix, dry_run):
     return [process for process in processes if process]

-def remove_data_source_dumps(databases, config, log_prefix, dry_run):  # pragma: no cover
+def remove_database_dumps(databases, log_prefix, location_config, dry_run):  # pragma: no cover
     '''
-    Remove all database dump files for this hook regardless of the given databases. Use the given
-    configuration dict to construct the destination path and the log prefix in any log entries. If
-    this is a dry run, then don't actually remove anything.
+    Remove all database dump files for this hook regardless of the given databases. Use the log
+    prefix in any log entries. Use the given location configuration dict to construct the
+    destination path. If this is a dry run, then don't actually remove anything.
     '''
-    dump.remove_data_source_dumps(make_dump_path(config), 'MySQL', log_prefix, dry_run)
+    dump.remove_database_dumps(make_dump_path(location_config), 'MySQL', log_prefix, dry_run)

-def make_data_source_dump_pattern(databases, config, log_prefix, name=None):  # pragma: no cover
+def make_database_dump_pattern(
+    databases, log_prefix, location_config, name=None
+):  # pragma: no cover
     '''
-    Given a sequence of configuration dicts, a configuration dict, a prefix to log with, and a
-    database name to match, return the corresponding glob patterns to match the database dump in an
-    archive.
+    Given a sequence of configuration dicts, a prefix to log with, a location configuration dict,
+    and a database name to match, return the corresponding glob patterns to match the database dump
+    in an archive.
     '''
-    return dump.make_data_source_dump_filename(make_dump_path(config), name, hostname='*')
+    return dump.make_database_dump_filename(make_dump_path(location_config), name, hostname='*')

-def restore_data_source_dump(
-    hook_config, config, log_prefix, data_source, dry_run, extract_process, connection_params
+def restore_database_dump(
+    database_config, log_prefix, location_config, dry_run, extract_process, connection_params
 ):
     '''
-    Restore a database from the given extract stream. The database is supplied as a data source
-    configuration dict, but the given hook configuration is ignored. The given configuration dict is
-    used to construct the destination path, and the given log prefix is used for any log entries. If
-    this is a dry run, then don't actually restore anything. Trigger the given active extract
-    process (an instance of subprocess.Popen) to produce output to consume.
+    Restore the given MySQL/MariaDB database from an extract stream. The database is supplied as a
+    one-element sequence containing a dict describing the database, as per the configuration
+    schema. Use the given log prefix in any log entries. If this is a dry run, then don't actually
+    restore anything. Trigger the given active extract process (an instance of subprocess.Popen)
+    to produce output to consume.
     '''
     dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''

-    hostname = connection_params['hostname'] or data_source.get(
-        'restore_hostname', data_source.get('hostname')
-    )
-    port = str(
-        connection_params['port'] or data_source.get('restore_port', data_source.get('port', ''))
-    )
-    username = connection_params['username'] or data_source.get(
-        'restore_username', data_source.get('username')
-    )
-    password = connection_params['password'] or data_source.get(
-        'restore_password', data_source.get('password')
-    )
+    if len(database_config) != 1:
+        raise ValueError('The database configuration value is invalid')
+
+    database = database_config[0]
+    hostname = connection_params['hostname'] or database.get(
+        'restore_hostname', database.get('hostname')
+    )
+    port = str(connection_params['port'] or database.get('restore_port', database.get('port', '')))
+    username = connection_params['username'] or database.get(
+        'restore_username', database.get('username')
+    )
+    password = connection_params['password'] or database.get(
+        'restore_password', database.get('password')
+    )

-    mysql_restore_command = tuple(
-        shlex.quote(part) for part in shlex.split(data_source.get('mysql_command') or 'mysql')
-    )
     restore_command = (
-        mysql_restore_command
-        + ('--batch',)
-        + (
-            tuple(data_source['restore_options'].split(' '))
-            if 'restore_options' in data_source
-            else ()
-        )
+        ('mysql', '--batch')
+        + (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
         + (('--host', hostname) if hostname else ())
         + (('--port', str(port)) if port else ())
         + (('--protocol', 'tcp') if hostname or port else ())
@ -240,7 +223,7 @@ def restore_data_source_dump(
     )
     extra_environment = {'MYSQL_PWD': password} if password else None

-    logger.debug(f"{log_prefix}: Restoring MySQL database {data_source['name']}{dry_run_label}")
+    logger.debug(f"{log_prefix}: Restoring MySQL database {database['name']}{dry_run_label}")
     if dry_run:
         return
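
Both sides of this diff pass the database password through the MYSQL_PWD environment variable rather than a --password flag, which keeps the secret out of process listings like ps. A sketch of that pattern in isolation (credentials invented; borgmatic's own execute_command() handles the environment merge internally):

import os
import subprocess

subprocess.run(
    ('mysql', '--batch', '--execute', 'show schemas'),
    env={**os.environ, 'MYSQL_PWD': 'example-secret'},  # password via environment, not argv
    check=True,
)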

View File

@ -6,7 +6,7 @@ logger = logging.getLogger(__name__)
 def initialize_monitor(
-    ping_url, config, config_filename, monitoring_log_level, dry_run
+    ping_url, config_filename, monitoring_log_level, dry_run
 ):  # pragma: no cover
     '''
     No initialization is necessary for this monitor.
@ -14,7 +14,7 @@ def initialize_monitor(
     pass

-def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
+def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
     '''
     Ping the configured Ntfy topic. Use the given configuration filename in any log entries.
     If this is a dry run, then don't actually ping anything.
@ -28,8 +28,8 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
     state_config = hook_config.get(
         state.name.lower(),
         {
-            'title': f'A borgmatic {state.name} event happened',
-            'message': f'A borgmatic {state.name} event happened',
+            'title': f'A Borgmatic {state.name} event happened',
+            'message': f'A Borgmatic {state.name} event happened',
             'priority': 'default',
             'tags': 'borgmatic',
         },
@ -50,16 +50,9 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
     username = hook_config.get('username')
     password = hook_config.get('password')
-    access_token = hook_config.get('access_token')

     auth = None
-    if access_token is not None:
-        if username or password:
-            logger.warning(
-                f'{config_filename}: ntfy access_token is set but so is username/password, only using access_token'
-            )
-        auth = requests.auth.HTTPBasicAuth('', access_token)
-    elif (username and password) is not None:
+    if (username and password) is not None:
         auth = requests.auth.HTTPBasicAuth(username, password)
         logger.info(f'{config_filename}: Using basic auth with user {username} for ntfy')
     elif username is not None:
@ -82,7 +75,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
 def destroy_monitor(
-    ping_url_or_uuid, config, config_filename, monitoring_log_level, dry_run
+    ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
 ):  # pragma: no cover
     '''
     No destruction is necessary for this monitor.
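
For reference, the access-token flow that this diff removes authenticates against ntfy by sending the token as the password half of HTTP basic auth with an empty username, and it wins over any configured username/password pair. A condensed sketch of that precedence (token value invented; the plain `username and password` test below sidesteps the original's odd `(username and password) is not None` expression):

import requests.auth

access_token, username, password = 'tk_example0123', None, None

if access_token is not None:
    auth = requests.auth.HTTPBasicAuth('', access_token)  # empty user, token as password
elif username and password:
    auth = requests.auth.HTTPBasicAuth(username, password)
else:
    auth = None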

View File

@ -13,7 +13,7 @@ EVENTS_API_URL = 'https://events.pagerduty.com/v2/enqueue'
 def initialize_monitor(
-    integration_key, config, config_filename, monitoring_log_level, dry_run
+    integration_key, config_filename, monitoring_log_level, dry_run
 ):  # pragma: no cover
     '''
     No initialization is necessary for this monitor.
@ -21,7 +21,7 @@ def initialize_monitor(
     pass

-def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
+def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
     '''
     If this is an error state, create a PagerDuty event with the configured integration key. Use
     the given configuration filename in any log entries. If this is a dry run, then don't actually
@ -40,7 +40,9 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
         return

     hostname = platform.node()
-    local_timestamp = datetime.datetime.now(datetime.timezone.utc).astimezone().isoformat()
+    local_timestamp = (
+        datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).astimezone().isoformat()
+    )
     payload = json.dumps(
         {
             'routing_key': hook_config['integration_key'],
@ -73,7 +75,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
 def destroy_monitor(
-    ping_url_or_uuid, config, config_filename, monitoring_log_level, dry_run
+    ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
 ):  # pragma: no cover
     '''
     No destruction is necessary for this monitor.
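
The only substantive change in this hunk is the timestamp: datetime.datetime.now(datetime.timezone.utc) returns a timezone-aware value directly, while the older utcnow() returns a naive one that must be patched with .replace() (and utcnow() is deprecated as of Python 3.12). Apart from the microseconds between the two calls, both spellings yield the same local ISO-8601 string:

import datetime

new_style = datetime.datetime.now(datetime.timezone.utc).astimezone().isoformat()
old_style = (
    datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).astimezone().isoformat()
)
# both look like '2024-01-01T13:00:00.000000+01:00' (illustrative value)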

View File

@ -14,19 +14,19 @@ from borgmatic.hooks import dump
 logger = logging.getLogger(__name__)

-def make_dump_path(config):  # pragma: no cover
+def make_dump_path(location_config):  # pragma: no cover
     '''
-    Make the dump path from the given configuration dict and the name of this hook.
+    Make the dump path from the given location configuration and the name of this hook.
     '''
-    return dump.make_data_source_dump_path(
-        config.get('borgmatic_source_directory'), 'postgresql_databases'
+    return dump.make_database_dump_path(
+        location_config.get('borgmatic_source_directory'), 'postgresql_databases'
     )

 def make_extra_environment(database, restore_connection_params=None):
     '''
-    Make the extra_environment dict from the given database configuration. If restore connection
-    params are given, this is for a restore operation.
+    Make the extra_environment dict from the given database configuration.
+    If restore connection params are given, this is for a restore operation.
     '''
     extra = dict()
@ -40,8 +40,7 @@ def make_extra_environment(database, restore_connection_params=None):
     except (AttributeError, KeyError):
         pass

-    if 'ssl_mode' in database:
-        extra['PGSSLMODE'] = database['ssl_mode']
+    extra['PGSSLMODE'] = database.get('ssl_mode', 'disable')
     if 'ssl_cert' in database:
         extra['PGSSLCERT'] = database['ssl_cert']
     if 'ssl_key' in database:
@ -50,7 +49,6 @@ def make_extra_environment(database, restore_connection_params=None):
         extra['PGSSLROOTCERT'] = database['ssl_root_cert']
     if 'ssl_crl' in database:
         extra['PGSSLCRL'] = database['ssl_crl']

     return extra
@ -73,11 +71,9 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
     if dry_run:
         return ()

-    psql_command = tuple(
-        shlex.quote(part) for part in shlex.split(database.get('psql_command') or 'psql')
-    )
+    psql_command = shlex.split(database.get('psql_command') or 'psql')
     list_command = (
-        psql_command
+        tuple(psql_command)
         + ('--list', '--no-password', '--no-psqlrc', '--csv', '--tuples-only')
         + (('--host', database['hostname']) if 'hostname' in database else ())
         + (('--port', str(database['port'])) if 'port' in database else ())
@ -96,20 +92,12 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
     )

-def use_streaming(databases, config, log_prefix):
-    '''
-    Given a sequence of PostgreSQL database configuration dicts, a configuration dict (ignored),
-    and a log prefix (ignored), return whether streaming will be used during dumps.
-    '''
-    return any(database.get('format') != 'directory' for database in databases)
-
-def dump_data_sources(databases, config, log_prefix, dry_run):
+def dump_databases(databases, log_prefix, location_config, dry_run):
     '''
     Dump the given PostgreSQL databases to a named pipe. The databases are supplied as a sequence of
-    dicts, one dict describing each database as per the configuration schema. Use the given
-    configuration dict to construct the destination path and the given log prefix in any log
-    entries.
+    dicts, one dict describing each database as per the configuration schema. Use the given log
+    prefix in any log entries. Use the given location configuration dict to construct the
+    destination path.

     Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
     pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
@ -123,7 +111,7 @@ def dump_data_sources(databases, config, log_prefix, dry_run):
     for database in databases:
         extra_environment = make_extra_environment(database)
-        dump_path = make_dump_path(config)
+        dump_path = make_dump_path(location_config)
         dump_database_names = database_names_to_dump(
             database, extra_environment, log_prefix, dry_run
         )
@ -137,11 +125,8 @@ def dump_data_sources(databases, config, log_prefix, dry_run):
         for database_name in dump_database_names:
             dump_format = database.get('format', None if database_name == 'all' else 'custom')
             default_dump_command = 'pg_dumpall' if database_name == 'all' else 'pg_dump'
-            dump_command = tuple(
-                shlex.quote(part)
-                for part in shlex.split(database.get('pg_dump_command') or default_dump_command)
-            )
-            dump_filename = dump.make_data_source_dump_filename(
+            dump_command = database.get('pg_dump_command') or default_dump_command
+            dump_filename = dump.make_database_dump_filename(
                 dump_path, database_name, database.get('hostname')
             )
             if os.path.exists(dump_filename):
@ -151,32 +136,24 @@ def dump_data_sources(databases, config, log_prefix, dry_run):
                 continue

             command = (
-                dump_command
-                + (
+                (
+                    dump_command,
                     '--no-password',
                     '--clean',
                     '--if-exists',
                 )
-                + (('--host', shlex.quote(database['hostname'])) if 'hostname' in database else ())
-                + (('--port', shlex.quote(str(database['port']))) if 'port' in database else ())
-                + (
-                    ('--username', shlex.quote(database['username']))
-                    if 'username' in database
-                    else ()
-                )
+                + (('--host', database['hostname']) if 'hostname' in database else ())
+                + (('--port', str(database['port'])) if 'port' in database else ())
+                + (('--username', database['username']) if 'username' in database else ())
                 + (('--no-owner',) if database.get('no_owner', False) else ())
-                + (('--format', shlex.quote(dump_format)) if dump_format else ())
-                + (('--file', shlex.quote(dump_filename)) if dump_format == 'directory' else ())
-                + (
-                    tuple(shlex.quote(option) for option in database['options'].split(' '))
-                    if 'options' in database
-                    else ()
-                )
-                + (() if database_name == 'all' else (shlex.quote(database_name),))
+                + (('--format', dump_format) if dump_format else ())
+                + (('--file', dump_filename) if dump_format == 'directory' else ())
+                + (tuple(database['options'].split(' ')) if 'options' in database else ())
+                + (() if database_name == 'all' else (database_name,))
                 # Use shell redirection rather than the --file flag to sidestep synchronization issues
                 # when pg_dump/pg_dumpall tries to write to a named pipe. But for the directory dump
                 # format in particular, a named destination is required, and redirection doesn't work.
-                + (('>', shlex.quote(dump_filename)) if dump_format != 'directory' else ())
+                + (('>', dump_filename) if dump_format != 'directory' else ())
             )

             logger.debug(
@ -206,33 +183,35 @@ def dump_data_sources(databases, config, log_prefix, dry_run):
     return processes

-def remove_data_source_dumps(databases, config, log_prefix, dry_run):  # pragma: no cover
+def remove_database_dumps(databases, log_prefix, location_config, dry_run):  # pragma: no cover
     '''
-    Remove all database dump files for this hook regardless of the given databases. Use the given
-    configuration dict to construct the destination path and the log prefix in any log entries. If
-    this is a dry run, then don't actually remove anything.
+    Remove all database dump files for this hook regardless of the given databases. Use the log
+    prefix in any log entries. Use the given location configuration dict to construct the
+    destination path. If this is a dry run, then don't actually remove anything.
     '''
-    dump.remove_data_source_dumps(make_dump_path(config), 'PostgreSQL', log_prefix, dry_run)
+    dump.remove_database_dumps(make_dump_path(location_config), 'PostgreSQL', log_prefix, dry_run)

-def make_data_source_dump_pattern(databases, config, log_prefix, name=None):  # pragma: no cover
+def make_database_dump_pattern(
+    databases, log_prefix, location_config, name=None
+):  # pragma: no cover
     '''
-    Given a sequence of configuration dicts, a configuration dict, a prefix to log with, and a
-    database name to match, return the corresponding glob patterns to match the database dump in an
-    archive.
+    Given a sequence of configuration dicts, a prefix to log with, a location configuration dict,
+    and a database name to match, return the corresponding glob patterns to match the database dump
+    in an archive.
     '''
-    return dump.make_data_source_dump_filename(make_dump_path(config), name, hostname='*')
+    return dump.make_database_dump_filename(make_dump_path(location_config), name, hostname='*')

-def restore_data_source_dump(
-    hook_config, config, log_prefix, data_source, dry_run, extract_process, connection_params
+def restore_database_dump(
+    database_config, log_prefix, location_config, dry_run, extract_process, connection_params
 ):
     '''
-    Restore a database from the given extract stream. The database is supplied as a data source
-    configuration dict, but the given hook configuration is ignored. The given configuration dict is
-    used to construct the destination path, and the given log prefix is used for any log entries. If
-    this is a dry run, then don't actually restore anything. Trigger the given active extract
-    process (an instance of subprocess.Popen) to produce output to consume.
+    Restore the given PostgreSQL database from an extract stream. The database is supplied as a
+    one-element sequence containing a dict describing the database, as per the configuration
+    schema. Use the given log prefix in any log entries. If this is a dry run, then don't actually
+    restore anything. Trigger the given active extract process (an instance of subprocess.Popen)
+    to produce output to consume.

     If the extract process is None, then restore the dump from the filesystem rather than from an
     extract stream.
@ -241,71 +220,60 @@ def restore_data_source_dump(
     hostname, port, username, and password.
     '''
     dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''

-    hostname = connection_params['hostname'] or data_source.get(
-        'restore_hostname', data_source.get('hostname')
-    )
-    port = str(
-        connection_params['port'] or data_source.get('restore_port', data_source.get('port', ''))
-    )
-    username = connection_params['username'] or data_source.get(
-        'restore_username', data_source.get('username')
-    )
+    if len(database_config) != 1:
+        raise ValueError('The database configuration value is invalid')
+
+    database = database_config[0]
+    hostname = connection_params['hostname'] or database.get(
+        'restore_hostname', database.get('hostname')
+    )
+    port = str(connection_params['port'] or database.get('restore_port', database.get('port', '')))
+    username = connection_params['username'] or database.get(
+        'restore_username', database.get('username')
+    )

-    all_databases = bool(data_source['name'] == 'all')
-    dump_filename = dump.make_data_source_dump_filename(
-        make_dump_path(config), data_source['name'], data_source.get('hostname')
-    )
-    psql_command = tuple(
-        shlex.quote(part) for part in shlex.split(data_source.get('psql_command') or 'psql')
-    )
+    all_databases = bool(database['name'] == 'all')
+    dump_filename = dump.make_database_dump_filename(
+        make_dump_path(location_config), database['name'], database.get('hostname')
+    )
+    psql_command = shlex.split(database.get('psql_command') or 'psql')
     analyze_command = (
-        psql_command
+        tuple(psql_command)
         + ('--no-password', '--no-psqlrc', '--quiet')
         + (('--host', hostname) if hostname else ())
         + (('--port', port) if port else ())
         + (('--username', username) if username else ())
-        + (('--dbname', data_source['name']) if not all_databases else ())
-        + (
-            tuple(data_source['analyze_options'].split(' '))
-            if 'analyze_options' in data_source
-            else ()
-        )
+        + (('--dbname', database['name']) if not all_databases else ())
+        + (tuple(database['analyze_options'].split(' ')) if 'analyze_options' in database else ())
         + ('--command', 'ANALYZE')
     )
-    use_psql_command = all_databases or data_source.get('format') == 'plain'
-    pg_restore_command = tuple(
-        shlex.quote(part)
-        for part in shlex.split(data_source.get('pg_restore_command') or 'pg_restore')
-    )
+    use_psql_command = all_databases or database.get('format') == 'plain'
+    pg_restore_command = shlex.split(database.get('pg_restore_command') or 'pg_restore')
     restore_command = (
-        (psql_command if use_psql_command else pg_restore_command)
+        tuple(psql_command if use_psql_command else pg_restore_command)
         + ('--no-password',)
         + (('--no-psqlrc',) if use_psql_command else ('--if-exists', '--exit-on-error', '--clean'))
-        + (('--dbname', data_source['name']) if not all_databases else ())
+        + (('--dbname', database['name']) if not all_databases else ())
         + (('--host', hostname) if hostname else ())
         + (('--port', port) if port else ())
         + (('--username', username) if username else ())
-        + (('--no-owner',) if data_source.get('no_owner', False) else ())
-        + (
-            tuple(data_source['restore_options'].split(' '))
-            if 'restore_options' in data_source
-            else ()
-        )
+        + (('--no-owner',) if database.get('no_owner', False) else ())
+        + (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
         + (() if extract_process else (dump_filename,))
         + tuple(
-            itertools.chain.from_iterable(('--schema', schema) for schema in data_source['schemas'])
-            if data_source.get('schemas')
+            itertools.chain.from_iterable(('--schema', schema) for schema in database['schemas'])
+            if database['schemas']
             else ()
         )
     )

     extra_environment = make_extra_environment(
-        data_source, restore_connection_params=connection_params
+        database, restore_connection_params=connection_params
     )

-    logger.debug(
-        f"{log_prefix}: Restoring PostgreSQL database {data_source['name']}{dry_run_label}"
-    )
+    logger.debug(f"{log_prefix}: Restoring PostgreSQL database {database['name']}{dry_run_label}")
     if dry_run:
         return
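
One behavioral difference buried in the make_extra_environment() hunk above: the main side sets PGSSLMODE only when ssl_mode is explicitly configured, whereas the older side always set it, defaulting to 'disable'. Either way, the dict ends up as libpq environment variables for the spawned pg_dump/pg_restore/psql processes. A rough sketch of the result for an invented config (the PGPASSWORD entry is an assumption based on the password lookup in the elided try block):

database = {'name': 'test', 'password': 'example-secret', 'ssl_mode': 'verify-full'}
extra = {'PGPASSWORD': 'example-secret', 'PGSSLMODE': 'verify-full'}  # main-branch result, sketched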

View File

@ -1,6 +1,5 @@
import logging import logging
import os import os
import shlex
from borgmatic.execute import execute_command, execute_command_with_processes from borgmatic.execute import execute_command, execute_command_with_processes
from borgmatic.hooks import dump from borgmatic.hooks import dump
@ -8,31 +7,21 @@ from borgmatic.hooks import dump
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
def make_dump_path(config): # pragma: no cover def make_dump_path(location_config): # pragma: no cover
''' '''
Make the dump path from the given configuration dict and the name of this hook. Make the dump path from the given location configuration and the name of this hook.
''' '''
return dump.make_data_source_dump_path( return dump.make_database_dump_path(
config.get('borgmatic_source_directory'), 'sqlite_databases' location_config.get('borgmatic_source_directory'), 'sqlite_databases'
) )
def use_streaming(databases, config, log_prefix): def dump_databases(databases, log_prefix, location_config, dry_run):
''' '''
Given a sequence of SQLite database configuration dicts, a configuration dict (ignored), and a Dump the given SQLite3 databases to a file. The databases are supplied as a sequence of
log prefix (ignored), return whether streaming will be using during dumps. configuration dicts, as per the configuration schema. Use the given log prefix in any log
''' entries. Use the given location configuration dict to construct the destination path. If this
return any(databases) is a dry run, then don't actually dump anything.
def dump_data_sources(databases, config, log_prefix, dry_run):
'''
Dump the given SQLite databases to a named pipe. The databases are supplied as a sequence of
configuration dicts, as per the configuration schema. Use the given configuration dict to
construct the destination path and the given log prefix in any log entries.
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
''' '''
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else '' dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
processes = [] processes = []
@ -43,15 +32,14 @@ def dump_data_sources(databases, config, log_prefix, dry_run):
database_path = database['path'] database_path = database['path']
if database['name'] == 'all': if database['name'] == 'all':
logger.warning('The "all" database name has no meaning for SQLite databases') logger.warning('The "all" database name has no meaning for SQLite3 databases')
if not os.path.exists(database_path): if not os.path.exists(database_path):
logger.warning( logger.warning(
f'{log_prefix}: No SQLite database at {database_path}; an empty database will be created and dumped' f'{log_prefix}: No SQLite database at {database_path}; An empty database will be created and dumped'
) )
dump_path = make_dump_path(config) dump_path = make_dump_path(location_config)
dump_filename = dump.make_data_source_dump_filename(dump_path, database['name']) dump_filename = dump.make_database_dump_filename(dump_path, database['name'])
if os.path.exists(dump_filename): if os.path.exists(dump_filename):
logger.warning( logger.warning(
f'{log_prefix}: Skipping duplicate dump of SQLite database at {database_path} to {dump_filename}' f'{log_prefix}: Skipping duplicate dump of SQLite database at {database_path} to {dump_filename}'
@ -60,10 +48,10 @@ def dump_data_sources(databases, config, log_prefix, dry_run):
command = ( command = (
'sqlite3', 'sqlite3',
shlex.quote(database_path), database_path,
'.dump', '.dump',
'>', '>',
shlex.quote(dump_filename), dump_filename,
) )
logger.debug( logger.debug(
         f'{log_prefix}: Dumping SQLite database at {database_path} to {dump_filename}{dry_run_label}'

@@ -71,43 +59,49 @@ def dump_data_sources(databases, config, log_prefix, dry_run):
         if dry_run:
             continue

-        dump.create_named_pipe_for_dump(dump_filename)
+        dump.create_parent_directory_for_dump(dump_filename)
         processes.append(execute_command(command, shell=True, run_to_completion=False))

     return processes

-def remove_data_source_dumps(databases, config, log_prefix, dry_run):  # pragma: no cover
+def remove_database_dumps(databases, log_prefix, location_config, dry_run):  # pragma: no cover
     '''
-    Remove the given SQLite database dumps from the filesystem. The databases are supplied as a
-    sequence of configuration dicts, as per the configuration schema. Use the given configuration
-    dict to construct the destination path and the given log prefix in any log entries. If this is a
-    dry run, then don't actually remove anything.
+    Remove the given SQLite3 database dumps from the filesystem. The databases are supplied as a
+    sequence of configuration dicts, as per the configuration schema. Use the given log prefix in
+    any log entries. Use the given location configuration dict to construct the destination path.
+    If this is a dry run, then don't actually remove anything.
     '''
-    dump.remove_data_source_dumps(make_dump_path(config), 'SQLite', log_prefix, dry_run)
+    dump.remove_database_dumps(make_dump_path(location_config), 'SQLite', log_prefix, dry_run)

-def make_data_source_dump_pattern(databases, config, log_prefix, name=None):  # pragma: no cover
+def make_database_dump_pattern(
+    databases, log_prefix, location_config, name=None
+):  # pragma: no cover
     '''
-    Make a pattern that matches the given SQLite databases. The databases are supplied as a sequence
-    of configuration dicts, as per the configuration schema.
+    Make a pattern that matches the given SQLite3 databases. The databases are supplied as a
+    sequence of configuration dicts, as per the configuration schema.
     '''
-    return dump.make_data_source_dump_filename(make_dump_path(config), name)
+    return dump.make_database_dump_filename(make_dump_path(location_config), name)

-def restore_data_source_dump(
-    hook_config, config, log_prefix, data_source, dry_run, extract_process, connection_params
-):
+def restore_database_dump(
+    database_config, log_prefix, location_config, dry_run, extract_process, connection_params
+):
     '''
-    Restore a database from the given extract stream. The database is supplied as a data source
-    configuration dict, but the given hook configuration is ignored. The given configuration dict is
-    used to construct the destination path, and the given log prefix is used for any log entries. If
-    this is a dry run, then don't actually restore anything. Trigger the given active extract
-    process (an instance of subprocess.Popen) to produce output to consume.
+    Restore the given SQLite3 database from an extract stream. The database is supplied as a
+    one-element sequence containing a dict describing the database, as per the configuration schema.
+    Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
+    anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
+    output to consume.
     '''
     dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
-    database_path = connection_params['restore_path'] or data_source.get(
-        'restore_path', data_source.get('path')
-    )
+
+    if len(database_config) != 1:
+        raise ValueError('The database configuration value is invalid')
+
+    database_path = connection_params['restore_path'] or database_config[0].get(
+        'restore_path', database_config[0].get('path')
+    )

     logger.debug(f'{log_prefix}: Restoring SQLite database at {database_path}{dry_run_label}')
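As a sketch (assuming the flat 1.8.x option layout shown on the main side), the kind of configuration entry this SQLite hook consumes looks something like the following; the paths are illustrative only, and `restore_path` is the optional override the restore code above consults:

```yaml
sqlite_databases:
    - name: mydb
      # Illustrative paths only.
      path: /var/lib/sqlite3/mydb.sqlite
      restore_path: /var/lib/sqlite3/restored.sqlite
```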
@@ -41,9 +41,6 @@ def should_do_markup(no_color, configs):
     if any(config.get('output', {}).get('color') is False for config in configs.values()):
         return False

-    if os.environ.get('NO_COLOR', None):
-        return False
-
     py_colors = os.environ.get('PY_COLORS', None)
     if py_colors is not None:

@@ -88,11 +85,6 @@ class Multi_stream_handler(logging.Handler):
         handler.setLevel(level)

-class Console_no_color_formatter(logging.Formatter):
-    def format(self, record):  # pragma: no cover
-        return record.msg
-
 class Console_color_formatter(logging.Formatter):
     def format(self, record):
         add_custom_log_levels()

@@ -167,23 +159,22 @@ def configure_logging(
     monitoring_log_level=None,
     log_file=None,
     log_file_format=None,
-    color_enabled=True,
 ):
     '''
     Configure logging to go to both the console and (syslog or log file). Use the given log levels,
-    respectively. If color is enabled, set up log formatting accordingly.
+    respectively.

     Raise FileNotFoundError or PermissionError if the log file could not be opened for writing.
     '''
-    add_custom_log_levels()
-
     if syslog_log_level is None:
-        syslog_log_level = logging.DISABLED
+        syslog_log_level = console_log_level
     if log_file_log_level is None:
         log_file_log_level = console_log_level
     if monitoring_log_level is None:
         monitoring_log_level = console_log_level

+    add_custom_log_levels()
+
     # Log certain log levels to console stderr and others to stdout. This supports use cases like
     # grepping (non-error) output.
     console_disabled = logging.NullHandler()

@@ -200,19 +191,11 @@ def configure_logging(
             logging.DEBUG: console_standard_handler,
         }
     )
-    if color_enabled:
-        console_handler.setFormatter(Console_color_formatter())
-    else:
-        console_handler.setFormatter(Console_no_color_formatter())
+    console_handler.setFormatter(Console_color_formatter())
     console_handler.setLevel(console_log_level)

-    handlers = [console_handler]
-
-    if syslog_log_level != logging.DISABLED:
-        syslog_path = None
+    syslog_path = None
+    if log_file is None and syslog_log_level != logging.DISABLED:
         if os.path.exists('/dev/log'):
             syslog_path = '/dev/log'
         elif os.path.exists('/var/run/syslog'):

@@ -220,15 +203,14 @@ def configure_logging(
         elif os.path.exists('/var/run/log'):
             syslog_path = '/var/run/log'

-    if syslog_path:
+    if syslog_path and not interactive_console():
         syslog_handler = logging.handlers.SysLogHandler(address=syslog_path)
         syslog_handler.setFormatter(
             logging.Formatter('borgmatic: {levelname} {message}', style='{')  # noqa: FS003
         )
         syslog_handler.setLevel(syslog_log_level)
-        handlers.append(syslog_handler)
-    elif log_file and log_file_log_level != logging.DISABLED:
+        handlers = (console_handler, syslog_handler)
+    if log_file and log_file_log_level != logging.DISABLED:
         file_handler = logging.handlers.WatchedFileHandler(log_file)
         file_handler.setFormatter(
             logging.Formatter(

@@ -236,9 +218,11 @@ def configure_logging(
             )
         )
         file_handler.setLevel(log_file_log_level)
-        handlers.append(file_handler)
+        handlers = (console_handler, file_handler)
+    else:
+        handlers = (console_handler,)

     logging.basicConfig(
-        level=min(handler.level for handler in handlers),
+        level=min(console_log_level, syslog_log_level, log_file_log_level, monitoring_log_level),
         handlers=handlers,
     )
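Both sides of the `should_do_markup` hunk above read a nested color option via `config.get('output', {}).get('color')`, so a minimal configuration sketch that would force color off might look like this (the main side additionally honors the `NO_COLOR` environment variable):

```yaml
output:
    # Setting color to false disables colored console output.
    color: false
```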
@@ -23,20 +23,12 @@ def handle_signal(signal_number, frame):
     if signal_number == signal.SIGTERM:
         logger.critical('Exiting due to TERM signal')
         sys.exit(EXIT_CODE_FROM_SIGNAL + signal.SIGTERM)
-    elif signal_number == signal.SIGINT:
-        raise KeyboardInterrupt()

 def configure_signals():
     '''
     Configure borgmatic's signal handlers to pass relevant signals through to any child processes
-    like Borg.
+    like Borg. Note that SIGINT gets passed through even without these changes.
     '''
-    for signal_number in (
-        signal.SIGHUP,
-        signal.SIGINT,
-        signal.SIGTERM,
-        signal.SIGUSR1,
-        signal.SIGUSR2,
-    ):
+    for signal_number in (signal.SIGHUP, signal.SIGTERM, signal.SIGUSR1, signal.SIGUSR2):
         signal.signal(signal_number, handle_signal)
@@ -16,4 +16,4 @@ each.
 If you find a security vulnerability, please [file a
 ticket](https://torsion.org/borgmatic/#issues) or [send email
 directly](mailto:witten@torsion.org) as appropriate. You should expect to hear
-back within a few days at most and generally sooner.
+back within a few days at most, and generally sooner.
@ -1,5 +0,0 @@
module.exports = function() {
return {
environment: process.env.NODE_ENV || "development"
};
};
@@ -1,5 +1,5 @@
 <h2>Improve this documentation</h2>

 <p>Have an idea on how to make this documentation even better? Use our <a
-href="https://torsion.org/borgmatic/#support-and-contributing">issue
-tracker</a> to send your feedback!</p>
+href="https://projects.torsion.org/borgmatic-collective/borgmatic/issues">issue tracker</a> to send your
+feedback!</p>
@@ -2,13 +2,13 @@
   font-size: 1rem; /* Reset */
 }
 .elv-toc details {
-  --details-force-closed: (max-width: 79.9375em); /* 1023px */
+  --details-force-closed: (max-width: 63.9375em); /* 1023px */
 }
 .elv-toc details > summary {
   font-size: 1.375rem; /* 22px /16 */
   margin-bottom: .5em;
 }
-@media (min-width: 80em) {
+@media (min-width: 64em) { /* 1024px */
   .elv-toc {
     position: absolute;
     left: 3rem;
@@ -121,7 +121,7 @@ main h1:first-child,
 main .elv-toc + h1 {
   border-bottom: 2px dotted #666;
 }
-@media (min-width: 80em) {
+@media (min-width: 64em) { /* 1024px */
   main .elv-toc + h1,
   main .elv-toc + h2 {
     margin-top: 0;

@@ -243,10 +243,10 @@ footer.elv-layout {
 .elv-layout-full {
   max-width: none;
 }
-@media (min-width: 80em) {
+@media (min-width: 64em) { /* 1024px */
   .elv-layout-toc {
     padding-left: 15rem;
-    max-width: 76rem;
+    max-width: 60rem;
     margin-right: 1rem;
     position: relative;
   }
@@ -11,7 +11,7 @@ headerClass: elv-header-default
 {% set navPages = collections.all | eleventyNavigation %}
 {% macro renderNavListItem(entry) -%}
 <li{% if entry.url == page.url %} class="elv-toc-active"{% endif %}>
-    <a {% if entry.url %}href="{% if borgmatic.environment == "production" %}https://torsion.org/borgmatic/docs{% else %}http://localhost:8080/docs{% endif %}{{ entry.url | url }}"{% endif %}>{{ entry.title }}</a>
+    <a {% if entry.url %}href="https://torsion.org/borgmatic/docs{{ entry.url | url }}"{% endif %}>{{ entry.title }}</a>
 {%- if entry.children.length -%}
 <ul>
 {%- for child in entry.children %}{{ renderNavListItem(child) }}{% endfor -%}
@@ -1,3 +1,4 @@
+version: '3'
 services:
   docs:
     image: borgmatic-docs

@@ -8,7 +9,7 @@ services:
       dockerfile: docs/Dockerfile
       context: ..
       args:
-        ENVIRONMENT: development
+        ENVIRONMENT: dev
   message:
     image: alpine
     container_name: message
@@ -17,14 +17,15 @@ feature](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
 instead.

 You can specify `before_backup` hooks to perform preparation steps before
-running backups and specify `after_backup` hooks to perform cleanup steps
+running backups, and specify `after_backup` hooks to perform cleanup steps
 afterwards. Here's an example:

 ```yaml
-before_backup:
-    - mount /some/filesystem
-after_backup:
-    - umount /some/filesystem
+hooks:
+    before_backup:
+        - mount /some/filesystem
+    after_backup:
+        - umount /some/filesystem
 ```

 If your command contains a special YAML character such as a colon, you may

@@ -32,23 +33,11 @@ need to quote the entire string (or use a [multiline
 string](https://yaml-multiline.info/)) to avoid an error:

 ```yaml
-before_backup:
-    - "echo Backup: start"
+hooks:
+    before_backup:
+        - "echo Backup: start"
 ```

-There are additional hooks that run before/after other actions as well. For
-instance, `before_prune` runs before a `prune` action for a repository, while
-`after_prune` runs after it.
-
-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-these options in the `hooks:` section of your configuration.
-
-<span class="minilink minilink-addedin">New in version 1.7.0</span> The
-`before_actions` and `after_actions` hooks run before/after all the actions
-(like `create`, `prune`, etc.) for each repository. These hooks are a good
-place to run per-repository steps like mounting/unmounting a remote
-filesystem.
-
 <span class="minilink minilink-addedin">New in version 1.6.0</span> The
 `before_backup` and `after_backup` hooks each run once per repository in a
 configuration file. `before_backup` hooks runs right before the `create`

@@ -57,6 +46,16 @@ but not if an error occurs in a previous hook or in the backups themselves.
 (Prior to borgmatic 1.6.0, these hooks instead ran once per configuration file
 rather than once per repository.)

+There are additional hooks that run before/after other actions as well. For
+instance, `before_prune` runs before a `prune` action for a repository, while
+`after_prune` runs after it.
+
+<span class="minilink minilink-addedin">New in version 1.7.0</span> The
+`before_actions` and `after_actions` hooks run before/after all the actions
+(like `create`, `prune`, etc.) for each repository. These hooks are a good
+place to run per-repository steps like mounting/unmounting a remote
+filesystem.
+
 ## Variable interpolation

@@ -65,13 +64,11 @@ variables into the hook command. Here's an example that assumes you provide a
 separate shell script:

 ```yaml
-after_prune:
-    - record-prune.sh "{configuration_filename}" "{repository}"
+hooks:
+    after_prune:
+        - record-prune.sh "{configuration_filename}" "{repository}"
 ```

-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-this option in the `hooks:` section of your configuration.
-
 In this example, when the hook is triggered, borgmatic interpolates runtime
 values into the hook command: the borgmatic configuration filename and the
 paths of the current Borg repository. Here's the full set of supported

@@ -84,9 +81,6 @@ variables you can use here:
   path of the borgmatic log file, only set when the `--log-file` flag is used
 * `repository`: path of the current repository as configured in the current
   borgmatic configuration file
-* `repository_label` <span class="minilink minilink-addedin">New in version
-  1.8.12</span>: label of the current repository as configured in the current
-  borgmatic configuration file

 Note that you can also interpolate in [arbitrary environment
 variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).

@@ -98,15 +92,13 @@ You can also use `before_everything` and `after_everything` hooks to perform
 global setup or cleanup:

 ```yaml
-before_everything:
-    - set-up-stuff-globally
-after_everything:
-    - clean-up-stuff-globally
+hooks:
+    before_everything:
+        - set-up-stuff-globally
+    after_everything:
+        - clean-up-stuff-globally
 ```

-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-these options in the `hooks:` section of your configuration.
-
 `before_everything` hooks collected from all borgmatic configuration files run
 once before all configuration files (prior to all actions), but only if there
 is a `create` action. An error encountered during a `before_everything` hook

@@ -117,7 +109,6 @@ but only if there is a `create` action. It runs even if an error occurs during
 a backup or a backup hook, but not if an error occurs during a
 `before_everything` hook.

 ## Error hooks

 borgmatic also runs `on_error` hooks if an error occurs, either when creating

@@ -125,15 +116,13 @@ a backup or running a backup hook. See the [monitoring and alerting
 documentation](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/)
 for more information.

 ## Hook output

 Any output produced by your hooks shows up both at the console and in syslog
-(when enabled). For more information, read about <a
+(when run in a non-interactive console). For more information, read about <a
 href="https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/">inspecting
 your backups</a>.

 ## Security

 An important security note about hooks: borgmatic executes all hook commands
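Taken together, the per-action hooks the updated text describes might be sketched like this, assuming the flat option layout used on the main side; the commands are placeholders:

```yaml
# Run around the prune action for each repository.
before_prune:
    - echo "Starting pruning."
after_prune:
    - record-prune.sh "{configuration_filename}" "{repository}"
# Run around all actions (create, prune, etc.) for each repository.
before_actions:
    - mount /some/filesystem
after_actions:
    - umount /some/filesystem
```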
@@ -44,16 +44,14 @@ file](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/),
 say at `/etc/borgmatic.d/removable.yaml`:

 ```yaml
-source_directories:
-    - /home
-
-repositories:
-    - path: /mnt/removable/backup.borg
+location:
+    source_directories:
+        - /home
+
+    repositories:
+        - path: /mnt/removable/backup.borg
 ```

-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-these options in the `location:` section of your configuration.
-
 <span class="minilink minilink-addedin">Prior to version 1.7.10</span> Omit
 the `path:` portion of the `repositories` list.

@@ -62,13 +60,11 @@ the external `findmnt` utility to see whether the drive is mounted before
 proceeding.

 ```yaml
-before_backup:
-    - findmnt /mnt/removable > /dev/null || exit 75
+hooks:
+    before_backup:
+        - findmnt /mnt/removable > /dev/null || exit 75
 ```

-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put this
-option in the `hooks:` section of your configuration.
-
 What this does is check if the `findmnt` command errors when probing for a
 particular mount point. If it does error, then it returns exit code 75 to
 borgmatic. borgmatic logs the soft failure, skips all further actions in that

@@ -81,21 +77,27 @@ optionally using `before_actions` instead.
 You can imagine a similar check for the sometimes-online server case:

 ```yaml
-source_directories:
-    - /home
-
-repositories:
-    - path: ssh://me@buddys-server.org/./backup.borg
-
-before_backup:
-    - ping -q -c 1 buddys-server.org > /dev/null || exit 75
+location:
+    source_directories:
+        - /home
+
+    repositories:
+        - path: ssh://me@buddys-server.org/./backup.borg
+
+hooks:
+    before_backup:
+        - ping -q -c 1 buddys-server.org > /dev/null || exit 75
 ```

+<span class="minilink minilink-addedin">Prior to version 1.7.10</span> Omit
+the `path:` portion of the `repositories` list.
+
 Or to only run backups if the battery level is high enough:

 ```yaml
-before_backup:
-    - is_battery_percent_at_least.sh 25
+hooks:
+    before_backup:
+        - is_battery_percent_at_least.sh 25
 ```

 (Writing the battery script is left as an exercise to the reader.)
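The hunk header above mentions optionally using `before_actions` instead; that variant might be sketched like this, assuming the flat option layout and reusing the same soft-failure convention:

```yaml
# Skip all actions for this configuration if the drive isn't mounted.
before_actions:
    - findmnt /mnt/removable > /dev/null || exit 75
```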
@@ -15,45 +15,34 @@ consistent snapshot that is more suited for backups.
 Fortunately, borgmatic includes built-in support for creating database dumps
 prior to running backups. For example, here is everything you need to dump and
-backup a couple of local PostgreSQL databases and a MySQL database.
+backup a couple of local PostgreSQL databases and a MySQL/MariaDB database.

 ```yaml
-postgresql_databases:
-    - name: users
-    - name: orders
-mysql_databases:
-    - name: posts
+hooks:
+    postgresql_databases:
+        - name: users
+        - name: orders
+    mysql_databases:
+        - name: posts
 ```

-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-these and other database options in the `hooks:` section of your
-configuration.
-
 <span class="minilink minilink-addedin">New in version 1.5.22</span> You can
 also dump MongoDB databases. For example:

 ```yaml
-mongodb_databases:
-    - name: messages
+hooks:
+    mongodb_databases:
+        - name: messages
 ```

 <span class="minilink minilink-addedin">New in version 1.7.9</span>
 Additionally, you can dump SQLite databases. For example:

 ```yaml
-sqlite_databases:
-    - name: mydb
-      path: /var/lib/sqlite3/mydb.sqlite
-```
-
-<span class="minilink minilink-addedin">New in version 1.8.2</span> If you're
-using MariaDB, use the MariaDB database hook instead of `mysql_databases:` as
-the MariaDB hook calls native MariaDB commands instead of the deprecated MySQL
-ones. For instance:
-
-```yaml
-mariadb_databases:
-    - name: comments
+hooks:
+    sqlite_databases:
+        - name: mydb
+          path: /var/lib/sqlite3/mydb.sqlite
 ```

 As part of each backup, borgmatic streams a database dump for each configured

@@ -65,7 +54,7 @@ temporary disk space.)
 To support this, borgmatic creates temporary named pipes in `~/.borgmatic` by
 default. To customize this path, set the `borgmatic_source_directory` option
-in borgmatic's configuration.
+in the `location` section of borgmatic's configuration.

 Also note that using a database hook implicitly enables both the
 `read_special` and `one_file_system` configuration settings (even if they're

@@ -75,41 +64,35 @@ See Limitations below for more on this.
 Here's a more involved example that connects to remote databases:

 ```yaml
-postgresql_databases:
-    - name: users
-      hostname: database1.example.org
-    - name: orders
-      hostname: database2.example.org
-      port: 5433
-      username: postgres
-      password: trustsome1
-      format: tar
-      options: "--role=someone"
-mariadb_databases:
-    - name: photos
-      hostname: database3.example.org
-      port: 3307
-      username: root
-      password: trustsome1
-      options: "--skip-comments"
-mysql_databases:
-    - name: posts
-      hostname: database4.example.org
-      port: 3307
-      username: root
-      password: trustsome1
-      options: "--skip-comments"
-mongodb_databases:
-    - name: messages
-      hostname: database5.example.org
-      port: 27018
-      username: dbuser
-      password: trustsome1
-      authentication_database: mongousers
-      options: "--ssl"
-sqlite_databases:
-    - name: mydb
-      path: /var/lib/sqlite3/mydb.sqlite
+hooks:
+    postgresql_databases:
+        - name: users
+          hostname: database1.example.org
+        - name: orders
+          hostname: database2.example.org
+          port: 5433
+          username: postgres
+          password: trustsome1
+          format: tar
+          options: "--role=someone"
+    mysql_databases:
+        - name: posts
+          hostname: database3.example.org
+          port: 3307
+          username: root
+          password: trustsome1
+          options: "--skip-comments"
+    mongodb_databases:
+        - name: messages
+          hostname: database4.example.org
+          port: 27018
+          username: dbuser
+          password: trustsome1
+          authentication_database: mongousers
+          options: "--ssl"
+    sqlite_databases:
+        - name: mydb
+          path: /var/lib/sqlite3/mydb.sqlite
 ```

 See your [borgmatic configuration

@@ -123,14 +106,13 @@ listing databases, restoring databases, etc.).
 If you want to dump all databases on a host, use `all` for the database name:

 ```yaml
-postgresql_databases:
-    - name: all
-mariadb_databases:
-    - name: all
-mysql_databases:
-    - name: all
-mongodb_databases:
-    - name: all
+hooks:
+    postgresql_databases:
+        - name: all
+    mysql_databases:
+        - name: all
+    mongodb_databases:
+        - name: all
 ```

 Note that you may need to use a `username` of the `postgres` superuser for

@@ -138,25 +120,20 @@ this to work with PostgreSQL.
 The SQLite hook in particular does not consider "all" a special database name.

-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-these options in the `hooks:` section of your configuration.
-
 <span class="minilink minilink-addedin">New in version 1.7.6</span> With
-PostgreSQL, MariaDB, and MySQL, you can optionally dump "all" databases to
-separate files instead of one combined dump file, allowing more convenient
-restores of individual databases. Enable this by specifying your desired
-database dump `format`:
+PostgreSQL and MySQL, you can optionally dump "all" databases to separate
+files instead of one combined dump file, allowing more convenient restores of
+individual databases. Enable this by specifying your desired database dump
+`format`:

 ```yaml
-postgresql_databases:
-    - name: all
-      format: custom
-mariadb_databases:
-    - name: all
-      format: sql
-mysql_databases:
-    - name: all
-      format: sql
+hooks:
+    postgresql_databases:
+        - name: all
+          format: custom
+    mysql_databases:
+        - name: all
+          format: sql
 ```

 ### Containers

@@ -166,17 +143,15 @@ problem—configure borgmatic to connect to the container's name on its exposed
 port. For instance:

 ```yaml
-postgresql_databases:
-    - name: users
-      hostname: your-database-container-name
-      port: 5433
-      username: postgres
-      password: trustsome1
+hooks:
+    postgresql_databases:
+        - name: users
+          hostname: your-database-container-name
+          port: 5433
+          username: postgres
+          password: trustsome1
 ```

-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-these options in the `hooks:` section of your configuration.
-
 But what if borgmatic is running on the host? You can still connect to a
 database container if its ports are properly exposed to the host. For
 instance, when running the database container, you can specify `--publish

@@ -204,37 +179,8 @@ hooks:
           password: trustsome1
 ```

-Alter the ports in these examples to suit your particular database system.
-
-Normally, borgmatic dumps a database by running a database dump command (e.g.
-`pg_dump`) on the host or wherever borgmatic is running, and this command
-connects to your containerized database via the given `hostname` and `port`.
-But if you don't have any database dump commands installed on your host and
-you'd rather use the commands inside your database container itself, borgmatic
-supports that too. Just configure borgmatic to `exec` into your container to
-run the dump command.
-
-For instance, if using Docker and PostgreSQL, something like this might work:
-
-```yaml
-hooks:
-    postgresql_databases:
-        - name: users
-          hostname: 127.0.0.1
-          port: 5433
-          username: postgres
-          password: trustsome1
-          pg_dump_command: docker exec my_pg_container pg_dump
-```
-
-... where `my_pg_container` is the name of your database container. In this
-example, you'd also need to set the `pg_restore_command` and `psql_command`
-options.
-
-Similar command override options are available for (some of) the other
-supported database types as well. See the [configuration
-reference](https://torsion.org/borgmatic/docs/reference/configuration/) for
-details.
+You can alter the ports in these examples to suit your particular database
+system.

 ### No source directories

@@ -250,7 +196,6 @@ it is a mandatory option there:

 ```yaml
 location:
     source_directories: []

 hooks:
     mysql_databases:
         - name: all

@@ -274,16 +219,10 @@ to prepare for this situation, it's a good idea to include borgmatic's own
 configuration files as part of your regular backups. That way, you can always
 bring back any missing configuration files in order to restore a database.

-<span class="minilink minilink-addedin">New in version 1.7.15</span> borgmatic
-automatically includes configuration files in your backup. See [the
-documentation on the `config bootstrap`
-action](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/#extract-the-configuration-files-used-to-create-an-archive)
-for more information.
-
 ## Supported databases

-As of now, borgmatic supports PostgreSQL, MariaDB, MySQL, MongoDB, and SQLite
+As of now, borgmatic supports PostgreSQL, MySQL/MariaDB, MongoDB, and SQLite
 databases directly. But see below about general-purpose preparation and
 cleanup hooks as a work-around with other database systems. Also, please [file
 a ticket](https://torsion.org/borgmatic/#issues) for additional database

@@ -292,10 +231,6 @@ systems that you'd like supported.

 ## Database restoration

-When you want to replace an existing database with its backed-up contents, you
-can restore it with borgmatic. Note that the database must already exist;
-borgmatic does not currently create a database upon restore.
-
 To restore a database dump from an archive, use the `borgmatic restore`
 action. But the first step is to figure out which archive to restore from. A
 good way to do that is to use the `rlist` action:

@@ -344,8 +279,7 @@ problem: the `restore` action figures out which repository to use.
 But if you have multiple repositories configured, then you'll need to specify
 the repository to use via the `--repository` flag. This can be done either
-with the repository's path or its label as configured in your borgmatic
-configuration file.
+with the repository's path or its label as configured in your borgmatic configuration file.

 ```bash
 borgmatic restore --repository repo.borg --archive host-2023-...

@@ -358,7 +292,7 @@ restore one of them, use the `--database` flag to select one or more
 databases. For instance:

 ```bash
-borgmatic restore --archive host-2023-... --database users --database orders
+borgmatic restore --archive host-2023-... --database users
 ```

 <span class="minilink minilink-addedin">New in version 1.7.6</span> You can

@@ -401,28 +335,6 @@ within the database dump:
 borgmatic restore --archive latest --database users --schema tentant1
 ```

-### Restore to an alternate host
-
-<span class="minilink minilink-addedin">New in version 1.7.15</span>
-A database dump can be restored to a host other than the one from which it was
-originally dumped. The connection parameters like the username, password, and
-port can also be changed. This can be done from the command line:
-
-```bash
-borgmatic restore --archive latest --database users --hostname database2.example.org --port 5433 --username postgres --password trustsome1
-```
-
-Or from the configuration file:
-
-```yaml
-postgresql_databases:
-    - name: users
-      hostname: database1.example.org
-      restore_hostname: database1.example.org
-      restore_port: 5433
-      restore_username: postgres
-      restore_password: trustsome1
-```
-
 ### Limitations

@@ -437,28 +349,19 @@ borgmatic's own configuration file. So include your configuration file in
 backups to avoid getting caught without a way to restore a database.
 3. borgmatic does not currently support backing up or restoring multiple
 databases that share the exact same name on different hosts.
-4. When database hooks are enabled, borgmatic instructs Borg to consume
-special files (via `--read-special`) to support database dump
-streaming—regardless of the value of your `read_special` configuration option.
-And because this can cause Borg to hang, borgmatic also automatically excludes
-special files (and symlinks to them) that Borg may get stuck on. Even so,
-there are still potential edge cases in which applications on your system
-create new special files *after* borgmatic constructs its exclude list,
-resulting in Borg hangs. If that occurs, you can resort to manually excluding
-those files. And if you explicitly set the `read-special` option to `true`,
-borgmatic will opt you out of the auto-exclude feature entirely, but will
-still instruct Borg to consume special files—you will just be on your own to
-exclude them. <span class="minilink minilink-addedin">Prior to version
-1.7.3</span>Special files were not auto-excluded, and you were responsible for
-excluding them yourself. Common directories to exclude are `/dev` and `/run`,
-but that may not be exhaustive.
-5. Database hooks also implicitly enable the `one_file_system` option, which
-means Borg won't cross filesystem boundaries when looking for files to backup.
-This is especially important when running borgmatic in a container, as
-container volumes are mounted as separate filesystems. One work-around is to
-explicitly add each mounted volume you'd like to backup to
-`source_directories` instead of relying on Borg to include them implicitly via
-a parent directory.
+4. Because database hooks implicitly enable the `read_special` configuration,
+any special files are excluded from backups (named pipes, block devices,
+character devices, and sockets) to prevent hanging. Try a command like `find
+/your/source/path -type b -or -type c -or -type p -or -type s` to find such
+files. Common directories to exclude are `/dev` and `/run`, but that may not
+be exhaustive. <span class="minilink minilink-addedin">New in version
+1.7.3</span> When database hooks are enabled, borgmatic automatically excludes
+special files (and symlinks to special files) that may cause Borg to hang, so
+generally you no longer need to manually exclude them. There are potential
+edge cases though in which applications on your system create new special files
+*after* borgmatic constructs its exclude list, resulting in Borg hangs. If that
+occurs, you can resort to the manual excludes described above. And to opt out
+of the auto-exclude feature entirely, explicitly set `read_special` to true.

 ### Manual restoration

@@ -492,9 +395,9 @@ dumps with any database system.
 ## Troubleshooting

-### Authentication errors
+### PostgreSQL/MySQL authentication errors

-With PostgreSQL, MariaDB, and MySQL, if you're getting authentication errors
+With PostgreSQL and MySQL/MariaDB, if you're getting authentication errors
 when borgmatic tries to connect to your database, a natural reaction is to
 increase your borgmatic verbosity with `--verbosity 2` and go looking in the
 logs. You'll notice though that your database password does not show up in the

@@ -508,26 +411,26 @@ authenticated. For instance, with PostgreSQL, check your
 [pg_hba.conf](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html)
 file for that configuration.

-Additionally, MariaDB or MySQL may be picking up some of your credentials from
-a defaults file like `~/mariadb.cnf` or `~/.my.cnf`. If that's the case, then
-it's possible MariaDB or MySQL end up using, say, a username from borgmatic's
-configuration and a password from `~/mariadb.cnf` or `~/.my.cnf`. This may
-result in authentication errors if this combination of credentials is not what
-you intend.
+Additionally, MySQL/MariaDB may be picking up some of your credentials from a
+defaults file like `~/.my.cnf`. If that's the case, then it's possible
+MySQL/MariaDB ends up using, say, a username from borgmatic's configuration
+and a password from `~/.my.cnf`. This may result in authentication errors if
+this combination of credentials is not what you intend.

-### MariaDB or MySQL table lock errors
+### MySQL table lock errors

-If you encounter table lock errors during a database dump with MariaDB or
-MySQL, you may need to [use a
-transaction](https://mariadb.com/docs/skysql-dbaas/ref/mdb/cli/mariadb-dump/single-transaction/).
+If you encounter table lock errors during a database dump with MySQL/MariaDB,
+you may need to [use a
+transaction](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html#option_mysqldump_single-transaction).
 You can add any additional flags to the `options:` in your database
-configuration. Here's an example with MariaDB:
+configuration. Here's an example:

 ```yaml
-mariadb_databases:
-    - name: posts
-      options: "--single-transaction --quick"
+hooks:
+    mysql_databases:
+        - name: posts
+          options: "--single-transaction --quick"
 ```

 ### borgmatic hangs during backup
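As a sketch of the work-around described in the main side's limitation 5, each mounted volume can be listed explicitly instead of relying on a parent directory; the paths here are hypothetical and the flat option layout is assumed:

```yaml
source_directories:
    - /etc
    # Each container volume listed explicitly, since one_file_system
    # keeps Borg from crossing filesystem boundaries.
    - /mnt/volume1
    - /mnt/volume2
```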
@ -1,86 +0,0 @@
---
title: How to customize warnings and errors
eleventyNavigation:
key: đź’Ą Customize warnings/errors
parent: How-to guides
order: 12
---
## When things go wrong
After Borg runs, it indicates whether it succeeded via its exit code, a
numeric ID indicating success, warning, or error. borgmatic consumes this exit
code to decide how to respond. Normally, a Borg error results in a borgmatic
error, while a Borg warning or success doesn't.
But if that default behavior isn't sufficient for your needs, you can
customize how borgmatic interprets [Borg's exit
codes](https://borgbackup.readthedocs.io/en/stable/usage/general.html#return-codes).
For instance, to elevate Borg warnings to errors, thereby causing borgmatic to
error on them, use the following borgmatic configuration:
```yaml
borg_exit_codes:
- exit_code: 1
treat_as: error
```
Be aware though that Borg exits with a warning code for a variety of benign
situations such as files changing while they're being read, so this example
may not meet your needs. Keep reading though for more granular exit code
configuration.
Here's an example that squashes Borg errors to warnings:
```yaml
borg_exit_codes:
- exit_code: 2
treat_as: warning
```
Be careful with this example though, because it prevents borgmatic from
erroring when Borg errors, which may not be desirable.
### More granular configuration
<span class="minilink minilink-addedin">New in Borg version 1.4</span> Borg
support for [more granular exit
codes](https://borgbackup.readthedocs.io/en/1.4-maint/usage/general.html#return-codes)
means that you can configure borgmatic to respond to specific Borg conditions.
See the full list of [Borg 1.4 error and warning exit
codes](https://borgbackup.readthedocs.io/en/1.4.0b1/internals/frontends.html#message-ids).
The `rc:` numeric value there tells you the exit code for each.
For instance, this borgmatic configuration elevates all Borg backup file
permission warnings (exit code `105`)—and only those warnings—to errors:
```yaml
borg_exit_codes:
- exit_code: 105
treat_as: error
```
The following configuration does that *and* elevates backup file not found
warnings (exit code `107`) to errors as well:
```yaml
borg_exit_codes:
- exit_code: 105
treat_as: error
- exit_code: 107
treat_as: error
```
If you don't know the exit code for a particular Borg error or warning you're
experiencing, you can usually find it in your borgmatic output when
`--verbosity 2` is enabled. For instance, here's a snippet of that output when
a backup file is not found:
```
/noexist: stat: [Errno 2] No such file or directory: '/noexist'
...
terminating with warning status, rc 107
```
So if you want to configure borgmatic to treat this as an error instead of a
warning, the exit status to use is `107`.
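The deleted page's examples each go in one direction; combining them, a hedged sketch that squashes generic Borg errors to warnings while still elevating the file-not-found warning (`107`) to an error could look like this:

```yaml
borg_exit_codes:
    # Treat generic Borg errors (exit code 2) as mere warnings.
    - exit_code: 2
      treat_as: warning
    # But treat "backup file not found" warnings (exit code 107) as errors.
    - exit_code: 107
      treat_as: error
```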
@@ -20,7 +20,7 @@ default action ordering was `prune`, `compact`, `create`, and `check`.
 ### A la carte actions

 If you find yourself wanting to customize the actions, you have some options.
-First, you can run borgmatic's `create`, `prune`, `compact`, or `check`
+First, you can run borgmatic's `prune`, `compact`, `create`, or `check`
 actions separately. For instance, the following optional actions are
 available (among others):

@@ -51,11 +46,6 @@ cron job), while only running expensive consistency checks with `check` on a
 much less frequent basis (e.g. with `borgmatic check` called from a separate
 cron job).

-<span class="minilink minilink-addedin">New in version 1.8.5</span> Instead of
-(or in addition to) specifying actions on the command-line, you can configure
-borgmatic to [skip particular
-actions](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#skipping-actions).
-
 ### Consistency check configuration

@@ -70,20 +65,19 @@ configure borgmatic to run repository checks only. Configure this in the
 `consistency` section of borgmatic configuration:

 ```yaml
-checks:
-    - name: repository
+consistency:
+    checks:
+        - name: repository
 ```

-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-this option in the `consistency:` section of your configuration.
-
 <span class="minilink minilink-addedin">Prior to version 1.6.2</span> The
 `checks` option was a plain list of strings without the `name:` part, and
 borgmatic ran each configured check every time checks were run. For example:

 ```yaml
-checks:
-    - repository
+consistency:
+    checks:
+        - repository
 ```

@@ -91,9 +85,8 @@ Here are the available checks from fastest to slowest:

 * `repository`: Checks the consistency of the repository itself.
 * `archives`: Checks all of the archives in the repository.
-* `extract`: Performs an extraction dry-run of the latest archive.
+* `extract`: Performs an extraction dry-run of the most recent archive.
 * `data`: Verifies the data integrity of all archives contents, decrypting and decompressing all data.
-* `spot`: Compares file counts and contents between your source files and the latest archive.

 Note that the `data` check is a more thorough version of the `archives` check,
 so enabling the `data` check implicitly enables the `archives` check as well.

@@ -103,89 +96,6 @@ documentation](https://borgbackup.readthedocs.io/en/stable/usage/check.html)
 for more information.

-### Spot check
-
-The various consistency checks all have trade-offs around speed and
-thoroughness, but most of them don't even look at your original source
-files—arguably one important way to ensure your backups contain the files
-you'll want to restore in the case of catastrophe (or just an accidentally
-deleted file). Because if something goes wrong with your source files, most
-consistency checks will still pass with flying colors and you won't discover
-there's a problem until you go to restore.
-
-<span class="minilink minilink-addedin">New in version 1.8.10</span> <span
-class="minilink minilink-addedin">Beta feature</span> That's where the spot
-check comes in. This check actually compares your source file counts and data
-against those in the latest archive, potentially catching problems like
-incorrect excludes, inadvertent deletes, files changed by malware, etc.
-
-However, because an exhaustive comparison of all source files against the
-latest archive might be too slow, the spot check supports *sampling* a
-percentage of your source files for the comparison, ensuring they fall within
-configured tolerances.
-
-Here's how it works. Start by installing the `xxhash` OS package if you don't
-already have it, so the spot check can run the `xxh64sum` command and
-efficiently hash files for comparison. Then add something like the following
-to your borgmatic configuration:
-
-```yaml
-checks:
-    - name: spot
-      count_tolerance_percentage: 10
-      data_sample_percentage: 1
-      data_tolerance_percentage: 0.5
-```
-
-The `count_tolerance_percentage` is the percentage delta between the source
-directories file count and the latest backup archive file count that is
-allowed before the entire consistency check fails. For instance, if the spot
-check runs and finds 100 source files on disk and 105 files in the latest
-archive, that would be within the configured 10% count tolerance and the check
-would succeed. But if there were 100 source files and 200 archive files, the
-check would fail. (100 source files and only 50 archive files would also
-fail.)
-
-The `data_sample_percentage` is the percentage of total files in the source
-directories to randomly sample and compare to their corresponding files in the
-latest backup archive. A higher value allows a more accurate check—and a
-slower one. The comparison is performed by hashing the selected source files
-and counting hashes that don't match the latest archive. For instance, if you
-have 1,000 source files and your sample percentage is 1%, then only 10 source
-files will be compared against the latest archive. These sampled files are
-selected randomly each time, so in effect the spot check is probabilistic.
-
-The `data_tolerance_percentage` is the percentage of total files in the source
-directories that can fail a spot check data comparison without failing the
-entire consistency check. The value must be lower than or equal to the
-`data_sample_percentage`, because `data_tolerance_percentage` only looks at
-at the sampled files as determined by `data_sample_percentage`.
-
-All three options are required when using the spot check. And because the
-check relies on these configured tolerances, it may not be a
-set-it-and-forget-it type of consistency check, at least until you get the
-tolerances dialed in so there are minimal false positives or negatives. It is
-recommended you run `borgmatic check` several times after configuring the spot
-check, tweaking your tolerances as needed. For certain workloads where your
-source files experience wild swings of file contents or counts, the spot check
-may not suitable at all.
-
-What if you add, delete, or change a bunch of your source files and you don't
-want the spot check to fail the next time it's run? Run `borgmatic create` to
-create a new backup, thereby allowing the next spot check to run against an
-archive that contains your recent changes.
-
-Because the spot check only looks at the most recent archive, you may not want
-to run it immediately after a `create` action (borgmatic's default behavior).
-Instead, it may make more sense to run the spot check on a separate schedule
-from `create`.
-
-As long as the spot check feature is in beta, it may be subject to breaking
-changes. But feel free to use it in production if you're okay with that
-caveat, and please [provide any
-feedback](https://torsion.org/borgmatic/#issues) you have on this feature.
-
 ### Check frequency

 <span class="minilink minilink-addedin">New in version 1.6.2</span> You can

@@ -193,29 +103,18 @@ optionally configure checks to run on a periodic basis rather than every time
 borgmatic runs checks. For instance:

 ```yaml
-checks:
-    - name: repository
-      frequency: 2 weeks
-    - name: archives
-      frequency: 1 month
+consistency:
+    checks:
+        - name: repository
+          frequency: 2 weeks
+        - name: archives
+          frequency: 1 month
 ```

-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-this option in the `consistency:` section of your configuration.
-
 This tells borgmatic to run the `repository` consistency check at most once
 every two weeks for a given repository and the `archives` check at most once a
-month. The `frequency` value is a number followed by a unit of time, e.g. `3
-days`, `1 week`, `2 months`, etc. The set of possible time units is as
-follows (singular or plural):
-
-* `second`
-* `minute`
-* `hour`
-* `day`
-* `week` (7 days)
-* `month` (30 days)
-* `year` (365 days)
+month. The `frequency` value is a number followed by a unit of time, e.g. "3
+days", "1 week", "2 months", etc.

 The `frequency` defaults to `always` for a check configured without a
 `frequency`, which means run this check every time checks run. But if you omit

@@ -237,10 +136,6 @@ though—or the most frequently configured check will apply.

 If you want to temporarily ignore your configured frequencies, you can invoke
 `borgmatic check --force` to run checks unconditionally.

-<span class="minilink minilink-addedin">New in version 1.8.6</span> `borgmatic
-check --force` runs `check` even if it's specified in the `skip_actions`
-option.
-
 ### Running only checks

@@ -264,31 +159,21 @@ location:

 If that's still too slow, you can disable consistency checks entirely,
 either for a single repository or for all repositories.

-<span class="minilink minilink-addedin">New in version 1.8.5</span> Disabling
-all consistency checks looks like this:
+Disabling all consistency checks looks like this:

 ```yaml
-skip_actions:
-    - check
+consistency:
+    checks:
+        - name: disabled
 ```

-<span class="minilink minilink-addedin">Prior to version 1.8.5</span> Use this
-configuration instead:
-
-```yaml
-checks:
-    - name: disabled
-```
-
-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-`checks:` in the `consistency:` section of your configuration.
-
-<span class="minilink minilink-addedin">Prior to version 1.6.2</span>
-`checks:` was a plain list of strings without the `name:` part. For instance:
+<span class="minilink minilink-addedin">Prior to version 1.6.2</span> `checks`
+was a plain list of strings without the `name:` part. For instance:

 ```yaml
-checks:
-    - disabled
+consistency:
+    checks:
+        - disabled
 ```

 If you have multiple repositories in your borgmatic configuration file,

@@ -296,11 +181,12 @@ you can keep running consistency checks, but only against a subset of the
 repositories:

 ```yaml
-check_repositories:
-    - path/of/repository_to_check.borg
+consistency:
+    check_repositories:
+        - path/of/repository_to_check.borg
 ```

-Finally, you can override your configuration file's consistency checks and
+Finally, you can override your configuration file's consistency checks, and
 run particular checks via the command-line. For instance:

 ```bash
@ -3,16 +3,11 @@ title: How to develop on borgmatic
eleventyNavigation: eleventyNavigation:
key: 🏗️ Develop on borgmatic key: 🏗️ Develop on borgmatic
parent: How-to guides parent: How-to guides
order: 14 order: 13
--- ---
## Source code ## Source code
To get set up to develop on borgmatic, first [`install To get set up to hack on borgmatic, first clone it via HTTPS or SSH:
pipx`](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#installation)
to make managing your borgmatic environment easy without impacting other
Python applications on your system.
Then, clone borgmatic via HTTPS or SSH:
```bash ```bash
git clone https://projects.torsion.org/borgmatic-collective/borgmatic.git git clone https://projects.torsion.org/borgmatic-collective/borgmatic.git
@ -24,42 +19,36 @@ Or:
git clone ssh://git@projects.torsion.org:3022/borgmatic-collective/borgmatic.git git clone ssh://git@projects.torsion.org:3022/borgmatic-collective/borgmatic.git
``` ```
Finally, install borgmatic Then, install borgmatic
"[editable](https://pip.pypa.io/en/stable/topics/local-project-installs/#editable-installs)" "[editable](https://pip.pypa.io/en/stable/cli/pip_install/#editable-installs)"
so that you can run borgmatic actions during development to make sure your so that you can run borgmatic commands while you're hacking on them to
changes work: make sure your changes work.
```bash ```bash
cd borgmatic cd borgmatic
pipx ensurepath pip3 install --user --editable .
pipx install --editable .
``` ```
Or to work on the [Apprise Note that this will typically install the borgmatic commands into
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#apprise-hook), `~/.local/bin`, which may or may not be on your PATH. There are other ways to
change that last line to: install borgmatic editable as well, for instance into the system Python
install (so without `--user`, as root), or even into a
```bash [virtualenv](https://virtualenv.pypa.io/en/stable/). How or where you install
pipx install --editable .[Apprise] borgmatic is up to you, but generally an editable install makes development
``` and testing easier.
To get oriented with the borgmatic source code, have a look at the [source
code reference](https://torsion.org/borgmatic/docs/reference/source-code/).
## Automated tests ## Automated tests
Assuming you've cloned the borgmatic source code as described above and you're Assuming you've cloned the borgmatic source code as described above, and
in the `borgmatic/` working copy, install tox, which is used for setting up you're in the `borgmatic/` working copy, install tox, which is used for
testing environments. You can either install a system package of tox (likely setting up testing environments:
called `tox` or `python-tox`) or you can install tox with pipx:
```bash ```bash
pipx install tox pip3 install --user tox
``` ```
Finally, to actually run tests, run tox from inside the borgmatic Finally, to actually run tests, run:
sourcedirectory:
```bash ```bash
tox tox
with Borg and supported databases for a few representative scenarios. These
tests don't run by default when running `tox`, because they're relatively slow
and depend on containers for runtime dependencies. These tests do run on the
continuous integration (CI) server, and running them on your developer machine
is the closest thing to dev-CI parity.
If you would like to run the full test suite, first install Docker (or Podman;
see below) and [Docker Compose](https://docs.docker.com/compose/install/).
Then run:

```bash
scripts/run-end-to-end-tests
```
This script assumes you have permission to run `docker`. If you don't, then

the following deviations from it:

  separate from their contents.
* Within multiline constructs, use standard four-space indentation. Don't align
  indentation with an opening delimiter.
* In general, spell out words in variable names instead of shortening them.
So, think `index` instead of `idx`. There are some notable exceptions to
this though (like `config`).
borgmatic code uses the [Black](https://black.readthedocs.io/en/stable/) code
formatter, the [Flake8](http://flake8.pycqa.org/en/latest/) code checker, and
the [isort](https://github.com/timothycrosley/isort) import orderer, so
certain code style requirements will be enforced when running automated tests.
See the Black, Flake8, and isort documentation for more information.
## Continuous integration

Each commit to
[main](https://projects.torsion.org/borgmatic-collective/borgmatic/branches)
triggers [a continuous integration
build](https://projects.torsion.org/borgmatic-collective/borgmatic/actions)
which runs the test suite and updates
[documentation](https://torsion.org/borgmatic/). These builds are also linked
from the [commits for the main
branch](https://projects.torsion.org/borgmatic-collective/borgmatic/commits/branch/main).

## Documentation development
entire contents of the archive to the current directory, so make sure you're
in the right place before running the command—or see below about the
`--destination` flag.
## Repository selection

If you have a single repository in your borgmatic configuration file(s), no
everything from an archive. To do that, tack on one or more `--path` values.
For instance:

```bash
borgmatic extract --archive latest --path path/1 --path path/2
```

Note that the specified restore paths should not have a leading slash. Like a
When you're all done exploring your files, unmount your mount point. No

```bash
borgmatic umount --mount-point /mnt
```
## Extract the configuration files used to create an archive
<span class="minilink minilink-addedin">New in version 1.7.15</span> borgmatic
automatically stores all the configuration files used to create an archive
inside the archive itself. They are stored in the archive using their full
paths from the machine being backed up. This is useful in cases where you've
lost a configuration file or you want to see what configurations were used to
create a particular archive.
To extract the configuration files from an archive, use the `config bootstrap`
action. For example:
```bash
borgmatic config bootstrap --repository repo.borg --destination /tmp
```
This extracts the configuration file from the latest archive in the repository
`repo.borg` to `/tmp/etc/borgmatic/config.yaml`, assuming that the only
configuration file used to create this archive was located at
`/etc/borgmatic/config.yaml` when the archive was created.
Note that to run the `config bootstrap` action, you don't need to have a
borgmatic configuration file. You only need to specify the repository to use
via the `--repository` flag; borgmatic will figure out the rest.
If a destination directory is not specified, the configuration files will be
extracted to their original locations, silently *overwriting* any configuration
files that may already exist. For example, if a configuration file was located
at `/etc/borgmatic/config.yaml` when the archive was created, it will be
extracted to `/etc/borgmatic/config.yaml` too.
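For instance, here's a sketch of such an in-place restore, using only the
flags described above (make sure you actually want existing files
overwritten):

```bash
# Extract configuration files from the latest archive back to their
# original paths, overwriting whatever is already there.
borgmatic config bootstrap --repository repo.borg
```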
If you want to extract the configuration file from a specific archive, use the
`--archive` flag:
```bash
borgmatic config bootstrap --repository repo.borg --archive host-2023-01-02T04:06:07.080910 --destination /tmp
```
See the output of `config bootstrap --help` for additional flags you may need
for bootstrapping.
<span class="minilink minilink-addedin">New in version 1.8.1</span> Set the
`store_config_files` option to `false` to disable the automatic backup of
borgmatic configuration files, for instance if they contain sensitive
information you don't want to store even inside your encrypted backups. If you
do this though, the `config bootstrap` action will no longer work.
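For example, a minimal sketch of disabling this behavior:

```yaml
# Skip storing borgmatic configuration files inside each backup archive.
# Note that this also breaks the "config bootstrap" action for new archives.
store_config_files: false
```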
<span class="minilink minilink-addedin">New in version 1.8.7</span> Included
configuration files are stored in each backup archive. This means that the
`config bootstrap` action not only extracts the top-level configuration files
but also the includes they depend upon.
---
eleventyNavigation:
  key: How-to guides
  order: 1
permalink: false
---
with `--format`. Refer to the [borg list --format
documentation](https://borgbackup.readthedocs.io/en/stable/usage/list.html#the-format-specifier-syntax)
for available values.
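As a sketch, here's what that might look like with a few common Borg format
placeholders (consult the Borg documentation linked above for the exact names
available to you):

```bash
borgmatic list --archive latest --format "{mode} {user} {size} {path}{NL}"
```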
(No borgmatic `list` or `info` actions? Upgrade borgmatic!)
<span class="minilink minilink-addedin">New in borgmatic version 1.7.0</span>
There are also `rlist` and `rinfo` actions for displaying repository

archive, complete with file sizes.
## Logging

By default, borgmatic logs to the console. You can enable simultaneous syslog
logging and customize its log level with the `--syslog-verbosity` flag, which
is independent from the console logging `--verbosity` flag described above.
For instance, to enable syslog logging, run:

```bash
borgmatic --syslog-verbosity 1
```

To increase syslog logging further to include debugging information, run:

```bash
borgmatic --syslog-verbosity 2
```

See above for further details about the verbosity levels.

Where these logs show up depends on your particular system. If you're using
systemd, try running `journalctl -xe`. Otherwise, try viewing
`/var/log/syslog` or similar.

<span class="minilink minilink-addedin">Prior to version 1.8.3</span>
borgmatic logged to syslog by default whenever run at a non-interactive
console.
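If you're on systemd and want to see just borgmatic's entries, something like
the following may work, though the `--grep` flag is an assumption here: it
requires a reasonably recent systemd built with pattern matching support:

```bash
# Show journal entries mentioning borgmatic.
journalctl --grep borgmatic
```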
### Rate limiting

If you are using rsyslog or systemd's journal, be aware that by default they
Note that if you use the `--log-file` flag, you are responsible for rotating
the log file so it doesn't grow too large, for example with
[logrotate](https://wiki.archlinux.org/index.php/Logrotate).

You can use the `--log-file-verbosity` flag to customize the log file's log level:

```bash
borgmatic --log-file /path/to/file.log --log-file-verbosity 2
```
See the [Python logging
documentation](https://docs.python.org/3/library/logging.html#logrecord-attributes)
for additional placeholders.

Note that this `--log-file-format` flag only applies to the specified
`--log-file` and not to syslog or other logging.
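Putting that together, here's a sketch using a few standard Python log record
attributes from the documentation linked above:

```bash
borgmatic --log-file /path/to/file.log --log-file-format "[{asctime}] {levelname}: {message}"
```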
it. borgmatic supports this in its configuration by specifying multiple backup
repositories. Here's an example:

```yaml
# List of source directories to backup.
source_directories:
    - /home
    - /etc

# Paths of local or remote repositories to backup to.
repositories:
    - path: ssh://k8pDxu32@k8pDxu32.repo.borgbase.com/./repo
    - path: /var/lib/backups/local.borg
```

<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
these options in the `location:` section of your configuration.

<span class="minilink minilink-addedin">Prior to version 1.7.10</span> Omit
the `path:` portion of the `repositories` list.
and borgmatic uses that format to name any new archive it creates. For
instance:

```yaml
archive_name_format: home-directories-{now}
```

<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `storage:` section of your configuration.

This example means that when borgmatic creates an archive, its name will start
with the string `home-directories-` and end with a timestamp for its creation
time. If `archive_name_format` is unspecified, the default is
`{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}`, meaning your system hostname plus a
timestamp in a particular format.
to filter archives when running supported actions.

For instance, let's say that you have this in your configuration:

```yaml
archive_name_format: {hostname}-user-data-{now}
```

<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `storage:` section of your configuration.

borgmatic considers `{now}` an ephemeral data placeholder that will probably
change per archive, while `{hostname}` won't. So it turns the example value
into `{hostname}-user-data-*` and applies it to filter down the set of
If this behavior isn't quite smart enough for your needs, you can use the

filtering archives. For example:

```yaml
archive_name_format: {hostname}-user-data-{now}
match_archives: sh:myhost-user-data-*
```
For Borg 1.x, use a shell pattern for the `match_archives` value and see the

Some borgmatic command-line actions also have a `--match-archives` flag that
overrides both the auto-matching behavior and the `match_archives`
configuration option.
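For example, here's a sketch of listing only the archives matching a Borg 1.x
shell pattern:

```bash
borgmatic list --match-archives 'sh:myhost-user-data-*'
```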
<span class="minilink minilink-addedin">Prior to version 1.7.11</span> The way
to limit the archives used for the `prune` action was a `prefix` option in the
`retention` section for matching against the start of archive names. And the
option for limiting the archives used for the `check` action was a separate
`prefix` in the `consistency` section. Both of these options are deprecated in
in newer versions of borgmatic.

## Configuration includes

Once you have multiple different configuration files, you might want to share
common configuration options across these files without having to copy and paste
them. To achieve this, you can put fragments of common configuration options
into a file and then include or inline that file into one or more borgmatic
configuration files.

Let's say that you want to include common consistency check configuration across all
of your configuration files. You could do that in each configuration file with
the following:
```yaml
repositories:
    - path: repo.borg

checks:
    !include /etc/borgmatic/common_checks.yaml
```

<span class="minilink minilink-addedin">Prior to version 1.8.0</span> These
options were organized into sections like `location:` and `consistency:`.

The contents of `common_checks.yaml` could be:

```yaml
- name: repository
  frequency: 3 weeks
- name: archives
  frequency: 2 weeks
```
To prevent borgmatic from trying to load these configuration fragments by

When a configuration include is a relative path, borgmatic loads it from either
the current working directory or from the directory containing the file doing
the including.

Note that this form of include must be a value rather than an option name. For
example, this will not work:

```yaml
repositories:
    - path: repo.borg

# Don't do this. It won't work!
!include /etc/borgmatic/common_checks.yaml
```

But if you do want to merge in an option name *and* its values, keep reading!
## Include merging

If you need to get even fancier and merge in common configuration options, you
can perform a YAML merge of included configuration using the YAML `<<` key.
For instance, here's an example of a main configuration file that pulls in
retention and consistency check options via a single include:

```yaml
repositories:
    - path: repo.borg

<<: !include /etc/borgmatic/common.yaml
```
This is what `common.yaml` might look like:

```yaml
keep_hourly: 24
keep_daily: 7

checks:
    - name: repository
      frequency: 3 weeks
    - name: archives
      frequency: 2 weeks
```

<span class="minilink minilink-addedin">Prior to version 1.8.0</span> These
options were organized into sections like `retention:` and `consistency:`.

Once this include gets merged in, the resulting configuration has all of the
options from the original configuration file *and* the options from the
include.

Note that this `<<` include merging syntax is only for merging in mappings
(configuration options and their values). If you'd like to include a single
value directly, please see above about standard includes.
### Multiple merge includes

borgmatic has a limitation preventing multiple `<<` include merges per file or
option value. This means you can do a single `<<` merge at the global level,
another `<<` within each nested option value, etc. (This is a YAML
limitation.) For instance:
```yaml
repositories:
- path: repo.borg
# This won't work! You can't do multiple merges like this at the same level.
<<: !include common1.yaml
<<: !include common2.yaml
```
But read on for a way around this.
<span class="minilink minilink-addedin">New in version 1.8.1</span> You can
include and merge multiple configuration files all at once. For instance:
```yaml
repositories:
- path: repo.borg
<<: !include [common1.yaml, common2.yaml, common3.yaml]
```
This merges in each included configuration file in turn, such that later files
replace the options in earlier ones.
Here's another way to do the same thing:
```yaml
repositories:
- path: repo.borg
<<: !include
- common1.yaml
- common2.yaml
- common3.yaml
```
### Deep merge

at all levels in the two configuration files. This allows you to include
common configuration—up to full borgmatic configuration files—while overriding
only the parts you want to customize.

For instance, here's an example of a main configuration file that pulls in
options via an include and then overrides one of them locally:

```yaml
<<: !include /etc/borgmatic/common.yaml

constants:
    base_directory: /opt

repositories:
    - path: repo.borg
```
This is what `common.yaml` might look like:

```yaml
constants:
    app_name: myapp
    base_directory: /var/lib
```

Once this include gets merged in, the resulting configuration would have an
`app_name` value of `myapp` and an overridden `base_directory` value of
`/opt`.

When there's an option collision between the local file and the merged
include, the local file's option takes precedence.
configuration file, you can omit it with an `!omit` tag. For instance:

```yaml
<<: !include /etc/borgmatic/common.yaml

source_directories:
    - !omit /home
    - /var
```

And `common.yaml` like this:

```yaml
source_directories:
    - /home
    - /etc
```

<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put the
`source_directories` option in the `location:` section of your configuration.

Once this include gets merged in, the resulting configuration will have a
`source_directories` value of `/etc` and `/var`—with `/home` omitted.
an example of some things not to do:

```yaml
<<: !include /etc/borgmatic/common.yaml

source_directories:
    # Do not do this! It will not work. "!omit" belongs before "/home".
    - /home !omit

# Do not do this either! "!omit" only works on scalar list items.
repositories: !omit

# Also do not do this for the same reason! This is a list item, but it's
# not a scalar.
- !omit path: repo.borg
```
Additionally, the `!omit` tag only works in a configuration file that also

includes.

### Shallow merge

Even though deep merging is generally pretty handy for included files,
sometimes you want specific options in the local file to take precedence over
included options—without any merging occurring for them.
<span class="minilink minilink-addedin">New in version 1.7.12</span> That's
where the `!retain` tag comes in. Whenever you're merging an included file
```yaml
<<: !include /etc/borgmatic/common.yaml

repositories:
    - path: repo.borg

checks: !retain
    - name: repository
```

And `common.yaml` like this:

```yaml
repositories:
    - path: common.borg

checks:
    - name: archives
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> These
options were organized into sections like `location:` and `consistency:`.
Once this include gets merged in, the resulting configuration will have a
`checks` value with a name of `repository` and no other values. That's because
the `!retain` tag says to retain the local version of `checks` and ignore any
values coming in from the include. But because the `repositories` list doesn't
have a `!retain` tag, it still gets merged together to contain both
`common.borg` and `repo.borg`.

The `!retain` tag can only be placed on mappings (keys/values) and lists, and
it goes right after the name of the option (and its colon) on the same line.
The effects of `!retain` are recursive, meaning that if you place a `!retain`
tag on a top-level mapping, even deeply nested values within it will not be
merged.

Additionally, the `!retain` tag only works in a configuration file that also
performs a merge include with `<<: !include`. It doesn't make sense within,
Whatever the reason, you can override borgmatic configuration options at the
command-line via the `--override` flag. Here's an example:

```bash
borgmatic create --override remote_path=/usr/local/bin/borg1
```

What this does is load your configuration files and for each one, disregard
the configured value for the `remote_path` option and use the value of
`/usr/local/bin/borg1` instead.

You can even override nested values or multiple values at once. For instance:

```bash
borgmatic create --override parent_option.option1=value1 --override parent_option.option2=value2
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Don't
forget to specify the section that an option is in. That looks like a prefix
on the option name, e.g. `location.repositories`.

Note that each value is parsed as an actual YAML string, so you can set list
values by using brackets. For instance:

```bash
borgmatic create --override repositories=[test1.borg,test2.borg]
```

Or a single list element:

```bash
borgmatic create --override repositories=[/root/test.borg]
```

Or a single list element that is a key/value pair:

```bash
borgmatic create --override repositories="[{path: test.borg, label: test}]"
```

If your override value contains characters like colons or spaces, then you'll
need to use quotes for it to parse correctly.

Another example:

```bash
borgmatic create --override repositories="['user@server:test.borg']"
```

There is not currently a way to override a single element of a list without
replacing the whole list.
Using the `[ ]` list syntax is required when overriding an option of the list
type (like `location.repositories`). See the [configuration
reference](https://torsion.org/borgmatic/docs/reference/configuration/) for
which options are list types. (YAML list values look like `- this` with an
indentation and a leading dash.)

An alternate to command-line overrides is passing in your values via
[environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
## Constant interpolation

tool is borgmatic's support for defining custom constants. This is similar to
the [variable interpolation
feature](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation)
for command hooks, but the constants feature lets you substitute your own
custom values into any option values in the entire configuration file.
Here's an example usage:

```yaml
constants:
    user: foo
    archive_prefix: bar

source_directories:
    - /home/{user}/.config
    - /home/{user}/.ssh

...

archive_name_format: '{archive_prefix}-{now}'
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Don't
forget to specify the section (like `location:` or `storage:`) that any option
is in.
In this example, when borgmatic runs, all instances of `{user}` get replaced
with `foo` and all instances of `{archive_prefix}` get replaced with `bar`.
And `{now}` doesn't get replaced with anything, but gets passed directly to
Borg, which has its own
[placeholders](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-help-placeholders)
using the same syntax as borgmatic constants. So borgmatic options like
`archive_name_format` that get passed directly to Borg can use either Borg
placeholders or borgmatic constants or both!

After substitution, the logical result looks something like this:

```yaml
source_directories:
    - /home/foo/.config
    - /home/foo/.ssh

...

archive_name_format: 'bar-{now}'
```
Note that if you'd like to interpolate a constant into the beginning of a
value, you'll need to quote it. For instance, this won't work:
```yaml
source_directories:
- {my_home_directory}/.config # This will error!
```
Instead, do this:
```yaml
source_directories:
- "{my_home_directory}/.config"
```
<span class="minilink minilink-addedin">New in version 1.8.5</span> Constants
work across includes, meaning you can define a constant and then include a
separate configuration file that uses that constant.
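As a minimal sketch of that (with hypothetical file paths), a main
configuration file might define the constant and then pull in the include:

```yaml
# In /etc/borgmatic/config.yaml: define a constant, then merge in an include
# that's free to use it.
constants:
    user: foo

<<: !include /etc/borgmatic/common.yaml
```

And then `common.yaml` could use `{user}` in its own option values, for
instance in a `source_directories` entry like `/home/{user}`.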
An alternate to constants is passing in your values via [environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
below for how to configure this.

### Third-party monitoring services

borgmatic integrates with these monitoring services and libraries, pinging
them as backups happen:

* [Healthchecks](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#healthchecks-hook)
* [Cronitor](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronitor-hook)
* [Cronhub](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronhub-hook)
* [PagerDuty](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook)
* [ntfy](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook)
* [Grafana Loki](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#loki-hook)
* [Apprise](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#apprise-hook)

The idea is that you'll receive an alert when something goes wrong or when the
service doesn't hear from borgmatic for a configured interval (if supported).
See the documentation links above for configuration information.

While these services and libraries offer different features, you probably only
need to use one of them at most.

### Third-party monitoring software
notifications or take other actions, so you can get alerted as soon as
something goes wrong. Here's a not-so-useful example:

```yaml
on_error:
    - echo "Error while creating a backup or running a backup hook."
```

<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.

The `on_error` hook supports interpolating particular runtime variables into
the hook command. Here's an example that assumes you provide a separate shell
script to handle the alerting:

```yaml
on_error:
    - send-text-message.sh {configuration_filename} {repository}
```
In this example, when the error occurs, borgmatic interpolates runtime values
into the hook command: the borgmatic configuration filename and the path of
the repository. Here's the full set of supported variables you can use here:

* `configuration_filename`: borgmatic configuration filename in which the
  occurred without running a command)
Note that borgmatic runs the `on_error` hooks only for `create`, `prune`,
`compact`, or `check` actions/hooks in which an error occurs and not other
actions. borgmatic does not run `on_error` hooks if an error occurs within a
`before_everything` or `after_everything` hook. For more about hooks, see the
[borgmatic hooks
documentation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/),
especially the security information.
<span class="minilink minilink-addedin">New in version 1.8.7</span> borgmatic
automatically escapes these interpolated values to prevent shell injection
attacks. One implication of this change is that you shouldn't wrap the
interpolated values in your own quotes, as that will interfere with the
quoting performed by borgmatic and result in your command receiving incorrect
arguments. For instance, this won't work:
```yaml
on_error:
# Don't do this! It won't work, as the {error} value is already quoted.
- send-text-message.sh "Uh oh: {error}"
```
Do this instead:
```yaml
on_error:
- send-text-message.sh {error}
```
## Healthchecks hook

[Healthchecks](https://healthchecks.io/) is a service that provides "instant
alerts when your cron jobs fail silently," and borgmatic has built-in
integration with it. Once you create a Healthchecks account and project on
their site, all you need to do is configure borgmatic with the unique "Ping
URL" for your project. Here's an example:

```yaml
healthchecks:
    ping_url: https://hc-ping.com/addffa72-da17-40ae-be9c-ff591afb942a
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.

With this configuration, borgmatic pings your Healthchecks project when a
backup begins, ends, or errors, but only when any of the `create`, `prune`,
`compact`, or `check` actions are run.

Then, if the actions complete successfully, borgmatic notifies Healthchecks of
the success and includes borgmatic logs in the payload data sent to
Healthchecks. This means that borgmatic logs show up in the Healthchecks UI,
although be aware that Healthchecks currently has a 100-kilobyte limit for the
logs in each ping.

If an error occurs during any action or hook, borgmatic notifies Healthchecks,
also tacking on logs including the error itself. But the logs are only
included for errors that occur when a `create`, `prune`, `compact`, or `check`
action is run.

You can customize the verbosity of the logs that are sent to Healthchecks with
borgmatic's `--monitoring-verbosity` flag. The `--list` and `--stats` flags
or it doesn't hear from borgmatic for a certain period of time.
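As a sketch, a backup run that ships more detailed logs along with its
monitoring pings might look like this:

```bash
borgmatic create --verbosity 1 --monitoring-verbosity 1 --list --stats
```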
## Cronitor hook

[Cronitor](https://cronitor.io/) provides "Cron monitoring and uptime healthchecks
for websites, services and APIs," and borgmatic has built-in
integration with it. Once you create a Cronitor account and cron job monitor on
their site, all you need to do is configure borgmatic with the unique "Ping
API URL" for your monitor. Here's an example:

```yaml
cronitor:
    ping_url: https://cronitor.link/d3x0c1
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.

With this configuration, borgmatic pings your Cronitor monitor when a backup
begins, ends, or errors, but only when any of the `create`, `prune`,
`compact`, or `check` actions are run. Then, if the actions complete
successfully or error, borgmatic notifies Cronitor accordingly.

You can configure Cronitor to notify you by a [variety of
mechanisms](https://cronitor.io/docs/cron-job-notifications) when backups fail

or it doesn't hear from borgmatic for a certain period of time.
## Cronhub hook

[Cronhub](https://cronhub.io/) provides "instant alerts when any of your
background jobs fail silently or run longer than expected," and borgmatic has
built-in integration with it. Once you create a Cronhub account and monitor on
their site, all you need to do is configure borgmatic with the unique "Ping
URL" for your monitor. Here's an example:

```yaml
cronhub:
    ping_url: https://cronhub.io/start/1f5e3410-254c-11e8-b61d-55875966d031
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.

With this configuration, borgmatic pings your Cronhub monitor when a backup
begins, ends, or errors, but only when any of the `create`, `prune`,
`compact`, or `check` actions are run. Then, if the actions complete
successfully or error, borgmatic notifies Cronhub accordingly.

Note that even though you configure borgmatic with the "start" variant of the
ping URL, borgmatic substitutes the correct state into the URL when pinging
Here's an example:

```yaml
pagerduty:
    integration_key: a177cad45bd374409f78906a810a3074
```

<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.

With this configuration, borgmatic creates a PagerDuty event for your service
whenever backups fail, but only when any of the `create`, `prune`, `compact`,
or `check` actions are run. Note that borgmatic does not contact PagerDuty
when a backup starts or when it ends without error.

You can configure PagerDuty to notify you by a [variety of
mechanisms](https://support.pagerduty.com/docs/notifications) when backups
us](https://torsion.org/borgmatic/#support-and-contributing).

## ntfy hook

<span class="minilink minilink-addedin">New in version 1.6.3</span>
[ntfy](https://ntfy.sh) is a free, simple service (either hosted or
self-hosted) which offers simple pub/sub push notifications to multiple
platforms including [web](https://ntfy.sh/stats),
[Android](https://play.google.com/store/apps/details?id=io.heckel.ntfy) and
[iOS](https://apps.apple.com/us/app/ntfy/id1625396347).

Since push notifications for regular events might soon become quite annoying,
this hook only fires on errors by default in order to instantly alert you
to issues. The `states` list can override this. Each state can have its own
custom messages, priorities and tags or, if none are provided, will use the
default.

An example configuration is shown here with all the available options,
including [priorities](https://ntfy.sh/docs/publish/#message-priority) and
[tags](https://ntfy.sh/docs/publish/#tags-emojis):
```yaml
ntfy:
    topic: my-unique-topic
    server: https://ntfy.my-domain.com
    username: myuser
    password: secret

    start:
        title: A borgmatic backup started
        message: Watch this space...
        tags: borgmatic
        priority: min
    finish:
        title: A borgmatic backup completed successfully
        message: Nice!
        tags: borgmatic,+1
        priority: min
    fail:
        title: A borgmatic backup failed
        message: You should probably fix it
        tags: borgmatic,-1,skull
        priority: max

    states:
        - start
        - finish
        - fail
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
the `ntfy:` option in the `hooks:` section of your configuration.
<span class="minilink minilink-addedin">New in version 1.8.9</span> Instead of
`username`/`password`, you can specify an [ntfy access
token](https://docs.ntfy.sh/config/#access-tokens):
```yaml
ntfy:
topic: my-unique-topic
server: https://ntfy.my-domain.com
access_token: tk_AgQdq7mVBoFD37zQVN29RhuMzNIz2
```
## Loki hook
<span class="minilink minilink-addedin">New in version 1.8.3</span> [Grafana
Loki](https://grafana.com/oss/loki/) is a "horizontally scalable, highly
available, multi-tenant log aggregation system inspired by Prometheus."
borgmatic has built-in integration with Loki, sending both backup status and
borgmatic logs.
You can configure borgmatic to use either a [self-hosted Loki
instance](https://grafana.com/docs/loki/latest/installation/) or [a Grafana
Cloud account](https://grafana.com/auth/sign-up/create-user). Start by setting
your Loki API push URL. Here's an example:
```yaml
loki:
url: http://localhost:3100/loki/api/v1/push
```
With this configuration, borgmatic sends its logs to your Loki instance as any
of the `create`, `prune`, `compact`, or `check` actions are run. Then, after
the actions complete, borgmatic notifies Loki of success or failure.
This hook supports sending arbitrary labels to Loki. For instance:
```yaml
loki:
url: http://localhost:3100/loki/api/v1/push
labels:
app: borgmatic
hostname: example.org
```
There are also a few placeholders you can optionally use as label values:
* `__config`: name of the borgmatic configuration file
* `__config_path`: full path of the borgmatic configuration file
* `__hostname`: the local machine hostname
These placeholders are only substituted for the whole label value, not
interpolated into a larger string. For instance:
```yaml
loki:
url: http://localhost:3100/loki/api/v1/push
labels:
app: borgmatic
config: __config
hostname: __hostname
```
Also check out this [Loki dashboard for
borgmatic](https://grafana.com/grafana/dashboards/20736-borgmatic-logs/) if
you'd like to see your backup logs and statistics in one place.
## Apprise hook
<span class="minilink minilink-addedin">New in version 1.8.4</span>
[Apprise](https://github.com/caronc/apprise/wiki) is a local notification library
that "allows you to send a notification to almost all of the most popular
[notification services](https://github.com/caronc/apprise/wiki) available to
us today such as: Telegram, Discord, Slack, Amazon SNS, Gotify, etc."
Depending on how you installed borgmatic, it may not have come with Apprise.
For instance, if you originally [installed borgmatic with
pipx](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#installation),
run the following to install Apprise so borgmatic can use it:
```bash
sudo pipx uninstall borgmatic
sudo pipx install borgmatic[Apprise]
```
Omit `sudo` if borgmatic is installed as a non-root user.
Once Apprise is installed, configure borgmatic to notify one or more [Apprise
services](https://github.com/caronc/apprise/wiki). For example:
```yaml
apprise:
services:
- url: gotify://hostname/token
label: gotify
- url: mastodons://access_key@hostname/@user
label: mastodon
states:
- start
- finish
- fail
```
With this configuration, borgmatic pings each of the configured Apprise
services when a backup begins, ends, or errors, but only when any of the
`create`, `prune`, `compact`, or `check` actions are run. (By default, if
`states` is not specified, Apprise services are only pinged on error.)
You can optionally customize the contents of the default messages sent to
these services:
```yaml
apprise:
services:
- url: gotify://hostname/token
label: gotify
start:
title: Ping!
body: Starting backup process.
finish:
title: Ping!
body: Backups successfully made.
fail:
title: Ping!
body: Your backups have failed.
states:
- start
- finish
- fail
```
<span class="minilink minilink-addedin">New in version 1.8.9</span> borgmatic
logs are automatically included in the body data sent to your Apprise services
when a backup finishes or fails.
You can customize the verbosity of the logs that are sent with borgmatic's
`--monitoring-verbosity` flag. The `--list` and `--stats` flags may also be of
use. See `borgmatic create --help` for more information.
If you don't want any logs sent, you can disable this feature by setting
`send_logs` to `false`:
```yaml
apprise:
services:
- url: gotify://hostname/token
label: gotify
send_logs: false
```
Or to limit the size (in bytes) of logs sent to Apprise services:
```yaml
apprise:
services:
- url: gotify://hostname/token
label: gotify
logs_size_limit: 500
```
This may be necessary for some services that reject large requests.
See the [configuration
reference](https://torsion.org/borgmatic/docs/reference/configuration/) for
details.
## Scripting borgmatic

To consume the output of borgmatic in other software, you can include an
optional `--json` flag with certain actions to get their output formatted as
JSON.

Note that when you specify the `--json` flag, Borg's other non-JSON output is
suppressed so as not to interfere with the captured JSON. Also note that JSON
output only shows up at the console and not in syslog.
### Latest backups
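As one example of scripting against this JSON, here's a sketch that pulls out
the name of the most recent archive with `jq`. The exact JSON structure can
vary across borgmatic and Borg versions, so treat the path expression as an
assumption to verify against your own output:

```bash
# List archives as JSON and extract the name of the newest archive in the
# first configured repository (assumes jq is installed).
sudo borgmatic list --json | jq --raw-output '.[0].archives[-1].name'
```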
## Providing passwords and secrets to borgmatic

If you want to use a Borg repository passphrase or database passwords with
borgmatic, you can set them directly in your borgmatic configuration file,
treating those secrets like any other option value. For instance, you can
specify your Borg passphrase with:

```yaml
encryption_passphrase: yourpassphrase
```

But if you'd rather store them outside of borgmatic, whether for convenience
or security reasons, read on.
### Delegating to another application

borgmatic supports calling another application such as a password manager to
obtain the Borg passphrase for a repository.

For example, to ask the *Pass* password manager to provide the passphrase:

```yaml
encryption_passcommand: pass path/to/borg-repokey
```
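Whatever command you configure, its standard output is used as the
passphrase, so it's worth verifying that it prints exactly what you expect. A
quick check, run as the same user borgmatic runs as (the path here is just
the example from above):

```bash
# The passcommand should print the passphrase (and nothing else) on stdout.
pass path/to/borg-repokey
```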
### Environment variable interpolation

<span class="minilink minilink-addedin">New in version 1.6.4</span> borgmatic
supports interpolating arbitrary environment variables directly into option
values in your configuration file. That means you can pull your repository
passphrase, your database passwords, or any other option values from
environment variables. For instance:

```yaml
encryption_passphrase: ${YOUR_PASSPHRASE}
```
<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put This uses the `MY_PASSPHRASE` environment variable as your encryption
this option in the `storage:` section of your configuration. passphrase. Note that the `{` `}` brackets are required. `$MY_PASSPHRASE` by
This uses the `YOUR_PASSPHRASE` environment variable as your encryption
passphrase. Note that the `{` `}` brackets are required. `$YOUR_PASSPHRASE` by
itself will not work. itself will not work.
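One gotcha: the variable has to be visible to the borgmatic process itself,
and `sudo` strips most environment variables by default. A sketch of working
around that, assuming a `sudo` new enough to support `--preserve-env` with a
variable name:

```bash
# Export the passphrase, then explicitly carry it across sudo's environment
# scrubbing so borgmatic can interpolate ${YOUR_PASSPHRASE}.
export YOUR_PASSPHRASE='yourpassphrase'
sudo --preserve-env=YOUR_PASSPHRASE borgmatic create --verbosity 1
```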
In the case of `encryption_passphrase` in particular, an alternate approach
is to set Borg's own `BORG_PASSPHRASE` environment variable, which borgmatic
passes through to Borg.

For [database
configuration](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/),
the same approach applies. For example:

```yaml
postgresql_databases:
    - name: users
      password: ${YOUR_DATABASE_PASSWORD}
```

<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
this option in the `hooks:` section of your configuration.

This uses the `YOUR_DATABASE_PASSWORD` environment variable as your database
password.
#### Interpolation defaults

If you'd like to set a default for your environment variables, you can do so
with the following syntax:

```yaml
encryption_passphrase: ${YOUR_PASSPHRASE:-defaultpass}
```

Here, "`defaultpass`" is the default passphrase if the `YOUR_PASSPHRASE`
environment variable is not set. Without a default, if the environment
variable doesn't exist, borgmatic will error.
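You can watch the fallback kick in without touching a repository. A sketch,
assuming borgmatic 1.8.0+ for the `config validate` action and a
configuration using the default shown above:

```bash
# With YOUR_PASSPHRASE unset, ${YOUR_PASSPHRASE:-defaultpass} resolves to
# "defaultpass", so validation succeeds instead of erroring.
unset YOUR_PASSPHRASE
borgmatic config validate
```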
#### Disabling interpolation

To disable this environment variable interpolation feature entirely, you can
pass the `--no-environment-interpolation` flag on the command-line.

Or if you'd like to keep a literal `${...}` within a single option value, you
can escape it with a backslash. For instance, if your password is literally
`${A}@!`:

```yaml
encryption_passphrase: \${A}@!
```
## Related features

Another way to override particular options within a borgmatic configuration
file is to use a configuration override on the command-line.

Additionally, borgmatic action hooks support their own [variable
interpolation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation),
although in that case it's for particular borgmatic runtime values rather than
(only) environment variables.
Lastly, if you do want to specify your passphrase directly within borgmatic
configuration, but you'd like to keep it in a separate file from your main
configuration, you can [use a configuration include or a merge
include](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-includes)
to pull in an external password.