forked from borgmatic-collective/borgmatic
Compare commits
432 Commits
.drone.yml (119 changed lines)
@@ -1,110 +1,29 @@
|
|||
---
|
||||
kind: pipeline
|
||||
name: python-3-5-alpine-3-10
|
||||
name: python-3-8-alpine-3-13
|
||||
|
||||
services:
|
||||
- name: postgresql
|
||||
image: postgres:11.6-alpine
|
||||
image: postgres:13.1-alpine
|
||||
environment:
|
||||
POSTGRES_PASSWORD: test
|
||||
POSTGRES_DB: test
|
||||
- name: mysql
|
||||
image: mariadb:10.3
|
||||
image: mariadb:10.5
|
||||
environment:
|
||||
MYSQL_ROOT_PASSWORD: test
|
||||
MYSQL_DATABASE: test
|
||||
- name: mongodb
|
||||
image: mongo:5.0.5
|
||||
environment:
|
||||
MONGO_INITDB_ROOT_USERNAME: root
|
||||
MONGO_INITDB_ROOT_PASSWORD: test
|
||||
|
||||
clone:
|
||||
skip_verify: true
|
||||
|
||||
steps:
|
||||
- name: build
|
||||
image: python:3.5-alpine3.10
|
||||
pull: always
|
||||
commands:
|
||||
- scripts/run-full-tests
|
||||
---
|
||||
kind: pipeline
|
||||
name: python-3-6-alpine-3-10
|
||||
|
||||
services:
|
||||
- name: postgresql
|
||||
image: postgres:11.6-alpine
|
||||
environment:
|
||||
POSTGRES_PASSWORD: test
|
||||
POSTGRES_DB: test
|
||||
- name: mysql
|
||||
image: mariadb:10.3
|
||||
environment:
|
||||
MYSQL_ROOT_PASSWORD: test
|
||||
MYSQL_DATABASE: test
|
||||
|
||||
steps:
|
||||
- name: build
|
||||
image: python:3.6-alpine3.10
|
||||
pull: always
|
||||
commands:
|
||||
- scripts/run-full-tests
|
||||
---
|
||||
kind: pipeline
|
||||
name: python-3-7-alpine-3-10
|
||||
|
||||
services:
|
||||
- name: postgresql
|
||||
image: postgres:11.6-alpine
|
||||
environment:
|
||||
POSTGRES_PASSWORD: test
|
||||
POSTGRES_DB: test
|
||||
- name: mysql
|
||||
image: mariadb:10.3
|
||||
environment:
|
||||
MYSQL_ROOT_PASSWORD: test
|
||||
MYSQL_DATABASE: test
|
||||
|
||||
steps:
|
||||
- name: build
|
||||
image: python:3.7-alpine3.10
|
||||
pull: always
|
||||
commands:
|
||||
- scripts/run-full-tests
|
||||
---
|
||||
kind: pipeline
|
||||
name: python-3-7-alpine-3-7
|
||||
|
||||
services:
|
||||
- name: postgresql
|
||||
image: postgres:10.11-alpine
|
||||
environment:
|
||||
POSTGRES_PASSWORD: test
|
||||
POSTGRES_DB: test
|
||||
- name: mysql
|
||||
image: mariadb:10.1
|
||||
environment:
|
||||
MYSQL_ROOT_PASSWORD: test
|
||||
MYSQL_DATABASE: test
|
||||
|
||||
steps:
|
||||
- name: build
|
||||
image: python:3.7-alpine3.7
|
||||
pull: always
|
||||
commands:
|
||||
- scripts/run-full-tests
|
||||
---
|
||||
kind: pipeline
|
||||
name: python-3-8-alpine-3-10
|
||||
|
||||
services:
|
||||
- name: postgresql
|
||||
image: postgres:11.6-alpine
|
||||
environment:
|
||||
POSTGRES_PASSWORD: test
|
||||
POSTGRES_DB: test
|
||||
- name: mysql
|
||||
image: mariadb:10.3
|
||||
environment:
|
||||
MYSQL_ROOT_PASSWORD: test
|
||||
MYSQL_DATABASE: test
|
||||
|
||||
steps:
|
||||
- name: build
|
||||
image: python:3.8-alpine3.10
|
||||
image: alpine:3.13
|
||||
pull: always
|
||||
commands:
|
||||
- scripts/run-full-tests
|
||||
|
@@ -112,6 +31,9 @@ steps:
|
|||
kind: pipeline
|
||||
name: documentation
|
||||
|
||||
clone:
|
||||
skip_verify: true
|
||||
|
||||
steps:
|
||||
- name: build
|
||||
image: plugins/docker
|
||||
|
@@ -120,8 +42,15 @@ steps:
|
|||
from_secret: docker_username
|
||||
password:
|
||||
from_secret: docker_password
|
||||
repo: witten/borgmatic-docs
|
||||
registry: projects.torsion.org
|
||||
repo: projects.torsion.org/borgmatic-collective/borgmatic
|
||||
tags: docs
|
||||
dockerfile: docs/Dockerfile
|
||||
when:
|
||||
|
||||
trigger:
|
||||
repo:
|
||||
- borgmatic-collective/borgmatic
|
||||
branch:
|
||||
- master
|
||||
event:
|
||||
- push
|
||||
|
|
|
@@ -23,8 +23,7 @@ module.exports = function(eleventyConfig) {
    }
  };
  let markdownItAnchorOptions = {
    permalink: true,
    permalinkClass: "direct-link"
    permalink: markdownItAnchor.permalink.headerLink()
  };

  eleventyConfig.setLibrary(
@@ -36,6 +35,8 @@ module.exports = function(eleventyConfig) {

  eleventyConfig.addPassthroughCopy({"docs/static": "static"});

  eleventyConfig.setLiquidOptions({dynamicPartials: false});

  return {
    templateFormats: [
      "md",

.gitignore (vendored, 2 changed lines)
@@ -2,7 +2,7 @@
*.pyc
*.swp
.cache
.coverage
.coverage*
.pytest_cache
.tox
__pycache__

NEWS (321 changed lines)
@@ -1,3 +1,322 @@
|
|||
1.7.9.dev0
|
||||
* #295: Add a SQLite database dump/restore hook.
|
||||
* #304: Change the default action order when no actions are specified on the command-line to:
|
||||
"create", "prune", "compact", "check". If you'd like to retain the old ordering ("prune" and
|
||||
"compact" first), then specify actions explicitly on the command-line.
|
||||
* #304: Run any command-line actions in the order specified instead of using a fixed ordering.
|
||||
* #628: Add a Healthchecks "log" state to send borgmatic logs to Healthchecks without signalling
|
||||
success or failure.
|
||||
* #647: Add "--strip-components all" feature on the "extract" action to remove leading path
|
||||
components of files you extract. Must be used with the "--path" flag.
|
||||
|
||||
1.7.8
|
||||
* #620: With the "create" action and the "--list" ("--files") flag, only show excluded files at
|
||||
verbosity 2.
|
||||
* #621: Add optional authentication to the ntfy monitoring hook.
|
||||
* With the "create" action, only one of "--list" ("--files") and "--progress" flags can be used.
|
||||
This lines up with the new behavior in Borg 2.0.0b5.
|
||||
* Internally support new Borg 2.0.0b5 "--filter" status characters / item flags for the "create"
|
||||
action.
|
||||
* Fix the "create" action with the "--dry-run" flag querying for databases when a PostgreSQL/MySQL
|
||||
"all" database is configured. Now, these queries are skipped due to the dry run.
|
||||
* Add "--repository" flag to the "rcreate" action to optionally select one configured repository to
|
||||
create.
|
||||
* Add "--progress" flag to the "transfer" action, new in Borg 2.0.0b5.
|
||||
* Add "checkpoint_volume" configuration option to creates checkpoints every specified number of
|
||||
bytes during a long-running backup, new in Borg 2.0.0b5.
|
||||
|
||||
1.7.7
|
||||
* #642: Add MySQL database hook "add_drop_database" configuration option to control whether dumped
|
||||
MySQL databases get dropped right before restore.
|
||||
* #643: Fix for potential data loss (data not getting backed up) when dumping large "directory"
|
||||
format PostgreSQL/MongoDB databases. Prior to the fix, these dumps would not finish writing to
|
||||
disk before Borg consumed them. Now, the dumping process completes before Borg starts. This only
|
||||
applies to "directory" format databases; other formats still stream to Borg without using
|
||||
temporary disk space.
|
||||
* Fix MongoDB "directory" format to work with mongodump/mongorestore without error. Prior to this
|
||||
fix, only the "archive" format worked.
|
||||
|
||||
1.7.6
|
||||
* #393, #438, #560: Optionally dump "all" PostgreSQL/MySQL databases to separate files instead of
|
||||
one combined dump file, allowing more convenient restores of individual databases. You can enable
|
||||
this by specifying the database dump "format" option when the database is named "all".
|
||||
* #602: Fix logs that interfere with JSON output by making warnings go to stderr instead of stdout.
|
||||
* #622: Fix traceback when include merging configuration files on ARM64.
|
||||
* #629: Skip warning about excluded special files when no special files have been excluded.
|
||||
* #630: Add configuration options for database command customization: "list_options",
|
||||
"restore_options", and "analyze_options" for PostgreSQL, "restore_options" for MySQL, and
|
||||
"restore_options" for MongoDB.
|
||||
|
||||
1.7.5
|
||||
* #311: Override PostgreSQL dump/restore commands via configuration options.
|
||||
* #604: Fix traceback when a configuration section is present but lacking any options.
|
||||
* #607: Clarify documentation examples for include merging and deep merging.
|
||||
* #611: Fix "data" consistency check to support "check_last" and consistency "prefix" options.
|
||||
* #613: Clarify documentation about multiple repositories and separate configuration files.
|
||||
|
||||
1.7.4
|
||||
* #596: Fix special file detection erroring when broken symlinks are encountered.
|
||||
* #597, #598: Fix regression in which "check" action errored on certain systems ("Cannot determine
|
||||
Borg repository ID").
|
||||
|
||||
1.7.3
|
||||
* #357: Add "break-lock" action for removing any repository and cache locks leftover from Borg
|
||||
aborting.
|
||||
* #360: To prevent Borg hangs, unconditionally delete stale named pipes before dumping databases.
|
||||
* #587: When database hooks are enabled, auto-exclude special files from a "create" action to
|
||||
prevent Borg from hanging. You can override/prevent this behavior by explicitly setting the
|
||||
"read_special" option to true.
|
||||
* #587: Warn when ignoring a configured "read_special" value of false, as true is needed when
|
||||
database hooks are enabled.
|
||||
* #589: Update sample systemd service file to allow system "idle" (e.g. a video monitor turning
|
||||
off) while borgmatic is running.
|
||||
* #590: Fix for potential data loss (data not getting backed up) when the "patterns_from" option
|
||||
was used with "source_directories" (or the "~/.borgmatic" path existed, which got injected into
|
||||
"source_directories" implicitly). The fix is for borgmatic to convert "source_directories" into
|
||||
patterns whenever "patterns_from" is used, working around a Borg bug:
|
||||
https://github.com/borgbackup/borg/issues/6994
|
||||
* #590: In "borgmatic create --list" output, display which files get excluded from the backup due
|
||||
to patterns or excludes.
|
||||
* #591: Add support for Borg 2's "--match-archives" flag. This replaces "--glob-archives", which
|
||||
borgmatic now treats as an alias for "--match-archives". But note that the two flags have
|
||||
slightly different syntax. See the Borg 2 changelog for more information:
|
||||
https://borgbackup.readthedocs.io/en/2.0.0b3/changes.html#version-2-0-0b3-2022-10-02
|
||||
* Fix for "borgmatic --archive latest" not finding the latest archive when a verbosity is set.
|
||||
|
||||
1.7.2
|
||||
* #577: Fix regression in which "borgmatic info --archive ..." showed repository info instead of
|
||||
archive info with Borg 1.
|
||||
* #582: Fix hang when database hooks are enabled and "patterns" contains a parent directory of
|
||||
"~/.borgmatic".
|
||||
|
||||
1.7.1
|
||||
* #542: Make the "source_directories" option optional. This is useful for "check"-only setups or
|
||||
using "patterns" exclusively.
|
||||
* #574: Fix for potential data loss (data not getting backed up) when the "patterns" option was
|
||||
used with "source_directories" (or the "~/.borgmatic" path existed, which got injected into
|
||||
"source_directories" implicitly). The fix is for borgmatic to convert "source_directories" into
|
||||
patterns whenever "patterns" is used, working around a Borg bug:
|
||||
https://github.com/borgbackup/borg/issues/6994
|
||||
|
||||
1.7.0
|
||||
* #463: Add "before_actions" and "after_actions" command hooks that run before/after all the
|
||||
actions for each repository. These new hooks are a good place to run per-repository steps like
|
||||
mounting/unmounting a remote filesystem.
|
||||
* #463: Update documentation to cover per-repository configurations:
|
||||
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/
|
||||
* #557: Support for Borg 2 while still working with Borg 1. This includes new borgmatic actions
|
||||
like "rcreate" (replaces "init"), "rlist" (list archives in repository), "rinfo" (show repository
|
||||
info), and "transfer" (for upgrading Borg repositories). For the most part, borgmatic tries to
|
||||
smooth over differences between Borg 1 and 2 to make your upgrade process easier. However, there
|
||||
are still a few cases where Borg made breaking changes. See the Borg 2.0 changelog for more
|
||||
information: https://www.borgbackup.org/releases/borg-2.0.html
|
||||
* #557: If you install Borg 2, you'll need to manually upgrade your existing Borg 1 repositories
|
||||
before use. Note that Borg 2 stable is not yet released as of this borgmatic release, so don't
|
||||
use Borg 2 for production until it is! See the documentation for more information:
|
||||
https://torsion.org/borgmatic/docs/how-to/upgrade/#upgrading-borg
|
||||
* #557: Rename several configuration options to match Borg 2: "remote_rate_limit" is now
|
||||
"upload_rate_limit", "numeric_owner" is "numeric_ids", and "bsd_flags" is "flags". borgmatic
|
||||
still works with the old options.
|
||||
* #557: Remote repository paths without the "ssh://" syntax are deprecated but still supported for
|
||||
now. Remote repository paths containing "~" are deprecated in borgmatic and no longer work in
|
||||
Borg 2.
|
||||
* #557: Omitting the "--archive" flag on the "list" action is deprecated when using Borg 2. Use
|
||||
the new "rlist" action instead.
|
||||
* #557: The "--dry-run" flag can now be used with the "rcreate"/"init" action.
|
||||
* #565: Fix handling of "repository" and "data" consistency checks to prevent invalid Borg flags.
|
||||
* #566: Modify "mount" and "extract" actions to require the "--repository" flag when multiple
|
||||
repositories are configured.
|
||||
* #571: BREAKING: Remove old-style command-line action flags like "--create", "--list", etc. If
|
||||
you're already using actions like "create" and "list" instead, this change should not affect you.
|
||||
* #571: BREAKING: Rename "--files" flag on "prune" action to "--list", as it lists archives, not
|
||||
files.
|
||||
* #571: Add "--list" as alias for "--files" flag on "create" and "export-tar" actions.
|
||||
* Add support for disabling TLS verification in Healthchecks monitoring hook with "verify_tls"
|
||||
option.
|
||||
|
||||
1.6.6
|
||||
* #559: Update documentation about configuring multiple consistency checks or multiple databases.
|
||||
* #560: Fix all database hooks to error when the requested database to restore isn't present in the
|
||||
Borg archive.
|
||||
* #561: Fix command-line "--override" flag to continue supporting old configuration file formats.
|
||||
* #563: Fix traceback with "create" action and "--json" flag when a database hook is configured.
|
||||
|
||||
1.6.5
|
||||
* #553: Fix logging to include the full traceback when Borg experiences an internal error, not just
|
||||
the first few lines.
|
||||
* #554: Fix all monitoring hooks to warn if the server returns an HTTP 4xx error. This can happen
|
||||
with Healthchecks, for instance, when using an invalid ping URL.
|
||||
* #555: Fix environment variable plumbing so options like "encryption_passphrase" and
|
||||
"encryption_passcommand" in one configuration file aren't used for other configuration files.
|
||||
|
||||
1.6.4
|
||||
* #546, #382: Keep your repository passphrases and database passwords outside of borgmatic's
|
||||
configuration file with environment variable interpolation. See the documentation for more
|
||||
information: https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/
|
||||
|
||||
1.6.3
|
||||
* #541: Add "borgmatic list --find" flag for searching for files across multiple archives, useful
|
||||
for hunting down that file you accidentally deleted so you can extract it. See the documentation
|
||||
for more information:
|
||||
https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#searching-for-a-file
|
||||
* #543: Add a monitoring hook for sending push notifications via ntfy. See the documentation for
|
||||
more information: https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook
|
||||
* Fix Bash completion script to no longer alter your shell's settings (complain about unset
|
||||
variables or error on pipe failures).
|
||||
* Deprecate "borgmatic list --successful" flag, as listing only non-checkpoint (successful)
|
||||
archives is now the default in newer versions of Borg.
|
||||
|
||||
1.6.2
|
||||
* #523: Reduce the default consistency check frequency and support configuring the frequency
|
||||
independently for each check. Also add "borgmatic check --force" flag to ignore configured
|
||||
frequencies. See the documentation for more information:
|
||||
https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/#check-frequency
|
||||
* #536: Fix generate-borgmatic-config to support more complex schema changes like the new
|
||||
Healthchecks configuration options when the "--source" flag is used.
|
||||
* #538: Add support for "borgmatic borg debug" command.
|
||||
* #539: Add "generate-borgmatic-config --overwrite" flag to replace an existing destination file.
|
||||
* Add Bash completion script so you can tab-complete the borgmatic command-line. See the
|
||||
documentation for more information:
|
||||
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#shell-completion
|
||||
|
||||
1.6.1
|
||||
* #294: Add Healthchecks monitoring hook "ping_body_limit" option to configure how many bytes of
|
||||
logs to send to the Healthchecks server.
|
||||
* #402: Remove the error when "archive_name_format" is specified but a retention prefix isn't.
|
||||
* #420: Warn when an unsupported variable is used in a hook command.
|
||||
* #439: Change connection failures for monitoring hooks (Healthchecks, Cronitor, PagerDuty, and
|
||||
Cronhub) to be warnings instead of errors. This way, the monitoring system failing does not block
|
||||
backups.
|
||||
* #460: Add Healthchecks monitoring hook "send_logs" option to enable/disable sending borgmatic
|
||||
logs to the Healthchecks server.
|
||||
* #525: Add Healthchecks monitoring hook "states" option to only enable pinging for particular
|
||||
monitoring states (start, finish, fail).
|
||||
* #528: Improve the error message when a configuration override contains an invalid value.
|
||||
* #531: BREAKING: When deep merging common configuration, merge colliding list values by appending
|
||||
them. Previously, one list replaced the other.
|
||||
* #532: When a configuration include is a relative path, load it from either the current working
|
||||
directory or from the directory containing the file doing the including. Previously, only the
|
||||
working directory was used.
|
||||
* Add a randomized delay to the sample systemd timer to spread out the load on a server.
|
||||
* Change the configuration format for borgmatic monitoring hooks (Healthchecks, Cronitor,
|
||||
PagerDuty, and Cronhub) to specify the ping URL / integration key as a named option. The intent
|
||||
is to support additional options (some in this release). This change is backwards-compatible.
|
||||
* Add emojis to documentation table of contents to make it easier to find particular how-to and
|
||||
reference guides at a glance.
|
||||
|
||||
1.6.0
|
||||
* #381: BREAKING: Greatly simplify configuration file reuse by deep merging when including common
|
||||
configuration. See the documentation for more information:
|
||||
https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#include-merging
|
||||
* #473: BREAKING: Instead of executing "before" command hooks before all borgmatic actions run (and
|
||||
"after" hooks after), execute these hooks right before/after the corresponding action. E.g.,
|
||||
"before_check" now runs immediately before the "check" action. This better supports running
|
||||
timing-sensitive tasks like pausing containers. Side effect: before/after command hooks now run
|
||||
once for each configured repository instead of once per configuration file. Additionally, the
|
||||
"repositories" interpolated variable has been changed to "repository", containing the path to the
|
||||
current repository for the hook. See the documentation for more information:
|
||||
https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
|
||||
* #513: Add mention of sudo's "secure_path" option to borgmatic installation documentation.
|
||||
* #515: Fix "borgmatic borg key ..." to pass parameters to Borg in the correct order.
|
||||
* #516: Fix handling of TERM signal to exit borgmatic, not just forward the signal to Borg.
|
||||
* #517: Fix borgmatic exit code (so it's zero) when initial Borg calls fail but later retries
|
||||
succeed.
|
||||
* Change Healthchecks logs truncation size from 10k bytes to 100k bytes, corresponding to that
|
||||
same change on Healthchecks.io.
|
||||
|
||||
1.5.24
|
||||
* #431: Add "working_directory" option to support source directories with relative paths.
|
||||
* #444: When loading a configuration file that is unreadable due to file permissions, warn instead
|
||||
of erroring. This supports running borgmatic as a non-root user with configuration in ~/.config
|
||||
even if there is an unreadable global configuration file in /etc.
|
||||
* #469: Add "repositories" context to "before_*" and "after_*" command action hooks. See the
|
||||
documentation for more information:
|
||||
https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
|
||||
* #486: Fix handling of "patterns_from" and "exclude_from" options to error instead of warning when
|
||||
referencing unreadable files and "create" action is run.
|
||||
* #507: Fix Borg usage error in the "compact" action when running "borgmatic --dry-run". Now, skip
|
||||
"compact" entirely during a dry run.
|
||||
|
||||
1.5.23
|
||||
* #394: Compact repository segments and free space with new "borgmatic compact" action. Borg 1.2+
|
||||
only. Also run "compact" by default when no actions are specified, as "prune" in Borg 1.2 no
|
||||
longer frees up space unless "compact" is run.
|
||||
* #394: When using the "atime", "bsd_flags", "numeric_owner", or "remote_rate_limit" options,
|
||||
tailor the flags passed to Borg depending on the Borg version.
|
||||
* #480, #482: Fix traceback when a YAML validation error occurs.
|
||||
|
||||
1.5.22
|
||||
* #288: Add database dump hook for MongoDB.
|
||||
* #470: Move mysqldump options to the beginning of the command due to MySQL bug 30994.
|
||||
* #471: When command-line configuration override produces a parse error, error cleanly instead of
|
||||
tracebacking.
|
||||
* #476: Fix unicode error when restoring particular MySQL databases.
|
||||
* Drop support for Python 3.6, which has been end-of-lifed.
|
||||
* Add support for Python 3.10.
|
||||
|
||||
1.5.21
|
||||
* #28: Optionally retry failing backups via "retries" and "retry_wait" configuration options.
|
||||
* #306: Add "list_options" MySQL configuration option for passing additional arguments to MySQL
|
||||
list command.
|
||||
* #459: Add support for old version (2.x) of jsonschema library.
|
||||
|
||||
1.5.20
|
||||
* Re-release with correct version without dev0 tag.
|
||||
|
||||
1.5.19
|
||||
* #387: Fix error when configured source directories are not present on the filesystem at the time
|
||||
of backup. Now, Borg will complain, but the backup will still continue.
|
||||
* #455: Mention changing borgmatic path in cron documentation.
|
||||
* Update sample systemd service file with more granular read-only filesystem settings.
|
||||
* Move Gitea and GitHub hosting from a personal namespace to an organization for better
|
||||
collaboration with related projects.
|
||||
* 1k ★s on GitHub!
|
||||
|
||||
1.5.18
|
||||
* #389: Fix "message too long" error when logging to rsyslog.
|
||||
* #440: Fix traceback that can occur when dumping a database.
|
||||
|
||||
1.5.17
|
||||
* #437: Fix error when configuration file contains "umask" option.
|
||||
* Remove test dependency on vim and /dev/urandom.
|
||||
|
||||
1.5.16
|
||||
* #379: Suppress console output in sample crontab and systemd service files.
|
||||
* #407: Fix syslog logging on FreeBSD.
|
||||
* #430: Fix hang when restoring a PostgreSQL "tar" format database dump.
|
||||
* Better error messages! Switch the library used for validating configuration files (from pykwalify
|
||||
to jsonschema).
|
||||
* Link borgmatic Ansible role from installation documentation:
|
||||
https://torsion.org/borgmatic/docs/how-to/set-up-backups/#other-ways-to-install
|
||||
|
||||
1.5.15
|
||||
* #419: Document use case of running backups conditionally based on laptop power level:
|
||||
https://torsion.org/borgmatic/docs/how-to/backup-to-a-removable-drive-or-an-intermittent-server/
|
||||
* #425: Run arbitrary Borg commands with new "borgmatic borg" action. See the documentation for
|
||||
more information: https://torsion.org/borgmatic/docs/how-to/run-arbitrary-borg-commands/
|
||||
|
||||
1.5.14
|
||||
* #390: Add link to Hetzner storage offering from the documentation.
|
||||
* #398: Clarify canonical home of borgmatic in documentation.
|
||||
* #406: Clarify that spaces in path names should not be backslashed.
|
||||
* #423: Fix error handling to error loudly when Borg gets killed due to running out of memory!
|
||||
* Fix build so as not to attempt to build and push documentation for a non-master branch.
|
||||
* "Fix" build failure with Alpine Edge by switching from Edge to Alpine 3.13.
|
||||
* Move #borgmatic IRC channel from Freenode to Libera Chat due to Freenode takeover drama.
|
||||
IRC connection info: https://torsion.org/borgmatic/#issues
|
||||
|
||||
1.5.13
|
||||
* #373: Document that passphrase is used for Borg keyfile encryption, not just repokey encryption.
|
||||
* #404: Add support for ruamel.yaml 0.17.x YAML parsing library.
|
||||
* Update systemd service example to return a permission error when a system call isn't permitted
|
||||
(instead of terminating borgmatic outright).
|
||||
* Drop support for Python 3.5, which has been end-of-lifed.
|
||||
* Add support for Python 3.9.
|
||||
* Update versions of test dependencies (test_requirements.txt and test containers).
|
||||
* Only support black code formatter on Python 3.8+. New black dependencies make installation
|
||||
difficult on older versions of Python.
|
||||
* Replace "improve this documentation" form with link to support and ticket tracker.
|
||||
|
||||
1.5.12
|
||||
* Fix for previous release with incorrect version suffix in setup.py. No other changes.
|
||||
|
||||
|
@@ -523,7 +842,7 @@
|
|||
* #49: Support for Borg experimental --patterns-from and --patterns options for specifying mixed
|
||||
includes/excludes.
|
||||
* Moved issue tracker from Taiga to integrated Gitea tracker at
|
||||
https://projects.torsion.org/witten/borgmatic/issues
|
||||
https://projects.torsion.org/borgmatic-collective/borgmatic/issues
|
||||
|
||||
1.1.12
|
||||
* #46: Declare dependency on pykwalify 1.6 or above, as older versions yield "Unknown key: version"
|
||||
|
|
README.md (91 changed lines)
@@ -11,6 +11,8 @@ borgmatic is simple, configuration-driven backup software for servers and
|
|||
workstations. Protect your files with client-side encryption. Back up your
|
||||
databases too. Monitor it all with integrated third-party services.
|
||||
|
||||
The canonical home of borgmatic is at <a href="https://torsion.org/borgmatic">https://torsion.org/borgmatic</a>.
|
||||
|
||||
Here's an example configuration file:
|
||||
|
||||
```yaml
|
||||
|
@@ -22,9 +24,8 @@ location:
|
|||
|
||||
# Paths of local or remote repositories to back up to.
|
||||
repositories:
|
||||
- 1234@usw-s001.rsync.net:backups.borg
|
||||
- k8pDxu32@k8pDxu32.repo.borgbase.com:repo
|
||||
- user1@scp2.cdn.lima-labs.com:repo
|
||||
- ssh://1234@usw-s001.rsync.net/./backups.borg
|
||||
- ssh://k8pDxu32@k8pDxu32.repo.borgbase.com/./repo
|
||||
- /var/lib/backups/local.borg
|
||||
|
||||
retention:
|
||||
|
@@ -36,8 +37,9 @@ retention:
|
|||
consistency:
|
||||
# List of checks to run to validate your backups.
|
||||
checks:
|
||||
- repository
|
||||
- archives
|
||||
- name: repository
|
||||
- name: archives
|
||||
frequency: 2 weeks
|
||||
|
||||
hooks:
|
||||
# Custom preparation scripts to run.
|
||||
|
@@ -53,9 +55,9 @@ hooks:
|
|||
```
|
||||
|
||||
Want to see borgmatic in action? Check out the <a
|
||||
href="https://asciinema.org/a/203761" target="_blank">screencast</a>.
|
||||
href="https://asciinema.org/a/203761?autoplay=1" target="_blank">screencast</a>.
|
||||
|
||||
<script src="https://asciinema.org/a/203761.js" id="asciicast-203761" async></script>
|
||||
<a href="https://asciinema.org/a/203761?autoplay=1" target="_blank"><img src="https://asciinema.org/a/203761.png" width="480"></a>
|
||||
|
||||
borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
|
||||
|
||||
|
@@ -64,11 +66,13 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
|
|||
<a href="https://www.postgresql.org/"><img src="docs/static/postgresql.png" alt="PostgreSQL" height="60px" style="margin-bottom:20px;"></a>
|
||||
<a href="https://www.mysql.com/"><img src="docs/static/mysql.png" alt="MySQL" height="60px" style="margin-bottom:20px;"></a>
|
||||
<a href="https://mariadb.com/"><img src="docs/static/mariadb.png" alt="MariaDB" height="60px" style="margin-bottom:20px;"></a>
|
||||
<a href="https://www.mongodb.com/"><img src="docs/static/mongodb.png" alt="MongoDB" height="60px" style="margin-bottom:20px;"></a>
|
||||
<a href="https://sqlite.org/"><img src="docs/static/sqlite.png" alt="SQLite" height="60px" style="margin-bottom:20px;"></a>
|
||||
<a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px;"></a>
|
||||
<a href="https://cronitor.io/"><img src="docs/static/cronitor.png" alt="Cronitor" height="60px" style="margin-bottom:20px;"></a>
|
||||
<a href="https://cronhub.io/"><img src="docs/static/cronhub.png" alt="Cronhub" height="60px" style="margin-bottom:20px;"></a>
|
||||
<a href="https://www.pagerduty.com/"><img src="docs/static/pagerduty.png" alt="PagerDuty" height="60px" style="margin-bottom:20px;"></a>
|
||||
<a href="https://www.rsync.net/cgi-bin/borg.cgi?campaign=borg&adgroup=borgmatic"><img src="docs/static/rsyncnet.png" alt="rsync.net" height="60px" style="margin-bottom:20px;"></a>
|
||||
<a href="https://ntfy.sh/"><img src="docs/static/ntfy.png" alt="ntfy" height="60px" style="margin-bottom:20px;"></a>
|
||||
<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px;"></a>
|
||||
|
||||
|
||||
|
@@ -84,58 +88,81 @@ reference guides</a>.
|
|||
|
||||
## Hosting providers
|
||||
|
||||
Need somewhere to store your encrypted offsite backups? The following hosting
|
||||
providers include specific support for Borg/borgmatic. Using these links and
|
||||
services helps support borgmatic development and hosting. (These are referral
|
||||
links, but without any tracking scripts or cookies.)
|
||||
Need somewhere to store your encrypted off-site backups? The following hosting
|
||||
providers include specific support for Borg/borgmatic—and fund borgmatic
|
||||
development and hosting when you use these links to sign up. (These are
|
||||
referral links, but without any tracking scripts or cookies.)
|
||||
|
||||
<ul>
|
||||
<li class="referral"><a href="https://www.rsync.net/cgi-bin/borg.cgi?campaign=borg&adgroup=borgmatic">rsync.net</a>: Cloud Storage provider with full support for borg and any other SSH/SFTP tool</li>
|
||||
<li class="referral"><a href="https://www.borgbase.com/?utm_source=borgmatic">BorgBase</a>: Borg hosting service with support for monitoring, 2FA, and append-only repos</li>
|
||||
<li class="referral"><a href="https://storage.lima-labs.com/special-pricing-offer-for-borgmatic-users/">Lima-Labs</a>: Affordable, reliable cloud data storage accessable via SSH/SCP/FTP for Borg backups or any other bulk storage needs</li>
|
||||
</ul>
|
||||
|
||||
Additionally, [rsync.net](https://www.rsync.net/products/borg.html) and
|
||||
[Hetzner](https://www.hetzner.com/storage/storage-box) have compatible storage
|
||||
offerings, but do not currently fund borgmatic development or hosting.
|
||||
|
||||
## Support and contributing
|
||||
|
||||
### Issues
|
||||
|
||||
You've got issues? Or an idea for a feature enhancement? We've got an [issue
|
||||
tracker](https://projects.torsion.org/witten/borgmatic/issues). In order to
|
||||
create a new issue or comment on an issue, you'll need to [login
|
||||
first](https://projects.torsion.org/user/login). Note that you can login with
|
||||
an existing GitHub account if you prefer.
|
||||
|
||||
If you'd like to chat with borgmatic developers or users, head on over to the
|
||||
`#borgmatic` IRC channel on Freenode, either via <a
|
||||
href="https://webchat.freenode.net/?channels=borgmatic">web chat</a> or a
|
||||
native <a href="irc://chat.freenode.net:6697">IRC client</a>.
|
||||
Are you experiencing an issue with borgmatic? Or do you have an idea for a
|
||||
feature enhancement? Head on over to our [issue
|
||||
tracker](https://projects.torsion.org/borgmatic-collective/borgmatic/issues).
|
||||
In order to create a new issue or add a comment, you'll need to
|
||||
[register](https://projects.torsion.org/user/sign_up?invite_code=borgmatic)
|
||||
first. If you prefer to use an existing GitHub account, you can skip account
|
||||
creation and [login directly](https://projects.torsion.org/user/login).
|
||||
|
||||
Also see the [security
|
||||
policy](https://torsion.org/borgmatic/docs/security-policy/) for any security
|
||||
issues.
|
||||
|
||||
|
||||
### Social
|
||||
|
||||
Check out the [Borg subreddit](https://www.reddit.com/r/BorgBackup/) for
|
||||
general Borg and borgmatic discussion and support.
|
||||
|
||||
Also follow [borgmatic on Mastodon](https://fosstodon.org/@borgmatic).
|
||||
|
||||
|
||||
### Chat
|
||||
|
||||
To chat with borgmatic developers or users, check out the `#borgmatic`
|
||||
IRC channel on Libera Chat, either via <a
|
||||
href="https://web.libera.chat/#borgmatic">web chat</a> or a native <a
|
||||
href="ircs://irc.libera.chat:6697">IRC client</a>. If you don't get a response
|
||||
right away, please hang around a while—or file a ticket instead.
|
||||
|
||||
|
||||
### Other
|
||||
|
||||
Other questions or comments? Contact
|
||||
[witten@torsion.org](mailto:witten@torsion.org).
|
||||
|
||||
|
||||
### Contributing
|
||||
|
||||
borgmatic is hosted at <https://torsion.org/borgmatic> with [source code
|
||||
available](https://projects.torsion.org/witten/borgmatic), and is also
|
||||
mirrored on [GitHub](https://github.com/witten/borgmatic) for convenience.
|
||||
borgmatic [source code is
|
||||
available](https://projects.torsion.org/borgmatic-collective/borgmatic) and is also mirrored
|
||||
on [GitHub](https://github.com/borgmatic-collective/borgmatic) for convenience.
|
||||
|
||||
borgmatic is licensed under the GNU General Public License version 3 or any
|
||||
later version.
|
||||
|
||||
If you'd like to contribute to borgmatic development, please feel free to
|
||||
submit a [Pull Request](https://projects.torsion.org/witten/borgmatic/pulls)
|
||||
or open an [issue](https://projects.torsion.org/witten/borgmatic/issues) first
|
||||
to discuss your idea. We also accept Pull Requests on GitHub, if that's more
|
||||
your thing. In general, contributions are very welcome. We don't bite!
|
||||
submit a [Pull
|
||||
Request](https://projects.torsion.org/borgmatic-collective/borgmatic/pulls) or
|
||||
open an
|
||||
[issue](https://projects.torsion.org/borgmatic-collective/borgmatic/issues) to
|
||||
discuss your idea. Note that you'll need to
|
||||
[register](https://projects.torsion.org/user/sign_up?invite_code=borgmatic)
|
||||
first. We also accept Pull Requests on GitHub, if that's more your thing. In
|
||||
general, contributions are very welcome. We don't bite!
|
||||
|
||||
Also, please check out the [borgmatic development
|
||||
how-to](https://torsion.org/borgmatic/docs/how-to/develop-on-borgmatic/) for
|
||||
info on cloning source code, running tests, etc.
|
||||
|
||||
<a href="https://build.torsion.org/witten/borgmatic" alt="build status">![Build Status](https://build.torsion.org/api/badges/witten/borgmatic/status.svg?ref=refs/heads/master)</a>
|
||||
<a href="https://build.torsion.org/borgmatic-collective/borgmatic" alt="build status">![Build Status](https://build.torsion.org/api/badges/borgmatic-collective/borgmatic/status.svg?ref=refs/heads/master)</a>
|
||||
|
||||
|
|
|
@@ -6,14 +6,13 @@ permalink: security-policy/index.html
## Supported versions

While we want to hear about security vulnerabilities in all versions of
borgmatic, security fixes will only be made to the most recently released
version. It's not practical for our small volunteer effort to maintain
multiple different release branches and put out separate security patches for
each.
borgmatic, security fixes are only made to the most recently released version.
It's simply not practical for our small volunteer effort to maintain multiple
release branches and put out separate security patches for each.

## Reporting a vulnerability

If you find a security vulnerability, please [file a
ticket](https://torsion.org/borgmatic/#issues) or [send email
directly](mailto:witten@torsion.org) as appropriate. You should expect to hear
back within a few days at most, and generally sooner.
back within a few days at most and generally sooner.

borgmatic/actions/__init__.py (new file, 0 lines)

borgmatic/actions/borg.py (new file, 36 lines)
@@ -0,0 +1,36 @@
import logging

import borgmatic.borg.borg
import borgmatic.borg.rlist
import borgmatic.config.validate

logger = logging.getLogger(__name__)


def run_borg(
    repository, storage, local_borg_version, borg_arguments, local_path, remote_path,
):
    '''
    Run the "borg" action for the given repository.
    '''
    if borg_arguments.repository is None or borgmatic.config.validate.repositories_match(
        repository, borg_arguments.repository
    ):
        logger.info('{}: Running arbitrary Borg command'.format(repository))
        archive_name = borgmatic.borg.rlist.resolve_archive_name(
            repository,
            borg_arguments.archive,
            storage,
            local_borg_version,
            local_path,
            remote_path,
        )
        borgmatic.borg.borg.run_arbitrary_borg(
            repository,
            storage,
            local_borg_version,
            options=borg_arguments.options,
            archive=archive_name,
            local_path=local_path,
            remote_path=remote_path,
        )

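Each of these new action modules has the same shape: it takes the parsed configuration sections plus an argparse-style namespace for the action, checks any --repository filter, and delegates to the matching borgmatic.borg function. As a rough usage sketch only (not how borgmatic itself wires things up, which happens in its command dispatcher outside this diff), a caller might drive run_borg over a list of configured repositories like this; the repository path, storage settings, Borg version string, and namespace values below are made-up stand-ins:

```python
import argparse

import borgmatic.actions.borg

# Hypothetical stand-ins: in real use these come from borgmatic's configuration
# parsing and command-line handling rather than being hard-coded.
storage = {}  # the "storage" section of a parsed configuration file
repositories = ['/var/lib/backups/local.borg']
borg_arguments = argparse.Namespace(repository=None, archive=None, options=['rlist'])

for repository_path in repositories:
    # run_borg() skips repositories that don't match an explicit --repository
    # flag, so calling it for every configured repository is safe.
    borgmatic.actions.borg.run_borg(
        repository_path,
        storage,
        local_borg_version='2.0.0b5',  # assumed; normally detected by querying Borg
        borg_arguments=borg_arguments,
        local_path='borg',
        remote_path=None,
    )
```
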
borgmatic/actions/break_lock.py (new file, 21 lines)
@@ -0,0 +1,21 @@
import logging

import borgmatic.borg.break_lock
import borgmatic.config.validate

logger = logging.getLogger(__name__)


def run_break_lock(
    repository, storage, local_borg_version, break_lock_arguments, local_path, remote_path,
):
    '''
    Run the "break-lock" action for the given repository.
    '''
    if break_lock_arguments.repository is None or borgmatic.config.validate.repositories_match(
        repository, break_lock_arguments.repository
    ):
        logger.info(f'{repository}: Breaking repository and cache locks')
        borgmatic.borg.break_lock.break_lock(
            repository, storage, local_borg_version, local_path=local_path, remote_path=remote_path,
        )

borgmatic/actions/check.py (new file, 55 lines)
@@ -0,0 +1,55 @@
import logging

import borgmatic.borg.check
import borgmatic.hooks.command

logger = logging.getLogger(__name__)


def run_check(
    config_filename,
    repository,
    location,
    storage,
    consistency,
    hooks,
    hook_context,
    local_borg_version,
    check_arguments,
    global_arguments,
    local_path,
    remote_path,
):
    '''
    Run the "check" action for the given repository.
    '''
    borgmatic.hooks.command.execute_hook(
        hooks.get('before_check'),
        hooks.get('umask'),
        config_filename,
        'pre-check',
        global_arguments.dry_run,
        **hook_context,
    )
    logger.info('{}: Running consistency checks'.format(repository))
    borgmatic.borg.check.check_archives(
        repository,
        location,
        storage,
        consistency,
        local_borg_version,
        local_path=local_path,
        remote_path=remote_path,
        progress=check_arguments.progress,
        repair=check_arguments.repair,
        only_checks=check_arguments.only,
        force=check_arguments.force,
    )
    borgmatic.hooks.command.execute_hook(
        hooks.get('after_check'),
        hooks.get('umask'),
        config_filename,
        'post-check',
        global_arguments.dry_run,
        **hook_context,
    )

borgmatic/actions/compact.py (new file, 57 lines)
@@ -0,0 +1,57 @@
import logging

import borgmatic.borg.compact
import borgmatic.borg.feature
import borgmatic.hooks.command

logger = logging.getLogger(__name__)


def run_compact(
    config_filename,
    repository,
    storage,
    retention,
    hooks,
    hook_context,
    local_borg_version,
    compact_arguments,
    global_arguments,
    dry_run_label,
    local_path,
    remote_path,
):
    '''
    Run the "compact" action for the given repository.
    '''
    borgmatic.hooks.command.execute_hook(
        hooks.get('before_compact'),
        hooks.get('umask'),
        config_filename,
        'pre-compact',
        global_arguments.dry_run,
        **hook_context,
    )
    if borgmatic.borg.feature.available(borgmatic.borg.feature.Feature.COMPACT, local_borg_version):
        logger.info('{}: Compacting segments{}'.format(repository, dry_run_label))
        borgmatic.borg.compact.compact_segments(
            global_arguments.dry_run,
            repository,
            storage,
            local_borg_version,
            local_path=local_path,
            remote_path=remote_path,
            progress=compact_arguments.progress,
            cleanup_commits=compact_arguments.cleanup_commits,
            threshold=compact_arguments.threshold,
        )
    else:  # pragma: nocover
        logger.info('{}: Skipping compact (only available/needed in Borg 1.2+)'.format(repository))
    borgmatic.hooks.command.execute_hook(
        hooks.get('after_compact'),
        hooks.get('umask'),
        config_filename,
        'post-compact',
        global_arguments.dry_run,
        **hook_context,
    )

borgmatic/actions/create.py (new file, 90 lines)
@@ -0,0 +1,90 @@
import json
import logging

import borgmatic.borg.create
import borgmatic.hooks.command
import borgmatic.hooks.dispatch
import borgmatic.hooks.dump

logger = logging.getLogger(__name__)


def run_create(
    config_filename,
    repository,
    location,
    storage,
    hooks,
    hook_context,
    local_borg_version,
    create_arguments,
    global_arguments,
    dry_run_label,
    local_path,
    remote_path,
):
    '''
    Run the "create" action for the given repository.

    If create_arguments.json is True, yield the JSON output from creating the archive.
    '''
    borgmatic.hooks.command.execute_hook(
        hooks.get('before_backup'),
        hooks.get('umask'),
        config_filename,
        'pre-backup',
        global_arguments.dry_run,
        **hook_context,
    )
    logger.info('{}: Creating archive{}'.format(repository, dry_run_label))
    borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
        'remove_database_dumps',
        hooks,
        repository,
        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
        location,
        global_arguments.dry_run,
    )
    active_dumps = borgmatic.hooks.dispatch.call_hooks(
        'dump_databases',
        hooks,
        repository,
        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
        location,
        global_arguments.dry_run,
    )
    stream_processes = [process for processes in active_dumps.values() for process in processes]

    json_output = borgmatic.borg.create.create_archive(
        global_arguments.dry_run,
        repository,
        location,
        storage,
        local_borg_version,
        local_path=local_path,
        remote_path=remote_path,
        progress=create_arguments.progress,
        stats=create_arguments.stats,
        json=create_arguments.json,
        list_files=create_arguments.list_files,
        stream_processes=stream_processes,
    )
    if json_output:  # pragma: nocover
        yield json.loads(json_output)

    borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
        'remove_database_dumps',
        hooks,
        config_filename,
        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
        location,
        global_arguments.dry_run,
    )
    borgmatic.hooks.command.execute_hook(
        hooks.get('after_backup'),
        hooks.get('umask'),
        config_filename,
        'post-backup',
        global_arguments.dry_run,
        **hook_context,
    )

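Note that run_create is a generator: its body, including the command hooks, database dump hooks, and the Borg create call, only executes while the generator is consumed, and any --json output is yielded as already-parsed data. A caller therefore has to iterate it even when it does not care about JSON. Here is a minimal, hypothetical consumption sketch; every concrete value below (paths, configuration sections, the Borg version string) is a placeholder rather than something this diff defines:

```python
import argparse

import borgmatic.actions.create

# Hypothetical namespaces; real ones come from borgmatic's command-line parser.
create_arguments = argparse.Namespace(progress=False, stats=False, json=True, list_files=False)
global_arguments = argparse.Namespace(dry_run=False)

# Because run_create() is a generator, wrapping it in list() is what actually
# triggers the hooks, database dumps, and archive creation; the resulting list
# then holds any parsed --json results.
json_results = list(
    borgmatic.actions.create.run_create(
        config_filename='/etc/borgmatic/config.yaml',  # assumed path
        repository='/var/lib/backups/local.borg',      # assumed repository
        location={},       # "location" section of a parsed configuration file
        storage={},        # "storage" section
        hooks={},          # "hooks" section
        hook_context={},
        local_borg_version='1.2.3',  # assumed; normally detected from Borg
        create_arguments=create_arguments,
        global_arguments=global_arguments,
        dry_run_label='',
        local_path='borg',
        remote_path=None,
    )
)
```
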
borgmatic/actions/export_tar.py (new file, 48 lines)
@@ -0,0 +1,48 @@
import logging

import borgmatic.borg.export_tar
import borgmatic.borg.rlist
import borgmatic.config.validate

logger = logging.getLogger(__name__)


def run_export_tar(
    repository,
    storage,
    local_borg_version,
    export_tar_arguments,
    global_arguments,
    local_path,
    remote_path,
):
    '''
    Run the "export-tar" action for the given repository.
    '''
    if export_tar_arguments.repository is None or borgmatic.config.validate.repositories_match(
        repository, export_tar_arguments.repository
    ):
        logger.info(
            '{}: Exporting archive {} as tar file'.format(repository, export_tar_arguments.archive)
        )
        borgmatic.borg.export_tar.export_tar_archive(
            global_arguments.dry_run,
            repository,
            borgmatic.borg.rlist.resolve_archive_name(
                repository,
                export_tar_arguments.archive,
                storage,
                local_borg_version,
                local_path,
                remote_path,
            ),
            export_tar_arguments.paths,
            export_tar_arguments.destination,
            storage,
            local_borg_version,
            local_path=local_path,
            remote_path=remote_path,
            tar_filter=export_tar_arguments.tar_filter,
            list_files=export_tar_arguments.list_files,
            strip_components=export_tar_arguments.strip_components,
        )

borgmatic/actions/extract.py (new file, 67 lines)
@@ -0,0 +1,67 @@
import logging

import borgmatic.borg.extract
import borgmatic.borg.rlist
import borgmatic.config.validate
import borgmatic.hooks.command

logger = logging.getLogger(__name__)


def run_extract(
    config_filename,
    repository,
    location,
    storage,
    hooks,
    hook_context,
    local_borg_version,
    extract_arguments,
    global_arguments,
    local_path,
    remote_path,
):
    '''
    Run the "extract" action for the given repository.
    '''
    borgmatic.hooks.command.execute_hook(
        hooks.get('before_extract'),
        hooks.get('umask'),
        config_filename,
        'pre-extract',
        global_arguments.dry_run,
        **hook_context,
    )
    if extract_arguments.repository is None or borgmatic.config.validate.repositories_match(
        repository, extract_arguments.repository
    ):
        logger.info('{}: Extracting archive {}'.format(repository, extract_arguments.archive))
        borgmatic.borg.extract.extract_archive(
            global_arguments.dry_run,
            repository,
            borgmatic.borg.rlist.resolve_archive_name(
                repository,
                extract_arguments.archive,
                storage,
                local_borg_version,
                local_path,
                remote_path,
            ),
            extract_arguments.paths,
            location,
            storage,
            local_borg_version,
            local_path=local_path,
            remote_path=remote_path,
            destination_path=extract_arguments.destination,
            strip_components=extract_arguments.strip_components,
            progress=extract_arguments.progress,
        )
    borgmatic.hooks.command.execute_hook(
        hooks.get('after_extract'),
        hooks.get('umask'),
        config_filename,
        'post-extract',
        global_arguments.dry_run,
        **hook_context,
    )

41
borgmatic/actions/info.py
Normal file
@ -0,0 +1,41 @@
import json
import logging

import borgmatic.borg.info
import borgmatic.borg.rlist
import borgmatic.config.validate

logger = logging.getLogger(__name__)


def run_info(
    repository, storage, local_borg_version, info_arguments, local_path, remote_path,
):
    '''
    Run the "info" action for the given repository and archive.

    If info_arguments.json is True, yield the JSON output from the info for the archive.
    '''
    if info_arguments.repository is None or borgmatic.config.validate.repositories_match(
        repository, info_arguments.repository
    ):
        if not info_arguments.json:  # pragma: nocover
            logger.answer(f'{repository}: Displaying archive summary information')
        info_arguments.archive = borgmatic.borg.rlist.resolve_archive_name(
            repository,
            info_arguments.archive,
            storage,
            local_borg_version,
            local_path,
            remote_path,
        )
        json_output = borgmatic.borg.info.display_archives_info(
            repository,
            storage,
            local_borg_version,
            info_arguments=info_arguments,
            local_path=local_path,
            remote_path=remote_path,
        )
        if json_output:  # pragma: nocover
            yield json.loads(json_output)
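Since run_info() is a generator, a caller has to iterate it to receive the parsed JSON. A minimal sketch with made-up paths, assuming a reachable repository; the real CLI parser populates more Namespace fields than shown here.

# Illustrative only: consuming the JSON that run_info() yields.
import argparse

import borgmatic.actions.info

info_arguments = argparse.Namespace(repository=None, archive='latest', json=True)

for parsed_json in borgmatic.actions.info.run_info(
    repository='/mnt/backups/repo',   # made-up path
    storage={},
    local_borg_version='1.2.0',
    info_arguments=info_arguments,
    local_path='borg',
    remote_path=None,
):
    # Structure follows "borg info --json" output.
    print(parsed_json.get('repository', {}).get('id'))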
43
borgmatic/actions/list.py
Normal file
@ -0,0 +1,43 @@
import json
import logging

import borgmatic.borg.list
import borgmatic.borg.rlist
import borgmatic.config.validate

logger = logging.getLogger(__name__)


def run_list(
    repository, storage, local_borg_version, list_arguments, local_path, remote_path,
):
    '''
    Run the "list" action for the given repository and archive.

    If list_arguments.json is True, yield the JSON output from listing the archive.
    '''
    if list_arguments.repository is None or borgmatic.config.validate.repositories_match(
        repository, list_arguments.repository
    ):
        if not list_arguments.json:  # pragma: nocover
            if list_arguments.find_paths:
                logger.answer(f'{repository}: Searching archives')
            elif not list_arguments.archive:
                logger.answer(f'{repository}: Listing archives')
        list_arguments.archive = borgmatic.borg.rlist.resolve_archive_name(
            repository,
            list_arguments.archive,
            storage,
            local_borg_version,
            local_path,
            remote_path,
        )
        json_output = borgmatic.borg.list.list_archive(
            repository,
            storage,
            local_borg_version,
            list_arguments=list_arguments,
            local_path=local_path,
            remote_path=remote_path,
        )
        if json_output:  # pragma: nocover
            yield json.loads(json_output)
42
borgmatic/actions/mount.py
Normal file
@ -0,0 +1,42 @@
import logging

import borgmatic.borg.mount
import borgmatic.borg.rlist
import borgmatic.config.validate

logger = logging.getLogger(__name__)


def run_mount(
    repository, storage, local_borg_version, mount_arguments, local_path, remote_path,
):
    '''
    Run the "mount" action for the given repository.
    '''
    if mount_arguments.repository is None or borgmatic.config.validate.repositories_match(
        repository, mount_arguments.repository
    ):
        if mount_arguments.archive:
            logger.info('{}: Mounting archive {}'.format(repository, mount_arguments.archive))
        else:  # pragma: nocover
            logger.info('{}: Mounting repository'.format(repository))

        borgmatic.borg.mount.mount_archive(
            repository,
            borgmatic.borg.rlist.resolve_archive_name(
                repository,
                mount_arguments.archive,
                storage,
                local_borg_version,
                local_path,
                remote_path,
            ),
            mount_arguments.mount_point,
            mount_arguments.paths,
            mount_arguments.foreground,
            mount_arguments.options,
            storage,
            local_borg_version,
            local_path=local_path,
            remote_path=remote_path,
        )
53
borgmatic/actions/prune.py
Normal file
@ -0,0 +1,53 @@
import logging

import borgmatic.borg.prune
import borgmatic.hooks.command

logger = logging.getLogger(__name__)


def run_prune(
    config_filename,
    repository,
    storage,
    retention,
    hooks,
    hook_context,
    local_borg_version,
    prune_arguments,
    global_arguments,
    dry_run_label,
    local_path,
    remote_path,
):
    '''
    Run the "prune" action for the given repository.
    '''
    borgmatic.hooks.command.execute_hook(
        hooks.get('before_prune'),
        hooks.get('umask'),
        config_filename,
        'pre-prune',
        global_arguments.dry_run,
        **hook_context,
    )
    logger.info('{}: Pruning archives{}'.format(repository, dry_run_label))
    borgmatic.borg.prune.prune_archives(
        global_arguments.dry_run,
        repository,
        storage,
        retention,
        local_borg_version,
        local_path=local_path,
        remote_path=remote_path,
        stats=prune_arguments.stats,
        list_archives=prune_arguments.list_archives,
    )
    borgmatic.hooks.command.execute_hook(
        hooks.get('after_prune'),
        hooks.get('umask'),
        config_filename,
        'post-prune',
        global_arguments.dry_run,
        **hook_context,
    )
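For reference, the shapes of the hooks dict and hook_context that run_prune() expects can be inferred from the calls above; the values below are invented rather than taken from a real configuration.

# Illustrative only: inferred shapes of the arguments the before/after hook calls read.
hooks = {
    'before_prune': ['echo "starting prune"'],   # list of shell commands, as in borgmatic's YAML
    'after_prune': ['echo "finished prune"'],
    'umask': 77,                                  # optional umask applied while hooks run
}
hook_context = {
    # Expanded as keyword arguments into execute_hook() via **hook_context above.
    'repositories': '/mnt/backups/repo',
}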
40
borgmatic/actions/rcreate.py
Normal file
@ -0,0 +1,40 @@
import logging

import borgmatic.borg.rcreate
import borgmatic.config.validate

logger = logging.getLogger(__name__)


def run_rcreate(
    repository,
    storage,
    local_borg_version,
    rcreate_arguments,
    global_arguments,
    local_path,
    remote_path,
):
    '''
    Run the "rcreate" action for the given repository.
    '''
    if rcreate_arguments.repository and not borgmatic.config.validate.repositories_match(
        repository, rcreate_arguments.repository
    ):
        return

    logger.info('{}: Creating repository'.format(repository))
    borgmatic.borg.rcreate.create_repository(
        global_arguments.dry_run,
        repository,
        storage,
        local_borg_version,
        rcreate_arguments.encryption_mode,
        rcreate_arguments.source_repository,
        rcreate_arguments.copy_crypt_key,
        rcreate_arguments.append_only,
        rcreate_arguments.storage_quota,
        rcreate_arguments.make_parent_dirs,
        local_path=local_path,
        remote_path=remote_path,
    )
345
borgmatic/actions/restore.py
Normal file
@ -0,0 +1,345 @@
import copy
import logging
import os

import borgmatic.borg.extract
import borgmatic.borg.list
import borgmatic.borg.mount
import borgmatic.borg.rlist
import borgmatic.borg.state
import borgmatic.config.validate
import borgmatic.hooks.dispatch
import borgmatic.hooks.dump

logger = logging.getLogger(__name__)


UNSPECIFIED_HOOK = object()


def get_configured_database(
    hooks, archive_database_names, hook_name, database_name, configuration_database_name=None
):
    '''
    Find the first database with the given hook name and database name in the configured hooks
    dict and the given archive database names dict (from hook name to database names contained in
    a particular backup archive). If UNSPECIFIED_HOOK is given as the hook name, search all database
    hooks for the named database. If a configuration database name is given, use that instead of the
    database name to look up the database in the given hooks configuration.

    Return the found database as a tuple of (found hook name, database configuration dict).
    '''
    if not configuration_database_name:
        configuration_database_name = database_name

    if hook_name == UNSPECIFIED_HOOK:
        hooks_to_search = hooks
    else:
        hooks_to_search = {hook_name: hooks[hook_name]}

    return next(
        (
            (name, hook_database)
            for (name, hook) in hooks_to_search.items()
            for hook_database in hook
            if hook_database['name'] == configuration_database_name
            and database_name in archive_database_names.get(name, [])
        ),
        (None, None),
    )


def get_configured_hook_name_and_database(hooks, database_name):
    '''
    Find the hook name and first database dict with the given database name in the configured hooks
    dict. This searches across all database hooks.
    '''


def restore_single_database(
    repository,
    location,
    storage,
    hooks,
    local_borg_version,
    global_arguments,
    local_path,
    remote_path,
    archive_name,
    hook_name,
    database,
):  # pragma: no cover
    '''
    Given (among other things) an archive name, a database hook name, and a configured database
    configuration dict, restore that database from the archive.
    '''
    logger.info(f'{repository}: Restoring database {database["name"]}')

    dump_pattern = borgmatic.hooks.dispatch.call_hooks(
        'make_database_dump_pattern',
        hooks,
        repository,
        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
        location,
        database['name'],
    )[hook_name]

    # Kick off a single database extract to stdout.
    extract_process = borgmatic.borg.extract.extract_archive(
        dry_run=global_arguments.dry_run,
        repository=repository,
        archive=archive_name,
        paths=borgmatic.hooks.dump.convert_glob_patterns_to_borg_patterns([dump_pattern]),
        location_config=location,
        storage_config=storage,
        local_borg_version=local_borg_version,
        local_path=local_path,
        remote_path=remote_path,
        destination_path='/',
        # A directory format dump isn't a single file, and therefore can't extract
        # to stdout. In this case, the extract_process return value is None.
        extract_to_stdout=bool(database.get('format') != 'directory'),
    )

    # Run a single database restore, consuming the extract stdout (if any).
    borgmatic.hooks.dispatch.call_hooks(
        'restore_database_dump',
        {hook_name: [database]},
        repository,
        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
        location,
        global_arguments.dry_run,
        extract_process,
    )


def collect_archive_database_names(
    repository, archive, location, storage, local_borg_version, local_path, remote_path,
):
    '''
    Given a local or remote repository path, a resolved archive name, a location configuration dict,
    a storage configuration dict, the local Borg version, and local and remote Borg paths, query the
    archive for the names of databases it contains and return them as a dict from hook name to a
    sequence of database names.
    '''
    borgmatic_source_directory = os.path.expanduser(
        location.get(
            'borgmatic_source_directory', borgmatic.borg.state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
        )
    ).lstrip('/')
    parent_dump_path = os.path.expanduser(
        borgmatic.hooks.dump.make_database_dump_path(borgmatic_source_directory, '*_databases/*/*')
    )
    dump_paths = borgmatic.borg.list.capture_archive_listing(
        repository,
        archive,
        storage,
        local_borg_version,
        list_path=parent_dump_path,
        local_path=local_path,
        remote_path=remote_path,
    )

    # Determine the database names corresponding to the dumps found in the archive and
    # add them to restore_names.
    archive_database_names = {}

    for dump_path in dump_paths:
        try:
            (hook_name, _, database_name) = dump_path.split(
                borgmatic_source_directory + os.path.sep, 1
            )[1].split(os.path.sep)[0:3]
        except (ValueError, IndexError):
            logger.warning(
                f'{repository}: Ignoring invalid database dump path "{dump_path}" in archive {archive}'
            )
        else:
            if database_name not in archive_database_names.get(hook_name, []):
                archive_database_names.setdefault(hook_name, []).extend([database_name])

    return archive_database_names


def find_databases_to_restore(requested_database_names, archive_database_names):
    '''
    Given a sequence of requested database names to restore and a dict of hook name to the names of
    databases found in an archive, return an expanded sequence of database names to restore,
    replacing "all" with actual database names as appropriate.

    Raise ValueError if any of the requested database names cannot be found in the archive.
    '''
    # A map from database hook name to the database names to restore for that hook.
    restore_names = (
        {UNSPECIFIED_HOOK: requested_database_names}
        if requested_database_names
        else {UNSPECIFIED_HOOK: ['all']}
    )

    # If "all" is in restore_names, then replace it with the names of dumps found within the
    # archive.
    if 'all' in restore_names[UNSPECIFIED_HOOK]:
        restore_names[UNSPECIFIED_HOOK].remove('all')

        for (hook_name, database_names) in archive_database_names.items():
            restore_names.setdefault(hook_name, []).extend(database_names)

            # If a database is to be restored as part of "all", then remove it from restore names so
            # it doesn't get restored twice.
            for database_name in database_names:
                if database_name in restore_names[UNSPECIFIED_HOOK]:
                    restore_names[UNSPECIFIED_HOOK].remove(database_name)

    if not restore_names[UNSPECIFIED_HOOK]:
        restore_names.pop(UNSPECIFIED_HOOK)

    combined_restore_names = set(
        name for database_names in restore_names.values() for name in database_names
    )
    combined_archive_database_names = set(
        name for database_names in archive_database_names.values() for name in database_names
    )

    missing_names = sorted(set(combined_restore_names) - combined_archive_database_names)
    if missing_names:
        joined_names = ', '.join(f'"{name}"' for name in missing_names)
        raise ValueError(
            f"Cannot restore database{'s' if len(missing_names) > 1 else ''} {joined_names} missing from archive"
        )

    return restore_names


def ensure_databases_found(restore_names, remaining_restore_names, found_names):
    '''
    Given a dict from hook name to database names to restore, a dict from hook name to remaining
    database names to restore, and a sequence of found (actually restored) database names, raise
    ValueError if requested databases to restore were missing from the archive and/or configuration.
    '''
    combined_restore_names = set(
        name
        for database_names in tuple(restore_names.values())
        + tuple(remaining_restore_names.values())
        for name in database_names
    )

    if not combined_restore_names and not found_names:
        raise ValueError('No databases were found to restore')

    missing_names = sorted(set(combined_restore_names) - set(found_names))
    if missing_names:
        joined_names = ', '.join(f'"{name}"' for name in missing_names)
        raise ValueError(
            f"Cannot restore database{'s' if len(missing_names) > 1 else ''} {joined_names} missing from borgmatic's configuration"
        )


def run_restore(
    repository,
    location,
    storage,
    hooks,
    local_borg_version,
    restore_arguments,
    global_arguments,
    local_path,
    remote_path,
):
    '''
    Run the "restore" action for the given repository, but only if the repository matches the
    requested repository in restore arguments.

    Raise ValueError if a configured database could not be found to restore.
    '''
    if restore_arguments.repository and not borgmatic.config.validate.repositories_match(
        repository, restore_arguments.repository
    ):
        return

    logger.info(
        '{}: Restoring databases from archive {}'.format(repository, restore_arguments.archive)
    )
    borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
        'remove_database_dumps',
        hooks,
        repository,
        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
        location,
        global_arguments.dry_run,
    )

    archive_name = borgmatic.borg.rlist.resolve_archive_name(
        repository, restore_arguments.archive, storage, local_borg_version, local_path, remote_path,
    )
    archive_database_names = collect_archive_database_names(
        repository, archive_name, location, storage, local_borg_version, local_path, remote_path,
    )
    restore_names = find_databases_to_restore(restore_arguments.databases, archive_database_names)
    found_names = set()
    remaining_restore_names = {}

    for hook_name, database_names in restore_names.items():
        for database_name in database_names:
            found_hook_name, found_database = get_configured_database(
                hooks, archive_database_names, hook_name, database_name
            )

            if not found_database:
                remaining_restore_names.setdefault(found_hook_name or hook_name, []).append(
                    database_name
                )
                continue

            found_names.add(database_name)
            restore_single_database(
                repository,
                location,
                storage,
                hooks,
                local_borg_version,
                global_arguments,
                local_path,
                remote_path,
                archive_name,
                found_hook_name or hook_name,
                found_database,
            )

    # For any databases that weren't found via exact matches in the hooks configuration, try to
    # fallback to "all" entries.
    for hook_name, database_names in remaining_restore_names.items():
        for database_name in database_names:
            found_hook_name, found_database = get_configured_database(
                hooks, archive_database_names, hook_name, database_name, 'all'
            )

            if not found_database:
                continue

            found_names.add(database_name)
            database = copy.copy(found_database)
            database['name'] = database_name

            restore_single_database(
                repository,
                location,
                storage,
                hooks,
                local_borg_version,
                global_arguments,
                local_path,
                remote_path,
                archive_name,
                found_hook_name or hook_name,
                database,
            )

    borgmatic.hooks.dispatch.call_hooks_even_if_unconfigured(
        'remove_database_dumps',
        hooks,
        repository,
        borgmatic.hooks.dump.DATABASE_HOOK_NAMES,
        location,
        global_arguments.dry_run,
    )

    ensure_databases_found(restore_names, remaining_restore_names, found_names)
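A quick sketch of how find_databases_to_restore() expands the default "all" request against the dumps actually present in an archive (hook and database names are made up):

# Illustrative only: exercising the pure helper above without touching Borg.
from borgmatic.actions.restore import find_databases_to_restore

archive_database_names = {'postgresql_databases': ['users', 'orders']}

print(find_databases_to_restore([], archive_database_names))
# {'postgresql_databases': ['users', 'orders']}  -- "all" replaced by the dumps in the archive

print(find_databases_to_restore(['users'], archive_database_names))
# {<UNSPECIFIED_HOOK sentinel>: ['users']}  -- kept under the sentinel key, since no hook was named

# Requesting a database missing from the archive raises ValueError.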
32
borgmatic/actions/rinfo.py
Normal file
@ -0,0 +1,32 @@
import json
import logging

import borgmatic.borg.rinfo
import borgmatic.config.validate

logger = logging.getLogger(__name__)


def run_rinfo(
    repository, storage, local_borg_version, rinfo_arguments, local_path, remote_path,
):
    '''
    Run the "rinfo" action for the given repository.

    If rinfo_arguments.json is True, yield the JSON output from the info for the repository.
    '''
    if rinfo_arguments.repository is None or borgmatic.config.validate.repositories_match(
        repository, rinfo_arguments.repository
    ):
        if not rinfo_arguments.json:  # pragma: nocover
            logger.answer('{}: Displaying repository summary information'.format(repository))
        json_output = borgmatic.borg.rinfo.display_repository_info(
            repository,
            storage,
            local_borg_version,
            rinfo_arguments=rinfo_arguments,
            local_path=local_path,
            remote_path=remote_path,
        )
        if json_output:  # pragma: nocover
            yield json.loads(json_output)
32
borgmatic/actions/rlist.py
Normal file
@ -0,0 +1,32 @@
import json
import logging

import borgmatic.borg.rlist
import borgmatic.config.validate

logger = logging.getLogger(__name__)


def run_rlist(
    repository, storage, local_borg_version, rlist_arguments, local_path, remote_path,
):
    '''
    Run the "rlist" action for the given repository.

    If rlist_arguments.json is True, yield the JSON output from listing the repository.
    '''
    if rlist_arguments.repository is None or borgmatic.config.validate.repositories_match(
        repository, rlist_arguments.repository
    ):
        if not rlist_arguments.json:  # pragma: nocover
            logger.answer('{}: Listing repository'.format(repository))
        json_output = borgmatic.borg.rlist.list_repository(
            repository,
            storage,
            local_borg_version,
            rlist_arguments=rlist_arguments,
            local_path=local_path,
            remote_path=remote_path,
        )
        if json_output:  # pragma: nocover
            yield json.loads(json_output)
29
borgmatic/actions/transfer.py
Normal file
@ -0,0 +1,29 @@
import logging

import borgmatic.borg.transfer

logger = logging.getLogger(__name__)


def run_transfer(
    repository,
    storage,
    local_borg_version,
    transfer_arguments,
    global_arguments,
    local_path,
    remote_path,
):
    '''
    Run the "transfer" action for the given repository.
    '''
    logger.info(f'{repository}: Transferring archives to repository')
    borgmatic.borg.transfer.transfer_archives(
        global_arguments.dry_run,
        repository,
        storage,
        local_borg_version,
        transfer_arguments,
        local_path=local_path,
        remote_path=remote_path,
    )
68
borgmatic/borg/borg.py
Normal file
@ -0,0 +1,68 @@
import logging

import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import execute_command

logger = logging.getLogger(__name__)


REPOSITORYLESS_BORG_COMMANDS = {'serve', None}
BORG_SUBCOMMANDS_WITH_SUBCOMMANDS = {'key', 'debug'}
BORG_SUBCOMMANDS_WITHOUT_REPOSITORY = (('debug', 'info'), ('debug', 'convert-profile'), ())


def run_arbitrary_borg(
    repository,
    storage_config,
    local_borg_version,
    options,
    archive=None,
    local_path='borg',
    remote_path=None,
):
    '''
    Given a local or remote repository path, a storage config dict, the local Borg version, a
    sequence of arbitrary command-line Borg options, and an optional archive name, run an arbitrary
    Borg command on the given repository/archive.
    '''
    borgmatic.logger.add_custom_log_levels()
    lock_wait = storage_config.get('lock_wait', None)

    try:
        options = options[1:] if options[0] == '--' else options

        # Borg commands like "key" have a sub-command ("export", etc.) that must follow it.
        command_options_start_index = 2 if options[0] in BORG_SUBCOMMANDS_WITH_SUBCOMMANDS else 1
        borg_command = tuple(options[:command_options_start_index])
        command_options = tuple(options[command_options_start_index:])
    except IndexError:
        borg_command = ()
        command_options = ()

    if borg_command in BORG_SUBCOMMANDS_WITHOUT_REPOSITORY:
        repository_archive_flags = ()
    elif archive:
        repository_archive_flags = flags.make_repository_archive_flags(
            repository, archive, local_borg_version
        )
    else:
        repository_archive_flags = flags.make_repository_flags(repository, local_borg_version)

    full_command = (
        (local_path,)
        + borg_command
        + repository_archive_flags
        + command_options
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
        + flags.make_flags('remote-path', remote_path)
        + flags.make_flags('lock-wait', lock_wait)
    )

    return execute_command(
        full_command,
        output_log_level=logging.ANSWER,
        borg_local_path=local_path,
        extra_environment=environment.make_environment(storage_config),
    )
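The option-splitting logic in run_arbitrary_borg() can be illustrated standalone; this mirrors the parsing above with made-up option lists rather than actually invoking Borg:

# Illustrative only: how "borgmatic borg ..." options get split into a Borg command
# plus trailing options, following the same rules as the function above.
BORG_SUBCOMMANDS_WITH_SUBCOMMANDS = {'key', 'debug'}

def split_borg_options(options):
    # Drop a leading "--" separator, then keep two tokens for commands with sub-commands.
    options = options[1:] if options[0] == '--' else options
    start = 2 if options[0] in BORG_SUBCOMMANDS_WITH_SUBCOMMANDS else 1
    return tuple(options[:start]), tuple(options[start:])

print(split_borg_options(['--', 'rlist', '--short']))    # (('rlist',), ('--short',))
print(split_borg_options(['key', 'export', '--paper']))  # (('key', 'export'), ('--paper',))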
31
borgmatic/borg/break_lock.py
Normal file
@ -0,0 +1,31 @@
import logging

from borgmatic.borg import environment, flags
from borgmatic.execute import execute_command

logger = logging.getLogger(__name__)


def break_lock(
    repository, storage_config, local_borg_version, local_path='borg', remote_path=None,
):
    '''
    Given a local or remote repository path, a storage configuration dict, the local Borg version,
    and optional local and remote Borg paths, break any repository and cache locks leftover from Borg
    aborting.
    '''
    umask = storage_config.get('umask', None)
    lock_wait = storage_config.get('lock_wait', None)

    full_command = (
        (local_path, 'break-lock')
        + (('--remote-path', remote_path) if remote_path else ())
        + (('--umask', str(umask)) if umask else ())
        + (('--lock-wait', str(lock_wait)) if lock_wait else ())
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
        + flags.make_repository_flags(repository, local_borg_version)
    )

    borg_environment = environment.make_environment(storage_config)
    execute_command(full_command, borg_local_path=local_path, extra_environment=borg_environment)
borgmatic/borg/check.py
@ -1,48 +1,155 @@
|
|||
import argparse
|
||||
import datetime
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import pathlib
|
||||
|
||||
from borgmatic.borg import extract
|
||||
from borgmatic.borg import environment, extract, feature, flags, rinfo, state
|
||||
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
|
||||
|
||||
DEFAULT_CHECKS = ('repository', 'archives')
|
||||
DEFAULT_CHECKS = (
|
||||
{'name': 'repository', 'frequency': '1 month'},
|
||||
{'name': 'archives', 'frequency': '1 month'},
|
||||
)
|
||||
DEFAULT_PREFIX = '{hostname}-'
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _parse_checks(consistency_config, only_checks=None):
|
||||
def parse_checks(consistency_config, only_checks=None):
|
||||
'''
|
||||
Given a consistency config with a "checks" list, and an optional list of override checks,
|
||||
transform them a tuple of named checks to run.
|
||||
Given a consistency config with a "checks" sequence of dicts and an optional list of override
|
||||
checks, return a tuple of named checks to run.
|
||||
|
||||
For example, given a retention config of:
|
||||
|
||||
{'checks': ['repository', 'archives']}
|
||||
{'checks': ({'name': 'repository'}, {'name': 'archives'})}
|
||||
|
||||
This will be returned as:
|
||||
|
||||
('repository', 'archives')
|
||||
|
||||
If no "checks" option is present in the config, return the DEFAULT_CHECKS. If the checks value
|
||||
is the string "disabled", return an empty tuple, meaning that no checks should be run.
|
||||
|
||||
If the "data" option is present, then make sure the "archives" option is included as well.
|
||||
If no "checks" option is present in the config, return the DEFAULT_CHECKS. If a checks value
|
||||
has a name of "disabled", return an empty tuple, meaning that no checks should be run.
|
||||
'''
|
||||
checks = [
|
||||
check.lower() for check in (only_checks or consistency_config.get('checks', []) or [])
|
||||
]
|
||||
if checks == ['disabled']:
|
||||
checks = only_checks or tuple(
|
||||
check_config['name']
|
||||
for check_config in (consistency_config.get('checks', None) or DEFAULT_CHECKS)
|
||||
)
|
||||
checks = tuple(check.lower() for check in checks)
|
||||
if 'disabled' in checks:
|
||||
if len(checks) > 1:
|
||||
logger.warning(
|
||||
'Multiple checks are configured, but one of them is "disabled"; not running any checks'
|
||||
)
|
||||
return ()
|
||||
|
||||
if 'data' in checks and 'archives' not in checks:
|
||||
checks.append('archives')
|
||||
|
||||
return tuple(check for check in checks if check not in ('disabled', '')) or DEFAULT_CHECKS
|
||||
return checks
|
||||
|
||||
|
||||
def _make_check_flags(checks, check_last=None, prefix=None):
|
||||
def parse_frequency(frequency):
|
||||
'''
|
||||
Given a parsed sequence of checks, transform it into tuple of command-line flags.
|
||||
Given a frequency string with a number and a unit of time, return a corresponding
|
||||
datetime.timedelta instance or None if the frequency is None or "always".
|
||||
|
||||
For instance, given "3 weeks", return datetime.timedelta(weeks=3)
|
||||
|
||||
Raise ValueError if the given frequency cannot be parsed.
|
||||
'''
|
||||
if not frequency:
|
||||
return None
|
||||
|
||||
frequency = frequency.strip().lower()
|
||||
|
||||
if frequency == 'always':
|
||||
return None
|
||||
|
||||
try:
|
||||
number, time_unit = frequency.split(' ')
|
||||
number = int(number)
|
||||
except ValueError:
|
||||
raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
|
||||
|
||||
if not time_unit.endswith('s'):
|
||||
time_unit += 's'
|
||||
|
||||
if time_unit == 'months':
|
||||
number *= 30
|
||||
time_unit = 'days'
|
||||
elif time_unit == 'years':
|
||||
number *= 365
|
||||
time_unit = 'days'
|
||||
|
||||
try:
|
||||
return datetime.timedelta(**{time_unit: number})
|
||||
except TypeError:
|
||||
raise ValueError(f"Could not parse consistency check frequency '{frequency}'")
|
||||
|
||||
|
||||
def filter_checks_on_frequency(
|
||||
location_config, consistency_config, borg_repository_id, checks, force
|
||||
):
|
||||
'''
|
||||
Given a location config, a consistency config with a "checks" sequence of dicts, a Borg
|
||||
repository ID, a sequence of checks, and whether to force checks to run, filter down those
|
||||
checks based on the configured "frequency" for each check as compared to its check time file.
|
||||
|
||||
In other words, a check whose check time file's timestamp is too new (based on the configured
|
||||
frequency) will get cut from the returned sequence of checks. Example:
|
||||
|
||||
consistency_config = {
|
||||
'checks': [
|
||||
{
|
||||
'name': 'archives',
|
||||
'frequency': '2 weeks',
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
When this function is called with that consistency_config and "archives" in checks, "archives"
|
||||
will get filtered out of the returned result if its check time file is newer than 2 weeks old,
|
||||
indicating that it's not yet time to run that check again.
|
||||
|
||||
Raise ValueError if a frequency cannot be parsed.
|
||||
'''
|
||||
filtered_checks = list(checks)
|
||||
|
||||
if force:
|
||||
return tuple(filtered_checks)
|
||||
|
||||
for check_config in consistency_config.get('checks', DEFAULT_CHECKS):
|
||||
check = check_config['name']
|
||||
if checks and check not in checks:
|
||||
continue
|
||||
|
||||
frequency_delta = parse_frequency(check_config.get('frequency'))
|
||||
if not frequency_delta:
|
||||
continue
|
||||
|
||||
check_time = read_check_time(
|
||||
make_check_time_path(location_config, borg_repository_id, check)
|
||||
)
|
||||
if not check_time:
|
||||
continue
|
||||
|
||||
# If we've not yet reached the time when the frequency dictates we're ready for another
|
||||
# check, skip this check.
|
||||
if datetime.datetime.now() < check_time + frequency_delta:
|
||||
remaining = check_time + frequency_delta - datetime.datetime.now()
|
||||
logger.info(
|
||||
f'Skipping {check} check due to configured frequency; {remaining} until next check'
|
||||
)
|
||||
filtered_checks.remove(check)
|
||||
|
||||
return tuple(filtered_checks)
|
||||
|
||||
|
||||
def make_check_flags(local_borg_version, checks, check_last=None, prefix=None):
|
||||
'''
|
||||
Given the local Borg version and a parsed sequence of checks, transform the checks into tuple of
|
||||
command-line flags.
|
||||
|
||||
For example, given parsed checks of:
|
||||
|
||||
|
@ -53,47 +160,100 @@ def _make_check_flags(checks, check_last=None, prefix=None):
|
|||
('--repository-only',)
|
||||
|
||||
However, if both "repository" and "archives" are in checks, then omit them from the returned
|
||||
flags because Borg does both checks by default.
|
||||
flags because Borg does both checks by default. If "data" is in checks, that implies "archives".
|
||||
|
||||
Additionally, if a check_last value is given and "archives" is in checks, then include a
|
||||
"--last" flag. And if a prefix value is given and "archives" is in checks, then include a
|
||||
"--prefix" flag.
|
||||
"--match-archives" flag.
|
||||
'''
|
||||
if 'data' in checks:
|
||||
data_flags = ('--verify-data',)
|
||||
checks += ('archives',)
|
||||
else:
|
||||
data_flags = ()
|
||||
|
||||
if 'archives' in checks:
|
||||
last_flags = ('--last', str(check_last)) if check_last else ()
|
||||
prefix_flags = ('--prefix', prefix) if prefix else ()
|
||||
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
|
||||
match_archives_flags = ('--match-archives', f'sh:{prefix}*') if prefix else ()
|
||||
else:
|
||||
match_archives_flags = ('--glob-archives', f'{prefix}*') if prefix else ()
|
||||
else:
|
||||
last_flags = ()
|
||||
prefix_flags = ()
|
||||
match_archives_flags = ()
|
||||
if check_last:
|
||||
logger.warning(
|
||||
'Ignoring check_last option, as "archives" is not in consistency checks.'
|
||||
'Ignoring check_last option, as "archives" or "data" are not in consistency checks'
|
||||
)
|
||||
if prefix:
|
||||
logger.warning(
|
||||
'Ignoring consistency prefix option, as "archives" is not in consistency checks.'
|
||||
'Ignoring consistency prefix option, as "archives" or "data" are not in consistency checks'
|
||||
)
|
||||
|
||||
common_flags = last_flags + prefix_flags + (('--verify-data',) if 'data' in checks else ())
|
||||
common_flags = last_flags + match_archives_flags + data_flags
|
||||
|
||||
if set(DEFAULT_CHECKS).issubset(set(checks)):
|
||||
if {'repository', 'archives'}.issubset(set(checks)):
|
||||
return common_flags
|
||||
|
||||
return (
|
||||
tuple('--{}-only'.format(check) for check in checks if check in DEFAULT_CHECKS)
|
||||
tuple('--{}-only'.format(check) for check in checks if check in ('repository', 'archives'))
|
||||
+ common_flags
|
||||
)
|
||||
|
||||
|
||||
def make_check_time_path(location_config, borg_repository_id, check_type):
|
||||
'''
|
||||
Given a location configuration dict, a Borg repository ID, and the name of a check type
|
||||
("repository", "archives", etc.), return a path for recording that check's time (the time of
|
||||
that check last occurring).
|
||||
'''
|
||||
return os.path.join(
|
||||
os.path.expanduser(
|
||||
location_config.get(
|
||||
'borgmatic_source_directory', state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
|
||||
)
|
||||
),
|
||||
'checks',
|
||||
borg_repository_id,
|
||||
check_type,
|
||||
)
|
||||
|
||||
|
||||
def write_check_time(path): # pragma: no cover
|
||||
'''
|
||||
Record a check time of now as the modification time of the given path.
|
||||
'''
|
||||
logger.debug(f'Writing check time at {path}')
|
||||
|
||||
os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
|
||||
pathlib.Path(path, mode=0o600).touch()
|
||||
|
||||
|
||||
def read_check_time(path):
|
||||
'''
|
||||
Return the check time based on the modification time of the given path. Return None if the path
|
||||
doesn't exist.
|
||||
'''
|
||||
logger.debug(f'Reading check time from {path}')
|
||||
|
||||
try:
|
||||
return datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
|
||||
except FileNotFoundError:
|
||||
return None
|
||||
|
||||
|
||||
def check_archives(
|
||||
repository,
|
||||
location_config,
|
||||
storage_config,
|
||||
consistency_config,
|
||||
local_borg_version,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
progress=None,
|
||||
repair=None,
|
||||
only_checks=None,
|
||||
force=None,
|
||||
):
|
||||
'''
|
||||
Given a local or remote repository path, a storage config dict, a consistency config dict,
|
||||
|
@ -102,13 +262,35 @@ def check_archives(
|
|||
Borg archives for consistency.
|
||||
|
||||
If there are no consistency checks to run, skip running them.
|
||||
|
||||
Raises ValueError if the Borg repository ID cannot be determined.
|
||||
'''
|
||||
checks = _parse_checks(consistency_config, only_checks)
|
||||
try:
|
||||
borg_repository_id = json.loads(
|
||||
rinfo.display_repository_info(
|
||||
repository,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
argparse.Namespace(json=True),
|
||||
local_path,
|
||||
remote_path,
|
||||
)
|
||||
)['repository']['id']
|
||||
except (json.JSONDecodeError, KeyError):
|
||||
raise ValueError(f'Cannot determine Borg repository ID for {repository}')
|
||||
|
||||
checks = filter_checks_on_frequency(
|
||||
location_config,
|
||||
consistency_config,
|
||||
borg_repository_id,
|
||||
parse_checks(consistency_config, only_checks),
|
||||
force,
|
||||
)
|
||||
check_last = consistency_config.get('check_last', None)
|
||||
lock_wait = None
|
||||
extra_borg_options = storage_config.get('extra_borg_options', {}).get('check', '')
|
||||
|
||||
if set(checks).intersection(set(DEFAULT_CHECKS + ('data',))):
|
||||
if set(checks).intersection({'repository', 'archives', 'data'}):
|
||||
lock_wait = storage_config.get('lock_wait', None)
|
||||
|
||||
verbosity_flags = ()
|
||||
|
@ -122,21 +304,31 @@ def check_archives(
|
|||
full_command = (
|
||||
(local_path, 'check')
|
||||
+ (('--repair',) if repair else ())
|
||||
+ _make_check_flags(checks, check_last, prefix)
|
||||
+ make_check_flags(local_borg_version, checks, check_last, prefix)
|
||||
+ (('--remote-path', remote_path) if remote_path else ())
|
||||
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
|
||||
+ verbosity_flags
|
||||
+ (('--progress',) if progress else ())
|
||||
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
|
||||
+ (repository,)
|
||||
+ flags.make_repository_flags(repository, local_borg_version)
|
||||
)
|
||||
|
||||
# The Borg repair option trigger an interactive prompt, which won't work when output is
|
||||
borg_environment = environment.make_environment(storage_config)
|
||||
|
||||
# The Borg repair option triggers an interactive prompt, which won't work when output is
|
||||
# captured. And progress messes with the terminal directly.
|
||||
if repair or progress:
|
||||
execute_command(full_command, output_file=DO_NOT_CAPTURE)
|
||||
execute_command(
|
||||
full_command, output_file=DO_NOT_CAPTURE, extra_environment=borg_environment
|
||||
)
|
||||
else:
|
||||
execute_command(full_command)
|
||||
execute_command(full_command, extra_environment=borg_environment)
|
||||
|
||||
for check in checks:
|
||||
write_check_time(make_check_time_path(location_config, borg_repository_id, check))
|
||||
|
||||
if 'extract' in checks:
|
||||
extract.extract_last_archive_dry_run(repository, lock_wait, local_path, remote_path)
|
||||
extract.extract_last_archive_dry_run(
|
||||
storage_config, local_borg_version, repository, lock_wait, local_path, remote_path
|
||||
)
|
||||
write_check_time(make_check_time_path(location_config, borg_repository_id, 'extract'))
|
||||
|
|
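A short sketch of the new parse_frequency() helper introduced above, assuming it's importable from borgmatic.borg.check on this branch:

# Illustrative only: how check frequencies from the "checks" config are turned into timedeltas.
from borgmatic.borg.check import parse_frequency

print(parse_frequency('2 weeks'))   # 14 days, 0:00:00
print(parse_frequency('3 months'))  # months are approximated as 30 days -> 90 days, 0:00:00
print(parse_frequency('always'))    # None, meaning the check is never skipped
# parse_frequency('soonish') raises ValueError, since it can't be parsed.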
51
borgmatic/borg/compact.py
Normal file
@ -0,0 +1,51 @@
import logging

from borgmatic.borg import environment, flags
from borgmatic.execute import execute_command

logger = logging.getLogger(__name__)


def compact_segments(
    dry_run,
    repository,
    storage_config,
    local_borg_version,
    local_path='borg',
    remote_path=None,
    progress=False,
    cleanup_commits=False,
    threshold=None,
):
    '''
    Given dry-run flag, a local or remote repository path, a storage config dict, and the local
    Borg version, compact the segments in a repository.
    '''
    umask = storage_config.get('umask', None)
    lock_wait = storage_config.get('lock_wait', None)
    extra_borg_options = storage_config.get('extra_borg_options', {}).get('compact', '')

    full_command = (
        (local_path, 'compact')
        + (('--remote-path', remote_path) if remote_path else ())
        + (('--umask', str(umask)) if umask else ())
        + (('--lock-wait', str(lock_wait)) if lock_wait else ())
        + (('--progress',) if progress else ())
        + (('--cleanup-commits',) if cleanup_commits else ())
        + (('--threshold', str(threshold)) if threshold else ())
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
        + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
        + flags.make_repository_flags(repository, local_borg_version)
    )

    if dry_run:
        logging.info(f'{repository}: Skipping compact (dry run)')
        return

    execute_command(
        full_command,
        output_log_level=logging.INFO,
        borg_local_path=local_path,
        extra_environment=environment.make_environment(storage_config),
    )
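compact_segments() builds its command line with the conditional-tuple idiom used throughout these modules; here's the same pattern standalone, with made-up values:

# Illustrative only: flags are appended as one-element (or empty) tuples, so falsy options vanish.
local_path = 'borg'
progress = True
threshold = 10
lock_wait = None

full_command = (
    (local_path, 'compact')
    + (('--progress',) if progress else ())
    + (('--threshold', str(threshold)) if threshold else ())
    + (('--lock-wait', str(lock_wait)) if lock_wait else ())
    + ('/mnt/backups/repo',)   # made-up repository path in place of make_repository_flags()
)
print(full_command)  # ('borg', 'compact', '--progress', '--threshold', '10', '/mnt/backups/repo')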
borgmatic/borg/create.py
@ -3,14 +3,22 @ import itertools
|
|||
import logging
|
||||
import os
|
||||
import pathlib
|
||||
import stat
|
||||
import tempfile
|
||||
|
||||
from borgmatic.execute import DO_NOT_CAPTURE, execute_command, execute_command_with_processes
|
||||
import borgmatic.logger
|
||||
from borgmatic.borg import environment, feature, flags, state
|
||||
from borgmatic.execute import (
|
||||
DO_NOT_CAPTURE,
|
||||
execute_command,
|
||||
execute_command_and_capture_output,
|
||||
execute_command_with_processes,
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _expand_directory(directory):
|
||||
def expand_directory(directory):
|
||||
'''
|
||||
Given a directory path, expand any tilde (representing a user's home directory) and any globs
|
||||
therein. Return a list of one or more resulting paths.
|
||||
|
@ -20,7 +28,7 @@ def _expand_directory(directory):
|
|||
return glob.glob(expanded_directory) or [expanded_directory]
|
||||
|
||||
|
||||
def _expand_directories(directories):
|
||||
def expand_directories(directories):
|
||||
'''
|
||||
Given a sequence of directory paths, expand tildes and globs in each one. Return all the
|
||||
resulting directories as a single flattened tuple.
|
||||
|
@ -29,11 +37,11 @@ def _expand_directories(directories):
|
|||
return ()
|
||||
|
||||
return tuple(
|
||||
itertools.chain.from_iterable(_expand_directory(directory) for directory in directories)
|
||||
itertools.chain.from_iterable(expand_directory(directory) for directory in directories)
|
||||
)
|
||||
|
||||
|
||||
def _expand_home_directories(directories):
|
||||
def expand_home_directories(directories):
|
||||
'''
|
||||
Given a sequence of directory paths, expand tildes in each one. Do not perform any globbing.
|
||||
Return the results as a tuple.
|
||||
|
@ -44,16 +52,21 @@ def _expand_home_directories(directories):
|
|||
return tuple(os.path.expanduser(directory) for directory in directories)
|
||||
|
||||
|
||||
def map_directories_to_devices(directories): # pragma: no cover
|
||||
def map_directories_to_devices(directories):
|
||||
'''
|
||||
Given a sequence of directories, return a map from directory to an identifier for the device on
|
||||
which that directory resides. This is handy for determining whether two different directories
|
||||
are on the same filesystem (have the same device identifier).
|
||||
which that directory resides or None if the path doesn't exist.
|
||||
|
||||
This is handy for determining whether two different directories are on the same filesystem (have
|
||||
the same device identifier).
|
||||
'''
|
||||
return {directory: os.stat(directory).st_dev for directory in directories}
|
||||
return {
|
||||
directory: os.stat(directory).st_dev if os.path.exists(directory) else None
|
||||
for directory in directories
|
||||
}
|
||||
|
||||
|
||||
def deduplicate_directories(directory_devices):
|
||||
def deduplicate_directories(directory_devices, additional_directory_devices):
|
||||
'''
|
||||
Given a map from directory to the identifier for the device on which that directory resides,
|
||||
return the directories as a sorted tuple with all duplicate child directories removed. For
|
||||
|
@ -68,21 +81,28 @@ def deduplicate_directories(directory_devices):
|
|||
there are cases where Borg coming across the same file twice will result in duplicate reads and
|
||||
even hangs, e.g. when a database hook is using a named pipe for streaming database dumps to
|
||||
Borg.
|
||||
|
||||
If any additional directory devices are given, also deduplicate against them, but don't include
|
||||
them in the returned directories.
|
||||
'''
|
||||
deduplicated = set()
|
||||
directories = sorted(directory_devices.keys())
|
||||
additional_directories = sorted(additional_directory_devices.keys())
|
||||
all_devices = {**directory_devices, **additional_directory_devices}
|
||||
|
||||
for directory in directories:
|
||||
deduplicated.add(directory)
|
||||
parents = pathlib.PurePath(directory).parents
|
||||
|
||||
# If another directory in the given list is a parent of current directory (even n levels
|
||||
# up) and both are on the same filesystem, then the current directory is a duplicate.
|
||||
for other_directory in directories:
|
||||
# If another directory in the given list (or the additional list) is a parent of current
|
||||
# directory (even n levels up) and both are on the same filesystem, then the current
|
||||
# directory is a duplicate.
|
||||
for other_directory in directories + additional_directories:
|
||||
for parent in parents:
|
||||
if (
|
||||
pathlib.PurePath(other_directory) == parent
|
||||
and directory_devices[other_directory] == directory_devices[directory]
|
||||
and all_devices[directory] is not None
|
||||
and all_devices[other_directory] == all_devices[directory]
|
||||
):
|
||||
if directory in deduplicated:
|
||||
deduplicated.remove(directory)
|
||||
|
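A sketch of the reworked deduplicate_directories(): when a child directory sits on the same device as a listed parent, it's dropped as a duplicate. Paths and device IDs are made up, and the import path assumes this branch's borgmatic.borg.create:

# Illustrative only: device IDs stand in for what map_directories_to_devices() would return.
from borgmatic.borg.create import deduplicate_directories

print(
    deduplicate_directories(
        {'/home': 10, '/home/alice': 10, '/var/www': 20},  # directory -> device ID
        {},                                                 # no additional pattern-root directories
    )
)
# ('/home', '/var/www')  -- /home/alice is dropped, since /home covers it on the same device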
@ -91,22 +111,42 @@ def deduplicate_directories(directory_devices):
|
|||
return tuple(sorted(deduplicated))
|
||||
|
||||
|
||||
def _write_pattern_file(patterns=None):
|
||||
def write_pattern_file(patterns=None, sources=None, pattern_file=None):
|
||||
'''
|
||||
Given a sequence of patterns, write them to a named temporary file and return it. Return None
|
||||
if no patterns are provided.
|
||||
Given a sequence of patterns and an optional sequence of source directories, write them to a
|
||||
named temporary file (with the source directories as additional roots) and return the file.
|
||||
If an optional open pattern file is given, overwrite it instead of making a new temporary file.
|
||||
Return None if no patterns are provided.
|
||||
'''
|
||||
if not patterns:
|
||||
if not patterns and not sources:
|
||||
return None
|
||||
|
||||
if pattern_file is None:
|
||||
pattern_file = tempfile.NamedTemporaryFile('w')
|
||||
pattern_file.write('\n'.join(patterns))
|
||||
else:
|
||||
pattern_file.seek(0)
|
||||
|
||||
pattern_file.write(
|
||||
'\n'.join(tuple(patterns or ()) + tuple(f'R {source}' for source in (sources or [])))
|
||||
)
|
||||
pattern_file.flush()
|
||||
|
||||
return pattern_file
|
||||
|
||||
|
||||
def _make_pattern_flags(location_config, pattern_filename=None):
|
||||
def ensure_files_readable(*filename_lists):
|
||||
'''
|
||||
Given a sequence of filename sequences, ensure that each filename is openable. This prevents
|
||||
unreadable files from being passed to Borg, which in certain situations only warns instead of
|
||||
erroring.
|
||||
'''
|
||||
for file_object in itertools.chain.from_iterable(
|
||||
filename_list for filename_list in filename_lists if filename_list
|
||||
):
|
||||
open(file_object).close()
|
||||
|
||||
|
||||
def make_pattern_flags(location_config, pattern_filename=None):
|
||||
'''
|
||||
Given a location config dict with a potential patterns_from option, and a filename containing
|
||||
any additional patterns, return the corresponding Borg flags for those files as a tuple.
|
||||
|
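And a sketch of the reworked write_pattern_file(), which now also folds source directories into the same temporary pattern file as "R ..." root patterns (import path assumed as above):

# Illustrative only: patterns plus source roots end up in one temporary file for "borg create".
from borgmatic.borg.create import write_pattern_file

pattern_file = write_pattern_file(['- home/*/.cache'], sources=['/home'])
print(open(pattern_file.name).read())
# - home/*/.cache
# R /home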
@ -122,7 +162,7 @@ def _make_pattern_flags(location_config, pattern_filename=None):
|
|||
)
|
||||
|
||||
|
||||
def _make_exclude_flags(location_config, exclude_filename=None):
|
||||
def make_exclude_flags(location_config, exclude_filename=None):
|
||||
'''
|
||||
Given a location config dict with various exclude options, and a filename containing any exclude
|
||||
patterns, return the corresponding Borg flags as a tuple.
|
||||
|
@ -156,15 +196,36 @@ def _make_exclude_flags(location_config, exclude_filename=None):
|
|||
)
|
||||
|
||||
|
||||
DEFAULT_BORGMATIC_SOURCE_DIRECTORY = '~/.borgmatic'
|
||||
def make_list_filter_flags(local_borg_version, dry_run):
|
||||
'''
|
||||
Given the local Borg version and whether this is a dry run, return the corresponding flags for
|
||||
passing to "--list --filter". The general idea is that excludes are shown for a dry run or when
|
||||
the verbosity is debug.
|
||||
'''
|
||||
base_flags = 'AME'
|
||||
show_excludes = logger.isEnabledFor(logging.DEBUG)
|
||||
|
||||
if feature.available(feature.Feature.EXCLUDED_FILES_MINUS, local_borg_version):
|
||||
if show_excludes or dry_run:
|
||||
return f'{base_flags}+-'
|
||||
else:
|
||||
return base_flags
|
||||
|
||||
if show_excludes:
|
||||
return f'{base_flags}x-'
|
||||
else:
|
||||
return f'{base_flags}-'
|
||||
|
||||
|
||||
def borgmatic_source_directories(borgmatic_source_directory):
|
||||
DEFAULT_ARCHIVE_NAME_FORMAT = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}'
|
||||
|
||||
|
||||
def collect_borgmatic_source_directories(borgmatic_source_directory):
|
||||
'''
|
||||
Return a list of borgmatic-specific source directories used for state like database backups.
|
||||
'''
|
||||
if not borgmatic_source_directory:
|
||||
borgmatic_source_directory = DEFAULT_BORGMATIC_SOURCE_DIRECTORY
|
||||
borgmatic_source_directory = state.DEFAULT_BORGMATIC_SOURCE_DIRECTORY
|
||||
|
||||
return (
|
||||
[borgmatic_source_directory]
|
||||
|
@ -173,7 +234,76 @@ def borgmatic_source_directories(borgmatic_source_directory):
|
|||
)
|
||||
|
||||
|
||||
DEFAULT_ARCHIVE_NAME_FORMAT = '{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}'
|
||||
ROOT_PATTERN_PREFIX = 'R '
|
||||
|
||||
|
||||
def pattern_root_directories(patterns=None):
|
||||
'''
|
||||
Given a sequence of patterns, parse out and return just the root directories.
|
||||
'''
|
||||
if not patterns:
|
||||
return []
|
||||
|
||||
return [
|
||||
pattern.split(ROOT_PATTERN_PREFIX, maxsplit=1)[1]
|
||||
for pattern in patterns
|
||||
if pattern.startswith(ROOT_PATTERN_PREFIX)
|
||||
]
|
||||
|
||||
|
||||
def special_file(path):
|
||||
'''
|
||||
Return whether the given path is a special file (character device, block device, or named pipe
|
||||
/ FIFO).
|
||||
'''
|
||||
try:
|
||||
mode = os.stat(path).st_mode
|
||||
except (FileNotFoundError, OSError):
|
||||
return False
|
||||
|
||||
return stat.S_ISCHR(mode) or stat.S_ISBLK(mode) or stat.S_ISFIFO(mode)
|
||||
|
||||
|
||||
def any_parent_directories(path, candidate_parents):
|
||||
'''
|
||||
Return whether any of the given candidate parent directories are an actual parent of the given
|
||||
path. This includes grandparents, etc.
|
||||
'''
|
||||
for parent in candidate_parents:
|
||||
if pathlib.PurePosixPath(parent) in pathlib.PurePath(path).parents:
|
||||
return True
|
||||
|
||||
return False
|
||||
|
||||
|
||||
def collect_special_file_paths(
|
||||
create_command, local_path, working_directory, borg_environment, skip_directories
|
||||
):
|
||||
'''
|
||||
Given a Borg create command as a tuple, a local Borg path, a working directory, and a dict of
|
||||
environment variables to pass to Borg, and a sequence of parent directories to skip, collect the
|
||||
paths for any special files (character devices, block devices, and named pipes / FIFOs) that
|
||||
Borg would encounter during a create. These are all paths that could cause Borg to hang if its
|
||||
--read-special flag is used.
|
||||
'''
|
||||
paths_output = execute_command_and_capture_output(
|
||||
create_command + ('--dry-run', '--list'),
|
||||
capture_stderr=True,
|
||||
working_directory=working_directory,
|
||||
extra_environment=borg_environment,
|
||||
)
|
||||
|
||||
paths = tuple(
|
||||
path_line.split(' ', 1)[1]
|
||||
for path_line in paths_output.split('\n')
|
||||
if path_line and path_line.startswith('- ') or path_line.startswith('+ ')
|
||||
)
|
||||
|
||||
return tuple(
|
||||
path
|
||||
for path in paths
|
||||
if special_file(path) and not any_parent_directories(path, skip_directories)
|
||||
)
|
||||
|
||||
|
||||
def create_archive(
|
||||
|
@ -181,12 +311,13 @@ def create_archive(
|
|||
repository,
|
||||
location_config,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
progress=False,
|
||||
stats=False,
|
||||
json=False,
|
||||
files=False,
|
||||
list_files=False,
|
||||
stream_processes=None,
|
||||
):
|
||||
'''
|
||||
|
@ -196,72 +327,117 @@ def create_archive(
|
|||
If a sequence of stream processes is given (instances of subprocess.Popen), then execute the
|
||||
create command while also triggering the given processes to produce output.
|
||||
'''
|
||||
borgmatic.logger.add_custom_log_levels()
|
||||
borgmatic_source_directories = expand_directories(
|
||||
collect_borgmatic_source_directories(location_config.get('borgmatic_source_directory'))
|
||||
)
|
||||
sources = deduplicate_directories(
|
||||
map_directories_to_devices(
|
||||
_expand_directories(
|
||||
location_config['source_directories']
|
||||
+ borgmatic_source_directories(location_config.get('borgmatic_source_directory'))
|
||||
)
|
||||
expand_directories(
|
||||
tuple(location_config.get('source_directories', ())) + borgmatic_source_directories
|
||||
)
|
||||
),
|
||||
additional_directory_devices=map_directories_to_devices(
|
||||
expand_directories(pattern_root_directories(location_config.get('patterns')))
|
||||
),
|
||||
)
|
||||
|
||||
pattern_file = _write_pattern_file(location_config.get('patterns'))
|
||||
exclude_file = _write_pattern_file(
|
||||
_expand_home_directories(location_config.get('exclude_patterns'))
|
||||
ensure_files_readable(location_config.get('patterns_from'), location_config.get('exclude_from'))
|
||||
|
||||
try:
|
||||
working_directory = os.path.expanduser(location_config.get('working_directory'))
|
||||
except TypeError:
|
||||
working_directory = None
|
||||
|
||||
pattern_file = (
|
||||
write_pattern_file(location_config.get('patterns'), sources)
|
||||
if location_config.get('patterns') or location_config.get('patterns_from')
|
||||
else None
|
||||
)
|
||||
exclude_file = write_pattern_file(
|
||||
expand_home_directories(location_config.get('exclude_patterns'))
|
||||
)
|
||||
checkpoint_interval = storage_config.get('checkpoint_interval', None)
|
||||
checkpoint_volume = storage_config.get('checkpoint_volume', None)
|
||||
chunker_params = storage_config.get('chunker_params', None)
|
||||
compression = storage_config.get('compression', None)
|
||||
remote_rate_limit = storage_config.get('remote_rate_limit', None)
|
||||
upload_rate_limit = storage_config.get('upload_rate_limit', None)
|
||||
umask = storage_config.get('umask', None)
|
||||
lock_wait = storage_config.get('lock_wait', None)
|
||||
list_filter_flags = make_list_filter_flags(local_borg_version, dry_run)
|
||||
files_cache = location_config.get('files_cache')
|
||||
archive_name_format = storage_config.get('archive_name_format', DEFAULT_ARCHIVE_NAME_FORMAT)
|
||||
extra_borg_options = storage_config.get('extra_borg_options', {}).get('create', '')
|
||||
|
||||
full_command = (
|
||||
(local_path, 'create')
|
||||
+ _make_pattern_flags(location_config, pattern_file.name if pattern_file else None)
|
||||
+ _make_exclude_flags(location_config, exclude_file.name if exclude_file else None)
|
||||
if feature.available(feature.Feature.ATIME, local_borg_version):
|
||||
atime_flags = ('--atime',) if location_config.get('atime') is True else ()
|
||||
else:
|
||||
atime_flags = ('--noatime',) if location_config.get('atime') is False else ()
|
||||
|
||||
if feature.available(feature.Feature.NOFLAGS, local_borg_version):
|
||||
noflags_flags = ('--noflags',) if location_config.get('flags') is False else ()
|
||||
else:
|
||||
noflags_flags = ('--nobsdflags',) if location_config.get('flags') is False else ()
|
||||
|
||||
if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
|
||||
numeric_ids_flags = ('--numeric-ids',) if location_config.get('numeric_ids') else ()
|
||||
else:
|
||||
numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_ids') else ()
|
||||
|
||||
if feature.available(feature.Feature.UPLOAD_RATELIMIT, local_borg_version):
|
||||
upload_ratelimit_flags = (
|
||||
('--upload-ratelimit', str(upload_rate_limit)) if upload_rate_limit else ()
|
||||
)
|
||||
else:
|
||||
upload_ratelimit_flags = (
|
||||
('--remote-ratelimit', str(upload_rate_limit)) if upload_rate_limit else ()
|
||||
)
|
||||
|
||||
if stream_processes and location_config.get('read_special') is False:
|
||||
logger.warning(
|
||||
f'{repository}: Ignoring configured "read_special" value of false, as true is needed for database hooks.'
|
||||
)
|
||||
|
||||
create_command = (
|
||||
tuple(local_path.split(' '))
|
||||
+ ('create',)
|
||||
+ make_pattern_flags(location_config, pattern_file.name if pattern_file else None)
|
||||
+ make_exclude_flags(location_config, exclude_file.name if exclude_file else None)
|
||||
+ (('--checkpoint-interval', str(checkpoint_interval)) if checkpoint_interval else ())
|
||||
+ (('--checkpoint-volume', str(checkpoint_volume)) if checkpoint_volume else ())
|
||||
+ (('--chunker-params', chunker_params) if chunker_params else ())
|
||||
+ (('--compression', compression) if compression else ())
|
||||
+ (('--remote-ratelimit', str(remote_rate_limit)) if remote_rate_limit else ())
|
||||
+ upload_ratelimit_flags
|
||||
+ (
|
||||
('--one-file-system',)
|
||||
if location_config.get('one_file_system') or stream_processes
|
||||
else ()
|
||||
)
|
||||
+ (('--numeric-owner',) if location_config.get('numeric_owner') else ())
|
||||
+ (('--noatime',) if location_config.get('atime') is False else ())
|
||||
+ numeric_ids_flags
|
||||
+ atime_flags
|
||||
+ (('--noctime',) if location_config.get('ctime') is False else ())
|
||||
+ (('--nobirthtime',) if location_config.get('birthtime') is False else ())
|
||||
+ (('--read-special',) if (location_config.get('read_special') or stream_processes) else ())
|
||||
+ (('--nobsdflags',) if location_config.get('bsd_flags') is False else ())
|
||||
+ (('--read-special',) if location_config.get('read_special') or stream_processes else ())
|
||||
+ noflags_flags
|
||||
+ (('--files-cache', files_cache) if files_cache else ())
|
||||
+ (('--remote-path', remote_path) if remote_path else ())
|
||||
+ (('--umask', str(umask)) if umask else ())
|
||||
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
|
||||
+ (('--list', '--filter', 'AME-') if files and not json and not progress else ())
|
||||
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
|
||||
+ (('--stats',) if stats and not json and not dry_run else ())
|
||||
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
|
||||
+ (('--dry-run',) if dry_run else ())
|
||||
+ (('--progress',) if progress else ())
|
||||
+ (('--json',) if json else ())
|
||||
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
|
||||
+ (
|
||||
'{repository}::{archive_name_format}'.format(
|
||||
repository=repository, archive_name_format=archive_name_format
|
||||
),
|
||||
('--list', '--filter', list_filter_flags)
|
||||
if list_files and not json and not progress
|
||||
else ()
|
||||
)
|
||||
+ sources
|
||||
+ (('--dry-run',) if dry_run else ())
|
||||
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
|
||||
+ flags.make_repository_archive_flags(repository, archive_name_format, local_borg_version)
|
||||
+ (sources if not pattern_file else ())
|
||||
)
|
||||
|
||||
if json:
|
||||
output_log_level = None
|
||||
elif (stats or files) and logger.getEffectiveLevel() == logging.WARNING:
|
||||
output_log_level = logging.WARNING
|
||||
elif list_files or (stats and not dry_run):
|
||||
output_log_level = logging.ANSWER
|
||||
else:
|
||||
output_log_level = logging.INFO
|
||||
|
||||
|
@ -269,13 +445,60 @@ def create_archive(
|
|||
# the terminal directly.
|
||||
output_file = DO_NOT_CAPTURE if progress else None
|
||||
|
||||
borg_environment = environment.make_environment(storage_config)
|
||||
|
||||
# If database hooks are enabled (as indicated by streaming processes), exclude files that might
|
||||
# cause Borg to hang. But skip this if the user has explicitly set "read_special" to true.
|
||||
if stream_processes and not location_config.get('read_special'):
|
||||
logger.debug(f'{repository}: Collecting special file paths')
|
||||
special_file_paths = collect_special_file_paths(
|
||||
create_command,
|
||||
local_path,
|
||||
working_directory,
|
||||
borg_environment,
|
||||
skip_directories=borgmatic_source_directories,
|
||||
)
|
||||
|
||||
if special_file_paths:
|
||||
logger.warning(
|
||||
f'{repository}: Excluding special files to prevent Borg from hanging: {", ".join(special_file_paths)}'
|
||||
)
|
||||
exclude_file = write_pattern_file(
|
||||
expand_home_directories(
|
||||
tuple(location_config.get('exclude_patterns') or ()) + special_file_paths
|
||||
),
|
||||
pattern_file=exclude_file,
|
||||
)
|
||||
create_command += make_exclude_flags(location_config, exclude_file.name)
|
||||
|
||||
create_command += (
|
||||
(('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
|
||||
+ (('--stats',) if stats and not json and not dry_run else ())
|
||||
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
|
||||
+ (('--progress',) if progress else ())
|
||||
+ (('--json',) if json else ())
|
||||
)
|
||||
|
||||
if stream_processes:
|
||||
return execute_command_with_processes(
|
||||
full_command,
|
||||
create_command,
|
||||
stream_processes,
|
||||
output_log_level,
|
||||
output_file,
|
||||
borg_local_path=local_path,
|
||||
working_directory=working_directory,
|
||||
extra_environment=borg_environment,
|
||||
)
|
||||
elif output_log_level is None:
|
||||
return execute_command_and_capture_output(
|
||||
create_command, working_directory=working_directory, extra_environment=borg_environment,
|
||||
)
|
||||
else:
|
||||
execute_command(
|
||||
create_command,
|
||||
output_log_level,
|
||||
output_file,
|
||||
borg_local_path=local_path,
|
||||
working_directory=working_directory,
|
||||
extra_environment=borg_environment,
|
||||
)
|
||||
|
||||
return execute_command(full_command, output_log_level, output_file, borg_local_path=local_path)
|
||||
|
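The command construction in create_archive leans on a tuple-concatenation idiom: every option contributes either a small tuple of flags or an empty tuple, so the whole command is assembled in a single expression. A toy, standalone sketch of the idiom (only a few illustrative options, not borgmatic's full flag set):

def build_create_command(local_path='borg', dry_run=False, compression=None, json=False):
    # Each conditional contributes an empty tuple when its option is unset, so the
    # concatenation below simply skips it.
    return (
        tuple(local_path.split(' '))
        + ('create',)
        + (('--compression', compression) if compression else ())
        + (('--dry-run',) if dry_run else ())
        + (('--json',) if json else ())
    )

assert build_create_command(dry_run=True) == ('borg', 'create', '--dry-run')
assert build_create_command(compression='lz4') == ('borg', 'create', '--compression', 'lz4')
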
|
|
@ -1,5 +1,3 @@
|
|||
import os
|
||||
|
||||
OPTION_TO_ENVIRONMENT_VARIABLE = {
|
||||
'borg_base_directory': 'BORG_BASE_DIR',
|
||||
'borg_config_directory': 'BORG_CONFIG_DIR',
|
||||
|
@ -18,21 +16,24 @@ DEFAULT_BOOL_OPTION_TO_ENVIRONMENT_VARIABLE = {
|
|||
}
|
||||
|
||||
|
||||
def initialize(storage_config):
|
||||
for option_name, environment_variable_name in OPTION_TO_ENVIRONMENT_VARIABLE.items():
|
||||
def make_environment(storage_config):
|
||||
'''
|
||||
Given a borgmatic storage configuration dict, return its options converted to a Borg environment
|
||||
variable dict.
|
||||
'''
|
||||
environment = {}
|
||||
|
||||
# Options from borgmatic configuration take precedence over already set BORG_* environment
|
||||
# variables.
|
||||
value = storage_config.get(option_name) or os.environ.get(environment_variable_name)
|
||||
for option_name, environment_variable_name in OPTION_TO_ENVIRONMENT_VARIABLE.items():
|
||||
value = storage_config.get(option_name)
|
||||
|
||||
if value:
|
||||
os.environ[environment_variable_name] = value
|
||||
else:
|
||||
os.environ.pop(environment_variable_name, None)
|
||||
environment[environment_variable_name] = value
|
||||
|
||||
for (
|
||||
option_name,
|
||||
environment_variable_name,
|
||||
) in DEFAULT_BOOL_OPTION_TO_ENVIRONMENT_VARIABLE.items():
|
||||
value = storage_config.get(option_name, False)
|
||||
os.environ[environment_variable_name] = 'yes' if value else 'no'
|
||||
environment[environment_variable_name] = 'yes' if value else 'no'
|
||||
|
||||
return environment
|
||||
|
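The change above replaces initialize(), which mutated os.environ globally, with make_environment(), which returns a dict that callers pass to each Borg invocation. A simplified sketch of that shape (one option only, boolean options omitted, so this is not the full mapping):

OPTION_TO_ENVIRONMENT_VARIABLE = {'encryption_passphrase': 'BORG_PASSPHRASE'}  # abbreviated

def make_environment(storage_config):
    # Collect only the configured options into a dict instead of mutating os.environ.
    return {
        variable_name: value
        for option_name, variable_name in OPTION_TO_ENVIRONMENT_VARIABLE.items()
        if (value := storage_config.get(option_name))
    }

assert make_environment({'encryption_passphrase': 'trustno1'}) == {'BORG_PASSPHRASE': 'trustno1'}
assert make_environment({}) == {}
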
|
|
@ -1,6 +1,8 @@
|
|||
import logging
|
||||
import os
|
||||
|
||||
import borgmatic.logger
|
||||
from borgmatic.borg import environment, flags
|
||||
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
@ -13,21 +15,23 @@ def export_tar_archive(
|
|||
paths,
|
||||
destination_path,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
tar_filter=None,
|
||||
files=False,
|
||||
list_files=False,
|
||||
strip_components=None,
|
||||
):
|
||||
'''
|
||||
Given a dry-run flag, a local or remote repository path, an archive name, zero or more paths to
|
||||
export from the archive, a destination path to export to, a storage configuration dict, optional
|
||||
local and remote Borg paths, an optional filter program, whether to include per-file details,
|
||||
and an optional number of path components to strip, export the archive into the given
|
||||
destination path as a tar-formatted file.
|
||||
export from the archive, a destination path to export to, a storage configuration dict, the
|
||||
local Borg version, optional local and remote Borg paths, an optional filter program, whether to
|
||||
include per-file details, and an optional number of path components to strip, export the archive
|
||||
into the given destination path as a tar-formatted file.
|
||||
|
||||
If the destination path is "-", then stream the output to stdout instead of to a file.
|
||||
'''
|
||||
borgmatic.logger.add_custom_log_levels()
|
||||
umask = storage_config.get('umask', None)
|
||||
lock_wait = storage_config.get('lock_wait', None)
|
||||
|
||||
|
@ -37,18 +41,22 @@ def export_tar_archive(
|
|||
+ (('--umask', str(umask)) if umask else ())
|
||||
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
|
||||
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
|
||||
+ (('--list',) if files else ())
|
||||
+ (('--list',) if list_files else ())
|
||||
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
|
||||
+ (('--dry-run',) if dry_run else ())
|
||||
+ (('--tar-filter', tar_filter) if tar_filter else ())
|
||||
+ (('--strip-components', str(strip_components)) if strip_components else ())
|
||||
+ ('::'.join((repository if ':' in repository else os.path.abspath(repository), archive)),)
|
||||
+ flags.make_repository_archive_flags(
|
||||
repository if ':' in repository else os.path.abspath(repository),
|
||||
archive,
|
||||
local_borg_version,
|
||||
)
|
||||
+ (destination_path,)
|
||||
+ (tuple(paths) if paths else ())
|
||||
)
|
||||
|
||||
if files and logger.getEffectiveLevel() == logging.WARNING:
|
||||
output_log_level = logging.WARNING
|
||||
if list_files:
|
||||
output_log_level = logging.ANSWER
|
||||
else:
|
||||
output_log_level = logging.INFO
|
||||
|
||||
|
@ -61,4 +69,5 @@ def export_tar_archive(
|
|||
output_file=DO_NOT_CAPTURE if destination_path == '-' else None,
|
||||
output_log_level=output_log_level,
|
||||
borg_local_path=local_path,
|
||||
extra_environment=environment.make_environment(storage_config),
|
||||
)
|
||||
|
|
|
@ -2,12 +2,20 @@ import logging
|
|||
import os
|
||||
import subprocess
|
||||
|
||||
from borgmatic.borg import environment, feature, flags, rlist
|
||||
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def extract_last_archive_dry_run(repository, lock_wait=None, local_path='borg', remote_path=None):
|
||||
def extract_last_archive_dry_run(
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
repository,
|
||||
lock_wait=None,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
):
|
||||
'''
|
||||
Perform an extraction dry-run of the most recent archive. If there are no archives, skip the
|
||||
dry-run.
|
||||
|
@ -20,38 +28,28 @@ def extract_last_archive_dry_run(repository, lock_wait=None, local_path='borg',
|
|||
elif logger.isEnabledFor(logging.INFO):
|
||||
verbosity_flags = ('--info',)
|
||||
|
||||
full_list_command = (
|
||||
(local_path, 'list', '--short')
|
||||
+ remote_path_flags
|
||||
+ lock_wait_flags
|
||||
+ verbosity_flags
|
||||
+ (repository,)
|
||||
)
|
||||
|
||||
list_output = execute_command(
|
||||
full_list_command, output_log_level=None, borg_local_path=local_path
|
||||
)
|
||||
|
||||
try:
|
||||
last_archive_name = list_output.strip().splitlines()[-1]
|
||||
except IndexError:
|
||||
last_archive_name = rlist.resolve_archive_name(
|
||||
repository, 'latest', storage_config, local_borg_version, local_path, remote_path
|
||||
)
|
||||
except ValueError:
|
||||
logger.warning('No archives found. Skipping extract consistency check.')
|
||||
return
|
||||
|
||||
list_flag = ('--list',) if logger.isEnabledFor(logging.DEBUG) else ()
|
||||
borg_environment = environment.make_environment(storage_config)
|
||||
full_extract_command = (
|
||||
(local_path, 'extract', '--dry-run')
|
||||
+ remote_path_flags
|
||||
+ lock_wait_flags
|
||||
+ verbosity_flags
|
||||
+ list_flag
|
||||
+ (
|
||||
'{repository}::{last_archive_name}'.format(
|
||||
repository=repository, last_archive_name=last_archive_name
|
||||
),
|
||||
)
|
||||
+ flags.make_repository_archive_flags(repository, last_archive_name, local_borg_version)
|
||||
)
|
||||
|
||||
execute_command(full_extract_command, working_directory=None)
|
||||
execute_command(
|
||||
full_extract_command, working_directory=None, extra_environment=borg_environment
|
||||
)
|
||||
|
||||
|
||||
def extract_archive(
|
||||
|
@ -61,6 +59,7 @@ def extract_archive(
|
|||
paths,
|
||||
location_config,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
destination_path=None,
|
||||
|
@ -70,9 +69,9 @@ def extract_archive(
|
|||
):
|
||||
'''
|
||||
Given a dry-run flag, a local or remote repository path, an archive name, zero or more paths to
|
||||
restore from the archive, location/storage configuration dicts, optional local and remote Borg
|
||||
paths, and an optional destination path to extract to, extract the archive into the current
|
||||
directory.
|
||||
restore from the archive, the local Borg version string, location/storage configuration dicts,
|
||||
optional local and remote Borg paths, and an optional destination path to extract to, extract
|
||||
the archive into the current directory.
|
||||
|
||||
If extract to stdout is True, then start the extraction streaming to stdout, and return that
|
||||
extract process as an instance of subprocess.Popen.
|
||||
|
@ -83,10 +82,22 @@ def extract_archive(
|
|||
if progress and extract_to_stdout:
|
||||
raise ValueError('progress and extract_to_stdout cannot both be set')
|
||||
|
||||
if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
|
||||
numeric_ids_flags = ('--numeric-ids',) if location_config.get('numeric_ids') else ()
|
||||
else:
|
||||
numeric_ids_flags = ('--numeric-owner',) if location_config.get('numeric_ids') else ()
|
||||
|
||||
if strip_components == 'all':
|
||||
if not paths:
|
||||
raise ValueError('The --strip-components flag with "all" requires at least one --path')
|
||||
|
||||
# Calculate the maximum number of leading path components of the given paths.
|
||||
strip_components = max(0, *(len(path.split(os.path.sep)) - 1 for path in paths))
|
||||
|
||||
full_command = (
|
||||
(local_path, 'extract')
|
||||
+ (('--remote-path', remote_path) if remote_path else ())
|
||||
+ (('--numeric-owner',) if location_config.get('numeric_owner') else ())
|
||||
+ numeric_ids_flags
|
||||
+ (('--umask', str(umask)) if umask else ())
|
||||
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
|
||||
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
|
||||
|
@ -95,15 +106,24 @@ def extract_archive(
|
|||
+ (('--strip-components', str(strip_components)) if strip_components else ())
|
||||
+ (('--progress',) if progress else ())
|
||||
+ (('--stdout',) if extract_to_stdout else ())
|
||||
+ ('::'.join((repository if ':' in repository else os.path.abspath(repository), archive)),)
|
||||
+ flags.make_repository_archive_flags(
|
||||
repository if ':' in repository else os.path.abspath(repository),
|
||||
archive,
|
||||
local_borg_version,
|
||||
)
|
||||
+ (tuple(paths) if paths else ())
|
||||
)
|
||||
|
||||
borg_environment = environment.make_environment(storage_config)
|
||||
|
||||
# The progress output isn't compatible with captured and logged output, as progress messes with
|
||||
# the terminal directly.
|
||||
if progress:
|
||||
return execute_command(
|
||||
full_command, output_file=DO_NOT_CAPTURE, working_directory=destination_path
|
||||
full_command,
|
||||
output_file=DO_NOT_CAPTURE,
|
||||
working_directory=destination_path,
|
||||
extra_environment=borg_environment,
|
||||
)
|
||||
return None
|
||||
|
||||
|
@ -113,8 +133,11 @@ def extract_archive(
|
|||
output_file=subprocess.PIPE,
|
||||
working_directory=destination_path,
|
||||
run_to_completion=False,
|
||||
extra_environment=borg_environment,
|
||||
)
|
||||
|
||||
# Don't give Borg local path, so as to error on warnings, as Borg only gives a warning if the
|
||||
# restore paths don't exist in the archive!
|
||||
execute_command(full_command, working_directory=destination_path)
|
||||
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
|
||||
# if the restore paths don't exist in the archive.
|
||||
execute_command(
|
||||
full_command, working_directory=destination_path, extra_environment=borg_environment
|
||||
)
|
||||
|
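As a worked example of the strip_components == 'all' branch above, with hypothetical paths:

import os

paths = ('etc/nginx/nginx.conf', 'etc/nginx/conf.d/default.conf')

# The deepest requested path has four components, so strip its three leading ones.
strip_components = max(0, *(len(path.split(os.path.sep)) - 1 for path in paths))

assert strip_components == 3
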
|
40 borgmatic/borg/feature.py Normal file
|
@ -0,0 +1,40 @@
from enum import Enum

from pkg_resources import parse_version


class Feature(Enum):
    COMPACT = 1
    ATIME = 2
    NOFLAGS = 3
    NUMERIC_IDS = 4
    UPLOAD_RATELIMIT = 5
    SEPARATE_REPOSITORY_ARCHIVE = 6
    RCREATE = 7
    RLIST = 8
    RINFO = 9
    MATCH_ARCHIVES = 10
    EXCLUDED_FILES_MINUS = 11


FEATURE_TO_MINIMUM_BORG_VERSION = {
    Feature.COMPACT: parse_version('1.2.0a2'),  # borg compact
    Feature.ATIME: parse_version('1.2.0a7'),  # borg create --atime
    Feature.NOFLAGS: parse_version('1.2.0a8'),  # borg create --noflags
    Feature.NUMERIC_IDS: parse_version('1.2.0b3'),  # borg create/extract/mount --numeric-ids
    Feature.UPLOAD_RATELIMIT: parse_version('1.2.0b3'),  # borg create --upload-ratelimit
    Feature.SEPARATE_REPOSITORY_ARCHIVE: parse_version('2.0.0a2'),  # --repo with separate archive
    Feature.RCREATE: parse_version('2.0.0a2'),  # borg rcreate
    Feature.RLIST: parse_version('2.0.0a2'),  # borg rlist
    Feature.RINFO: parse_version('2.0.0a2'),  # borg rinfo
    Feature.MATCH_ARCHIVES: parse_version('2.0.0b3'),  # borg --match-archives
    Feature.EXCLUDED_FILES_MINUS: parse_version('2.0.0b5'),  # --list --filter uses "-" for excludes
}


def available(feature, borg_version):
    '''
    Given a Borg Feature constant and a Borg version string, return whether that feature is
    available in that version of Borg.
    '''
    return FEATURE_TO_MINIMUM_BORG_VERSION[feature] <= parse_version(borg_version)
|
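A quick usage sketch of the version gate defined above (version strings chosen for illustration):

from borgmatic.borg import feature

# Borg 1.1.x predates "borg compact"; Borg 2.0.0a2+ gains the rlist action.
assert not feature.available(feature.Feature.COMPACT, '1.1.17')
assert feature.available(feature.Feature.RLIST, '2.0.0b5')
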
|
@ -1,5 +1,7 @@
|
|||
import itertools
|
||||
|
||||
from borgmatic.borg import feature
|
||||
|
||||
|
||||
def make_flags(name, value):
|
||||
'''
|
||||
|
@ -29,3 +31,28 @@ def make_flags_from_arguments(arguments, excludes=()):
|
|||
if name not in excludes and not name.startswith('_')
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def make_repository_flags(repository, local_borg_version):
|
||||
'''
|
||||
Given the path of a Borg repository and the local Borg version, return Borg-version-appropriate
|
||||
command-line flags (as a tuple) for selecting that repository.
|
||||
'''
|
||||
return (
|
||||
('--repo',)
|
||||
if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
|
||||
else ()
|
||||
) + (repository,)
|
||||
|
||||
|
||||
def make_repository_archive_flags(repository, archive, local_borg_version):
|
||||
'''
|
||||
Given the path of a Borg repository, an archive name or pattern, and the local Borg version,
|
||||
return Borg-version-appropriate command-line flags (as a tuple) for selecting that repository
|
||||
and archive.
|
||||
'''
|
||||
return (
|
||||
('--repo', repository, archive)
|
||||
if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
|
||||
else (f'{repository}::{archive}',)
|
||||
)
|
||||
|
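Illustrative expectations for the two helpers above, following the feature table in feature.py: Borg 2 selects the repository with --repo and takes the archive as a separate argument, while Borg 1 keeps the repository::archive syntax.

from borgmatic.borg import flags

assert flags.make_repository_flags('repo', '1.2.3') == ('repo',)
assert flags.make_repository_flags('repo', '2.0.0b5') == ('--repo', 'repo')

assert flags.make_repository_archive_flags('repo', 'arch', '1.2.3') == ('repo::arch',)
assert flags.make_repository_archive_flags('repo', 'arch', '2.0.0b5') == ('--repo', 'repo', 'arch')
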
|
|
@ -1,19 +1,26 @@
|
|||
import logging
|
||||
|
||||
from borgmatic.borg.flags import make_flags, make_flags_from_arguments
|
||||
from borgmatic.execute import execute_command
|
||||
import borgmatic.logger
|
||||
from borgmatic.borg import environment, feature, flags
|
||||
from borgmatic.execute import execute_command, execute_command_and_capture_output
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def display_archives_info(
|
||||
repository, storage_config, info_arguments, local_path='borg', remote_path=None
|
||||
repository,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
info_arguments,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
):
|
||||
'''
|
||||
Given a local or remote repository path, a storage config dict, and the arguments to the info
|
||||
action, display summary information for Borg archives in the repository or return JSON summary
|
||||
information.
|
||||
Given a local or remote repository path, a storage config dict, the local Borg version, and the
|
||||
arguments to the info action, display summary information for Borg archives in the repository or
|
||||
return JSON summary information.
|
||||
'''
|
||||
borgmatic.logger.add_custom_log_levels()
|
||||
lock_wait = storage_config.get('lock_wait', None)
|
||||
|
||||
full_command = (
|
||||
|
@ -28,18 +35,36 @@ def display_archives_info(
|
|||
if logger.isEnabledFor(logging.DEBUG) and not info_arguments.json
|
||||
else ()
|
||||
)
|
||||
+ make_flags('remote-path', remote_path)
|
||||
+ make_flags('lock-wait', lock_wait)
|
||||
+ make_flags_from_arguments(info_arguments, excludes=('repository', 'archive'))
|
||||
+ flags.make_flags('remote-path', remote_path)
|
||||
+ flags.make_flags('lock-wait', lock_wait)
|
||||
+ (
|
||||
'::'.join((repository, info_arguments.archive))
|
||||
if info_arguments.archive
|
||||
else repository,
|
||||
(
|
||||
flags.make_flags('match-archives', f'sh:{info_arguments.prefix}*')
|
||||
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
|
||||
else flags.make_flags('glob-archives', f'{info_arguments.prefix}*')
|
||||
)
|
||||
if info_arguments.prefix
|
||||
else ()
|
||||
)
|
||||
+ flags.make_flags_from_arguments(
|
||||
info_arguments, excludes=('repository', 'archive', 'prefix')
|
||||
)
|
||||
+ flags.make_repository_flags(repository, local_borg_version)
|
||||
+ (
|
||||
flags.make_flags('match-archives', info_arguments.archive)
|
||||
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
|
||||
else flags.make_flags('glob-archives', info_arguments.archive)
|
||||
)
|
||||
)
|
||||
|
||||
return execute_command(
|
||||
full_command,
|
||||
output_log_level=None if info_arguments.json else logging.WARNING,
|
||||
borg_local_path=local_path,
|
||||
if info_arguments.json:
|
||||
return execute_command_and_capture_output(
|
||||
full_command, extra_environment=environment.make_environment(storage_config),
|
||||
)
|
||||
else:
|
||||
execute_command(
|
||||
full_command,
|
||||
output_log_level=logging.ANSWER,
|
||||
borg_local_path=local_path,
|
||||
extra_environment=environment.make_environment(storage_config),
|
||||
)
|
||||
|
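The prefix handling above translates a user-facing --prefix into a version-appropriate archive filter; the resulting flag tuples look roughly like this (values illustrative):

prefix = 'app-'

borg_2_flags = ('--match-archives', f'sh:{prefix}*')  # Borg with --match-archives support
older_borg_flags = ('--glob-archives', f'{prefix}*')  # older Borg

assert borg_2_flags == ('--match-archives', 'sh:app-*')
assert older_borg_flags == ('--glob-archives', 'app-*')
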
|
|
@ -1,58 +0,0 @@
|
|||
import logging
|
||||
import subprocess
|
||||
|
||||
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
INFO_REPOSITORY_NOT_FOUND_EXIT_CODE = 2
|
||||
|
||||
|
||||
def initialize_repository(
|
||||
repository,
|
||||
storage_config,
|
||||
encryption_mode,
|
||||
append_only=None,
|
||||
storage_quota=None,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
):
|
||||
'''
|
||||
Given a local or remote repository path, a storage configuration dict, a Borg encryption mode,
|
||||
whether the repository should be append-only, and the storage quota to use, initialize the
|
||||
repository. If the repository already exists, then log and skip initialization.
|
||||
'''
|
||||
info_command = (
|
||||
(local_path, 'info')
|
||||
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
|
||||
+ (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
|
||||
+ (('--remote-path', remote_path) if remote_path else ())
|
||||
+ (repository,)
|
||||
)
|
||||
logger.debug(' '.join(info_command))
|
||||
|
||||
try:
|
||||
execute_command(info_command, output_log_level=None)
|
||||
logger.info('Repository already exists. Skipping initialization.')
|
||||
return
|
||||
except subprocess.CalledProcessError as error:
|
||||
if error.returncode != INFO_REPOSITORY_NOT_FOUND_EXIT_CODE:
|
||||
raise
|
||||
|
||||
extra_borg_options = storage_config.get('extra_borg_options', {}).get('init', '')
|
||||
|
||||
init_command = (
|
||||
(local_path, 'init')
|
||||
+ (('--encryption', encryption_mode) if encryption_mode else ())
|
||||
+ (('--append-only',) if append_only else ())
|
||||
+ (('--storage-quota', storage_quota) if storage_quota else ())
|
||||
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
|
||||
+ (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
|
||||
+ (('--remote-path', remote_path) if remote_path else ())
|
||||
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
|
||||
+ (repository,)
|
||||
)
|
||||
|
||||
# Do not capture output here, so as to support interactive prompts.
|
||||
execute_command(init_command, output_file=DO_NOT_CAPTURE, borg_local_path=local_path)
|
|
@ -1,63 +1,41 @@
|
|||
import argparse
|
||||
import copy
|
||||
import logging
|
||||
import re
|
||||
|
||||
from borgmatic.borg.flags import make_flags, make_flags_from_arguments
|
||||
from borgmatic.execute import execute_command
|
||||
import borgmatic.logger
|
||||
from borgmatic.borg import environment, feature, flags, rlist
|
||||
from borgmatic.execute import execute_command, execute_command_and_capture_output
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
# A hack to convince Borg to exclude archives ending in ".checkpoint". This assumes that a
|
||||
# non-checkpoint archive name ends in a digit (e.g. from a timestamp).
|
||||
BORG_EXCLUDE_CHECKPOINTS_GLOB = '*[0123456789]'
|
||||
ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST = ('prefix', 'match_archives', 'sort_by', 'first', 'last')
|
||||
MAKE_FLAGS_EXCLUDES = (
|
||||
'repository',
|
||||
'archive',
|
||||
'successful',
|
||||
'paths',
|
||||
'find_paths',
|
||||
) + ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST
|
||||
|
||||
|
||||
def resolve_archive_name(repository, archive, storage_config, local_path='borg', remote_path=None):
|
||||
def make_list_command(
|
||||
repository,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
list_arguments,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
):
|
||||
'''
|
||||
Given a local or remote repository path, an archive name, a storage config dict, a local Borg
|
||||
path, and a remote Borg path, simply return the archive name. But if the archive name is
|
||||
"latest", then instead introspect the repository for the latest successful (non-checkpoint)
|
||||
archive, and return its name.
|
||||
|
||||
Raise ValueError if "latest" is given but there are no archives in the repository.
|
||||
'''
|
||||
if archive != "latest":
|
||||
return archive
|
||||
|
||||
lock_wait = storage_config.get('lock_wait', None)
|
||||
|
||||
full_command = (
|
||||
(local_path, 'list')
|
||||
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
|
||||
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
|
||||
+ make_flags('remote-path', remote_path)
|
||||
+ make_flags('lock-wait', lock_wait)
|
||||
+ make_flags('glob-archives', BORG_EXCLUDE_CHECKPOINTS_GLOB)
|
||||
+ make_flags('last', 1)
|
||||
+ ('--short', repository)
|
||||
)
|
||||
|
||||
output = execute_command(full_command, output_log_level=None, borg_local_path=local_path)
|
||||
try:
|
||||
latest_archive = output.strip().splitlines()[-1]
|
||||
except IndexError:
|
||||
raise ValueError('No archives found in the repository')
|
||||
|
||||
logger.debug('{}: Latest archive is {}'.format(repository, latest_archive))
|
||||
|
||||
return latest_archive
|
||||
|
||||
|
||||
def list_archives(repository, storage_config, list_arguments, local_path='borg', remote_path=None):
|
||||
'''
|
||||
Given a local or remote repository path, a storage config dict, and the arguments to the list
|
||||
action, display the output of listing Borg archives in the repository or return JSON output. Or,
|
||||
if an archive name is given, listing the files in that archive.
|
||||
Given a local or remote repository path, a storage config dict, the arguments to the list
|
||||
action, and local and remote Borg paths, return a command as a tuple to list archives or paths
|
||||
within an archive.
|
||||
'''
|
||||
lock_wait = storage_config.get('lock_wait', None)
|
||||
if list_arguments.successful:
|
||||
list_arguments.glob_archives = BORG_EXCLUDE_CHECKPOINTS_GLOB
|
||||
|
||||
full_command = (
|
||||
return (
|
||||
(local_path, 'list')
|
||||
+ (
|
||||
('--info',)
|
||||
|
@ -69,21 +47,194 @@ def list_archives(repository, storage_config, list_arguments, local_path='borg',
|
|||
if logger.isEnabledFor(logging.DEBUG) and not list_arguments.json
|
||||
else ()
|
||||
)
|
||||
+ make_flags('remote-path', remote_path)
|
||||
+ make_flags('lock-wait', lock_wait)
|
||||
+ make_flags_from_arguments(
|
||||
list_arguments, excludes=('repository', 'archive', 'paths', 'successful')
|
||||
)
|
||||
+ flags.make_flags('remote-path', remote_path)
|
||||
+ flags.make_flags('lock-wait', lock_wait)
|
||||
+ flags.make_flags_from_arguments(list_arguments, excludes=MAKE_FLAGS_EXCLUDES)
|
||||
+ (
|
||||
'::'.join((repository, list_arguments.archive))
|
||||
flags.make_repository_archive_flags(
|
||||
repository, list_arguments.archive, local_borg_version
|
||||
)
|
||||
if list_arguments.archive
|
||||
else repository,
|
||||
else flags.make_repository_flags(repository, local_borg_version)
|
||||
)
|
||||
+ (tuple(list_arguments.paths) if list_arguments.paths else ())
|
||||
)
|
||||
|
||||
return execute_command(
|
||||
full_command,
|
||||
output_log_level=None if list_arguments.json else logging.WARNING,
|
||||
borg_local_path=local_path,
|
||||
|
||||
def make_find_paths(find_paths):
|
||||
'''
|
||||
Given a sequence of path fragments or patterns as passed to `--find`, transform all path
|
||||
fragments into glob patterns. Pass through existing patterns untouched.
|
||||
|
||||
For example, given find_paths of:
|
||||
|
||||
['foo.txt', 'pp:root/somedir']
|
||||
|
||||
... transform that into:
|
||||
|
||||
['sh:**/*foo.txt*/**', 'pp:root/somedir']
|
||||
'''
|
||||
if not find_paths:
|
||||
return ()
|
||||
|
||||
return tuple(
|
||||
find_path
|
||||
if re.compile(r'([-!+RrPp] )|(\w\w:)').match(find_path)
|
||||
else f'sh:**/*{find_path}*/**'
|
||||
for find_path in find_paths
|
||||
)
|
||||
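A usage sketch of the --find path transformation documented above, assuming the make_find_paths function as shown (the regex passes through anything that already looks like a Borg pattern, such as a pp: or sh: prefix):

from borgmatic.borg.list import make_find_paths

assert make_find_paths(('foo.txt', 'pp:root/somedir')) == (
    'sh:**/*foo.txt*/**',
    'pp:root/somedir',
)
assert make_find_paths(()) == ()
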
|
||||
|
||||
def capture_archive_listing(
|
||||
repository,
|
||||
archive,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
list_path=None,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
):
|
||||
'''
|
||||
Given a local or remote repository path, an archive name, a storage config dict, the local Borg
|
||||
version, the archive path in which to list files, and local and remote Borg paths, capture the
|
||||
output of listing that archive and return it as a list of file paths.
|
||||
'''
|
||||
borg_environment = environment.make_environment(storage_config)
|
||||
|
||||
return tuple(
|
||||
execute_command_and_capture_output(
|
||||
make_list_command(
|
||||
repository,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
argparse.Namespace(
|
||||
repository=repository,
|
||||
archive=archive,
|
||||
paths=[f'sh:{list_path}'],
|
||||
find_paths=None,
|
||||
json=None,
|
||||
format='{path}{NL}',
|
||||
),
|
||||
local_path,
|
||||
remote_path,
|
||||
),
|
||||
extra_environment=borg_environment,
|
||||
)
|
||||
.strip('\n')
|
||||
.split('\n')
|
||||
)
|
||||
|
||||
|
||||
def list_archive(
|
||||
repository,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
list_arguments,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
):
|
||||
'''
|
||||
Given a local or remote repository path, a storage config dict, the local Borg version, the
|
||||
arguments to the list action, and local and remote Borg paths, display the output of listing
|
||||
the files of a Borg archive (or return JSON output). If list_arguments.find_paths are given,
|
||||
list the files by searching across multiple archives. If neither find_paths nor archive name
|
||||
are given, instead list the archives in the given repository.
|
||||
'''
|
||||
borgmatic.logger.add_custom_log_levels()
|
||||
|
||||
if not list_arguments.archive and not list_arguments.find_paths:
|
||||
if feature.available(feature.Feature.RLIST, local_borg_version):
|
||||
logger.warning(
|
||||
'Omitting the --archive flag on the list action is deprecated when using Borg 2.x+. Use the rlist action instead.'
|
||||
)
|
||||
|
||||
rlist_arguments = argparse.Namespace(
|
||||
repository=repository,
|
||||
short=list_arguments.short,
|
||||
format=list_arguments.format,
|
||||
json=list_arguments.json,
|
||||
prefix=list_arguments.prefix,
|
||||
match_archives=list_arguments.match_archives,
|
||||
sort_by=list_arguments.sort_by,
|
||||
first=list_arguments.first,
|
||||
last=list_arguments.last,
|
||||
)
|
||||
return rlist.list_repository(
|
||||
repository, storage_config, local_borg_version, rlist_arguments, local_path, remote_path
|
||||
)
|
||||
|
||||
if list_arguments.archive:
|
||||
for name in ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST:
|
||||
if getattr(list_arguments, name, None):
|
||||
logger.warning(
|
||||
f"The --{name.replace('_', '-')} flag on the list action is ignored when using the --archive flag."
|
||||
)
|
||||
|
||||
if list_arguments.json:
|
||||
raise ValueError(
|
||||
'The --json flag on the list action is not supported when using the --archive/--find flags.'
|
||||
)
|
||||
|
||||
borg_environment = environment.make_environment(storage_config)
|
||||
|
||||
# If there are any paths to find (and there's not a single archive already selected), start by
|
||||
# getting a list of archives to search.
|
||||
if list_arguments.find_paths and not list_arguments.archive:
|
||||
rlist_arguments = argparse.Namespace(
|
||||
repository=repository,
|
||||
short=True,
|
||||
format=None,
|
||||
json=None,
|
||||
prefix=list_arguments.prefix,
|
||||
match_archives=list_arguments.match_archives,
|
||||
sort_by=list_arguments.sort_by,
|
||||
first=list_arguments.first,
|
||||
last=list_arguments.last,
|
||||
)
|
||||
|
||||
# Ask Borg to list archives. Capture its output for use below.
|
||||
archive_lines = tuple(
|
||||
execute_command_and_capture_output(
|
||||
rlist.make_rlist_command(
|
||||
repository,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
rlist_arguments,
|
||||
local_path,
|
||||
remote_path,
|
||||
),
|
||||
extra_environment=borg_environment,
|
||||
)
|
||||
.strip('\n')
|
||||
.split('\n')
|
||||
)
|
||||
else:
|
||||
archive_lines = (list_arguments.archive,)
|
||||
|
||||
# For each archive listed by Borg, run list on the contents of that archive.
|
||||
for archive in archive_lines:
|
||||
logger.answer(f'{repository}: Listing archive {archive}')
|
||||
|
||||
archive_arguments = copy.copy(list_arguments)
|
||||
archive_arguments.archive = archive
|
||||
|
||||
# This list call is to show the files in a single archive, not list multiple archives. So
|
||||
# blank out any archive filtering flags. They'll break anyway in Borg 2.
|
||||
for name in ARCHIVE_FILTER_FLAGS_MOVED_TO_RLIST:
|
||||
setattr(archive_arguments, name, None)
|
||||
|
||||
main_command = make_list_command(
|
||||
repository,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
archive_arguments,
|
||||
local_path,
|
||||
remote_path,
|
||||
) + make_find_paths(list_arguments.find_paths)
|
||||
|
||||
execute_command(
|
||||
main_command,
|
||||
output_log_level=logging.ANSWER,
|
||||
borg_local_path=local_path,
|
||||
extra_environment=borg_environment,
|
||||
)
|
||||
|
|
|
@ -1,5 +1,6 @@
|
|||
import logging
|
||||
|
||||
from borgmatic.borg import environment, feature, flags
|
||||
from borgmatic.execute import DO_NOT_CAPTURE, execute_command
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
@ -13,13 +14,15 @@ def mount_archive(
|
|||
foreground,
|
||||
options,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
):
|
||||
'''
|
||||
Given a local or remote repository path, an optional archive name, a filesystem mount point,
|
||||
zero or more paths to mount from the archive, extra Borg mount options, a storage configuration
|
||||
dict, and optional local and remote Borg paths, mount the archive onto the mount point.
|
||||
dict, the local Borg version, and optional local and remote Borg paths, mount the archive onto
|
||||
the mount point.
|
||||
'''
|
||||
umask = storage_config.get('umask', None)
|
||||
lock_wait = storage_config.get('lock_wait', None)
|
||||
|
@ -33,14 +36,36 @@ def mount_archive(
|
|||
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
|
||||
+ (('--foreground',) if foreground else ())
|
||||
+ (('-o', options) if options else ())
|
||||
+ (('::'.join((repository, archive)),) if archive else (repository,))
|
||||
+ (
|
||||
(
|
||||
flags.make_repository_flags(repository, local_borg_version)
|
||||
+ (
|
||||
('--match-archives', archive)
|
||||
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
|
||||
else ('--glob-archives', archive)
|
||||
)
|
||||
)
|
||||
if feature.available(feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version)
|
||||
else (
|
||||
flags.make_repository_archive_flags(repository, archive, local_borg_version)
|
||||
if archive
|
||||
else flags.make_repository_flags(repository, local_borg_version)
|
||||
)
|
||||
)
|
||||
+ (mount_point,)
|
||||
+ (tuple(paths) if paths else ())
|
||||
)
|
||||
|
||||
borg_environment = environment.make_environment(storage_config)
|
||||
|
||||
# Don't capture the output when foreground mode is used so that ctrl-C can work properly.
|
||||
if foreground:
|
||||
execute_command(full_command, output_file=DO_NOT_CAPTURE, borg_local_path=local_path)
|
||||
execute_command(
|
||||
full_command,
|
||||
output_file=DO_NOT_CAPTURE,
|
||||
borg_local_path=local_path,
|
||||
extra_environment=borg_environment,
|
||||
)
|
||||
return
|
||||
|
||||
execute_command(full_command, borg_local_path=local_path)
|
||||
execute_command(full_command, borg_local_path=local_path, extra_environment=borg_environment)
|
||||
|
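The archive-selection branch above produces differently shaped mount commands per Borg version. A simplified, standalone sketch of the intent (not borgmatic's exact logic, and it ignores the case of a Borg 2 mount without an archive):

def archive_selection_flags(repository, archive, borg2):
    # Borg 2: select the repo with --repo and the archive with --match-archives.
    if borg2:
        return ('--repo', repository) + (('--match-archives', archive) if archive else ())
    # Borg 1: the archive rides along in the repository::archive positional argument.
    return (f'{repository}::{archive}',) if archive else (repository,)

assert archive_selection_flags('repo', 'arch', borg2=True) == ('--repo', 'repo', '--match-archives', 'arch')
assert archive_selection_flags('repo', None, borg2=False) == ('repo',)
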
|
|
@ -1,11 +1,13 @@
|
|||
import logging
|
||||
|
||||
import borgmatic.logger
|
||||
from borgmatic.borg import environment, feature, flags
|
||||
from borgmatic.execute import execute_command
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _make_prune_flags(retention_config):
|
||||
def make_prune_flags(retention_config, local_borg_version):
|
||||
'''
|
||||
Given a retention config dict mapping from option name to value, transform it into an iterable of
|
||||
command-line name-value flag pairs.
|
||||
|
@ -22,11 +24,13 @@ def _make_prune_flags(retention_config):
|
|||
)
|
||||
'''
|
||||
config = retention_config.copy()
|
||||
prefix = config.pop('prefix', '{hostname}-')
|
||||
|
||||
if 'prefix' not in config:
|
||||
config['prefix'] = '{hostname}-'
|
||||
elif not config['prefix']:
|
||||
config.pop('prefix')
|
||||
if prefix:
|
||||
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version):
|
||||
config['match_archives'] = f'sh:{prefix}*'
|
||||
else:
|
||||
config['glob_archives'] = f'{prefix}*'
|
||||
|
||||
return (
|
||||
('--' + option_name.replace('_', '-'), str(value)) for option_name, value in config.items()
|
||||
|
@ -38,38 +42,49 @@ def prune_archives(
|
|||
repository,
|
||||
storage_config,
|
||||
retention_config,
|
||||
local_borg_version,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
stats=False,
|
||||
files=False,
|
||||
list_archives=False,
|
||||
):
|
||||
'''
|
||||
Given a dry-run flag, a local or remote repository path, a storage config dict, and a
|
||||
retention config dict, prune Borg archives according to the retention policy specified in that
|
||||
configuration.
|
||||
'''
|
||||
borgmatic.logger.add_custom_log_levels()
|
||||
umask = storage_config.get('umask', None)
|
||||
lock_wait = storage_config.get('lock_wait', None)
|
||||
extra_borg_options = storage_config.get('extra_borg_options', {}).get('prune', '')
|
||||
|
||||
full_command = (
|
||||
(local_path, 'prune')
|
||||
+ tuple(element for pair in _make_prune_flags(retention_config) for element in pair)
|
||||
+ tuple(
|
||||
element
|
||||
for pair in make_prune_flags(retention_config, local_borg_version)
|
||||
for element in pair
|
||||
)
|
||||
+ (('--remote-path', remote_path) if remote_path else ())
|
||||
+ (('--umask', str(umask)) if umask else ())
|
||||
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
|
||||
+ (('--stats',) if stats and not dry_run else ())
|
||||
+ (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
|
||||
+ (('--list',) if files else ())
|
||||
+ (('--list',) if list_archives else ())
|
||||
+ (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
|
||||
+ (('--dry-run',) if dry_run else ())
|
||||
+ (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
|
||||
+ (repository,)
|
||||
+ flags.make_repository_flags(repository, local_borg_version)
|
||||
)
|
||||
|
||||
if (stats or files) and logger.getEffectiveLevel() == logging.WARNING:
|
||||
output_log_level = logging.WARNING
|
||||
if stats or list_archives:
|
||||
output_log_level = logging.ANSWER
|
||||
else:
|
||||
output_log_level = logging.INFO
|
||||
|
||||
execute_command(full_command, output_log_level=output_log_level, borg_local_path=local_path)
|
||||
execute_command(
|
||||
full_command,
|
||||
output_log_level=output_log_level,
|
||||
borg_local_path=local_path,
|
||||
extra_environment=environment.make_environment(storage_config),
|
||||
)
|
||||
|
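A rough sketch of what the retention-to-flag transformation is expected to produce, assuming the renamed make_prune_flags shown above and a Borg version new enough for --match-archives (values illustrative):

from borgmatic.borg.prune import make_prune_flags

flag_pairs = tuple(make_prune_flags({'keep_daily': 7, 'prefix': 'backup-'}, '2.0.0b5'))

# Flattened the same way prune_archives() does it.
flattened = tuple(element for pair in flag_pairs for element in pair)
assert flattened == ('--keep-daily', '7', '--match-archives', 'sh:backup-*')
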
|
81 borgmatic/borg/rcreate.py Normal file
|
@ -0,0 +1,81 @@
import argparse
import logging
import subprocess

from borgmatic.borg import environment, feature, flags, rinfo
from borgmatic.execute import DO_NOT_CAPTURE, execute_command

logger = logging.getLogger(__name__)


RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE = 2


def create_repository(
    dry_run,
    repository,
    storage_config,
    local_borg_version,
    encryption_mode,
    source_repository=None,
    copy_crypt_key=False,
    append_only=None,
    storage_quota=None,
    make_parent_dirs=False,
    local_path='borg',
    remote_path=None,
):
    '''
    Given a dry-run flag, a local or remote repository path, a storage configuration dict, the
    local Borg version, a Borg encryption mode, the path to another repo whose key material should
    be reused, whether the repository should be append-only, and the storage quota to use, create
    the repository. If the repository already exists, then log and skip creation.
    '''
    try:
        rinfo.display_repository_info(
            repository,
            storage_config,
            local_borg_version,
            argparse.Namespace(json=True),
            local_path,
            remote_path,
        )
        logger.info(f'{repository}: Repository already exists. Skipping creation.')
        return
    except subprocess.CalledProcessError as error:
        if error.returncode != RINFO_REPOSITORY_NOT_FOUND_EXIT_CODE:
            raise

    extra_borg_options = storage_config.get('extra_borg_options', {}).get('rcreate', '')

    rcreate_command = (
        (local_path,)
        + (
            ('rcreate',)
            if feature.available(feature.Feature.RCREATE, local_borg_version)
            else ('init',)
        )
        + (('--encryption', encryption_mode) if encryption_mode else ())
        + (('--other-repo', source_repository) if source_repository else ())
        + (('--copy-crypt-key',) if copy_crypt_key else ())
        + (('--append-only',) if append_only else ())
        + (('--storage-quota', storage_quota) if storage_quota else ())
        + (('--make-parent-dirs',) if make_parent_dirs else ())
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
        + (('--remote-path', remote_path) if remote_path else ())
        + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
        + flags.make_repository_flags(repository, local_borg_version)
    )

    if dry_run:
        logging.info(f'{repository}: Skipping repository creation (dry run)')
        return

    # Do not capture output here, so as to support interactive prompts.
    execute_command(
        rcreate_command,
        output_file=DO_NOT_CAPTURE,
        borg_local_path=local_path,
        extra_environment=environment.make_environment(storage_config),
    )
|
61 borgmatic/borg/rinfo.py Normal file
|
@ -0,0 +1,61 @@
import logging

import borgmatic.logger
from borgmatic.borg import environment, feature, flags
from borgmatic.execute import execute_command, execute_command_and_capture_output

logger = logging.getLogger(__name__)


def display_repository_info(
    repository,
    storage_config,
    local_borg_version,
    rinfo_arguments,
    local_path='borg',
    remote_path=None,
):
    '''
    Given a local or remote repository path, a storage config dict, the local Borg version, and the
    arguments to the rinfo action, display summary information for the Borg repository or return
    JSON summary information.
    '''
    borgmatic.logger.add_custom_log_levels()
    lock_wait = storage_config.get('lock_wait', None)

    full_command = (
        (local_path,)
        + (
            ('rinfo',)
            if feature.available(feature.Feature.RINFO, local_borg_version)
            else ('info',)
        )
        + (
            ('--info',)
            if logger.getEffectiveLevel() == logging.INFO and not rinfo_arguments.json
            else ()
        )
        + (
            ('--debug', '--show-rc')
            if logger.isEnabledFor(logging.DEBUG) and not rinfo_arguments.json
            else ()
        )
        + flags.make_flags('remote-path', remote_path)
        + flags.make_flags('lock-wait', lock_wait)
        + (('--json',) if rinfo_arguments.json else ())
        + flags.make_repository_flags(repository, local_borg_version)
    )

    extra_environment = environment.make_environment(storage_config)

    if rinfo_arguments.json:
        return execute_command_and_capture_output(
            full_command, extra_environment=extra_environment,
        )
    else:
        execute_command(
            full_command,
            output_log_level=logging.ANSWER,
            borg_local_path=local_path,
            extra_environment=extra_environment,
        )
|
127 borgmatic/borg/rlist.py Normal file
|
@ -0,0 +1,127 @@
|
|||
import logging
|
||||
|
||||
import borgmatic.logger
|
||||
from borgmatic.borg import environment, feature, flags
|
||||
from borgmatic.execute import execute_command, execute_command_and_capture_output
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def resolve_archive_name(
|
||||
repository, archive, storage_config, local_borg_version, local_path='borg', remote_path=None
|
||||
):
|
||||
'''
|
||||
Given a local or remote repository path, an archive name, a storage config dict, a local Borg
|
||||
path, and a remote Borg path, simply return the archive name. But if the archive name is
|
||||
"latest", then instead introspect the repository for the latest archive and return its name.
|
||||
|
||||
Raise ValueError if "latest" is given but there are no archives in the repository.
|
||||
'''
|
||||
if archive != 'latest':
|
||||
return archive
|
||||
|
||||
lock_wait = storage_config.get('lock_wait', None)
|
||||
|
||||
full_command = (
|
||||
(
|
||||
local_path,
|
||||
'rlist' if feature.available(feature.Feature.RLIST, local_borg_version) else 'list',
|
||||
)
|
||||
+ flags.make_flags('remote-path', remote_path)
|
||||
+ flags.make_flags('lock-wait', lock_wait)
|
||||
+ flags.make_flags('last', 1)
|
||||
+ ('--short',)
|
||||
+ flags.make_repository_flags(repository, local_borg_version)
|
||||
)
|
||||
|
||||
output = execute_command_and_capture_output(
|
||||
full_command, extra_environment=environment.make_environment(storage_config),
|
||||
)
|
||||
try:
|
||||
latest_archive = output.strip().splitlines()[-1]
|
||||
except IndexError:
|
||||
raise ValueError('No archives found in the repository')
|
||||
|
||||
logger.debug('{}: Latest archive is {}'.format(repository, latest_archive))
|
||||
|
||||
return latest_archive
|
||||
|
||||
|
||||
MAKE_FLAGS_EXCLUDES = ('repository', 'prefix')
|
||||
|
||||
|
||||
def make_rlist_command(
|
||||
repository,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
rlist_arguments,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
):
|
||||
'''
|
||||
Given a local or remote repository path, a storage config dict, the local Borg version, the
|
||||
arguments to the rlist action, and local and remote Borg paths, return a command as a tuple to
|
||||
list archives with a repository.
|
||||
'''
|
||||
lock_wait = storage_config.get('lock_wait', None)
|
||||
|
||||
return (
|
||||
(
|
||||
local_path,
|
||||
'rlist' if feature.available(feature.Feature.RLIST, local_borg_version) else 'list',
|
||||
)
|
||||
+ (
|
||||
('--info',)
|
||||
if logger.getEffectiveLevel() == logging.INFO and not rlist_arguments.json
|
||||
else ()
|
||||
)
|
||||
+ (
|
||||
('--debug', '--show-rc')
|
||||
if logger.isEnabledFor(logging.DEBUG) and not rlist_arguments.json
|
||||
else ()
|
||||
)
|
||||
+ flags.make_flags('remote-path', remote_path)
|
||||
+ flags.make_flags('lock-wait', lock_wait)
|
||||
+ (
|
||||
(
|
||||
flags.make_flags('match-archives', f'sh:{rlist_arguments.prefix}*')
|
||||
if feature.available(feature.Feature.MATCH_ARCHIVES, local_borg_version)
|
||||
else flags.make_flags('glob-archives', f'{rlist_arguments.prefix}*')
|
||||
)
|
||||
if rlist_arguments.prefix
|
||||
else ()
|
||||
)
|
||||
+ flags.make_flags_from_arguments(rlist_arguments, excludes=MAKE_FLAGS_EXCLUDES)
|
||||
+ flags.make_repository_flags(repository, local_borg_version)
|
||||
)
|
||||
|
||||
|
||||
def list_repository(
|
||||
repository,
|
||||
storage_config,
|
||||
local_borg_version,
|
||||
rlist_arguments,
|
||||
local_path='borg',
|
||||
remote_path=None,
|
||||
):
|
||||
'''
|
||||
Given a local or remote repository path, a storage config dict, the local Borg version, the
|
||||
arguments to the list action, and local and remote Borg paths, display the output of listing
|
||||
Borg archives in the given repository (or return JSON output).
|
||||
'''
|
||||
borgmatic.logger.add_custom_log_levels()
|
||||
borg_environment = environment.make_environment(storage_config)
|
||||
|
||||
main_command = make_rlist_command(
|
||||
repository, storage_config, local_borg_version, rlist_arguments, local_path, remote_path
|
||||
)
|
||||
|
||||
if rlist_arguments.json:
|
||||
return execute_command_and_capture_output(main_command, extra_environment=borg_environment,)
|
||||
else:
|
||||
execute_command(
|
||||
main_command,
|
||||
output_log_level=logging.ANSWER,
|
||||
borg_local_path=local_path,
|
||||
extra_environment=borg_environment,
|
||||
)
|
1 borgmatic/borg/state.py Normal file
|
@ -0,0 +1 @@
DEFAULT_BORGMATIC_SOURCE_DIRECTORY = '~/.borgmatic'
|
52 borgmatic/borg/transfer.py Normal file
|
@ -0,0 +1,52 @@
import logging

import borgmatic.logger
from borgmatic.borg import environment, flags
from borgmatic.execute import DO_NOT_CAPTURE, execute_command

logger = logging.getLogger(__name__)


def transfer_archives(
    dry_run,
    repository,
    storage_config,
    local_borg_version,
    transfer_arguments,
    local_path='borg',
    remote_path=None,
):
    '''
    Given a dry-run flag, a local or remote repository path, a storage config dict, the local Borg
    version, and the arguments to the transfer action, transfer archives to the given repository.
    '''
    borgmatic.logger.add_custom_log_levels()

    full_command = (
        (local_path, 'transfer')
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
        + flags.make_flags('remote-path', remote_path)
        + flags.make_flags('lock-wait', storage_config.get('lock_wait', None))
        + (('--progress',) if transfer_arguments.progress else ())
        + (
            flags.make_flags(
                'match-archives', transfer_arguments.match_archives or transfer_arguments.archive
            )
        )
        + flags.make_flags_from_arguments(
            transfer_arguments,
            excludes=('repository', 'source_repository', 'archive', 'match_archives'),
        )
        + flags.make_repository_flags(repository, local_borg_version)
        + flags.make_flags('other-repo', transfer_arguments.source_repository)
        + flags.make_flags('dry-run', dry_run)
    )

    return execute_command(
        full_command,
        output_log_level=logging.ANSWER,
        output_file=DO_NOT_CAPTURE if transfer_arguments.progress else None,
        borg_local_path=local_path,
        extra_environment=environment.make_environment(storage_config),
    )
|
28 borgmatic/borg/version.py Normal file
|
@ -0,0 +1,28 @@
import logging

from borgmatic.borg import environment
from borgmatic.execute import execute_command_and_capture_output

logger = logging.getLogger(__name__)


def local_borg_version(storage_config, local_path='borg'):
    '''
    Given a storage configuration dict and a local Borg binary path, return a version string for it.

    Raise OSError or CalledProcessError if there is a problem running Borg.
    Raise ValueError if the version cannot be parsed.
    '''
    full_command = (
        (local_path, '--version')
        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
    )
    output = execute_command_and_capture_output(
        full_command, extra_environment=environment.make_environment(storage_config),
    )

    try:
        return output.split(' ')[1].strip()
    except IndexError:
        raise ValueError('Could not parse Borg version string')
|
|
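The parsing above assumes `borg --version` prints a line such as "borg 1.2.4"; here is a self-contained sketch of that split-and-strip step, with the sample output string assumed rather than taken from the diff.

# Standalone sketch of the version parsing used above; the sample output line is assumed.
def parse_borg_version(output):
    try:
        return output.split(' ')[1].strip()
    except IndexError:
        raise ValueError('Could not parse Borg version string')


print(parse_borg_version('borg 1.2.4\n'))  # 1.2.4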
@ -4,28 +4,34 @@ from argparse import Action, ArgumentParser
|
|||
from borgmatic.config import collect
|
||||
|
||||
SUBPARSER_ALIASES = {
|
||||
'init': ['--init', '-I'],
|
||||
'prune': ['--prune', '-p'],
|
||||
'create': ['--create', '-C'],
|
||||
'check': ['--check', '-k'],
|
||||
'extract': ['--extract', '-x'],
|
||||
'export-tar': ['--export-tar'],
|
||||
'mount': ['--mount', '-m'],
|
||||
'umount': ['--umount', '-u'],
|
||||
'restore': ['--restore', '-r'],
|
||||
'list': ['--list', '-l'],
|
||||
'info': ['--info', '-i'],
|
||||
'rcreate': ['init', '-I'],
|
||||
'prune': ['-p'],
|
||||
'compact': [],
|
||||
'create': ['-C'],
|
||||
'check': ['-k'],
|
||||
'extract': ['-x'],
|
||||
'export-tar': [],
|
||||
'mount': ['-m'],
|
||||
'umount': ['-u'],
|
||||
'restore': ['-r'],
|
||||
'rlist': [],
|
||||
'list': ['-l'],
|
||||
'rinfo': [],
|
||||
'info': ['-i'],
|
||||
'transfer': [],
|
||||
'break-lock': [],
|
||||
'borg': [],
|
||||
}
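parse_subparser_arguments below inverts this mapping so that any alias resolves back to its canonical action name; a small hedged sketch of that inversion using only a few of the entries above (the full comprehension is only partially visible in this hunk, so its exact form here is a reconstruction):

# Sketch of the alias-to-action inversion; only a handful of entries are shown.
SUBPARSER_ALIASES = {'rcreate': ['init', '-I'], 'create': ['-C'], 'rlist': []}
alias_to_subparser_name = {
    alias: subparser_name
    for subparser_name, aliases in SUBPARSER_ALIASES.items()
    for alias in aliases
}
print(alias_to_subparser_name['-C'])    # create
print(alias_to_subparser_name['init'])  # rcreate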
|
||||
|
||||
|
||||
def parse_subparser_arguments(unparsed_arguments, subparsers):
|
||||
'''
|
||||
Given a sequence of arguments, and a subparsers object as returned by
|
||||
argparse.ArgumentParser().add_subparsers(), give each requested action's subparser a shot at
|
||||
parsing all arguments. This allows common arguments like "--repository" to be shared across
|
||||
multiple subparsers.
|
||||
Given a sequence of arguments and a dict from subparser name to argparse.ArgumentParser
|
||||
instance, give each requested action's subparser a shot at parsing all arguments. This allows
|
||||
common arguments like "--repository" to be shared across multiple subparsers.
|
||||
|
||||
Return the result as a dict mapping from subparser name to a parsed namespace of arguments.
|
||||
Return the result as a tuple of (a dict mapping from subparser name to a parsed namespace of
|
||||
arguments, a list of remaining arguments not claimed by any subparser).
|
||||
'''
|
||||
arguments = collections.OrderedDict()
|
||||
remaining_arguments = list(unparsed_arguments)
|
||||
|
@ -35,11 +41,17 @@ def parse_subparser_arguments(unparsed_arguments, subparsers):
|
|||
for alias in aliases
|
||||
}
|
||||
|
||||
for subparser_name, subparser in subparsers.choices.items():
|
||||
if subparser_name not in remaining_arguments:
|
||||
continue
|
||||
# If the "borg" action is used, skip all other subparsers. This avoids confusion like
|
||||
# "borg list" triggering borgmatic's own list action.
|
||||
if 'borg' in unparsed_arguments:
|
||||
subparsers = {'borg': subparsers['borg']}
|
||||
|
||||
canonical_name = alias_to_subparser_name.get(subparser_name, subparser_name)
|
||||
for argument in remaining_arguments:
|
||||
canonical_name = alias_to_subparser_name.get(argument, argument)
|
||||
subparser = subparsers.get(canonical_name)
|
||||
|
||||
if not subparser:
|
||||
continue
|
||||
|
||||
# If a parsed value happens to be the same as the name of a subparser, remove it from the
|
||||
# remaining arguments. This prevents, for instance, "check --only extract" from triggering
|
||||
|
@ -47,59 +59,45 @@ def parse_subparser_arguments(unparsed_arguments, subparsers):
|
|||
parsed, unused_remaining = subparser.parse_known_args(unparsed_arguments)
|
||||
for value in vars(parsed).values():
|
||||
if isinstance(value, str):
|
||||
if value in subparsers.choices:
|
||||
if value in subparsers:
|
||||
remaining_arguments.remove(value)
|
||||
elif isinstance(value, list):
|
||||
for item in value:
|
||||
if item in subparsers.choices:
|
||||
if item in subparsers:
|
||||
remaining_arguments.remove(item)
|
||||
|
||||
arguments[canonical_name] = parsed
|
||||
|
||||
# If no actions are explicitly requested, assume defaults: prune, create, and check.
|
||||
# If no actions are explicitly requested, assume defaults.
|
||||
if not arguments and '--help' not in unparsed_arguments and '-h' not in unparsed_arguments:
|
||||
for subparser_name in ('prune', 'create', 'check'):
|
||||
subparser = subparsers.choices[subparser_name]
|
||||
for subparser_name in ('create', 'prune', 'compact', 'check'):
|
||||
subparser = subparsers[subparser_name]
|
||||
parsed, unused_remaining = subparser.parse_known_args(unparsed_arguments)
|
||||
arguments[subparser_name] = parsed
|
||||
|
||||
return arguments
|
||||
|
||||
|
||||
def parse_global_arguments(unparsed_arguments, top_level_parser, subparsers):
|
||||
'''
|
||||
Given a sequence of arguments, a top-level parser (containing subparsers), and a subparsers
|
||||
object as returned by argparse.ArgumentParser().add_subparsers(), parse and return any global
|
||||
arguments as a parsed argparse.Namespace instance.
|
||||
'''
|
||||
# Ask each subparser, one by one, to greedily consume arguments. Any arguments that remain
|
||||
# are global arguments.
|
||||
remaining_arguments = list(unparsed_arguments)
|
||||
present_subparser_names = set()
|
||||
|
||||
for subparser_name, subparser in subparsers.choices.items():
|
||||
if subparser_name not in remaining_arguments:
|
||||
# Now ask each subparser, one by one, to greedily consume arguments.
|
||||
for subparser_name, subparser in subparsers.items():
|
||||
if subparser_name not in arguments.keys():
|
||||
continue
|
||||
|
||||
present_subparser_names.add(subparser_name)
|
||||
subparser = subparsers[subparser_name]
|
||||
unused_parsed, remaining_arguments = subparser.parse_known_args(remaining_arguments)
|
||||
|
||||
# If no actions are explicitly requested, assume defaults: prune, create, and check.
|
||||
if (
|
||||
not present_subparser_names
|
||||
and '--help' not in unparsed_arguments
|
||||
and '-h' not in unparsed_arguments
|
||||
):
|
||||
for subparser_name in ('prune', 'create', 'check'):
|
||||
subparser = subparsers.choices[subparser_name]
|
||||
unused_parsed, remaining_arguments = subparser.parse_known_args(remaining_arguments)
|
||||
# Special case: If "borg" is present in the arguments, consume all arguments after (+1) the
|
||||
# "borg" action.
|
||||
if 'borg' in arguments:
|
||||
borg_options_index = remaining_arguments.index('borg') + 1
|
||||
arguments['borg'].options = remaining_arguments[borg_options_index:]
|
||||
remaining_arguments = remaining_arguments[:borg_options_index]
|
||||
|
||||
# Remove the subparser names themselves.
|
||||
for subparser_name in present_subparser_names:
|
||||
for subparser_name, subparser in subparsers.items():
|
||||
if subparser_name in remaining_arguments:
|
||||
remaining_arguments.remove(subparser_name)
|
||||
|
||||
return top_level_parser.parse_args(remaining_arguments)
|
||||
return (arguments, remaining_arguments)
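A hedged usage sketch of the new tuple return value; the action and flags here are invented, and the exact leftover arguments depend on what each subparser claims.

# Hypothetical usage (not from the diff): parse one action, leaving global flags unclaimed.
from borgmatic.commands import arguments

top_level_parser, subparsers = arguments.make_parsers()
parsed, remaining = arguments.parse_subparser_arguments(
    ('create', '--progress', '--verbosity', '1'), subparsers.choices
)
print(sorted(parsed))  # expected to contain 'create'
print(remaining)       # leftover arguments for the top-level parser, e.g. the --verbosity flag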
|
||||
|
||||
|
||||
class Extend_action(Action):
|
||||
|
@ -116,10 +114,9 @@ class Extend_action(Action):
|
|||
setattr(namespace, self.dest, list(values))
|
||||
|
||||
|
||||
def parse_arguments(*unparsed_arguments):
|
||||
def make_parsers():
|
||||
'''
|
||||
Given command-line arguments with which this script was invoked, parse the arguments and return
|
||||
them as a dict mapping from subparser name (or "global") to an argparse.Namespace instance.
|
||||
Build a top-level parser and its subparsers and return them as a tuple.
|
||||
'''
|
||||
config_paths = collect.get_default_config_paths(expand_home=True)
|
||||
unexpanded_config_paths = collect.get_default_config_paths(expand_home=False)
|
||||
|
@ -196,6 +193,18 @@ def parse_arguments(*unparsed_arguments):
|
|||
action='extend',
|
||||
help='One or more configuration file options to override with specified values',
|
||||
)
|
||||
global_group.add_argument(
|
||||
'--no-environment-interpolation',
|
||||
dest='resolve_env',
|
||||
action='store_false',
|
||||
help='Do not resolve environment variables in configuration file',
|
||||
)
|
||||
global_group.add_argument(
|
||||
'--bash-completion',
|
||||
default=False,
|
||||
action='store_true',
|
||||
help='Show bash completion script and exit',
|
||||
)
|
||||
global_group.add_argument(
|
||||
'--version',
|
||||
dest='version',
|
||||
|
@ -207,8 +216,8 @@ def parse_arguments(*unparsed_arguments):
|
|||
top_level_parser = ArgumentParser(
|
||||
description='''
|
||||
Simple, configuration-driven backup software for servers and workstations. If none of
|
||||
the action options are given, then borgmatic defaults to: prune, create, and check
|
||||
archives.
|
||||
the action options are given, then borgmatic defaults to: create, prune, compact, and
|
||||
check.
|
||||
''',
|
||||
parents=[global_parser],
|
||||
)
|
||||
|
@ -216,41 +225,111 @@ def parse_arguments(*unparsed_arguments):
|
|||
subparsers = top_level_parser.add_subparsers(
|
||||
title='actions',
|
||||
metavar='',
|
||||
help='Specify zero or more actions. Defaults to prune, create, and check. Use --help with action for details:',
|
||||
help='Specify zero or more actions. Defaults to create, prune, compact, and check. Use --help with action for details:',
|
||||
)
|
||||
init_parser = subparsers.add_parser(
|
||||
'init',
|
||||
aliases=SUBPARSER_ALIASES['init'],
|
||||
help='Initialize an empty Borg repository',
|
||||
description='Initialize an empty Borg repository',
|
||||
rcreate_parser = subparsers.add_parser(
|
||||
'rcreate',
|
||||
aliases=SUBPARSER_ALIASES['rcreate'],
|
||||
help='Create a new, empty Borg repository',
|
||||
description='Create a new, empty Borg repository',
|
||||
add_help=False,
|
||||
)
|
||||
init_group = init_parser.add_argument_group('init arguments')
|
||||
init_group.add_argument(
|
||||
rcreate_group = rcreate_parser.add_argument_group('rcreate arguments')
|
||||
rcreate_group.add_argument(
|
||||
'-e',
|
||||
'--encryption',
|
||||
dest='encryption_mode',
|
||||
help='Borg repository encryption mode',
|
||||
required=True,
|
||||
)
|
||||
init_group.add_argument(
|
||||
'--append-only',
|
||||
dest='append_only',
|
||||
rcreate_group.add_argument(
|
||||
'--source-repository',
|
||||
'--other-repo',
|
||||
metavar='KEY_REPOSITORY',
|
||||
help='Path to an existing Borg repository whose key material should be reused (Borg 2.x+ only)',
|
||||
)
|
||||
rcreate_group.add_argument(
|
||||
'--repository',
|
||||
help='Path of the new repository to create (must already be specified in a borgmatic configuration file), defaults to the configured repository if there is only one',
|
||||
)
|
||||
rcreate_group.add_argument(
|
||||
'--copy-crypt-key',
|
||||
action='store_true',
|
||||
help='Create an append-only repository',
|
||||
help='Copy the crypt key used for authenticated encryption from the source repository, defaults to a new random key (Borg 2.x+ only)',
|
||||
)
|
||||
init_group.add_argument(
|
||||
'--storage-quota',
|
||||
dest='storage_quota',
|
||||
help='Create a repository with a fixed storage quota',
|
||||
rcreate_group.add_argument(
|
||||
'--append-only', action='store_true', help='Create an append-only repository',
|
||||
)
|
||||
rcreate_group.add_argument(
|
||||
'--storage-quota', help='Create a repository with a fixed storage quota',
|
||||
)
|
||||
rcreate_group.add_argument(
|
||||
'--make-parent-dirs',
|
||||
action='store_true',
|
||||
help='Create any missing parent directories of the repository directory',
|
||||
)
|
||||
rcreate_group.add_argument(
|
||||
'-h', '--help', action='help', help='Show this help message and exit'
|
||||
)
|
||||
|
||||
transfer_parser = subparsers.add_parser(
|
||||
'transfer',
|
||||
aliases=SUBPARSER_ALIASES['transfer'],
|
||||
help='Transfer archives from one repository to another, optionally upgrading the transferred data (Borg 2.0+ only)',
|
||||
description='Transfer archives from one repository to another, optionally upgrading the transferred data (Borg 2.0+ only)',
|
||||
add_help=False,
|
||||
)
|
||||
transfer_group = transfer_parser.add_argument_group('transfer arguments')
|
||||
transfer_group.add_argument(
|
||||
'--repository',
|
||||
help='Path of existing destination repository to transfer archives to, defaults to the configured repository if there is only one',
|
||||
)
|
||||
transfer_group.add_argument(
|
||||
'--source-repository',
|
||||
help='Path of existing source repository to transfer archives from',
|
||||
required=True,
|
||||
)
|
||||
transfer_group.add_argument(
|
||||
'--archive',
|
||||
help='Name of single archive to transfer (or "latest"), defaults to transferring all archives',
|
||||
)
|
||||
transfer_group.add_argument(
|
||||
'--upgrader',
|
||||
help='Upgrader type used to convert the transferred data, e.g. "From12To20" to upgrade data from Borg 1.2 to 2.0 format, defaults to no conversion',
|
||||
)
|
||||
transfer_group.add_argument(
|
||||
'--progress',
|
||||
default=False,
|
||||
action='store_true',
|
||||
help='Display progress as each archive is transferred',
|
||||
)
|
||||
transfer_group.add_argument(
|
||||
'-a',
|
||||
'--match-archives',
|
||||
'--glob-archives',
|
||||
metavar='PATTERN',
|
||||
help='Only transfer archives with names matching this pattern',
|
||||
)
|
||||
transfer_group.add_argument(
|
||||
'--sort-by', metavar='KEYS', help='Comma-separated list of sorting keys'
|
||||
)
|
||||
transfer_group.add_argument(
|
||||
'--first',
|
||||
metavar='N',
|
||||
help='Only transfer first N archives after other filters are applied',
|
||||
)
|
||||
transfer_group.add_argument(
|
||||
'--last', metavar='N', help='Only transfer last N archives after other filters are applied'
|
||||
)
|
||||
transfer_group.add_argument(
|
||||
'-h', '--help', action='help', help='Show this help message and exit'
|
||||
)
|
||||
init_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
|
||||
|
||||
prune_parser = subparsers.add_parser(
|
||||
'prune',
|
||||
aliases=SUBPARSER_ALIASES['prune'],
|
||||
help='Prune archives according to the retention policy',
|
||||
description='Prune archives according to the retention policy',
|
||||
help='Prune archives according to the retention policy (with Borg 1.2+, run compact afterwards to actually free space)',
|
||||
description='Prune archives according to the retention policy (with Borg 1.2+, run compact afterwards to actually free space)',
|
||||
add_help=False,
|
||||
)
|
||||
prune_group = prune_parser.add_argument_group('prune arguments')
|
||||
|
@ -262,15 +341,47 @@ def parse_arguments(*unparsed_arguments):
|
|||
help='Display statistics of archive',
|
||||
)
|
||||
prune_group.add_argument(
|
||||
'--files', dest='files', default=False, action='store_true', help='Show per-file details'
|
||||
'--list', dest='list_archives', action='store_true', help='List archives kept/pruned'
|
||||
)
|
||||
prune_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
|
||||
|
||||
compact_parser = subparsers.add_parser(
|
||||
'compact',
|
||||
aliases=SUBPARSER_ALIASES['compact'],
|
||||
help='Compact segments to free space (Borg 1.2+, borgmatic 1.5.23+ only)',
|
||||
description='Compact segments to free space (Borg 1.2+, borgmatic 1.5.23+ only)',
|
||||
add_help=False,
|
||||
)
|
||||
compact_group = compact_parser.add_argument_group('compact arguments')
|
||||
compact_group.add_argument(
|
||||
'--progress',
|
||||
dest='progress',
|
||||
default=False,
|
||||
action='store_true',
|
||||
help='Display progress as each segment is compacted',
|
||||
)
|
||||
compact_group.add_argument(
|
||||
'--cleanup-commits',
|
||||
dest='cleanup_commits',
|
||||
default=False,
|
||||
action='store_true',
|
||||
help='Clean up commit-only 17-byte segment files left behind by Borg 1.1 (flag in Borg 1.2 only)',
|
||||
)
|
||||
compact_group.add_argument(
|
||||
'--threshold',
|
||||
type=int,
|
||||
dest='threshold',
|
||||
help='Minimum saved space percentage threshold for compacting a segment, defaults to 10',
|
||||
)
|
||||
compact_group.add_argument(
|
||||
'-h', '--help', action='help', help='Show this help message and exit'
|
||||
)
|
||||
|
||||
create_parser = subparsers.add_parser(
|
||||
'create',
|
||||
aliases=SUBPARSER_ALIASES['create'],
|
||||
help='Create archives (actually perform backups)',
|
||||
description='Create archives (actually perform backups)',
|
||||
help='Create an archive (actually perform a backup)',
|
||||
description='Create an archive (actually perform a backup)',
|
||||
add_help=False,
|
||||
)
|
||||
create_group = create_parser.add_argument_group('create arguments')
|
||||
|
@ -289,7 +400,7 @@ def parse_arguments(*unparsed_arguments):
|
|||
help='Display statistics of archive',
|
||||
)
|
||||
create_group.add_argument(
|
||||
'--files', dest='files', default=False, action='store_true', help='Show per-file details'
|
||||
'--list', '--files', dest='list_files', action='store_true', help='Show per-file details'
|
||||
)
|
||||
create_group.add_argument(
|
||||
'--json', dest='json', default=False, action='store_true', help='Output results as JSON'
|
||||
|
@ -316,7 +427,7 @@ def parse_arguments(*unparsed_arguments):
|
|||
dest='repair',
|
||||
default=False,
|
||||
action='store_true',
|
||||
help='Attempt to repair any inconsistencies found (experimental and only for interactive use)',
|
||||
help='Attempt to repair any inconsistencies found (for interactive use)',
|
||||
)
|
||||
check_group.add_argument(
|
||||
'--only',
|
||||
|
@ -324,7 +435,13 @@ def parse_arguments(*unparsed_arguments):
|
|||
choices=('repository', 'archives', 'data', 'extract'),
|
||||
dest='only',
|
||||
action='append',
|
||||
help='Run a particular consistency check (repository, archives, data, or extract) instead of configured checks; can specify flag multiple times',
|
||||
help='Run a particular consistency check (repository, archives, data, or extract) instead of configured checks (subject to configured frequency, can specify flag multiple times)',
|
||||
)
|
||||
check_group.add_argument(
|
||||
'--force',
|
||||
default=False,
|
||||
action='store_true',
|
||||
help='Ignore configured check frequencies and run checks unconditionally',
|
||||
)
|
||||
check_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
|
||||
|
||||
|
@ -359,10 +476,9 @@ def parse_arguments(*unparsed_arguments):
|
|||
)
|
||||
extract_group.add_argument(
|
||||
'--strip-components',
|
||||
type=int,
|
||||
type=lambda number: number if number == 'all' else int(number),
|
||||
metavar='NUMBER',
|
||||
dest='strip_components',
|
||||
help='Number of leading path components to remove from each extracted path. Skip paths with fewer elements',
|
||||
help='Number of leading path components to remove from each extracted path or "all" to strip all leading path components. Skip paths with fewer elements',
|
||||
)
|
||||
extract_group.add_argument(
|
||||
'--progress',
|
||||
|
@ -401,14 +517,14 @@ def parse_arguments(*unparsed_arguments):
|
|||
'--destination',
|
||||
metavar='PATH',
|
||||
dest='destination',
|
||||
help='Path to destination export tar file, or "-" for stdout (but be careful about dirtying output with --verbosity or --files)',
|
||||
help='Path to destination export tar file, or "-" for stdout (but be careful about dirtying output with --verbosity or --list)',
|
||||
required=True,
|
||||
)
|
||||
export_tar_group.add_argument(
|
||||
'--tar-filter', help='Name of filter program to pipe data through'
|
||||
)
|
||||
export_tar_group.add_argument(
|
||||
'--files', default=False, action='store_true', help='Show per-file details'
|
||||
'--list', '--files', dest='list_files', action='store_true', help='Show per-file details'
|
||||
)
|
||||
export_tar_group.add_argument(
|
||||
'--strip-components',
|
||||
|
@ -495,34 +611,80 @@ def parse_arguments(*unparsed_arguments):
|
|||
metavar='NAME',
|
||||
nargs='+',
|
||||
dest='databases',
|
||||
help='Names of databases to restore from archive, defaults to all databases. Note that any databases to restore must be defined in borgmatic\'s configuration',
|
||||
help="Names of databases to restore from archive, defaults to all databases. Note that any databases to restore must be defined in borgmatic's configuration",
|
||||
)
|
||||
restore_group.add_argument(
|
||||
'-h', '--help', action='help', help='Show this help message and exit'
|
||||
)
|
||||
|
||||
rlist_parser = subparsers.add_parser(
|
||||
'rlist',
|
||||
aliases=SUBPARSER_ALIASES['rlist'],
|
||||
help='List repository',
|
||||
description='List the archives in a repository',
|
||||
add_help=False,
|
||||
)
|
||||
rlist_group = rlist_parser.add_argument_group('rlist arguments')
|
||||
rlist_group.add_argument(
|
||||
'--repository', help='Path of repository to list, defaults to the configured repositories',
|
||||
)
|
||||
rlist_group.add_argument(
|
||||
'--short', default=False, action='store_true', help='Output only archive names'
|
||||
)
|
||||
rlist_group.add_argument('--format', help='Format for archive listing')
|
||||
rlist_group.add_argument(
|
||||
'--json', default=False, action='store_true', help='Output results as JSON'
|
||||
)
|
||||
rlist_group.add_argument(
|
||||
'-P', '--prefix', help='Only list archive names starting with this prefix'
|
||||
)
|
||||
rlist_group.add_argument(
|
||||
'-a',
|
||||
'--match-archives',
|
||||
'--glob-archives',
|
||||
metavar='PATTERN',
|
||||
help='Only list archive names matching this pattern',
|
||||
)
|
||||
rlist_group.add_argument(
|
||||
'--sort-by', metavar='KEYS', help='Comma-separated list of sorting keys'
|
||||
)
|
||||
rlist_group.add_argument(
|
||||
'--first', metavar='N', help='List first N archives after other filters are applied'
|
||||
)
|
||||
rlist_group.add_argument(
|
||||
'--last', metavar='N', help='List last N archives after other filters are applied'
|
||||
)
|
||||
rlist_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
|
||||
|
||||
list_parser = subparsers.add_parser(
|
||||
'list',
|
||||
aliases=SUBPARSER_ALIASES['list'],
|
||||
help='List archives',
|
||||
description='List archives or the contents of an archive',
|
||||
help='List archive',
|
||||
description='List the files in an archive or search for a file across archives',
|
||||
add_help=False,
|
||||
)
|
||||
list_group = list_parser.add_argument_group('list arguments')
|
||||
list_group.add_argument(
|
||||
'--repository',
|
||||
help='Path of repository to list, defaults to the configured repository if there is only one',
|
||||
help='Path of repository containing archive to list, defaults to the configured repositories',
|
||||
)
|
||||
list_group.add_argument('--archive', help='Name of archive to list (or "latest")')
|
||||
list_group.add_argument('--archive', help='Name of the archive to list (or "latest")')
|
||||
list_group.add_argument(
|
||||
'--path',
|
||||
metavar='PATH',
|
||||
nargs='+',
|
||||
dest='paths',
|
||||
help='Paths to list from archive, defaults to the entire archive',
|
||||
help='Paths or patterns to list from a single selected archive (via "--archive"), defaults to listing the entire archive',
|
||||
)
|
||||
list_group.add_argument(
|
||||
'--short', default=False, action='store_true', help='Output only archive or path names'
|
||||
'--find',
|
||||
metavar='PATH',
|
||||
nargs='+',
|
||||
dest='find_paths',
|
||||
help='Partial paths or patterns to search for and list across multiple archives',
|
||||
)
|
||||
list_group.add_argument(
|
||||
'--short', default=False, action='store_true', help='Output only path names'
|
||||
)
|
||||
list_group.add_argument('--format', help='Format for file listing')
|
||||
list_group.add_argument(
|
||||
|
@ -532,13 +694,17 @@ def parse_arguments(*unparsed_arguments):
|
|||
'-P', '--prefix', help='Only list archive names starting with this prefix'
|
||||
)
|
||||
list_group.add_argument(
|
||||
'-a', '--glob-archives', metavar='GLOB', help='Only list archive names matching this glob'
|
||||
'-a',
|
||||
'--match-archives',
|
||||
'--glob-archives',
|
||||
metavar='PATTERN',
|
||||
help='Only list archive names matching this pattern',
|
||||
)
|
||||
list_group.add_argument(
|
||||
'--successful',
|
||||
default=False,
|
||||
default=True,
|
||||
action='store_true',
|
||||
help='Only list archive names of successful (non-checkpoint) backups',
|
||||
help='Deprecated; no effect. Newer versions of Borg show successful (non-checkpoint) archives by default.',
|
||||
)
|
||||
list_group.add_argument(
|
||||
'--sort-by', metavar='KEYS', help='Comma-separated list of sorting keys'
|
||||
|
@ -563,17 +729,34 @@ def parse_arguments(*unparsed_arguments):
|
|||
)
|
||||
list_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
|
||||
|
||||
rinfo_parser = subparsers.add_parser(
|
||||
'rinfo',
|
||||
aliases=SUBPARSER_ALIASES['rinfo'],
|
||||
help='Show repository summary information such as disk space used',
|
||||
description='Show repository summary information such as disk space used',
|
||||
add_help=False,
|
||||
)
|
||||
rinfo_group = rinfo_parser.add_argument_group('rinfo arguments')
|
||||
rinfo_group.add_argument(
|
||||
'--repository',
|
||||
help='Path of repository to show info for, defaults to the configured repository if there is only one',
|
||||
)
|
||||
rinfo_group.add_argument(
|
||||
'--json', dest='json', default=False, action='store_true', help='Output results as JSON'
|
||||
)
|
||||
rinfo_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
|
||||
|
||||
info_parser = subparsers.add_parser(
|
||||
'info',
|
||||
aliases=SUBPARSER_ALIASES['info'],
|
||||
help='Display summary information on archives',
|
||||
description='Display summary information on archives',
|
||||
help='Show archive summary information such as disk space used',
|
||||
description='Show archive summary information such as disk space used',
|
||||
add_help=False,
|
||||
)
|
||||
info_group = info_parser.add_argument_group('info arguments')
|
||||
info_group.add_argument(
|
||||
'--repository',
|
||||
help='Path of repository to show info for, defaults to the configured repository if there is only one',
|
||||
help='Path of repository containing archive to show info for, defaults to the configured repository if there is only one',
|
||||
)
|
||||
info_group.add_argument('--archive', help='Name of archive to show info for (or "latest")')
|
||||
info_group.add_argument(
|
||||
|
@ -584,9 +767,10 @@ def parse_arguments(*unparsed_arguments):
|
|||
)
|
||||
info_group.add_argument(
|
||||
'-a',
|
||||
'--match-archives',
|
||||
'--glob-archives',
|
||||
metavar='GLOB',
|
||||
help='Only show info for archive names matching this glob',
|
||||
metavar='PATTERN',
|
||||
help='Only show info for archive names matching this pattern',
|
||||
)
|
||||
info_group.add_argument(
|
||||
'--sort-by', metavar='KEYS', help='Comma-separated list of sorting keys'
|
||||
|
@ -601,26 +785,92 @@ def parse_arguments(*unparsed_arguments):
|
|||
)
|
||||
info_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
|
||||
|
||||
arguments = parse_subparser_arguments(unparsed_arguments, subparsers)
|
||||
arguments['global'] = parse_global_arguments(unparsed_arguments, top_level_parser, subparsers)
|
||||
break_lock_parser = subparsers.add_parser(
|
||||
'break-lock',
|
||||
aliases=SUBPARSER_ALIASES['break-lock'],
|
||||
help='Break the repository and cache locks left behind by Borg aborting',
|
||||
description='Break Borg repository and cache locks left behind by Borg aborting',
|
||||
add_help=False,
|
||||
)
|
||||
break_lock_group = break_lock_parser.add_argument_group('break-lock arguments')
|
||||
break_lock_group.add_argument(
|
||||
'--repository',
|
||||
help='Path of repository to break the lock for, defaults to the configured repository if there is only one',
|
||||
)
|
||||
break_lock_group.add_argument(
|
||||
'-h', '--help', action='help', help='Show this help message and exit'
|
||||
)
|
||||
|
||||
borg_parser = subparsers.add_parser(
|
||||
'borg',
|
||||
aliases=SUBPARSER_ALIASES['borg'],
|
||||
help='Run an arbitrary Borg command',
|
||||
description="Run an arbitrary Borg command based on borgmatic's configuration",
|
||||
add_help=False,
|
||||
)
|
||||
borg_group = borg_parser.add_argument_group('borg arguments')
|
||||
borg_group.add_argument(
|
||||
'--repository',
|
||||
help='Path of repository to pass to Borg, defaults to the configured repositories',
|
||||
)
|
||||
borg_group.add_argument('--archive', help='Name of archive to pass to Borg (or "latest")')
|
||||
borg_group.add_argument(
|
||||
'--',
|
||||
metavar='OPTION',
|
||||
dest='options',
|
||||
nargs='+',
|
||||
help='Options to pass to Borg, command first ("create", "list", etc). "--" is optional. To specify the repository or the archive, you must use --repository or --archive instead of providing them here.',
|
||||
)
|
||||
borg_group.add_argument('-h', '--help', action='help', help='Show this help message and exit')
|
||||
|
||||
return top_level_parser, subparsers
|
||||
|
||||
|
||||
def parse_arguments(*unparsed_arguments):
|
||||
'''
|
||||
Given command-line arguments with which this script was invoked, parse the arguments and return
|
||||
them as a dict mapping from subparser name (or "global") to an argparse.Namespace instance.
|
||||
'''
|
||||
top_level_parser, subparsers = make_parsers()
|
||||
|
||||
arguments, remaining_arguments = parse_subparser_arguments(
|
||||
unparsed_arguments, subparsers.choices
|
||||
)
|
||||
arguments['global'] = top_level_parser.parse_args(remaining_arguments)
|
||||
|
||||
if arguments['global'].excludes_filename:
|
||||
raise ValueError(
|
||||
'The --excludes option has been replaced with exclude_patterns in configuration'
|
||||
'The --excludes flag has been replaced with exclude_patterns in configuration.'
|
||||
)
|
||||
|
||||
if 'init' in arguments and arguments['global'].dry_run:
|
||||
raise ValueError('The init action cannot be used with the --dry-run option')
|
||||
|
||||
if 'list' in arguments and arguments['list'].glob_archives and arguments['list'].successful:
|
||||
raise ValueError('The --glob-archives and --successful options cannot be used together')
|
||||
if 'create' in arguments and arguments['create'].list_files and arguments['create'].progress:
|
||||
raise ValueError(
|
||||
'With the create action, only one of --list (--files) and --progress flags can be used.'
|
||||
)
|
||||
|
||||
if (
|
||||
'list' in arguments
|
||||
and 'info' in arguments
|
||||
and arguments['list'].json
|
||||
and arguments['info'].json
|
||||
('list' in arguments and 'rinfo' in arguments and arguments['list'].json)
|
||||
or ('list' in arguments and 'info' in arguments and arguments['list'].json)
|
||||
or ('rinfo' in arguments and 'info' in arguments and arguments['rinfo'].json)
|
||||
):
|
||||
raise ValueError('With the --json option, list and info actions cannot be used together')
|
||||
raise ValueError('With the --json flag, multiple actions cannot be used together.')
|
||||
|
||||
if (
|
||||
'transfer' in arguments
|
||||
and arguments['transfer'].archive
|
||||
and arguments['transfer'].match_archives
|
||||
):
|
||||
raise ValueError(
|
||||
'With the transfer action, only one of --archive and --glob-archives flags can be used.'
|
||||
)
|
||||
|
||||
if 'info' in arguments and (
|
||||
(arguments['info'].archive and arguments['info'].prefix)
|
||||
or (arguments['info'].archive and arguments['info'].match_archives)
|
||||
or (arguments['info'].prefix and arguments['info'].match_archives)
|
||||
):
|
||||
raise ValueError(
|
||||
'With the info action, only one of --archive, --prefix, or --match-archives flags can be used.'
|
||||
)
|
||||
|
||||
return arguments
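An end-to-end hedged sketch of the reworked parse_arguments; the action and values are invented, but the 'global' key and per-action namespaces follow the docstring above.

# Hypothetical call (not from the diff): one explicit action plus its required flag.
from borgmatic.commands.arguments import parse_arguments

arguments = parse_arguments('transfer', '--source-repository', 'repo1')
print(sorted(arguments))                        # expected: ['global', 'transfer']
print(arguments['transfer'].source_repository)  # repo1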
|
||||
|
|
|
@ -1,29 +1,38 @@
|
|||
import collections
|
||||
import copy
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
import time
|
||||
from queue import Queue
|
||||
from subprocess import CalledProcessError
|
||||
|
||||
import colorama
|
||||
import pkg_resources
|
||||
|
||||
from borgmatic.borg import check as borg_check
|
||||
from borgmatic.borg import create as borg_create
|
||||
from borgmatic.borg import environment as borg_environment
|
||||
from borgmatic.borg import export_tar as borg_export_tar
|
||||
from borgmatic.borg import extract as borg_extract
|
||||
from borgmatic.borg import info as borg_info
|
||||
from borgmatic.borg import init as borg_init
|
||||
from borgmatic.borg import list as borg_list
|
||||
from borgmatic.borg import mount as borg_mount
|
||||
from borgmatic.borg import prune as borg_prune
|
||||
import borgmatic.actions.borg
|
||||
import borgmatic.actions.break_lock
|
||||
import borgmatic.actions.check
|
||||
import borgmatic.actions.compact
|
||||
import borgmatic.actions.create
|
||||
import borgmatic.actions.export_tar
|
||||
import borgmatic.actions.extract
|
||||
import borgmatic.actions.info
|
||||
import borgmatic.actions.list
|
||||
import borgmatic.actions.mount
|
||||
import borgmatic.actions.prune
|
||||
import borgmatic.actions.rcreate
|
||||
import borgmatic.actions.restore
|
||||
import borgmatic.actions.rinfo
|
||||
import borgmatic.actions.rlist
|
||||
import borgmatic.actions.transfer
|
||||
import borgmatic.commands.completion
|
||||
from borgmatic.borg import umount as borg_umount
|
||||
from borgmatic.borg import version as borg_version
|
||||
from borgmatic.commands.arguments import parse_arguments
|
||||
from borgmatic.config import checks, collect, convert, validate
|
||||
from borgmatic.hooks import command, dispatch, dump, monitor
|
||||
from borgmatic.logger import configure_logging, should_do_markup
|
||||
from borgmatic.hooks import command, dispatch, monitor
|
||||
from borgmatic.logger import add_custom_log_levels, configure_logging, should_do_markup
|
||||
from borgmatic.signals import configure_signals
|
||||
from borgmatic.verbosity import verbosity_to_log_level
|
||||
|
||||
|
@ -35,8 +44,8 @@ LEGACY_CONFIG_PATH = '/etc/borgmatic/config'
|
|||
def run_configuration(config_filename, config, arguments):
|
||||
'''
|
||||
Given a config filename, the corresponding parsed config dict, and command-line arguments as a
|
||||
dict from subparser name to a namespace of parsed arguments, execute its defined pruning,
|
||||
backups, consistency checks, and/or other actions.
|
||||
dict from subparser name to a namespace of parsed arguments, execute the defined create, prune,
|
||||
compact, check, and/or other actions.
|
||||
|
||||
Yield a combination of:
|
||||
|
||||
|
@ -51,14 +60,23 @@ def run_configuration(config_filename, config, arguments):
|
|||
|
||||
local_path = location.get('local_path', 'borg')
|
||||
remote_path = location.get('remote_path')
|
||||
borg_environment.initialize(storage)
|
||||
retries = storage.get('retries', 0)
|
||||
retry_wait = storage.get('retry_wait', 0)
|
||||
encountered_error = None
|
||||
error_repository = ''
|
||||
prune_create_or_check = {'prune', 'create', 'check'}.intersection(arguments)
|
||||
using_primary_action = {'create', 'prune', 'compact', 'check'}.intersection(arguments)
|
||||
monitoring_log_level = verbosity_to_log_level(global_arguments.monitoring_verbosity)
|
||||
|
||||
try:
|
||||
if prune_create_or_check:
|
||||
local_borg_version = borg_version.local_borg_version(storage, local_path)
|
||||
except (OSError, CalledProcessError, ValueError) as error:
|
||||
yield from log_error_records(
|
||||
'{}: Error getting local Borg version'.format(config_filename), error
|
||||
)
|
||||
return
|
||||
|
||||
try:
|
||||
if using_primary_action:
|
||||
dispatch.call_hooks(
|
||||
'initialize_monitor',
|
||||
hooks,
|
||||
|
@ -67,39 +85,7 @@ def run_configuration(config_filename, config, arguments):
|
|||
monitoring_log_level,
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
if 'prune' in arguments:
|
||||
command.execute_hook(
|
||||
hooks.get('before_prune'),
|
||||
hooks.get('umask'),
|
||||
config_filename,
|
||||
'pre-prune',
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
if 'create' in arguments:
|
||||
command.execute_hook(
|
||||
hooks.get('before_backup'),
|
||||
hooks.get('umask'),
|
||||
config_filename,
|
||||
'pre-backup',
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
if 'check' in arguments:
|
||||
command.execute_hook(
|
||||
hooks.get('before_check'),
|
||||
hooks.get('umask'),
|
||||
config_filename,
|
||||
'pre-check',
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
if 'extract' in arguments:
|
||||
command.execute_hook(
|
||||
hooks.get('before_extract'),
|
||||
hooks.get('umask'),
|
||||
config_filename,
|
||||
'pre-extract',
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
if prune_create_or_check:
|
||||
if using_primary_action:
|
||||
dispatch.call_hooks(
|
||||
'ping_monitor',
|
||||
hooks,
|
||||
|
@ -114,15 +100,23 @@ def run_configuration(config_filename, config, arguments):
|
|||
return
|
||||
|
||||
encountered_error = error
|
||||
yield from make_error_log_records(
|
||||
'{}: Error running pre hook'.format(config_filename), error
|
||||
)
|
||||
yield from log_error_records('{}: Error pinging monitor'.format(config_filename), error)
|
||||
|
||||
if not encountered_error:
|
||||
for repository_path in location['repositories']:
|
||||
repo_queue = Queue()
|
||||
for repo in location['repositories']:
|
||||
repo_queue.put((repo, 0),)
|
||||
|
||||
while not repo_queue.empty():
|
||||
repository_path, retry_num = repo_queue.get()
|
||||
timeout = retry_num * retry_wait
|
||||
if timeout:
|
||||
logger.warning(f'{config_filename}: Sleeping {timeout}s before next retry')
|
||||
time.sleep(timeout)
|
||||
try:
|
||||
yield from run_actions(
|
||||
arguments=arguments,
|
||||
config_filename=config_filename,
|
||||
location=location,
|
||||
storage=storage,
|
||||
retention=retention,
|
||||
|
@ -130,58 +124,56 @@ def run_configuration(config_filename, config, arguments):
|
|||
hooks=hooks,
|
||||
local_path=local_path,
|
||||
remote_path=remote_path,
|
||||
local_borg_version=local_borg_version,
|
||||
repository_path=repository_path,
|
||||
)
|
||||
except (OSError, CalledProcessError, ValueError) as error:
|
||||
encountered_error = error
|
||||
error_repository = repository_path
|
||||
yield from make_error_log_records(
|
||||
if retry_num < retries:
|
||||
repo_queue.put((repository_path, retry_num + 1),)
|
||||
tuple( # Consume the generator so as to trigger logging.
|
||||
log_error_records(
|
||||
'{}: Error running actions for repository'.format(repository_path),
|
||||
error,
|
||||
levelno=logging.WARNING,
|
||||
log_command_error_output=True,
|
||||
)
|
||||
)
|
||||
logger.warning(
|
||||
f'{config_filename}: Retrying... attempt {retry_num + 1}/{retries}'
|
||||
)
|
||||
continue
|
||||
|
||||
if command.considered_soft_failure(config_filename, error):
|
||||
return
|
||||
|
||||
yield from log_error_records(
|
||||
'{}: Error running actions for repository'.format(repository_path), error
|
||||
)
|
||||
encountered_error = error
|
||||
error_repository = repository_path
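The retry handling above queues each repository with an attempt counter, requeues it on failure until the configured retries are exhausted, and sleeps retry_num * retry_wait before each retried attempt. A self-contained sketch of that pattern with simulated failures (the repository names, retry counts, and the always-failing action are invented):

# Standalone sketch of the queue-based retry pattern; not borgmatic code.
import time
from queue import Queue

retries, retry_wait = 2, 1
repo_queue = Queue()
for repo in ('repo1', 'repo2'):
    repo_queue.put((repo, 0))

while not repo_queue.empty():
    repository, retry_num = repo_queue.get()
    if retry_num:
        time.sleep(retry_num * retry_wait)  # Linear backoff before a retried attempt.
    try:
        raise OSError(f'simulated failure for {repository}')  # Stand-in for run_actions().
    except OSError as error:
        if retry_num < retries:
            repo_queue.put((repository, retry_num + 1))
        else:
            print(f'{repository}: giving up after {retries} retries: {error}')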
|
||||
|
||||
try:
|
||||
if using_primary_action:
|
||||
# Send logs to the monitor irrespective of error.
|
||||
dispatch.call_hooks(
|
||||
'ping_monitor',
|
||||
hooks,
|
||||
config_filename,
|
||||
monitor.MONITOR_HOOK_NAMES,
|
||||
monitor.State.LOG,
|
||||
monitoring_log_level,
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
except (OSError, CalledProcessError) as error:
|
||||
if command.considered_soft_failure(config_filename, error):
|
||||
return
|
||||
|
||||
encountered_error = error
|
||||
yield from log_error_records('{}: Error pinging monitor'.format(config_filename), error)
|
||||
|
||||
if not encountered_error:
|
||||
try:
|
||||
if 'prune' in arguments:
|
||||
command.execute_hook(
|
||||
hooks.get('after_prune'),
|
||||
hooks.get('umask'),
|
||||
config_filename,
|
||||
'post-prune',
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
if 'create' in arguments:
|
||||
dispatch.call_hooks(
|
||||
'remove_database_dumps',
|
||||
hooks,
|
||||
config_filename,
|
||||
dump.DATABASE_HOOK_NAMES,
|
||||
location,
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
command.execute_hook(
|
||||
hooks.get('after_backup'),
|
||||
hooks.get('umask'),
|
||||
config_filename,
|
||||
'post-backup',
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
if 'check' in arguments:
|
||||
command.execute_hook(
|
||||
hooks.get('after_check'),
|
||||
hooks.get('umask'),
|
||||
config_filename,
|
||||
'post-check',
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
if 'extract' in arguments:
|
||||
command.execute_hook(
|
||||
hooks.get('after_extract'),
|
||||
hooks.get('umask'),
|
||||
config_filename,
|
||||
'post-extract',
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
if prune_create_or_check:
|
||||
if using_primary_action:
|
||||
dispatch.call_hooks(
|
||||
'ping_monitor',
|
||||
hooks,
|
||||
|
@ -204,11 +196,9 @@ def run_configuration(config_filename, config, arguments):
|
|||
return
|
||||
|
||||
encountered_error = error
|
||||
yield from make_error_log_records(
|
||||
'{}: Error running post hook'.format(config_filename), error
|
||||
)
|
||||
yield from log_error_records('{}: Error pinging monitor'.format(config_filename), error)
|
||||
|
||||
if encountered_error and prune_create_or_check:
|
||||
if encountered_error and using_primary_action:
|
||||
try:
|
||||
command.execute_hook(
|
||||
hooks.get('on_error'),
|
||||
|
@ -241,7 +231,7 @@ def run_configuration(config_filename, config, arguments):
|
|||
if command.considered_soft_failure(config_filename, error):
|
||||
return
|
||||
|
||||
yield from make_error_log_records(
|
||||
yield from log_error_records(
|
||||
'{}: Error running on-error hook'.format(config_filename), error
|
||||
)
|
||||
|
||||
|
@ -249,6 +239,7 @@ def run_configuration(config_filename, config, arguments):
|
|||
def run_actions(
|
||||
*,
|
||||
arguments,
|
||||
config_filename,
|
||||
location,
|
||||
storage,
|
||||
retention,
|
||||
|
@ -256,296 +247,208 @@ def run_actions(
|
|||
hooks,
|
||||
local_path,
|
||||
remote_path,
|
||||
repository_path
|
||||
): # pragma: no cover
|
||||
local_borg_version,
|
||||
repository_path,
|
||||
):
|
||||
'''
|
||||
Given parsed command-line arguments as an argparse.ArgumentParser instance, several different
|
||||
configuration dicts, local and remote paths to Borg, and a repository name, run all actions
|
||||
from the command-line arguments on the given repository.
|
||||
Given parsed command-line arguments as an argparse.ArgumentParser instance, the configuration
|
||||
filename, several different configuration dicts, local and remote paths to Borg, a local Borg
|
||||
version string, and a repository name, run all actions from the command-line arguments on the
|
||||
given repository.
|
||||
|
||||
Yield JSON output strings from executing any actions that produce JSON.
|
||||
|
||||
Raise OSError or subprocess.CalledProcessError if an error occurs running a command for an
|
||||
action. Raise ValueError if the arguments or configuration passed to action are invalid.
|
||||
action or a hook. Raise ValueError if the arguments or configuration passed to action are
|
||||
invalid.
|
||||
'''
|
||||
add_custom_log_levels()
|
||||
repository = os.path.expanduser(repository_path)
|
||||
global_arguments = arguments['global']
|
||||
dry_run_label = ' (dry run; not making any changes)' if global_arguments.dry_run else ''
|
||||
if 'init' in arguments:
|
||||
logger.info('{}: Initializing repository'.format(repository))
|
||||
borg_init.initialize_repository(
|
||||
hook_context = {
|
||||
'repository': repository_path,
|
||||
# Deprecated: For backwards compatibility with borgmatic < 1.6.0.
|
||||
'repositories': ','.join(location['repositories']),
|
||||
}
|
||||
|
||||
command.execute_hook(
|
||||
hooks.get('before_actions'),
|
||||
hooks.get('umask'),
|
||||
config_filename,
|
||||
'pre-actions',
|
||||
global_arguments.dry_run,
|
||||
**hook_context,
|
||||
)
|
||||
|
||||
for (action_name, action_arguments) in arguments.items():
|
||||
if action_name == 'rcreate':
|
||||
borgmatic.actions.rcreate.run_rcreate(
|
||||
repository,
|
||||
storage,
|
||||
arguments['init'].encryption_mode,
|
||||
arguments['init'].append_only,
|
||||
arguments['init'].storage_quota,
|
||||
local_path=local_path,
|
||||
remote_path=remote_path,
|
||||
local_borg_version,
|
||||
action_arguments,
|
||||
global_arguments,
|
||||
local_path,
|
||||
remote_path,
|
||||
)
|
||||
if 'prune' in arguments:
|
||||
logger.info('{}: Pruning archives{}'.format(repository, dry_run_label))
|
||||
borg_prune.prune_archives(
|
||||
global_arguments.dry_run,
|
||||
elif action_name == 'transfer':
|
||||
borgmatic.actions.transfer.run_transfer(
|
||||
repository,
|
||||
storage,
|
||||
local_borg_version,
|
||||
action_arguments,
|
||||
global_arguments,
|
||||
local_path,
|
||||
remote_path,
|
||||
)
|
||||
elif action_name == 'create':
|
||||
yield from borgmatic.actions.create.run_create(
|
||||
config_filename,
|
||||
repository,
|
||||
location,
|
||||
storage,
|
||||
hooks,
|
||||
hook_context,
|
||||
local_borg_version,
|
||||
action_arguments,
|
||||
global_arguments,
|
||||
dry_run_label,
|
||||
local_path,
|
||||
remote_path,
|
||||
)
|
||||
elif action_name == 'prune':
|
||||
borgmatic.actions.prune.run_prune(
|
||||
config_filename,
|
||||
repository,
|
||||
storage,
|
||||
retention,
|
||||
local_path=local_path,
|
||||
remote_path=remote_path,
|
||||
stats=arguments['prune'].stats,
|
||||
files=arguments['prune'].files,
|
||||
)
|
||||
if 'create' in arguments:
|
||||
logger.info('{}: Creating archive{}'.format(repository, dry_run_label))
|
||||
dispatch.call_hooks(
|
||||
'remove_database_dumps',
|
||||
hooks,
|
||||
repository,
|
||||
dump.DATABASE_HOOK_NAMES,
|
||||
location,
|
||||
global_arguments.dry_run,
|
||||
hook_context,
|
||||
local_borg_version,
|
||||
action_arguments,
|
||||
global_arguments,
|
||||
dry_run_label,
|
||||
local_path,
|
||||
remote_path,
|
||||
)
|
||||
active_dumps = dispatch.call_hooks(
|
||||
'dump_databases',
|
||||
hooks,
|
||||
elif action_name == 'compact':
|
||||
borgmatic.actions.compact.run_compact(
|
||||
config_filename,
|
||||
repository,
|
||||
dump.DATABASE_HOOK_NAMES,
|
||||
location,
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
stream_processes = [process for processes in active_dumps.values() for process in processes]
|
||||
|
||||
json_output = borg_create.create_archive(
|
||||
global_arguments.dry_run,
|
||||
repository,
|
||||
location,
|
||||
storage,
|
||||
local_path=local_path,
|
||||
remote_path=remote_path,
|
||||
progress=arguments['create'].progress,
|
||||
stats=arguments['create'].stats,
|
||||
json=arguments['create'].json,
|
||||
files=arguments['create'].files,
|
||||
stream_processes=stream_processes,
|
||||
retention,
|
||||
hooks,
|
||||
hook_context,
|
||||
local_borg_version,
|
||||
action_arguments,
|
||||
global_arguments,
|
||||
dry_run_label,
|
||||
local_path,
|
||||
remote_path,
|
||||
)
|
||||
if json_output:
|
||||
yield json.loads(json_output)
|
||||
|
||||
if 'check' in arguments and checks.repository_enabled_for_checks(repository, consistency):
|
||||
logger.info('{}: Running consistency checks'.format(repository))
|
||||
borg_check.check_archives(
|
||||
elif action_name == 'check':
|
||||
if checks.repository_enabled_for_checks(repository, consistency):
|
||||
borgmatic.actions.check.run_check(
|
||||
config_filename,
|
||||
repository,
|
||||
location,
|
||||
storage,
|
||||
consistency,
|
||||
local_path=local_path,
|
||||
remote_path=remote_path,
|
||||
progress=arguments['check'].progress,
|
||||
repair=arguments['check'].repair,
|
||||
only_checks=arguments['check'].only,
|
||||
)
|
||||
if 'extract' in arguments:
|
||||
if arguments['extract'].repository is None or validate.repositories_match(
|
||||
repository, arguments['extract'].repository
|
||||
):
|
||||
logger.info(
|
||||
'{}: Extracting archive {}'.format(repository, arguments['extract'].archive)
|
||||
)
|
||||
borg_extract.extract_archive(
|
||||
global_arguments.dry_run,
|
||||
repository,
|
||||
borg_list.resolve_archive_name(
|
||||
repository, arguments['extract'].archive, storage, local_path, remote_path
|
||||
),
|
||||
arguments['extract'].paths,
|
||||
location,
|
||||
storage,
|
||||
local_path=local_path,
|
||||
remote_path=remote_path,
|
||||
destination_path=arguments['extract'].destination,
|
||||
strip_components=arguments['extract'].strip_components,
|
||||
progress=arguments['extract'].progress,
|
||||
)
|
||||
if 'export-tar' in arguments:
|
||||
if arguments['export-tar'].repository is None or validate.repositories_match(
|
||||
repository, arguments['export-tar'].repository
|
||||
):
|
||||
logger.info(
|
||||
'{}: Exporting archive {} as tar file'.format(
|
||||
repository, arguments['export-tar'].archive
|
||||
)
|
||||
)
|
||||
borg_export_tar.export_tar_archive(
|
||||
global_arguments.dry_run,
|
||||
repository,
|
||||
borg_list.resolve_archive_name(
|
||||
repository, arguments['export-tar'].archive, storage, local_path, remote_path
|
||||
),
|
||||
arguments['export-tar'].paths,
|
||||
arguments['export-tar'].destination,
|
||||
storage,
|
||||
local_path=local_path,
|
||||
remote_path=remote_path,
|
||||
tar_filter=arguments['export-tar'].tar_filter,
|
||||
files=arguments['export-tar'].files,
|
||||
strip_components=arguments['export-tar'].strip_components,
|
||||
)
|
||||
if 'mount' in arguments:
|
||||
if arguments['mount'].repository is None or validate.repositories_match(
|
||||
repository, arguments['mount'].repository
|
||||
):
|
||||
if arguments['mount'].archive:
|
||||
logger.info(
|
||||
'{}: Mounting archive {}'.format(repository, arguments['mount'].archive)
|
||||
)
|
||||
else:
|
||||
logger.info('{}: Mounting repository'.format(repository))
|
||||
|
||||
borg_mount.mount_archive(
|
||||
repository,
|
||||
borg_list.resolve_archive_name(
|
||||
repository, arguments['mount'].archive, storage, local_path, remote_path
|
||||
),
|
||||
arguments['mount'].mount_point,
|
||||
arguments['mount'].paths,
|
||||
arguments['mount'].foreground,
|
||||
arguments['mount'].options,
|
||||
storage,
|
||||
local_path=local_path,
|
||||
remote_path=remote_path,
|
||||
)
|
||||
if 'restore' in arguments:
|
||||
if arguments['restore'].repository is None or validate.repositories_match(
|
||||
repository, arguments['restore'].repository
|
||||
):
|
||||
logger.info(
|
||||
'{}: Restoring databases from archive {}'.format(
|
||||
repository, arguments['restore'].archive
|
||||
)
|
||||
)
|
||||
dispatch.call_hooks(
|
||||
'remove_database_dumps',
|
||||
hooks,
|
||||
hook_context,
|
||||
local_borg_version,
|
||||
action_arguments,
|
||||
global_arguments,
|
||||
local_path,
|
||||
remote_path,
|
||||
)
|
||||
elif action_name == 'extract':
|
||||
borgmatic.actions.extract.run_extract(
|
||||
config_filename,
|
||||
repository,
|
||||
dump.DATABASE_HOOK_NAMES,
|
||||
location,
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
|
||||
restore_names = arguments['restore'].databases or []
|
||||
if 'all' in restore_names:
|
||||
restore_names = []
|
||||
|
||||
archive_name = borg_list.resolve_archive_name(
|
||||
repository, arguments['restore'].archive, storage, local_path, remote_path
|
||||
)
|
||||
found_names = set()
|
||||
|
||||
for hook_name, per_hook_restore_databases in hooks.items():
|
||||
if hook_name not in dump.DATABASE_HOOK_NAMES:
|
||||
continue
|
||||
|
||||
for restore_database in per_hook_restore_databases:
|
||||
database_name = restore_database['name']
|
||||
if restore_names and database_name not in restore_names:
|
||||
continue
|
||||
|
||||
found_names.add(database_name)
|
||||
dump_pattern = dispatch.call_hooks(
|
||||
'make_database_dump_pattern',
|
||||
storage,
|
||||
hooks,
|
||||
repository,
|
||||
dump.DATABASE_HOOK_NAMES,
|
||||
location,
|
||||
database_name,
|
||||
)[hook_name]
|
||||
|
||||
# Kick off a single database extract to stdout.
|
||||
extract_process = borg_extract.extract_archive(
|
||||
dry_run=global_arguments.dry_run,
|
||||
repository=repository,
|
||||
archive=archive_name,
|
||||
paths=dump.convert_glob_patterns_to_borg_patterns([dump_pattern]),
|
||||
location_config=location,
|
||||
storage_config=storage,
|
||||
local_path=local_path,
|
||||
remote_path=remote_path,
|
||||
destination_path='/',
|
||||
# A directory format dump isn't a single file, and therefore can't extract
|
||||
# to stdout. In this case, the extract_process return value is None.
|
||||
extract_to_stdout=bool(restore_database.get('format') != 'directory'),
|
||||
hook_context,
|
||||
local_borg_version,
|
||||
action_arguments,
|
||||
global_arguments,
|
||||
local_path,
|
||||
remote_path,
|
||||
)
|
||||
|
||||
# Run a single database restore, consuming the extract stdout (if any).
|
||||
dispatch.call_hooks(
|
||||
'restore_database_dump',
|
||||
{hook_name: [restore_database]},
|
||||
repository,
|
||||
dump.DATABASE_HOOK_NAMES,
|
||||
location,
|
||||
global_arguments.dry_run,
|
||||
extract_process,
|
||||
)
|
||||
|
||||
dispatch.call_hooks(
|
||||
'remove_database_dumps',
|
||||
hooks,
|
||||
repository,
|
||||
dump.DATABASE_HOOK_NAMES,
|
||||
location,
|
||||
global_arguments.dry_run,
|
||||
)
|
||||
|
||||
if not restore_names and not found_names:
|
||||
raise ValueError('No databases were found to restore')
|
||||
|
||||
missing_names = sorted(set(restore_names) - found_names)
|
||||
if missing_names:
|
||||
raise ValueError(
|
||||
'Cannot restore database(s) {} missing from borgmatic\'s configuration'.format(
|
||||
', '.join(missing_names)
|
||||
)
|
||||
)
|
||||
|
||||
if 'list' in arguments:
|
||||
if arguments['list'].repository is None or validate.repositories_match(
|
||||
repository, arguments['list'].repository
|
||||
):
|
||||
list_arguments = copy.copy(arguments['list'])
|
||||
if not list_arguments.json:
|
||||
logger.warning('{}: Listing archives'.format(repository))
|
||||
list_arguments.archive = borg_list.resolve_archive_name(
|
||||
repository, list_arguments.archive, storage, local_path, remote_path
|
||||
)
|
||||
json_output = borg_list.list_archives(
|
||||
elif action_name == 'export-tar':
|
||||
borgmatic.actions.export_tar.run_export_tar(
|
||||
repository,
|
||||
storage,
|
||||
list_arguments=list_arguments,
|
||||
local_path=local_path,
|
||||
remote_path=remote_path,
|
||||
local_borg_version,
|
||||
action_arguments,
|
||||
global_arguments,
|
||||
local_path,
|
||||
remote_path,
|
||||
)
|
||||
if json_output:
|
||||
yield json.loads(json_output)
|
||||
if 'info' in arguments:
|
||||
if arguments['info'].repository is None or validate.repositories_match(
|
||||
repository, arguments['info'].repository
|
||||
):
|
||||
info_arguments = copy.copy(arguments['info'])
|
||||
if not info_arguments.json:
|
||||
logger.warning('{}: Displaying summary info for archives'.format(repository))
|
||||
info_arguments.archive = borg_list.resolve_archive_name(
|
||||
repository, info_arguments.archive, storage, local_path, remote_path
|
||||
)
|
||||
json_output = borg_info.display_archives_info(
|
||||
elif action_name == 'mount':
|
||||
borgmatic.actions.mount.run_mount(
|
||||
repository,
|
||||
storage,
|
||||
info_arguments=info_arguments,
|
||||
local_path=local_path,
|
||||
remote_path=remote_path,
|
||||
local_borg_version,
|
||||
arguments['mount'],
|
||||
local_path,
|
||||
remote_path,
|
||||
)
|
||||
elif action_name == 'restore':
|
||||
borgmatic.actions.restore.run_restore(
|
||||
repository,
|
||||
location,
|
||||
storage,
|
||||
hooks,
|
||||
local_borg_version,
|
||||
action_arguments,
|
||||
global_arguments,
|
||||
local_path,
|
||||
remote_path,
|
||||
)
|
||||
elif action_name == 'rlist':
|
||||
yield from borgmatic.actions.rlist.run_rlist(
|
||||
repository, storage, local_borg_version, action_arguments, local_path, remote_path,
|
||||
)
|
||||
elif action_name == 'list':
|
||||
yield from borgmatic.actions.list.run_list(
|
||||
repository, storage, local_borg_version, action_arguments, local_path, remote_path,
|
||||
)
|
||||
elif action_name == 'rinfo':
|
||||
yield from borgmatic.actions.rinfo.run_rinfo(
|
||||
repository, storage, local_borg_version, action_arguments, local_path, remote_path,
|
||||
)
|
||||
elif action_name == 'info':
|
||||
yield from borgmatic.actions.info.run_info(
|
||||
repository, storage, local_borg_version, action_arguments, local_path, remote_path,
|
||||
)
|
||||
elif action_name == 'break-lock':
|
||||
borgmatic.actions.break_lock.run_break_lock(
|
||||
repository,
|
||||
storage,
|
||||
local_borg_version,
|
||||
arguments['break-lock'],
|
||||
local_path,
|
||||
remote_path,
|
||||
)
|
||||
elif action_name == 'borg':
|
||||
borgmatic.actions.borg.run_borg(
|
||||
repository, storage, local_borg_version, action_arguments, local_path, remote_path,
|
||||
)
|
||||
|
||||
command.execute_hook(
|
||||
hooks.get('after_actions'),
|
||||
hooks.get('umask'),
|
||||
config_filename,
|
||||
'post-actions',
|
||||
global_arguments.dry_run,
|
||||
**hook_context,
|
||||
)
|
||||
if json_output:
|
||||
yield json.loads(json_output)
|
||||
|
||||
|
||||
def load_configurations(config_filenames, overrides=None):
|
||||
def load_configurations(config_filenames, overrides=None, resolve_env=True):
|
||||
'''
|
||||
Given a sequence of configuration filenames, load and validate each configuration file. Return
|
||||
the results as a tuple of: dict of configuration filename to corresponding parsed configuration,
|
||||
|
@ -558,8 +461,23 @@ def load_configurations(config_filenames, overrides=None):
|
|||
# Parse and load each configuration file.
|
||||
for config_filename in config_filenames:
|
||||
try:
|
||||
configs[config_filename] = validate.parse_configuration(
|
||||
config_filename, validate.schema_filename(), overrides
|
||||
configs[config_filename], parse_logs = validate.parse_configuration(
|
||||
config_filename, validate.schema_filename(), overrides, resolve_env
|
||||
)
|
||||
logs.extend(parse_logs)
|
||||
except PermissionError:
|
||||
logs.extend(
|
||||
[
|
||||
logging.makeLogRecord(
|
||||
dict(
|
||||
levelno=logging.WARNING,
|
||||
levelname='WARNING',
|
||||
msg='{}: Insufficient permissions to read configuration file'.format(
|
||||
config_filename
|
||||
),
|
||||
)
|
||||
),
|
||||
]
|
||||
)
|
||||
except (ValueError, OSError, validate.Validation_error) as error:
|
||||
logs.extend(
|
||||
|
@ -593,28 +511,39 @@ def log_record(suppress_log=False, **kwargs):
|
|||
return record
|
||||
|
||||
|
||||
def make_error_log_records(message, error=None):
|
||||
def log_error_records(
|
||||
message, error=None, levelno=logging.CRITICAL, log_command_error_output=False
|
||||
):
|
||||
'''
|
||||
Given error message text and an optional exception object, yield a series of logging.LogRecord
|
||||
instances with error summary information. As a side effect, log each record.
|
||||
Given error message text, an optional exception object, an optional log level, and whether to
|
||||
log the error output of a CalledProcessError (if any), log error summary information and also
|
||||
yield it as a series of logging.LogRecord instances.
|
||||
|
||||
Note that because the logs are yielded as a generator, logs won't get logged unless you consume
|
||||
the generator output.
|
||||
'''
|
||||
level_name = logging._levelToName[levelno]
|
||||
|
||||
if not error:
|
||||
yield log_record(levelno=logging.CRITICAL, levelname='CRITICAL', msg=message)
|
||||
yield log_record(levelno=levelno, levelname=level_name, msg=message)
|
||||
return
|
||||
|
||||
try:
|
||||
raise error
|
||||
except CalledProcessError as error:
|
||||
yield log_record(levelno=logging.CRITICAL, levelname='CRITICAL', msg=message)
|
||||
yield log_record(levelno=levelno, levelname=level_name, msg=message)
|
||||
if error.output:
|
||||
# Suppress these logs for now and save full error output for the log summary at the end.
|
||||
yield log_record(
|
||||
levelno=logging.CRITICAL, levelname='CRITICAL', msg=error.output, suppress_log=True
|
||||
levelno=levelno,
|
||||
levelname=level_name,
|
||||
msg=error.output,
|
||||
suppress_log=not log_command_error_output,
|
||||
)
|
||||
yield log_record(levelno=logging.CRITICAL, levelname='CRITICAL', msg=error)
|
||||
yield log_record(levelno=levelno, levelname=level_name, msg=error)
|
||||
except (ValueError, OSError) as error:
|
||||
yield log_record(levelno=logging.CRITICAL, levelname='CRITICAL', msg=message)
|
||||
yield log_record(levelno=logging.CRITICAL, levelname='CRITICAL', msg=error)
|
||||
yield log_record(levelno=levelno, levelname=level_name, msg=message)
|
||||
yield log_record(levelno=levelno, levelname=level_name, msg=error)
|
||||
except: # noqa: E722
|
||||
# Raising above only as a means of determining the error type. Swallow the exception here
|
||||
# because we don't want the exception to propagate out of this function.
|
||||
|
@ -640,24 +569,24 @@ def collect_configuration_run_summary_logs(configs, arguments):
|
|||
any, to stdout.
|
||||
'''
|
||||
# Run cross-file validation checks.
|
||||
if 'extract' in arguments:
|
||||
repository = arguments['extract'].repository
|
||||
elif 'list' in arguments and arguments['list'].archive:
|
||||
repository = arguments['list'].repository
|
||||
elif 'mount' in arguments:
|
||||
repository = arguments['mount'].repository
|
||||
else:
|
||||
repository = None
|
||||
|
||||
if repository:
|
||||
for action_name, action_arguments in arguments.items():
|
||||
if hasattr(action_arguments, 'repository'):
|
||||
repository = getattr(action_arguments, 'repository')
|
||||
break
|
||||
|
||||
try:
|
||||
if 'extract' in arguments or 'mount' in arguments:
|
||||
validate.guard_single_repository_selected(repository, configs)
|
||||
|
||||
validate.guard_configuration_contains_repository(repository, configs)
|
||||
except ValueError as error:
|
||||
yield from make_error_log_records(str(error))
|
||||
yield from log_error_records(str(error))
|
||||
return
|
||||
|
||||
if not configs:
|
||||
yield from make_error_log_records(
|
||||
yield from log_error_records(
|
||||
'{}: No valid configuration files found'.format(
|
||||
' '.join(arguments['global'].config_paths)
|
||||
)
|
||||
|
@ -676,7 +605,7 @@ def collect_configuration_run_summary_logs(configs, arguments):
|
|||
arguments['global'].dry_run,
|
||||
)
|
||||
except (CalledProcessError, ValueError, OSError) as error:
|
||||
yield from make_error_log_records('Error running pre-everything hook', error)
|
||||
yield from log_error_records('Error running pre-everything hook', error)
|
||||
return
|
||||
|
||||
# Execute the actions corresponding to each configuration file.
|
||||
|
@ -686,7 +615,7 @@ def collect_configuration_run_summary_logs(configs, arguments):
|
|||
error_logs = tuple(result for result in results if isinstance(result, logging.LogRecord))
|
||||
|
||||
if error_logs:
|
||||
yield from make_error_log_records(
|
||||
yield from log_error_records(
|
||||
'{}: Error running configuration file'.format(config_filename)
|
||||
)
|
||||
yield from error_logs
|
||||
|
@ -705,10 +634,10 @@ def collect_configuration_run_summary_logs(configs, arguments):
|
|||
logger.info('Unmounting mount point {}'.format(arguments['umount'].mount_point))
|
||||
try:
|
||||
borg_umount.unmount_archive(
|
||||
mount_point=arguments['umount'].mount_point, local_path=get_local_path(configs)
|
||||
mount_point=arguments['umount'].mount_point, local_path=get_local_path(configs),
|
||||
)
|
||||
except (CalledProcessError, OSError) as error:
|
||||
yield from make_error_log_records('Error unmounting mount point', error)
|
||||
yield from log_error_records('Error unmounting mount point', error)
|
||||
|
||||
if json_results:
|
||||
sys.stdout.write(json.dumps(json_results))
|
||||
|
@ -725,7 +654,7 @@ def collect_configuration_run_summary_logs(configs, arguments):
|
|||
arguments['global'].dry_run,
|
||||
)
|
||||
except (CalledProcessError, ValueError, OSError) as error:
|
||||
yield from make_error_log_records('Error running post-everything hook', error)
|
||||
yield from log_error_records('Error running post-everything hook', error)
|
||||
|
||||
|
||||
def exit_with_help_link(): # pragma: no cover
|
||||
|
@ -757,9 +686,14 @@ def main(): # pragma: no cover
|
|||
if global_arguments.version:
|
||||
print(pkg_resources.require('borgmatic')[0].version)
|
||||
sys.exit(0)
|
||||
if global_arguments.bash_completion:
|
||||
print(borgmatic.commands.completion.bash_completion())
|
||||
sys.exit(0)
|
||||
|
||||
config_filenames = tuple(collect.collect_config_filenames(global_arguments.config_paths))
|
||||
configs, parse_logs = load_configurations(config_filenames, global_arguments.overrides)
|
||||
configs, parse_logs = load_configurations(
|
||||
config_filenames, global_arguments.overrides, global_arguments.resolve_env
|
||||
)
|
||||
|
||||
any_json_flags = any(
|
||||
getattr(sub_arguments, 'json', False) for sub_arguments in arguments.values()
|
||||
|
|
borgmatic/commands/completion.py (new file, 57 lines)

@@ -0,0 +1,57 @@
from borgmatic.commands import arguments

UPGRADE_MESSAGE = '''
Your bash completions script is from a different version of borgmatic than is
currently installed. Please upgrade your script so your completions match the
command-line flags in your installed borgmatic! Try this to upgrade:

    sudo sh -c "borgmatic --bash-completion > $BASH_SOURCE"
    source $BASH_SOURCE
'''


def parser_flags(parser):
    '''
    Given an argparse.ArgumentParser instance, return its argument flags in a space-separated
    string.
    '''
    return ' '.join(option for action in parser._actions for option in action.option_strings)


def bash_completion():
    '''
    Return a bash completion script for the borgmatic command. Produce this by introspecting
    borgmatic's command-line argument parsers.
    '''
    top_level_parser, subparsers = arguments.make_parsers()
    global_flags = parser_flags(top_level_parser)
    actions = ' '.join(subparsers.choices.keys())

    # Avert your eyes.
    return '\n'.join(
        (
            'check_version() {',
            '    local this_script="$(cat "$BASH_SOURCE" 2> /dev/null)"',
            '    local installed_script="$(borgmatic --bash-completion 2> /dev/null)"',
            '    if [ "$this_script" != "$installed_script" ] && [ "$installed_script" != "" ];'
            '    then cat << EOF\n%s\nEOF' % UPGRADE_MESSAGE,
            '    fi',
            '}',
            'complete_borgmatic() {',
        )
        + tuple(
            '''    if [[ " ${COMP_WORDS[*]} " =~ " %s " ]]; then
        COMPREPLY=($(compgen -W "%s %s %s" -- "${COMP_WORDS[COMP_CWORD]}"))
        return 0
    fi'''
            % (action, parser_flags(subparser), actions, global_flags)
            for action, subparser in subparsers.choices.items()
        )
        + (
            '    COMPREPLY=($(compgen -W "%s %s" -- "${COMP_WORDS[COMP_CWORD]}"))'
            % (actions, global_flags),
            '    (check_version &)',
            '}',
            '\ncomplete -o bashdefault -o default -F complete_borgmatic borgmatic',
        )
    )
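As a rough usage sketch (not part of this change set), the completion script above can also be rendered directly from Python rather than via the new `--bash-completion` flag:

```
# Minimal sketch, assuming a borgmatic install that includes this module.
from borgmatic.commands import completion

script = completion.bash_completion()
print(script)  # The same text that `borgmatic --bash-completion` prints.
```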
|
@ -23,10 +23,16 @@ def parse_arguments(*arguments):
|
|||
'--destination',
|
||||
dest='destination_filename',
|
||||
default=DEFAULT_DESTINATION_CONFIG_FILENAME,
|
||||
help='Destination YAML configuration file. Default: {}'.format(
|
||||
help='Destination YAML configuration file, default: {}'.format(
|
||||
DEFAULT_DESTINATION_CONFIG_FILENAME
|
||||
),
|
||||
)
|
||||
parser.add_argument(
|
||||
'--overwrite',
|
||||
default=False,
|
||||
action='store_true',
|
||||
help='Whether to overwrite any existing destination file, defaults to false',
|
||||
)
|
||||
|
||||
return parser.parse_args(arguments)
|
||||
|
||||
|
@ -36,7 +42,10 @@ def main(): # pragma: no cover
|
|||
args = parse_arguments(*sys.argv[1:])
|
||||
|
||||
generate.generate_sample_configuration(
|
||||
args.source_filename, args.destination_filename, validate.schema_filename()
|
||||
args.source_filename,
|
||||
args.destination_filename,
|
||||
validate.schema_filename(),
|
||||
overwrite=args.overwrite,
|
||||
)
|
||||
|
||||
print('Generated a sample configuration file at {}.'.format(args.destination_filename))
|
||||
|
@ -51,8 +60,8 @@ def main(): # pragma: no cover
|
|||
' diff --unified {} {}'.format(args.source_filename, args.destination_filename)
|
||||
)
|
||||
print()
|
||||
print('Please edit the file to suit your needs. The values are representative.')
|
||||
print('All fields are optional except where indicated.')
|
||||
print('This includes all available configuration options with example values. The few')
|
||||
print('required options are indicated. Please edit the file to suit your needs.')
|
||||
print()
|
||||
print('If you ever need help: https://torsion.org/borgmatic/#issues')
|
||||
except (ValueError, OSError) as error:
|
||||
|
|
|
@ -17,7 +17,7 @@ def _convert_section(source_section_config, section_schema):
|
|||
(
|
||||
option_name,
|
||||
int(option_value)
|
||||
if section_schema['map'].get(option_name, {}).get('type') == 'int'
|
||||
if section_schema['properties'].get(option_name, {}).get('type') == 'integer'
|
||||
else option_value,
|
||||
)
|
||||
for option_name, option_value in source_section_config.items()
|
||||
|
@ -38,7 +38,7 @@ def convert_legacy_parsed_config(source_config, source_excludes, schema):
|
|||
'''
|
||||
destination_config = yaml.comments.CommentedMap(
|
||||
[
|
||||
(section_name, _convert_section(section_config, schema['map'][section_name]))
|
||||
(section_name, _convert_section(section_config, schema['properties'][section_name]))
|
||||
for section_name, section_config in source_config._asdict().items()
|
||||
]
|
||||
)
|
||||
|
@ -54,11 +54,11 @@ def convert_legacy_parsed_config(source_config, source_excludes, schema):
|
|||
destination_config['consistency']['checks'] = source_config.consistency['checks'].split(' ')
|
||||
|
||||
# Add comments to each section, and then add comments to the fields in each section.
|
||||
generate.add_comments_to_configuration_map(destination_config, schema)
|
||||
generate.add_comments_to_configuration_object(destination_config, schema)
|
||||
|
||||
for section_name, section_config in destination_config.items():
|
||||
generate.add_comments_to_configuration_map(
|
||||
section_config, schema['map'][section_name], indent=generate.INDENT
|
||||
generate.add_comments_to_configuration_object(
|
||||
section_config, schema['properties'][section_name], indent=generate.INDENT
|
||||
)
|
||||
|
||||
return destination_config
|
||||
|
|
borgmatic/config/environment.py (new file, 42 lines)

@@ -0,0 +1,42 @@
import os
import re

_VARIABLE_PATTERN = re.compile(
    r'(?P<escape>\\)?(?P<variable>\$\{(?P<name>[A-Za-z0-9_]+)((:?-)(?P<default>[^}]+))?\})'
)


def _resolve_string(matcher):
    '''
    Get the value from the environment given a matcher containing a name and an optional default
    value. If the variable is not defined in the environment and no default value is provided,
    raise an error.
    '''
    if matcher.group('escape') is not None:
        # In case of an escaped environment variable, unescape it.
        return matcher.group('variable')
    # Resolve the environment variable.
    name, default = matcher.group('name'), matcher.group('default')
    out = os.getenv(name, default=default)
    if out is None:
        raise ValueError('Cannot find variable ${name} in environment'.format(name=name))
    return out


def resolve_env_variables(item):
    '''
    Resolve variables like ${FOO} in the given configuration with values from the process
    environment. Supported formats:
     - ${FOO} will return the FOO environment variable
     - ${FOO-bar} or ${FOO:-bar} will return the FOO environment variable if it exists, else "bar"

    If any variable is missing from the environment and no default value is provided, raise an
    error.
    '''
    if isinstance(item, str):
        return _VARIABLE_PATTERN.sub(_resolve_string, item)
    if isinstance(item, list):
        for i, subitem in enumerate(item):
            item[i] = resolve_env_variables(subitem)
    if isinstance(item, dict):
        for key, value in item.items():
            item[key] = resolve_env_variables(value)
    return item
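A minimal sketch (not from the diff; the variable name and configuration keys are invented) of how this resolver behaves on a loaded configuration dict:

```
import os
from borgmatic.config import environment

os.environ['PASSPHRASE_FILE'] = '/run/secrets/borg_passphrase'
config = {'storage': {'encryption_passcommand': 'cat ${PASSPHRASE_FILE}'}}

# Strings are substituted, and lists and dicts are walked recursively in place.
environment.resolve_env_variables(config)
assert config['storage']['encryption_passcommand'] == 'cat /run/secrets/borg_passphrase'
```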
|
@ -5,7 +5,7 @@ import re
|
|||
|
||||
from ruamel import yaml
|
||||
|
||||
from borgmatic.config import load
|
||||
from borgmatic.config import load, normalize
|
||||
|
||||
INDENT = 4
|
||||
SEQUENCE_INDENT = 2
|
||||
|
@ -24,29 +24,27 @@ def _insert_newline_before_comment(config, field_name):
|
|||
def _schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
|
||||
'''
|
||||
Given a loaded configuration schema, generate and return sample config for it. Include comments
|
||||
for each section based on the schema "desc" description.
|
||||
for each section based on the schema "description".
|
||||
'''
|
||||
schema_type = schema.get('type')
|
||||
example = schema.get('example')
|
||||
if example is not None:
|
||||
return example
|
||||
|
||||
if 'seq' in schema:
|
||||
if schema_type == 'array':
|
||||
config = yaml.comments.CommentedSeq(
|
||||
[
|
||||
_schema_to_sample_configuration(item_schema, level, parent_is_sequence=True)
|
||||
for item_schema in schema['seq']
|
||||
]
|
||||
[_schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
|
||||
)
|
||||
add_comments_to_configuration_sequence(config, schema, indent=(level * INDENT))
|
||||
elif 'map' in schema:
|
||||
elif schema_type == 'object':
|
||||
config = yaml.comments.CommentedMap(
|
||||
[
|
||||
(field_name, _schema_to_sample_configuration(sub_schema, level + 1))
|
||||
for field_name, sub_schema in schema['map'].items()
|
||||
for field_name, sub_schema in schema['properties'].items()
|
||||
]
|
||||
)
|
||||
indent = (level * INDENT) + (SEQUENCE_INDENT if parent_is_sequence else 0)
|
||||
add_comments_to_configuration_map(
|
||||
add_comments_to_configuration_object(
|
||||
config, schema, indent=indent, skip_first=parent_is_sequence
|
||||
)
|
||||
else:
|
||||
|
@ -111,13 +109,18 @@ def render_configuration(config):
|
|||
return rendered.getvalue()
|
||||
|
||||
|
||||
def write_configuration(config_filename, rendered_config, mode=0o600):
|
||||
def write_configuration(config_filename, rendered_config, mode=0o600, overwrite=False):
|
||||
'''
|
||||
Given a target config filename and rendered config YAML, write it out to file. Create any
|
||||
containing directories as needed.
|
||||
containing directories as needed. But if the file already exists and overwrite is False,
|
||||
abort before writing anything.
|
||||
'''
|
||||
if os.path.exists(config_filename):
|
||||
raise FileExistsError('{} already exists. Aborting.'.format(config_filename))
|
||||
if not overwrite and os.path.exists(config_filename):
|
||||
raise FileExistsError(
|
||||
'{} already exists. Aborting. Use --overwrite to replace the file.'.format(
|
||||
config_filename
|
||||
)
|
||||
)
|
||||
|
||||
try:
|
||||
os.makedirs(os.path.dirname(config_filename), mode=0o700)
|
||||
|
@ -132,8 +135,8 @@ def write_configuration(config_filename, rendered_config, mode=0o600):
|
|||
|
||||
def add_comments_to_configuration_sequence(config, schema, indent=0):
|
||||
'''
|
||||
If the given config sequence's items are maps, then mine the schema for the description of the
|
||||
map's first item, and slap that atop the sequence. Indent the comment the given number of
|
||||
If the given config sequence's items are object, then mine the schema for the description of the
|
||||
object's first item, and slap that atop the sequence. Indent the comment the given number of
|
||||
characters.
|
||||
|
||||
Doing this for sequences of maps results in nice comments that look like:
|
||||
|
@ -142,16 +145,16 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
|
|||
things:
|
||||
# First key description. Added by this function.
|
||||
- key: foo
|
||||
# Second key description. Added by add_comments_to_configuration_map().
|
||||
# Second key description. Added by add_comments_to_configuration_object().
|
||||
other: bar
|
||||
```
|
||||
'''
|
||||
if 'map' not in schema['seq'][0]:
|
||||
if schema['items'].get('type') != 'object':
|
||||
return
|
||||
|
||||
for field_name in config[0].keys():
|
||||
field_schema = schema['seq'][0]['map'].get(field_name, {})
|
||||
description = field_schema.get('desc')
|
||||
field_schema = schema['items']['properties'].get(field_name, {})
|
||||
description = field_schema.get('description')
|
||||
|
||||
# No description to use? Skip it.
|
||||
if not field_schema or not description:
|
||||
|
@ -160,7 +163,7 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
|
|||
config[0].yaml_set_start_comment(description, indent=indent)
|
||||
|
||||
# We only want the first key's description here, as the rest of the keys get commented by
|
||||
# add_comments_to_configuration_map().
|
||||
# add_comments_to_configuration_object().
|
||||
return
|
||||
|
||||
|
||||
|
@ -169,7 +172,7 @@ REQUIRED_KEYS = {'source_directories', 'repositories', 'keep_daily'}
|
|||
COMMENTED_OUT_SENTINEL = 'COMMENT_OUT'
|
||||
|
||||
|
||||
def add_comments_to_configuration_map(config, schema, indent=0, skip_first=False):
|
||||
def add_comments_to_configuration_object(config, schema, indent=0, skip_first=False):
|
||||
'''
|
||||
Using descriptions from a schema as a source, add those descriptions as comments to the given
|
||||
config mapping, before each field. Indent the comment the given number of characters.
|
||||
|
@ -178,8 +181,8 @@ def add_comments_to_configuration_map(config, schema, indent=0, skip_first=False
|
|||
if skip_first and index == 0:
|
||||
continue
|
||||
|
||||
field_schema = schema['map'].get(field_name, {})
|
||||
description = field_schema.get('desc', '').strip()
|
||||
field_schema = schema['properties'].get(field_name, {})
|
||||
description = field_schema.get('description', '').strip()
|
||||
|
||||
# If this is an optional key, add an indicator to the comment flagging it to be commented
|
||||
# out from the sample configuration. This sentinel is consumed by downstream processing that
|
||||
|
@ -265,18 +268,22 @@ def merge_source_configuration_into_destination(destination_config, source_confi
|
|||
return destination_config
|
||||
|
||||
|
||||
def generate_sample_configuration(source_filename, destination_filename, schema_filename):
|
||||
def generate_sample_configuration(
|
||||
source_filename, destination_filename, schema_filename, overwrite=False
|
||||
):
|
||||
'''
|
||||
Given an optional source configuration filename, and a required destination configuration
|
||||
filename, and the path to a schema filename in pykwalify YAML schema format, write out a
|
||||
sample configuration file based on that schema. If a source filename is provided, merge the
|
||||
parsed contents of that configuration into the generated configuration.
|
||||
filename, the path to a schema filename in a YAML rendition of the JSON Schema format, and
|
||||
whether to overwrite a destination file, write out a sample configuration file based on that
|
||||
schema. If a source filename is provided, merge the parsed contents of that configuration into
|
||||
the generated configuration.
|
||||
'''
|
||||
schema = yaml.round_trip_load(open(schema_filename))
|
||||
source_config = None
|
||||
|
||||
if source_filename:
|
||||
source_config = load.load_configuration(source_filename)
|
||||
normalize.normalize(source_filename, source_config)
|
||||
|
||||
destination_config = merge_source_configuration_into_destination(
|
||||
_schema_to_sample_configuration(schema), source_config
|
||||
|
@ -285,4 +292,5 @@ def generate_sample_configuration(source_filename, destination_filename, schema_
|
|||
write_configuration(
|
||||
destination_filename,
|
||||
_comment_out_optional_configuration(render_configuration(destination_config)),
|
||||
overwrite=overwrite,
|
||||
)
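A hedged usage sketch of the new overwrite flag end to end; the destination path is an assumption, not taken from the diff:

```
from borgmatic.config import generate, validate

# Regenerate a sample config, replacing any existing file thanks to overwrite=True.
generate.generate_sample_configuration(
    None, '/etc/borgmatic/config.yaml', validate.schema_filename(), overwrite=True
)
```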
|
||||
|
|
|
@ -1,3 +1,4 @@
|
|||
import functools
|
||||
import logging
|
||||
import os
|
||||
|
||||
|
@ -6,6 +7,77 @@ import ruamel.yaml
|
|||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def include_configuration(loader, filename_node, include_directory):
|
||||
'''
|
||||
Given a ruamel.yaml.loader.Loader, a ruamel.yaml.serializer.ScalarNode containing the included
|
||||
filename, and an include directory path to search for matching files, load the given YAML
|
||||
filename (ignoring the given loader so we can use our own) and return its contents as a data
|
||||
structure of nested dicts and lists. If the filename is relative, probe for it within 1. the
|
||||
current working directory and 2. the given include directory.
|
||||
|
||||
Raise FileNotFoundError if an included file was not found.
|
||||
'''
|
||||
include_directories = [os.getcwd(), os.path.abspath(include_directory)]
|
||||
include_filename = os.path.expanduser(filename_node.value)
|
||||
|
||||
if not os.path.isabs(include_filename):
|
||||
candidate_filenames = [
|
||||
os.path.join(directory, include_filename) for directory in include_directories
|
||||
]
|
||||
|
||||
for candidate_filename in candidate_filenames:
|
||||
if os.path.exists(candidate_filename):
|
||||
include_filename = candidate_filename
|
||||
break
|
||||
else:
|
||||
raise FileNotFoundError(
|
||||
f'Could not find include {filename_node.value} at {" or ".join(candidate_filenames)}'
|
||||
)
|
||||
|
||||
return load_configuration(include_filename)
|
||||
|
||||
|
||||
class Include_constructor(ruamel.yaml.SafeConstructor):
|
||||
'''
|
||||
A YAML "constructor" (a ruamel.yaml concept) that supports a custom "!include" tag for including
|
||||
separate YAML configuration files. Example syntax: `retention: !include common.yaml`
|
||||
'''
|
||||
|
||||
def __init__(self, preserve_quotes=None, loader=None, include_directory=None):
|
||||
super(Include_constructor, self).__init__(preserve_quotes, loader)
|
||||
self.add_constructor(
|
||||
'!include',
|
||||
functools.partial(include_configuration, include_directory=include_directory),
|
||||
)
|
||||
|
||||
def flatten_mapping(self, node):
|
||||
'''
|
||||
Support the special case of deep merging included configuration into an existing mapping
|
||||
using the YAML '<<' merge key. Example syntax:
|
||||
|
||||
```
|
||||
retention:
|
||||
keep_daily: 1
|
||||
|
||||
<<: !include common.yaml
|
||||
```
|
||||
|
||||
These includes are deep merged into the current configuration file. For instance, in this
|
||||
example, any "retention" options in common.yaml will get merged into the "retention" section
|
||||
in the example configuration file.
|
||||
'''
|
||||
representer = ruamel.yaml.representer.SafeRepresenter()
|
||||
|
||||
for index, (key_node, value_node) in enumerate(node.value):
|
||||
if key_node.tag == u'tag:yaml.org,2002:merge' and value_node.tag == '!include':
|
||||
included_value = representer.represent_data(self.construct_object(value_node))
|
||||
node.value[index] = (key_node, included_value)
|
||||
|
||||
super(Include_constructor, self).flatten_mapping(node)
|
||||
|
||||
node.value = deep_merge_nodes(node.value)
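A small sketch (file paths are assumptions) of the `!include` merge described in the docstring above, as seen from the loading side:

```
from borgmatic.config import load

# /etc/borgmatic/config.yaml might contain:
#
#     retention:
#         keep_daily: 1
#         <<: !include /etc/borgmatic/common.yaml
#
# Any "retention" options from common.yaml get deep merged into that section.
config = load.load_configuration('/etc/borgmatic/config.yaml')
```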
|
||||
|
||||
|
||||
def load_configuration(filename):
|
||||
'''
|
||||
Load the given configuration file and return its contents as a data structure of nested dicts
|
||||
|
@ -14,46 +86,131 @@ def load_configuration(filename):
|
|||
Raise ruamel.yaml.error.YAMLError if something goes wrong parsing the YAML, or RecursionError
|
||||
if there are too many recursive includes.
|
||||
'''
|
||||
# Use an embedded derived class for the include constructor so as to capture the filename
|
||||
# value. (functools.partial doesn't work for this use case because yaml.Constructor has to be
|
||||
# an actual class.)
|
||||
class Include_constructor_with_include_directory(Include_constructor):
|
||||
def __init__(self, preserve_quotes=None, loader=None):
|
||||
super(Include_constructor_with_include_directory, self).__init__(
|
||||
preserve_quotes, loader, include_directory=os.path.dirname(filename)
|
||||
)
|
||||
|
||||
yaml = ruamel.yaml.YAML(typ='safe')
|
||||
yaml.Constructor = Include_constructor
|
||||
yaml.Constructor = Include_constructor_with_include_directory
|
||||
|
||||
return yaml.load(open(filename))
|
||||
|
||||
|
||||
def include_configuration(loader, filename_node):
|
||||
DELETED_NODE = object()
|
||||
|
||||
|
||||
def deep_merge_nodes(nodes):
|
||||
'''
|
||||
Load the given YAML filename (ignoring the given loader so we can use our own), and return its
|
||||
contents as a data structure of nested dicts and lists.
|
||||
Given a nested borgmatic configuration data structure as a list of tuples in the form of:
|
||||
|
||||
(
|
||||
ruamel.yaml.nodes.ScalarNode as a key,
|
||||
ruamel.yaml.nodes.MappingNode or other Node as a value,
|
||||
),
|
||||
|
||||
... deep merge any node values corresponding to duplicate keys and return the result. If
|
||||
there are colliding keys with non-MappingNode values (e.g., integers or strings), the last
|
||||
of the values wins.
|
||||
|
||||
For instance, given node values of:
|
||||
|
||||
[
|
||||
(
|
||||
ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
|
||||
MappingNode(tag='tag:yaml.org,2002:map', value=[
|
||||
(
|
||||
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_hourly'),
|
||||
ScalarNode(tag='tag:yaml.org,2002:int', value='24')
|
||||
),
|
||||
(
|
||||
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
|
||||
ScalarNode(tag='tag:yaml.org,2002:int', value='7')
|
||||
),
|
||||
]),
|
||||
),
|
||||
(
|
||||
ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
|
||||
MappingNode(tag='tag:yaml.org,2002:map', value=[
|
||||
(
|
||||
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
|
||||
ScalarNode(tag='tag:yaml.org,2002:int', value='5')
|
||||
),
|
||||
]),
|
||||
),
|
||||
]
|
||||
|
||||
... the returned result would be:
|
||||
|
||||
[
|
||||
(
|
||||
ScalarNode(tag='tag:yaml.org,2002:str', value='retention'),
|
||||
MappingNode(tag='tag:yaml.org,2002:map', value=[
|
||||
(
|
||||
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_hourly'),
|
||||
ScalarNode(tag='tag:yaml.org,2002:int', value='24')
|
||||
),
|
||||
(
|
||||
ScalarNode(tag='tag:yaml.org,2002:str', value='keep_daily'),
|
||||
ScalarNode(tag='tag:yaml.org,2002:int', value='5')
|
||||
),
|
||||
]),
|
||||
),
|
||||
]
|
||||
|
||||
The purpose of deep merging like this is to support, for instance, merging one borgmatic
|
||||
configuration file into another for reuse, such that a configuration section ("retention",
|
||||
etc.) does not completely replace the corresponding section in a merged file.
|
||||
'''
|
||||
return load_configuration(os.path.expanduser(filename_node.value))
|
||||
# Map from original node key/value to the replacement merged node. DELETED_NODE as a
# replacement node indicates deletion.
|
||||
replaced_nodes = {}
|
||||
|
||||
# To find nodes that require merging, compare each node with each other node.
|
||||
for a_key, a_value in nodes:
|
||||
for b_key, b_value in nodes:
|
||||
# If we've already considered one of the nodes for merging, skip it.
|
||||
if (a_key, a_value) in replaced_nodes or (b_key, b_value) in replaced_nodes:
|
||||
continue
|
||||
|
||||
class Include_constructor(ruamel.yaml.SafeConstructor):
|
||||
'''
|
||||
A YAML "constructor" (a ruamel.yaml concept) that supports a custom "!include" tag for including
|
||||
separate YAML configuration files. Example syntax: `retention: !include common.yaml`
|
||||
'''
|
||||
# If the keys match and the values are different, we need to merge these two A and B nodes.
|
||||
if a_key.tag == b_key.tag and a_key.value == b_key.value and a_value != b_value:
|
||||
# Since we're merging into the B node, consider the A node a duplicate and remove it.
|
||||
replaced_nodes[(a_key, a_value)] = DELETED_NODE
|
||||
|
||||
def __init__(self, preserve_quotes=None, loader=None):
|
||||
super(Include_constructor, self).__init__(preserve_quotes, loader)
|
||||
self.add_constructor('!include', include_configuration)
|
||||
# If we're dealing with MappingNodes, recurse and merge its values as well.
|
||||
if isinstance(b_value, ruamel.yaml.nodes.MappingNode):
|
||||
replaced_nodes[(b_key, b_value)] = (
|
||||
b_key,
|
||||
ruamel.yaml.nodes.MappingNode(
|
||||
tag=b_value.tag,
|
||||
value=deep_merge_nodes(a_value.value + b_value.value),
|
||||
start_mark=b_value.start_mark,
|
||||
end_mark=b_value.end_mark,
|
||||
flow_style=b_value.flow_style,
|
||||
comment=b_value.comment,
|
||||
anchor=b_value.anchor,
|
||||
),
|
||||
)
|
||||
# If we're dealing with SequenceNodes, merge by appending one sequence to the other.
|
||||
elif isinstance(b_value, ruamel.yaml.nodes.SequenceNode):
|
||||
replaced_nodes[(b_key, b_value)] = (
|
||||
b_key,
|
||||
ruamel.yaml.nodes.SequenceNode(
|
||||
tag=b_value.tag,
|
||||
value=a_value.value + b_value.value,
|
||||
start_mark=b_value.start_mark,
|
||||
end_mark=b_value.end_mark,
|
||||
flow_style=b_value.flow_style,
|
||||
comment=b_value.comment,
|
||||
anchor=b_value.anchor,
|
||||
),
|
||||
)
|
||||
|
||||
def flatten_mapping(self, node):
|
||||
'''
|
||||
Support the special case of shallow merging included configuration into an existing mapping
|
||||
using the YAML '<<' merge key. Example syntax:
|
||||
|
||||
```
|
||||
retention:
|
||||
keep_daily: 1
|
||||
<<: !include common.yaml
|
||||
```
|
||||
'''
|
||||
representer = ruamel.yaml.representer.SafeRepresenter()
|
||||
|
||||
for index, (key_node, value_node) in enumerate(node.value):
|
||||
if key_node.tag == u'tag:yaml.org,2002:merge' and value_node.tag == '!include':
|
||||
included_value = representer.represent_data(self.construct_object(value_node))
|
||||
node.value[index] = (key_node, included_value)
|
||||
|
||||
super(Include_constructor, self).flatten_mapping(node)
|
||||
return [
|
||||
replaced_nodes.get(node, node) for node in nodes if replaced_nodes.get(node) != DELETED_NODE
|
||||
]
|
||||
|
|
|
@ -1,10 +1,88 @@
|
|||
def normalize(config):
|
||||
'''
|
||||
Given a configuration dict, apply particular hard-coded rules to normalize its contents to
|
||||
adhere to the configuration schema.
|
||||
'''
|
||||
exclude_if_present = config.get('location', {}).get('exclude_if_present')
|
||||
import logging
|
||||
|
||||
# "Upgrade" exclude_if_present from a string to a list.
|
||||
|
||||
def normalize(config_filename, config):
|
||||
'''
|
||||
Given a configuration filename and a configuration dict of its loaded contents, apply particular
|
||||
hard-coded rules to normalize the configuration to adhere to the current schema. Return any log
|
||||
message warnings produced based on the normalization performed.
|
||||
'''
|
||||
logs = []
|
||||
location = config.get('location') or {}
|
||||
storage = config.get('storage') or {}
|
||||
consistency = config.get('consistency') or {}
|
||||
hooks = config.get('hooks') or {}
|
||||
|
||||
# Upgrade exclude_if_present from a string to a list.
|
||||
exclude_if_present = location.get('exclude_if_present')
|
||||
if isinstance(exclude_if_present, str):
|
||||
config['location']['exclude_if_present'] = [exclude_if_present]
|
||||
|
||||
# Upgrade various monitoring hooks from a string to a dict.
|
||||
healthchecks = hooks.get('healthchecks')
|
||||
if isinstance(healthchecks, str):
|
||||
config['hooks']['healthchecks'] = {'ping_url': healthchecks}
|
||||
|
||||
cronitor = hooks.get('cronitor')
|
||||
if isinstance(cronitor, str):
|
||||
config['hooks']['cronitor'] = {'ping_url': cronitor}
|
||||
|
||||
pagerduty = hooks.get('pagerduty')
|
||||
if isinstance(pagerduty, str):
|
||||
config['hooks']['pagerduty'] = {'integration_key': pagerduty}
|
||||
|
||||
cronhub = hooks.get('cronhub')
|
||||
if isinstance(cronhub, str):
|
||||
config['hooks']['cronhub'] = {'ping_url': cronhub}
|
||||
|
||||
# Upgrade consistency checks from a list of strings to a list of dicts.
|
||||
checks = consistency.get('checks')
|
||||
if isinstance(checks, list) and len(checks) and isinstance(checks[0], str):
|
||||
config['consistency']['checks'] = [{'name': check_type} for check_type in checks]
|
||||
|
||||
# Rename various configuration options.
|
||||
numeric_owner = location.pop('numeric_owner', None)
|
||||
if numeric_owner is not None:
|
||||
config['location']['numeric_ids'] = numeric_owner
|
||||
|
||||
bsd_flags = location.pop('bsd_flags', None)
|
||||
if bsd_flags is not None:
|
||||
config['location']['flags'] = bsd_flags
|
||||
|
||||
remote_rate_limit = storage.pop('remote_rate_limit', None)
|
||||
if remote_rate_limit is not None:
|
||||
config['storage']['upload_rate_limit'] = remote_rate_limit
|
||||
|
||||
# Upgrade remote repositories to ssh:// syntax, required in Borg 2.
|
||||
repositories = location.get('repositories')
|
||||
if repositories:
|
||||
config['location']['repositories'] = []
|
||||
for repository in repositories:
|
||||
if '~' in repository:
|
||||
logs.append(
|
||||
logging.makeLogRecord(
|
||||
dict(
|
||||
levelno=logging.WARNING,
|
||||
levelname='WARNING',
|
||||
msg=f'{config_filename}: Repository paths containing "~" are deprecated in borgmatic and no longer work in Borg 2.x+.',
|
||||
)
|
||||
)
|
||||
)
|
||||
if ':' in repository and not repository.startswith('ssh://'):
|
||||
rewritten_repository = (
|
||||
f"ssh://{repository.replace(':~', '/~').replace(':/', '/').replace(':', '/./')}"
|
||||
)
|
||||
logs.append(
|
||||
logging.makeLogRecord(
|
||||
dict(
|
||||
levelno=logging.WARNING,
|
||||
levelname='WARNING',
|
||||
msg=f'{config_filename}: Remote repository paths without ssh:// syntax are deprecated. Interpreting "{repository}" as "{rewritten_repository}"',
|
||||
)
|
||||
)
|
||||
)
|
||||
config['location']['repositories'].append(rewritten_repository)
|
||||
else:
|
||||
config['location']['repositories'].append(repository)
|
||||
|
||||
return logs
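As a concrete illustration (the repository path here is invented), the rewrite above turns an SCP-style remote path into the ssh:// form that Borg 2 requires:

```
repository = 'user@backupserver:backups.borg'
rewritten = f"ssh://{repository.replace(':~', '/~').replace(':/', '/').replace(':', '/./')}"
assert rewritten == 'ssh://user@backupserver/./backups.borg'
```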
|
||||
|
|
|
@ -26,6 +26,8 @@ def convert_value_type(value):
|
|||
'''
|
||||
Given a string value, determine its logical type (string, boolean, integer, etc.), and return it
|
||||
converted to that type.
|
||||
|
||||
Raise ruamel.yaml.error.YAMLError if there's a parse issue with the YAML.
|
||||
'''
|
||||
return ruamel.yaml.YAML(typ='safe').load(io.StringIO(value))
|
||||
|
||||
|
@ -50,20 +52,26 @@ def parse_overrides(raw_overrides):
|
|||
if not raw_overrides:
|
||||
return ()
|
||||
|
||||
parsed_overrides = []
|
||||
|
||||
for raw_override in raw_overrides:
|
||||
try:
|
||||
return tuple(
|
||||
(tuple(raw_keys.split('.')), convert_value_type(value))
|
||||
for raw_override in raw_overrides
|
||||
for raw_keys, value in (raw_override.split('=', 1),)
|
||||
)
|
||||
raw_keys, value = raw_override.split('=', 1)
|
||||
parsed_overrides.append((tuple(raw_keys.split('.')), convert_value_type(value),))
|
||||
except ValueError:
|
||||
raise ValueError('Invalid override. Make sure you use the form: SECTION.OPTION=VALUE')
|
||||
raise ValueError(
|
||||
f"Invalid override '{raw_override}'. Make sure you use the form: SECTION.OPTION=VALUE"
|
||||
)
|
||||
except ruamel.yaml.error.YAMLError as error:
|
||||
raise ValueError(f"Invalid override '{raw_override}': {error.problem}")
|
||||
|
||||
return tuple(parsed_overrides)
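A brief sketch (override strings invented for illustration) of what the reworked parser returns, including the YAML-typed value conversion performed by convert_value_type():

```
from borgmatic.config import override

overrides = override.parse_overrides(
    ['retention.keep_daily=5', 'location.repositories=["/mnt/backup.borg"]']
)
# ((('retention', 'keep_daily'), 5), (('location', 'repositories'), ['/mnt/backup.borg']))
```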
|
||||
|
||||
|
||||
def apply_overrides(config, raw_overrides):
|
||||
'''
|
||||
Given a sequence of configuration file override strings in the form of "section.option=value"
|
||||
and a configuration dict, parse each override and set it the configuration dict.
|
||||
Given a configuration dict and a sequence of configuration file override strings in the form of
"section.option=value", parse each override and set it into the configuration dict.
|
||||
'''
|
||||
overrides = parse_overrides(raw_overrides)
|
||||
|
||||
|
|
File diff suppressed because it is too large
|
@ -1,12 +1,10 @@
|
|||
import logging
|
||||
import os
|
||||
|
||||
import jsonschema
|
||||
import pkg_resources
|
||||
import pykwalify.core
|
||||
import pykwalify.errors
|
||||
import ruamel.yaml
|
||||
|
||||
from borgmatic.config import load, normalize, override
|
||||
from borgmatic.config import environment, load, normalize, override
|
||||
|
||||
|
||||
def schema_filename():
|
||||
|
@ -17,15 +15,40 @@ def schema_filename():
|
|||
return pkg_resources.resource_filename('borgmatic', 'config/schema.yaml')
|
||||
|
||||
|
||||
def format_json_error_path_element(path_element):
|
||||
'''
|
||||
Given a path element into a JSON data structure, format it for display as a string.
|
||||
'''
|
||||
if isinstance(path_element, int):
|
||||
return str('[{}]'.format(path_element))
|
||||
|
||||
return str('.{}'.format(path_element))
|
||||
|
||||
|
||||
def format_json_error(error):
|
||||
'''
|
||||
Given an instance of jsonschema.exceptions.ValidationError, format it for display as a string.
|
||||
'''
|
||||
if not error.path:
|
||||
return 'At the top level: {}'.format(error.message)
|
||||
|
||||
formatted_path = ''.join(format_json_error_path_element(element) for element in error.path)
|
||||
return "At '{}': {}".format(formatted_path.lstrip('.'), error.message)
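A minimal sketch (schema and data invented) of how these helpers render a jsonschema validation error:

```
import jsonschema

from borgmatic.config import validate

schema = {'type': 'object', 'properties': {'keep_daily': {'type': 'integer'}}}
for error in jsonschema.Draft7Validator(schema).iter_errors({'keep_daily': 'seven'}):
    print(validate.format_json_error(error))
    # At 'keep_daily': 'seven' is not of type 'integer'
```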
|
||||
|
||||
|
||||
class Validation_error(ValueError):
|
||||
'''
|
||||
A collection of error message strings generated when attempting to validate a particular
|
||||
configurartion file.
|
||||
A collection of error messages generated when attempting to validate a particular
|
||||
configuration file.
|
||||
'''
|
||||
|
||||
def __init__(self, config_filename, error_messages):
|
||||
def __init__(self, config_filename, errors):
|
||||
'''
|
||||
Given a configuration filename path and a sequence of string error messages, create a
|
||||
Validation_error.
|
||||
'''
|
||||
self.config_filename = config_filename
|
||||
self.error_messages = error_messages
|
||||
self.errors = errors
|
||||
|
||||
def __str__(self):
|
||||
'''
|
||||
|
@ -33,7 +56,7 @@ class Validation_error(ValueError):
|
|||
'''
|
||||
return 'An error occurred while parsing a configuration file at {}:\n'.format(
|
||||
self.config_filename
|
||||
) + '\n'.join(self.error_messages)
|
||||
) + '\n'.join(error for error in self.errors)
|
||||
|
||||
|
||||
def apply_logical_validation(config_filename, parsed_configuration):
|
||||
|
@ -42,15 +65,6 @@ def apply_logical_validation(config_filename, parsed_configuration):
|
|||
below), run through any additional logical validation checks. If there are any such validation
|
||||
problems, raise a Validation_error.
|
||||
'''
|
||||
archive_name_format = parsed_configuration.get('storage', {}).get('archive_name_format')
|
||||
prefix = parsed_configuration.get('retention', {}).get('prefix')
|
||||
|
||||
if archive_name_format and not prefix:
|
||||
raise Validation_error(
|
||||
config_filename,
|
||||
('If you provide an archive_name_format, you must also specify a retention prefix.',),
|
||||
)
|
||||
|
||||
location_repositories = parsed_configuration.get('location', {}).get('repositories')
|
||||
check_repositories = parsed_configuration.get('consistency', {}).get('check_repositories', [])
|
||||
for repository in check_repositories:
|
||||
|
@ -58,45 +72,29 @@ def apply_logical_validation(config_filename, parsed_configuration):
|
|||
raise Validation_error(
|
||||
config_filename,
|
||||
(
|
||||
'Unknown repository in the consistency section\'s check_repositories: {}'.format(
|
||||
'Unknown repository in the "consistency" section\'s "check_repositories": {}'.format(
|
||||
repository
|
||||
),
|
||||
),
|
||||
)
|
||||
|
||||
|
||||
def remove_examples(schema):
|
||||
def parse_configuration(config_filename, schema_filename, overrides=None, resolve_env=True):
|
||||
'''
|
||||
pykwalify gets angry if the example field is not a string. So rather than bend to its will,
|
||||
remove all examples from the given schema before passing the schema to pykwalify.
|
||||
'''
|
||||
if 'map' in schema:
|
||||
for item_name, item_schema in schema['map'].items():
|
||||
item_schema.pop('example', None)
|
||||
remove_examples(item_schema)
|
||||
elif 'seq' in schema:
|
||||
for item_schema in schema['seq']:
|
||||
item_schema.pop('example', None)
|
||||
remove_examples(item_schema)
|
||||
|
||||
return schema
|
||||
|
||||
|
||||
def parse_configuration(config_filename, schema_filename, overrides=None):
|
||||
'''
|
||||
Given the path to a config filename in YAML format, the path to a schema filename in pykwalify
|
||||
YAML schema format, a sequence of configuration file override strings in the form of
|
||||
"section.option=value", return the parsed configuration as a data structure of nested dicts and
|
||||
lists corresponding to the schema. Example return value:
|
||||
Given the path to a config filename in YAML format, the path to a schema filename in a YAML
|
||||
rendition of JSON Schema format, a sequence of configuration file override strings in the form
|
||||
of "section.option=value", return the parsed configuration as a data structure of nested dicts
|
||||
and lists corresponding to the schema. Example return value:
|
||||
|
||||
{'location': {'source_directories': ['/home', '/etc'], 'repository': 'hostname.borg'},
|
||||
'retention': {'keep_daily': 7}, 'consistency': {'checks': ['repository', 'archives']}}
|
||||
|
||||
Also return a sequence of logging.LogRecord instances containing any warnings about the
|
||||
configuration.
|
||||
|
||||
Raise FileNotFoundError if the file does not exist, PermissionError if the user does not
|
||||
have permissions to read the file, or Validation_error if the config does not match the schema.
|
||||
'''
|
||||
logging.getLogger('pykwalify').setLevel(logging.ERROR)
|
||||
|
||||
try:
|
||||
config = load.load_configuration(config_filename)
|
||||
schema = load.load_configuration(schema_filename)
|
||||
|
@ -104,17 +102,24 @@ def parse_configuration(config_filename, schema_filename, overrides=None):
|
|||
raise Validation_error(config_filename, (str(error),))
|
||||
|
||||
override.apply_overrides(config, overrides)
|
||||
normalize.normalize(config)
|
||||
logs = normalize.normalize(config_filename, config)
|
||||
if resolve_env:
|
||||
environment.resolve_env_variables(config)
|
||||
|
||||
validator = pykwalify.core.Core(source_data=config, schema_data=remove_examples(schema))
|
||||
parsed_result = validator.validate(raise_exception=False)
|
||||
try:
|
||||
validator = jsonschema.Draft7Validator(schema)
|
||||
except AttributeError: # pragma: no cover
|
||||
validator = jsonschema.Draft4Validator(schema)
|
||||
validation_errors = tuple(validator.iter_errors(config))
|
||||
|
||||
if validator.validation_errors:
|
||||
raise Validation_error(config_filename, validator.validation_errors)
|
||||
if validation_errors:
|
||||
raise Validation_error(
|
||||
config_filename, tuple(format_json_error(error) for error in validation_errors)
|
||||
)
|
||||
|
||||
apply_logical_validation(config_filename, parsed_result)
|
||||
apply_logical_validation(config_filename, config)
|
||||
|
||||
return parsed_result
|
||||
return config, logs
|
||||
|
||||
|
||||
def normalize_repository_path(repository):
|
||||
|
@ -138,27 +143,13 @@ def repositories_match(first, second):
|
|||
def guard_configuration_contains_repository(repository, configurations):
|
||||
'''
|
||||
Given a repository path and a dict mapping from config filename to corresponding parsed config
|
||||
dict, ensure that the repository is declared exactly once in all of the configurations.
|
||||
|
||||
If no repository is given, then error if there are multiple configured repositories.
|
||||
dict, ensure that the repository is declared exactly once in all of the configurations. If no
|
||||
repository is given, skip this check.
|
||||
|
||||
Raise ValueError if the repository is not found in a configuration, or is declared multiple
|
||||
times.
|
||||
'''
|
||||
if not repository:
|
||||
count = len(
|
||||
tuple(
|
||||
config_repository
|
||||
for config in configurations.values()
|
||||
for config_repository in config['location']['repositories']
|
||||
)
|
||||
)
|
||||
|
||||
if count > 1:
|
||||
raise ValueError(
|
||||
'Can\'t determine which repository to use. Use --repository option to disambiguate'
|
||||
)
|
||||
|
||||
return
|
||||
|
||||
count = len(
|
||||
|
@ -174,3 +165,26 @@ def guard_configuration_contains_repository(repository, configurations):
|
|||
raise ValueError('Repository {} not found in configuration files'.format(repository))
|
||||
if count > 1:
|
||||
raise ValueError('Repository {} found in multiple configuration files'.format(repository))
|
||||
|
||||
|
||||
def guard_single_repository_selected(repository, configurations):
|
||||
'''
|
||||
Given a repository path and a dict mapping from config filename to corresponding parsed config
|
||||
dict, ensure either a single repository exists across all configuration files or a repository
|
||||
path was given.
|
||||
'''
|
||||
if repository:
|
||||
return
|
||||
|
||||
count = len(
|
||||
tuple(
|
||||
config_repository
|
||||
for config in configurations.values()
|
||||
for config_repository in config['location']['repositories']
|
||||
)
|
||||
)
|
||||
|
||||
if count != 1:
|
||||
raise ValueError(
|
||||
"Can't determine which repository to use. Use --repository to disambiguate"
|
||||
)
|
||||
|
|
|
@ -23,7 +23,7 @@ def exit_code_indicates_error(process, exit_code, borg_local_path=None):
|
|||
command = process.args.split(' ') if isinstance(process.args, str) else process.args
|
||||
|
||||
if borg_local_path and command[0] == borg_local_path:
|
||||
return bool(exit_code >= BORG_ERROR_EXIT_CODE)
|
||||
return bool(exit_code < 0 or exit_code >= BORG_ERROR_EXIT_CODE)
|
||||
|
||||
return bool(exit_code != 0)
|
||||
|
||||
|
@ -49,7 +49,11 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
|
|||
'''
|
||||
Given a sequence of subprocess.Popen() instances for multiple processes, log the output for each
|
||||
process with the requested log level. Additionally, raise a CalledProcessError if a process
|
||||
exits with an error (or a warning for exit code 1, if that process matches the Borg local path).
|
||||
exits with an error (or a warning for exit code 1, if that process does not match the Borg local
|
||||
path).
|
||||
|
||||
If output log level is None, then instead of logging, capture output for each process and return
|
||||
it as a dict from the process to its output.
|
||||
|
||||
For simplicity, it's assumed that the output buffer for each process is its stdout. But if any
|
||||
stdouts are given to exclude, then for any matching processes, log from their stderr instead.
|
||||
|
@ -59,11 +63,14 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
|
|||
'''
|
||||
# Map from output buffer to sequence of last lines.
|
||||
buffer_last_lines = collections.defaultdict(list)
|
||||
output_buffers = [
|
||||
output_buffer_for_process(process, exclude_stdouts)
|
||||
process_for_output_buffer = {
|
||||
output_buffer_for_process(process, exclude_stdouts): process
|
||||
for process in processes
|
||||
if process.stdout or process.stderr
|
||||
]
|
||||
}
|
||||
output_buffers = list(process_for_output_buffer.keys())
|
||||
captured_outputs = collections.defaultdict(list)
|
||||
still_running = True
|
||||
|
||||
# Log output for each process until they all exit.
|
||||
while True:
|
||||
|
@ -71,9 +78,25 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
|
|||
(ready_buffers, _, _) = select.select(output_buffers, [], [])
|
||||
|
||||
for ready_buffer in ready_buffers:
|
||||
ready_process = process_for_output_buffer.get(ready_buffer)
|
||||
|
||||
# The "ready" process has exited, but it might be a pipe destination with other
|
||||
# processes (pipe sources) waiting to be read from. So as a measure to prevent
|
||||
# hangs, vent all processes when one exits.
|
||||
if ready_process and ready_process.poll() is not None:
|
||||
for other_process in processes:
|
||||
if (
|
||||
other_process.poll() is None
|
||||
and other_process.stdout
|
||||
and other_process.stdout not in output_buffers
|
||||
):
|
||||
# Add the process's output to output_buffers to ensure it'll get read.
|
||||
output_buffers.append(other_process.stdout)
|
||||
|
||||
while True:
|
||||
line = ready_buffer.readline().rstrip().decode()
|
||||
if not line:
|
||||
continue
|
||||
if not line or not ready_process:
|
||||
break
|
||||
|
||||
# Keep the last few lines of output in case the process errors, and we need the output for
|
||||
# the exception below.
|
||||
|
@ -82,8 +105,14 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
|
|||
if len(last_lines) > ERROR_OUTPUT_MAX_LINE_COUNT:
|
||||
last_lines.pop(0)
|
||||
|
||||
if output_log_level is None:
|
||||
captured_outputs[ready_process].append(line)
|
||||
else:
|
||||
logger.log(output_log_level, line)
|
||||
|
||||
if not still_running:
|
||||
break
|
||||
|
||||
still_running = False
|
||||
|
||||
for process in processes:
|
||||
|
@ -113,23 +142,13 @@ def log_outputs(processes, exclude_stdouts, output_log_level, borg_local_path):
|
|||
exit_code, command_for_process(process), '\n'.join(last_lines)
|
||||
)
|
||||
|
||||
if not still_running:
|
||||
break
|
||||
|
||||
# Consume any remaining output that we missed (if any).
|
||||
for process in processes:
|
||||
output_buffer = output_buffer_for_process(process, exclude_stdouts)
|
||||
|
||||
if not output_buffer:
|
||||
continue
|
||||
|
||||
remaining_output = output_buffer.read().rstrip().decode()
|
||||
|
||||
if remaining_output: # pragma: no cover
|
||||
logger.log(output_log_level, remaining_output)
|
||||
if captured_outputs:
|
||||
return {
|
||||
process: '\n'.join(output_lines) for process, output_lines in captured_outputs.items()
|
||||
}
|
||||
|
||||
|
||||
def log_command(full_command, input_file, output_file):
|
||||
def log_command(full_command, input_file=None, output_file=None):
|
||||
'''
|
||||
Log the given command (a sequence of command/argument strings), along with its input/output file
|
||||
paths.
|
||||
|
@ -160,15 +179,14 @@ def execute_command(
|
|||
):
|
||||
'''
|
||||
Execute the given command (a sequence of command/argument strings) and log its output at the
|
||||
given log level. If output log level is None, instead capture and return the output. (Implies
|
||||
run_to_completion.) If an open output file object is given, then write stdout to the file and
|
||||
only log stderr (but only if an output log level is set). If an open input file object is given,
|
||||
then read stdin from the file. If shell is True, execute the command within a shell. If an extra
|
||||
environment dict is given, then use it to augment the current environment, and pass the result
|
||||
into the command. If a working directory is given, use that as the present working directory
|
||||
when running the command. If a Borg local path is given, and the command matches it (regardless
|
||||
of arguments), treat exit code 1 as a warning instead of an error. If run to completion is
|
||||
False, then return the process for the command without executing it to completion.
|
||||
given log level. If an open output file object is given, then write stdout to the file and only
|
||||
log stderr. If an open input file object is given, then read stdin from the file. If shell is
|
||||
True, execute the command within a shell. If an extra environment dict is given, then use it to
|
||||
augment the current environment, and pass the result into the command. If a working directory is
|
||||
given, use that as the present working directory when running the command. If a Borg local path
|
||||
is given, and the command matches it (regardless of arguments), treat exit code 1 as a warning
|
||||
instead of an error. If run to completion is False, then return the process for the command
|
||||
without executing it to completion.
|
||||
|
||||
Raise subprocesses.CalledProcessError if an error occurs while running the command.
|
||||
'''
|
||||
|
@ -177,12 +195,6 @@ def execute_command(
|
|||
do_not_capture = bool(output_file is DO_NOT_CAPTURE)
|
||||
command = ' '.join(full_command) if shell else full_command
|
||||
|
||||
if output_log_level is None:
|
||||
output = subprocess.check_output(
|
||||
command, shell=shell, env=environment, cwd=working_directory
|
||||
)
|
||||
return output.decode() if output is not None else None
|
||||
|
||||
process = subprocess.Popen(
|
||||
command,
|
||||
stdin=input_file,
|
||||
|
@ -200,6 +212,33 @@ def execute_command(
|
|||
)
|
||||
|
||||
|
||||
def execute_command_and_capture_output(
|
||||
full_command, capture_stderr=False, shell=False, extra_environment=None, working_directory=None,
|
||||
):
|
||||
'''
|
||||
Execute the given command (a sequence of command/argument strings), capturing and returning its
|
||||
output (stdout). If capture stderr is True, then capture and return stderr in addition to
|
||||
stdout. If shell is True, execute the command within a shell. If an extra environment dict is
|
||||
given, then use it to augment the current environment, and pass the result into the command. If
|
||||
a working directory is given, use that as the present working directory when running the command.
|
||||
|
||||
Raise subprocesses.CalledProcessError if an error occurs while running the command.
|
||||
'''
|
||||
log_command(full_command)
|
||||
environment = {**os.environ, **extra_environment} if extra_environment else None
|
||||
command = ' '.join(full_command) if shell else full_command
|
||||
|
||||
output = subprocess.check_output(
|
||||
command,
|
||||
stderr=subprocess.STDOUT if capture_stderr else None,
|
||||
shell=shell,
|
||||
env=environment,
|
||||
cwd=working_directory,
|
||||
)
|
||||
|
||||
return output.decode() if output is not None else None
|
||||
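A minimal usage sketch of the new capture helper; the command and environment values here are made up for illustration, not taken from the diff:

    from borgmatic.execute import execute_command_and_capture_output

    output = execute_command_and_capture_output(
        ('echo', 'hello'),                   # any command/argument sequence
        capture_stderr=True,                 # fold stderr into the returned string
        extra_environment={'LANG': 'C'},     # merged over os.environ before running
    )
    print(output.strip())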
|
||||
|
||||
def execute_command_with_processes(
|
||||
full_command,
|
||||
processes,
|
||||
|
@ -217,13 +256,14 @@ def execute_command_with_processes(
|
|||
run as well. This is useful, for instance, for processes that are streaming output to a named
|
||||
pipe that the given command is consuming from.
|
||||
|
||||
If an open output file object is given, then write stdout to the file and only log stderr (but
|
||||
only if an output log level is set). If an open input file object is given, then read stdin from
|
||||
the file. If shell is True, execute the command within a shell. If an extra environment dict is
|
||||
given, then use it to augment the current environment, and pass the result into the command. If
|
||||
a working directory is given, use that as the present working directory when running the
|
||||
command. If a Borg local path is given, then for any matching command or process (regardless of
|
||||
arguments), treat exit code 1 as a warning instead of an error.
|
||||
If an open output file object is given, then write stdout to the file and only log stderr. But
|
||||
if output log level is None, instead suppress logging and return the captured output for (only)
|
||||
the given command. If an open input file object is given, then read stdin from the file. If
|
||||
shell is True, execute the command within a shell. If an extra environment dict is given, then
|
||||
use it to augment the current environment, and pass the result into the command. If a working
|
||||
directory is given, use that as the present working directory when running the command. If a
|
||||
Borg local path is given, then for any matching command or process (regardless of arguments),
|
||||
treat exit code 1 as a warning instead of an error.
|
||||
|
||||
Raise subprocess.CalledProcessError if an error occurs while running the command or in the
|
||||
upstream process.
|
||||
|
@ -254,9 +294,12 @@ def execute_command_with_processes(
|
|||
process.kill()
|
||||
raise
|
||||
|
||||
log_outputs(
|
||||
captured_outputs = log_outputs(
|
||||
tuple(processes) + (command_process,),
|
||||
(input_file, output_file),
|
||||
output_log_level,
|
||||
borg_local_path=borg_local_path,
|
||||
)
|
||||
|
||||
if output_log_level is None:
|
||||
return captured_outputs.get(command_process)
|
||||
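To illustrate the new capture path, a hedged sketch assuming output_log_level can be passed as a keyword; the command and process list are hypothetical:

    from borgmatic.execute import execute_command_with_processes

    dump_processes = []                      # e.g. database dump Popen instances
    output = execute_command_with_processes(
        ('borg', 'create', '--json', 'repo::archive', '/etc'),   # hypothetical command
        dump_processes,
        output_log_level=None,               # suppress logging; return captured stdout of the main command
    )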
|
|
|
@ -1,5 +1,6 @@
|
|||
import logging
|
||||
import os
|
||||
import re
|
||||
|
||||
from borgmatic import execute
|
||||
|
||||
|
@ -9,14 +10,19 @@ logger = logging.getLogger(__name__)
|
|||
SOFT_FAIL_EXIT_CODE = 75
|
||||
|
||||
|
||||
def interpolate_context(command, context):
|
||||
def interpolate_context(config_filename, hook_description, command, context):
|
||||
'''
|
||||
Given a single hook command and a dict of context names/values, interpolate the values by
|
||||
"{name}" into the command and return the result.
|
||||
Given a config filename, a hook description, a single hook command, and a dict of context
|
||||
names/values, interpolate the values by "{name}" into the command and return the result.
|
||||
'''
|
||||
for name, value in context.items():
|
||||
command = command.replace('{%s}' % name, str(value))
|
||||
|
||||
for unsupported_variable in re.findall(r'{\w+}', command):
|
||||
logger.warning(
|
||||
f"{config_filename}: Variable '{unsupported_variable}' is not supported in {hook_description} hook"
|
||||
)
|
||||
|
||||
return command
|
||||
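An illustrative call against the new signature; the filename, hook description, and context values are hypothetical:

    from borgmatic.hooks.command import interpolate_context

    command = interpolate_context(
        'config.yaml',
        'on_error',
        'notify-send "Backup of {repository} failed: {error_detail}"',
        {'repository': '/mnt/backups'},
    )
    # Returns the command with {repository} filled in, and logs a warning:
    #   config.yaml: Variable '{error_detail}' is not supported in on_error hook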
|
||||
|
||||
|
@ -26,8 +32,7 @@ def execute_hook(commands, umask, config_filename, description, dry_run, **conte
|
|||
a hook description, and whether this is a dry run, run the given commands. Or, don't run them
|
||||
if this is a dry run.
|
||||
|
||||
The context contains optional values interpolated by name into the hook commands. Currently,
|
||||
this only applies to the on_error hook.
|
||||
The context contains optional values interpolated by name into the hook commands.
|
||||
|
||||
Raise ValueError if the umask cannot be parsed.
|
||||
Raise subprocess.CalledProcessError if an error occurs in a hook.
|
||||
|
@ -39,7 +44,9 @@ def execute_hook(commands, umask, config_filename, description, dry_run, **conte
|
|||
dry_run_label = ' (dry run; not actually running hooks)' if dry_run else ''
|
||||
|
||||
context['configuration_filename'] = config_filename
|
||||
commands = [interpolate_context(command, context) for command in commands]
|
||||
commands = [
|
||||
interpolate_context(config_filename, description, command, context) for command in commands
|
||||
]
|
||||
|
||||
if len(commands) == 1:
|
||||
logger.info(
|
||||
|
|
|
@ -22,14 +22,24 @@ def initialize_monitor(
|
|||
pass
|
||||
|
||||
|
||||
def ping_monitor(ping_url, config_filename, state, monitoring_log_level, dry_run):
|
||||
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
|
||||
'''
|
||||
Ping the given Cronhub URL, modified with the monitor.State. Use the given configuration
|
||||
Ping the configured Cronhub URL, modified with the monitor.State. Use the given configuration
|
||||
filename in any log entries. If this is a dry run, then don't actually ping anything.
|
||||
'''
|
||||
if state not in MONITOR_STATE_TO_CRONHUB:
|
||||
logger.debug(
|
||||
f'{config_filename}: Ignoring unsupported monitoring {state.name.lower()} in Cronhub hook'
|
||||
)
|
||||
return
|
||||
|
||||
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
|
||||
formatted_state = '/{}/'.format(MONITOR_STATE_TO_CRONHUB[state])
|
||||
ping_url = ping_url.replace('/start/', formatted_state).replace('/ping/', formatted_state)
|
||||
ping_url = (
|
||||
hook_config['ping_url']
|
||||
.replace('/start/', formatted_state)
|
||||
.replace('/ping/', formatted_state)
|
||||
)
|
||||
|
||||
logger.info(
|
||||
'{}: Pinging Cronhub {}{}'.format(config_filename, state.name.lower(), dry_run_label)
|
||||
|
@ -38,7 +48,12 @@ def ping_monitor(ping_url, config_filename, state, monitoring_log_level, dry_run
|
|||
|
||||
if not dry_run:
|
||||
logging.getLogger('urllib3').setLevel(logging.ERROR)
|
||||
requests.get(ping_url)
|
||||
try:
|
||||
response = requests.get(ping_url)
|
||||
if not response.ok:
|
||||
response.raise_for_status()
|
||||
except requests.exceptions.RequestException as error:
|
||||
logger.warning(f'{config_filename}: Cronhub error: {error}')
|
||||
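A sketch of the updated Cronhub call, assuming the MONITOR_STATE_TO_CRONHUB mapping (not shown here) renders FINISH as 'finish'; the URL and filename are placeholders:

    import logging
    from borgmatic.hooks import cronhub, monitor

    hook_config = {'ping_url': 'https://cronhub.io/start/0000-placeholder'}
    cronhub.ping_monitor(
        hook_config, 'config.yaml', monitor.State.FINISH,
        monitoring_log_level=logging.INFO, dry_run=False,
    )
    # The /start/ path segment is rewritten before the GET request, and any
    # requests exception is now downgraded to a warning instead of a crash.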
|
||||
|
||||
def destroy_monitor(
|
||||
|
|
|
@ -22,13 +22,19 @@ def initialize_monitor(
|
|||
pass
|
||||
|
||||
|
||||
def ping_monitor(ping_url, config_filename, state, monitoring_log_level, dry_run):
|
||||
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
|
||||
'''
|
||||
Ping the given Cronitor URL, modified with the monitor.State. Use the given configuration
|
||||
Ping the configured Cronitor URL, modified with the monitor.State. Use the given configuration
|
||||
filename in any log entries. If this is a dry run, then don't actually ping anything.
|
||||
'''
|
||||
if state not in MONITOR_STATE_TO_CRONITOR:
|
||||
logger.debug(
|
||||
f'{config_filename}: Ignoring unsupported monitoring {state.name.lower()} in Cronitor hook'
|
||||
)
|
||||
return
|
||||
|
||||
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
|
||||
ping_url = '{}/{}'.format(ping_url, MONITOR_STATE_TO_CRONITOR[state])
|
||||
ping_url = '{}/{}'.format(hook_config['ping_url'], MONITOR_STATE_TO_CRONITOR[state])
|
||||
|
||||
logger.info(
|
||||
'{}: Pinging Cronitor {}{}'.format(config_filename, state.name.lower(), dry_run_label)
|
||||
|
@ -37,7 +43,12 @@ def ping_monitor(ping_url, config_filename, state, monitoring_log_level, dry_run
|
|||
|
||||
if not dry_run:
|
||||
logging.getLogger('urllib3').setLevel(logging.ERROR)
|
||||
requests.get(ping_url)
|
||||
try:
|
||||
response = requests.get(ping_url)
|
||||
if not response.ok:
|
||||
response.raise_for_status()
|
||||
except requests.exceptions.RequestException as error:
|
||||
logger.warning(f'{config_filename}: Cronitor error: {error}')
|
||||
|
||||
|
||||
def destroy_monitor(
|
||||
|
|
|
@ -1,16 +1,29 @@
|
|||
import logging
|
||||
|
||||
from borgmatic.hooks import cronhub, cronitor, healthchecks, mysql, pagerduty, postgresql
|
||||
from borgmatic.hooks import (
|
||||
cronhub,
|
||||
cronitor,
|
||||
healthchecks,
|
||||
mongodb,
|
||||
mysql,
|
||||
ntfy,
|
||||
pagerduty,
|
||||
postgresql,
|
||||
sqlite,
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
HOOK_NAME_TO_MODULE = {
|
||||
'healthchecks': healthchecks,
|
||||
'cronitor': cronitor,
|
||||
'cronhub': cronhub,
|
||||
'cronitor': cronitor,
|
||||
'healthchecks': healthchecks,
|
||||
'mongodb_databases': mongodb,
|
||||
'mysql_databases': mysql,
|
||||
'ntfy': ntfy,
|
||||
'pagerduty': pagerduty,
|
||||
'postgresql_databases': postgresql,
|
||||
'mysql_databases': mysql,
|
||||
'sqlite_databases': sqlite,
|
||||
}
|
||||
|
||||
|
||||
|
@ -18,19 +31,14 @@ def call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs):
|
|||
'''
|
||||
Given the hooks configuration dict and a prefix to use in log entries, call the requested
|
||||
function of the Python module corresponding to the given hook name. Supply that call with the
|
||||
configuration for this hook, the log prefix, and any given args and kwargs. Return any return
|
||||
value.
|
||||
|
||||
If the hook name is not present in the hooks configuration, then bail without calling anything.
|
||||
configuration for this hook (if any), the log prefix, and any given args and kwargs. Return any
|
||||
return value.
|
||||
|
||||
Raise ValueError if the hook name is unknown.
|
||||
Raise AttributeError if the function name is not found in the module.
|
||||
Raise anything else that the called function raises.
|
||||
'''
|
||||
config = hooks.get(hook_name)
|
||||
if not config:
|
||||
logger.debug('{}: No {} hook configured.'.format(log_prefix, hook_name))
|
||||
return
|
||||
config = hooks.get(hook_name, {})
|
||||
|
||||
try:
|
||||
module = HOOK_NAME_TO_MODULE[hook_name]
|
||||
|
@ -48,7 +56,7 @@ def call_hooks(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
|
|||
configuration for that hook, the log prefix, and any given args and kwargs. Collect any return
|
||||
values into a dict from hook name to return value.
|
||||
|
||||
If the hook name is not present in the hooks configuration, then don't call the function for it,
|
||||
If the hook name is not present in the hooks configuration, then don't call the function for it
|
||||
and omit it from the return values.
|
||||
|
||||
Raise ValueError if the hook name is unknown.
|
||||
|
@ -60,3 +68,19 @@ def call_hooks(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
|
|||
for hook_name in hook_names
|
||||
if hooks.get(hook_name)
|
||||
}
|
||||
|
||||
|
||||
def call_hooks_even_if_unconfigured(function_name, hooks, log_prefix, hook_names, *args, **kwargs):
|
||||
'''
|
||||
Given the hooks configuration dict and a prefix to use in log entries, call the requested
|
||||
function of the Python module corresponding to each given hook name. Supply each call with the
|
||||
configuration for that hook, the log prefix, and any given args and kwargs. Collect any return
|
||||
values into a dict from hook name to return value.
|
||||
|
||||
Raise AttributeError if the function name is not found in the module.
|
||||
Raise anything else that a called function raises. An error stops calls to subsequent functions.
|
||||
'''
|
||||
return {
|
||||
hook_name: call_hook(function_name, hooks, log_prefix, hook_name, *args, **kwargs)
|
||||
for hook_name in hook_names
|
||||
}
|
||||
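For context, a hedged sketch of how the dispatcher is typically driven; the hooks dict and arguments below are hypothetical:

    import logging
    from borgmatic.hooks import dispatch, monitor

    hooks = {'cronhub': {'ping_url': 'https://cronhub.io/start/0000-placeholder'}}
    dispatch.call_hooks(
        'ping_monitor',
        hooks,
        'config.yaml',              # log prefix
        monitor.MONITOR_HOOK_NAMES,
        monitor.State.FINISH,
        logging.INFO,
        False,                      # dry_run
    )
    # call_hooks() skips names missing from the hooks dict, while the new
    # call_hooks_even_if_unconfigured() calls every listed hook regardless,
    # passing an empty config dict when nothing is configured.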
|
|
|
@ -2,11 +2,16 @@ import logging
|
|||
import os
|
||||
import shutil
|
||||
|
||||
from borgmatic.borg.create import DEFAULT_BORGMATIC_SOURCE_DIRECTORY
|
||||
from borgmatic.borg.state import DEFAULT_BORGMATIC_SOURCE_DIRECTORY
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
DATABASE_HOOK_NAMES = ('postgresql_databases', 'mysql_databases')
|
||||
DATABASE_HOOK_NAMES = (
|
||||
'postgresql_databases',
|
||||
'mysql_databases',
|
||||
'mongodb_databases',
|
||||
'sqlite_databases',
|
||||
)
|
||||
|
||||
|
||||
def make_database_dump_path(borgmatic_source_directory, database_hook_name):
|
||||
|
@ -55,7 +60,7 @@ def remove_database_dumps(dump_path, database_type_name, log_prefix, dry_run):
|
|||
'''
|
||||
dry_run_label = ' (dry run; not actually removing anything)' if dry_run else ''
|
||||
|
||||
logger.info(
|
||||
logger.debug(
|
||||
'{}: Removing {} database dumps{}'.format(log_prefix, database_type_name, dry_run_label)
|
||||
)
|
||||
|
||||
|
|
|
@ -10,16 +10,18 @@ MONITOR_STATE_TO_HEALTHCHECKS = {
|
|||
monitor.State.START: 'start',
|
||||
monitor.State.FINISH: None, # Healthchecks doesn't append to the URL for the finished state.
|
||||
monitor.State.FAIL: 'fail',
|
||||
monitor.State.LOG: 'log',
|
||||
}
|
||||
|
||||
PAYLOAD_TRUNCATION_INDICATOR = '...\n'
|
||||
PAYLOAD_LIMIT_BYTES = 10 * 1024 - len(PAYLOAD_TRUNCATION_INDICATOR)
|
||||
DEFAULT_PING_BODY_LIMIT_BYTES = 100000
|
||||
|
||||
|
||||
class Forgetful_buffering_handler(logging.Handler):
|
||||
'''
|
||||
A buffering log handler that stores log messages in memory, and throws away messages (oldest
|
||||
first) once a particular capacity in bytes is reached.
|
||||
first) once a particular capacity in bytes is reached. But if the given byte capacity is zero,
|
||||
don't throw away any messages.
|
||||
'''
|
||||
|
||||
def __init__(self, byte_capacity, log_level):
|
||||
|
@ -36,6 +38,9 @@ class Forgetful_buffering_handler(logging.Handler):
|
|||
self.byte_count += len(message)
|
||||
self.buffer.append(message)
|
||||
|
||||
if not self.byte_capacity:
|
||||
return
|
||||
|
||||
while self.byte_count > self.byte_capacity and self.buffer:
|
||||
self.byte_count -= len(self.buffer[0])
|
||||
self.buffer.pop(0)
|
||||
|
@ -65,31 +70,45 @@ def format_buffered_logs_for_payload():
|
|||
return payload
|
||||
|
||||
|
||||
def initialize_monitor(
|
||||
ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
|
||||
): # pragma: no cover
|
||||
def initialize_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
|
||||
'''
|
||||
Add a handler to the root logger that stores in memory the most recent logs emitted. That
|
||||
way, we can send them all to Healthchecks upon a finish or failure state.
|
||||
Add a handler to the root logger that stores in memory the most recent logs emitted. That way,
|
||||
we can send them all to Healthchecks upon a finish or failure state. But skip this if the
|
||||
"send_logs" option is false.
|
||||
'''
|
||||
if hook_config.get('send_logs') is False:
|
||||
return
|
||||
|
||||
ping_body_limit = max(
|
||||
hook_config.get('ping_body_limit', DEFAULT_PING_BODY_LIMIT_BYTES)
|
||||
- len(PAYLOAD_TRUNCATION_INDICATOR),
|
||||
0,
|
||||
)
|
||||
|
||||
logging.getLogger().addHandler(
|
||||
Forgetful_buffering_handler(PAYLOAD_LIMIT_BYTES, monitoring_log_level)
|
||||
Forgetful_buffering_handler(ping_body_limit, monitoring_log_level)
|
||||
)
|
||||
|
||||
|
||||
def ping_monitor(ping_url_or_uuid, config_filename, state, monitoring_log_level, dry_run):
|
||||
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
|
||||
'''
|
||||
Ping the given Healthchecks URL or UUID, modified with the monitor.State. Use the given
|
||||
Ping the configured Healthchecks URL or UUID, modified with the monitor.State. Use the given
|
||||
configuration filename in any log entries, and log to Healthchecks with the given log level.
|
||||
If this is a dry run, then don't actually ping anything.
|
||||
'''
|
||||
ping_url = (
|
||||
ping_url_or_uuid
|
||||
if ping_url_or_uuid.startswith('http')
|
||||
else 'https://hc-ping.com/{}'.format(ping_url_or_uuid)
|
||||
hook_config['ping_url']
|
||||
if hook_config['ping_url'].startswith('http')
|
||||
else 'https://hc-ping.com/{}'.format(hook_config['ping_url'])
|
||||
)
|
||||
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
|
||||
|
||||
if 'states' in hook_config and state.name.lower() not in hook_config['states']:
|
||||
logger.info(
|
||||
f'{config_filename}: Skipping Healthchecks {state.name.lower()} ping due to configured states'
|
||||
)
|
||||
return
|
||||
|
||||
healthchecks_state = MONITOR_STATE_TO_HEALTHCHECKS.get(state)
|
||||
if healthchecks_state:
|
||||
ping_url = '{}/{}'.format(ping_url, healthchecks_state)
|
||||
|
@ -99,17 +118,24 @@ def ping_monitor(ping_url_or_uuid, config_filename, state, monitoring_log_level,
|
|||
)
|
||||
logger.debug('{}: Using Healthchecks ping URL {}'.format(config_filename, ping_url))
|
||||
|
||||
if state in (monitor.State.FINISH, monitor.State.FAIL):
|
||||
if state in (monitor.State.FINISH, monitor.State.FAIL, monitor.State.LOG):
|
||||
payload = format_buffered_logs_for_payload()
|
||||
else:
|
||||
payload = ''
|
||||
|
||||
if not dry_run:
|
||||
logging.getLogger('urllib3').setLevel(logging.ERROR)
|
||||
requests.post(ping_url, data=payload.encode('utf-8'))
|
||||
try:
|
||||
response = requests.post(
|
||||
ping_url, data=payload.encode('utf-8'), verify=hook_config.get('verify_tls', True)
|
||||
)
|
||||
if not response.ok:
|
||||
response.raise_for_status()
|
||||
except requests.exceptions.RequestException as error:
|
||||
logger.warning(f'{config_filename}: Healthchecks error: {error}')
|
||||
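Pulling the new options together, a hypothetical Healthchecks hook configuration (the UUID is a placeholder) exercising the behavior added above:

    hook_config = {
        'ping_url': '0123abcd-placeholder-uuid',   # bare UUIDs expand to https://hc-ping.com/<uuid>
        'ping_body_limit': 1000,                   # bytes of buffered logs to send; 0 disables trimming
        'send_logs': True,                         # False skips installing the buffering handler
        'states': ['finish', 'fail'],              # other states are skipped with an info message
        'verify_tls': True,                        # passed through to requests.post(verify=...)
    }
    # With ping_body_limit: 1000, the in-memory buffer keeps roughly
    # 1000 - len('...\n') = 996 bytes of the most recent log output.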
|
||||
|
||||
def destroy_monitor(ping_url_or_uuid, config_filename, monitoring_log_level, dry_run):
|
||||
def destroy_monitor(hook_config, config_filename, monitoring_log_level, dry_run):
|
||||
'''
|
||||
Remove the monitor handler that was added to the root logger. This prevents the handler from
|
||||
getting reused by other instances of this monitor.
|
||||
|
|
borgmatic/hooks/mongodb.py (new file, 168 lines added)
|
@ -0,0 +1,168 @@
|
|||
import logging
|
||||
|
||||
from borgmatic.execute import execute_command, execute_command_with_processes
|
||||
from borgmatic.hooks import dump
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def make_dump_path(location_config): # pragma: no cover
|
||||
'''
|
||||
Make the dump path from the given location configuration and the name of this hook.
|
||||
'''
|
||||
return dump.make_database_dump_path(
|
||||
location_config.get('borgmatic_source_directory'), 'mongodb_databases'
|
||||
)
|
||||
|
||||
|
||||
def dump_databases(databases, log_prefix, location_config, dry_run):
|
||||
'''
|
||||
Dump the given MongoDB databases to a named pipe. The databases are supplied as a sequence of
|
||||
dicts, one dict describing each database as per the configuration schema. Use the given log
|
||||
prefix in any log entries. Use the given location configuration dict to construct the
|
||||
destination path.
|
||||
|
||||
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
|
||||
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
|
||||
'''
|
||||
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
|
||||
|
||||
logger.info('{}: Dumping MongoDB databases{}'.format(log_prefix, dry_run_label))
|
||||
|
||||
processes = []
|
||||
for database in databases:
|
||||
name = database['name']
|
||||
dump_filename = dump.make_database_dump_filename(
|
||||
make_dump_path(location_config), name, database.get('hostname')
|
||||
)
|
||||
dump_format = database.get('format', 'archive')
|
||||
|
||||
logger.debug(
|
||||
'{}: Dumping MongoDB database {} to {}{}'.format(
|
||||
log_prefix, name, dump_filename, dry_run_label
|
||||
)
|
||||
)
|
||||
if dry_run:
|
||||
continue
|
||||
|
||||
command = build_dump_command(database, dump_filename, dump_format)
|
||||
|
||||
if dump_format == 'directory':
|
||||
dump.create_parent_directory_for_dump(dump_filename)
|
||||
execute_command(command, shell=True)
|
||||
else:
|
||||
dump.create_named_pipe_for_dump(dump_filename)
|
||||
processes.append(execute_command(command, shell=True, run_to_completion=False))
|
||||
|
||||
return processes
|
||||
|
||||
|
||||
def build_dump_command(database, dump_filename, dump_format):
|
||||
'''
|
||||
Return the mongodump command from a single database configuration.
|
||||
'''
|
||||
all_databases = database['name'] == 'all'
|
||||
command = ['mongodump']
|
||||
if dump_format == 'directory':
|
||||
command.extend(('--out', dump_filename))
|
||||
if 'hostname' in database:
|
||||
command.extend(('--host', database['hostname']))
|
||||
if 'port' in database:
|
||||
command.extend(('--port', str(database['port'])))
|
||||
if 'username' in database:
|
||||
command.extend(('--username', database['username']))
|
||||
if 'password' in database:
|
||||
command.extend(('--password', database['password']))
|
||||
if 'authentication_database' in database:
|
||||
command.extend(('--authenticationDatabase', database['authentication_database']))
|
||||
if not all_databases:
|
||||
command.extend(('--db', database['name']))
|
||||
if 'options' in database:
|
||||
command.extend(database['options'].split(' '))
|
||||
if dump_format != 'directory':
|
||||
command.extend(('--archive', '>', dump_filename))
|
||||
return command
|
||||
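A worked example of build_dump_command() for a hypothetical database entry; the names, host, credentials, and dump path are invented for illustration:

    database = {
        'name': 'orders',
        'hostname': 'db.example.org',
        'port': 27018,
        'username': 'backup',
        'password': 'secret',
    }
    command = build_dump_command(database, '/tmp/dumps/db.example.org/orders', 'archive')
    # -> ['mongodump', '--host', 'db.example.org', '--port', '27018',
    #     '--username', 'backup', '--password', 'secret', '--db', 'orders',
    #     '--archive', '>', '/tmp/dumps/db.example.org/orders']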
|
||||
|
||||
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
|
||||
'''
|
||||
Remove all database dump files for this hook regardless of the given databases. Use the log
|
||||
prefix in any log entries. Use the given location configuration dict to construct the
|
||||
destination path. If this is a dry run, then don't actually remove anything.
|
||||
'''
|
||||
dump.remove_database_dumps(make_dump_path(location_config), 'MongoDB', log_prefix, dry_run)
|
||||
|
||||
|
||||
def make_database_dump_pattern(
|
||||
databases, log_prefix, location_config, name=None
|
||||
): # pragma: no cover
|
||||
'''
|
||||
Given a sequence of configurations dicts, a prefix to log with, a location configuration dict,
|
||||
and a database name to match, return the corresponding glob patterns to match the database dump
|
||||
in an archive.
|
||||
'''
|
||||
return dump.make_database_dump_filename(make_dump_path(location_config), name, hostname='*')
|
||||
|
||||
|
||||
def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
|
||||
'''
|
||||
Restore the given MongoDB database from an extract stream. The database is supplied as a
|
||||
one-element sequence containing a dict describing the database, as per the configuration schema.
|
||||
Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
|
||||
anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
|
||||
output to consume.
|
||||
|
||||
If the extract process is None, then restore the dump from the filesystem rather than from an
|
||||
extract stream.
|
||||
'''
|
||||
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
|
||||
|
||||
if len(database_config) != 1:
|
||||
raise ValueError('The database configuration value is invalid')
|
||||
|
||||
database = database_config[0]
|
||||
dump_filename = dump.make_database_dump_filename(
|
||||
make_dump_path(location_config), database['name'], database.get('hostname')
|
||||
)
|
||||
restore_command = build_restore_command(extract_process, database, dump_filename)
|
||||
|
||||
logger.debug(
|
||||
'{}: Restoring MongoDB database {}{}'.format(log_prefix, database['name'], dry_run_label)
|
||||
)
|
||||
if dry_run:
|
||||
return
|
||||
|
||||
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
|
||||
# if the restore paths don't exist in the archive.
|
||||
execute_command_with_processes(
|
||||
restore_command,
|
||||
[extract_process] if extract_process else [],
|
||||
output_log_level=logging.DEBUG,
|
||||
input_file=extract_process.stdout if extract_process else None,
|
||||
)
|
||||
|
||||
|
||||
def build_restore_command(extract_process, database, dump_filename):
|
||||
'''
|
||||
Return the mongorestore command from a single database configuration.
|
||||
'''
|
||||
command = ['mongorestore']
|
||||
if extract_process:
|
||||
command.append('--archive')
|
||||
else:
|
||||
command.extend(('--dir', dump_filename))
|
||||
if database['name'] != 'all':
|
||||
command.extend(('--drop', '--db', database['name']))
|
||||
if 'hostname' in database:
|
||||
command.extend(('--host', database['hostname']))
|
||||
if 'port' in database:
|
||||
command.extend(('--port', str(database['port'])))
|
||||
if 'username' in database:
|
||||
command.extend(('--username', database['username']))
|
||||
if 'password' in database:
|
||||
command.extend(('--password', database['password']))
|
||||
if 'authentication_database' in database:
|
||||
command.extend(('--authenticationDatabase', database['authentication_database']))
|
||||
if 'restore_options' in database:
|
||||
command.extend(database['restore_options'].split(' '))
|
||||
return command
|
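The companion restore command, reusing the hypothetical database dict from the dump example above; with an active extract stream, mongorestore reads from stdin via --archive:

    extract_process = object()   # stands in for the subprocess.Popen extract stream
    command = build_restore_command(extract_process, database, '/tmp/dumps/db.example.org/orders')
    # -> ['mongorestore', '--archive', '--drop', '--db', 'orders',
    #     '--host', 'db.example.org', '--port', '27018',
    #     '--username', 'backup', '--password', 'secret']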
|
@ -1,9 +1,10 @@
|
|||
from enum import Enum
|
||||
|
||||
MONITOR_HOOK_NAMES = ('healthchecks', 'cronitor', 'cronhub', 'pagerduty')
|
||||
MONITOR_HOOK_NAMES = ('healthchecks', 'cronitor', 'cronhub', 'pagerduty', 'ntfy')
|
||||
|
||||
|
||||
class State(Enum):
|
||||
START = 1
|
||||
FINISH = 2
|
||||
FAIL = 3
|
||||
LOG = 4
|
||||
|
|
|
@ -1,6 +1,12 @@
|
|||
import copy
|
||||
import logging
|
||||
import os
|
||||
|
||||
from borgmatic.execute import execute_command, execute_command_with_processes
|
||||
from borgmatic.execute import (
|
||||
execute_command,
|
||||
execute_command_and_capture_output,
|
||||
execute_command_with_processes,
|
||||
)
|
||||
from borgmatic.hooks import dump
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
@ -18,19 +24,20 @@ def make_dump_path(location_config): # pragma: no cover
|
|||
SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
|
||||
|
||||
|
||||
def database_names_to_dump(database, extra_environment, log_prefix, dry_run_label):
|
||||
def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
|
||||
'''
|
||||
Given a requested database name, return the corresponding sequence of database names to dump.
|
||||
Given a requested database config, return the corresponding sequence of database names to dump.
|
||||
In the case of "all", query for the names of databases on the configured host and return them,
|
||||
excluding any system databases that will cause problems during restore.
|
||||
'''
|
||||
requested_name = database['name']
|
||||
|
||||
if requested_name != 'all':
|
||||
return (requested_name,)
|
||||
if database['name'] != 'all':
|
||||
return (database['name'],)
|
||||
if dry_run:
|
||||
return ()
|
||||
|
||||
show_command = (
|
||||
('mysql',)
|
||||
+ (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
|
||||
+ (('--host', database['hostname']) if 'hostname' in database else ())
|
||||
+ (('--port', str(database['port'])) if 'port' in database else ())
|
||||
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
|
||||
|
@ -38,11 +45,9 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run_labe
|
|||
+ ('--skip-column-names', '--batch')
|
||||
+ ('--execute', 'show schemas')
|
||||
)
|
||||
logger.debug(
|
||||
'{}: Querying for "all" MySQL databases to dump{}'.format(log_prefix, dry_run_label)
|
||||
)
|
||||
show_output = execute_command(
|
||||
show_command, output_log_level=None, extra_environment=extra_environment
|
||||
logger.debug(f'{log_prefix}: Querying for "all" MySQL databases to dump')
|
||||
show_output = execute_command_and_capture_output(
|
||||
show_command, extra_environment=extra_environment
|
||||
)
|
||||
|
||||
return tuple(
|
||||
|
@ -52,6 +57,55 @@ def database_names_to_dump(database, extra_environment, log_prefix, dry_run_labe
|
|||
)
|
||||
|
||||
|
||||
def execute_dump_command(
|
||||
database, log_prefix, dump_path, database_names, extra_environment, dry_run, dry_run_label
|
||||
):
|
||||
'''
|
||||
Kick off a dump for the given MySQL/MariaDB database (provided as a configuration dict) to a
|
||||
named pipe constructed from the given dump path and database names. Use the given log prefix in
|
||||
any log entries.
|
||||
|
||||
Return a subprocess.Popen instance for the dump process ready to spew to a named pipe. But if
|
||||
this is a dry run, then don't actually dump anything and return None.
|
||||
'''
|
||||
database_name = database['name']
|
||||
dump_filename = dump.make_database_dump_filename(
|
||||
dump_path, database['name'], database.get('hostname')
|
||||
)
|
||||
if os.path.exists(dump_filename):
|
||||
logger.warning(
|
||||
f'{log_prefix}: Skipping duplicate dump of MySQL database "{database_name}" to {dump_filename}'
|
||||
)
|
||||
return None
|
||||
|
||||
dump_command = (
|
||||
('mysqldump',)
|
||||
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
|
||||
+ (('--add-drop-database',) if database.get('add_drop_database', True) else ())
|
||||
+ (('--host', database['hostname']) if 'hostname' in database else ())
|
||||
+ (('--port', str(database['port'])) if 'port' in database else ())
|
||||
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
|
||||
+ (('--user', database['username']) if 'username' in database else ())
|
||||
+ ('--databases',)
|
||||
+ database_names
|
||||
# Use shell redirection rather than execute_command(output_file=open(...)) to prevent
|
||||
# the open() call on a named pipe from hanging the main borgmatic process.
|
||||
+ ('>', dump_filename)
|
||||
)
|
||||
|
||||
logger.debug(
|
||||
f'{log_prefix}: Dumping MySQL database "{database_name}" to {dump_filename}{dry_run_label}'
|
||||
)
|
||||
if dry_run:
|
||||
return None
|
||||
|
||||
dump.create_named_pipe_for_dump(dump_filename)
|
||||
|
||||
return execute_command(
|
||||
dump_command, shell=True, extra_environment=extra_environment, run_to_completion=False,
|
||||
)
|
||||
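A short sketch of the new "all" handling, assuming the SHOW SCHEMAS query returned two hypothetical names:

    import copy

    database = {'name': 'all', 'format': 'sql'}    # hypothetical config entry
    dump_database_names = ('shop', 'blog')         # as if returned by database_names_to_dump()

    for dump_name in dump_database_names:
        renamed_database = copy.copy(database)
        renamed_database['name'] = dump_name
        # execute_dump_command(renamed_database, ...) then creates one named pipe
        # and one mysqldump process per database instead of a single combined dump.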
|
||||
|
||||
def dump_databases(databases, log_prefix, location_config, dry_run):
|
||||
'''
|
||||
Dump the given MySQL/MariaDB databases to a named pipe. The databases are supplied as a sequence
|
||||
|
@ -68,52 +122,47 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
|
|||
logger.info('{}: Dumping MySQL databases{}'.format(log_prefix, dry_run_label))
|
||||
|
||||
for database in databases:
|
||||
requested_name = database['name']
|
||||
dump_filename = dump.make_database_dump_filename(
|
||||
make_dump_path(location_config), requested_name, database.get('hostname')
|
||||
)
|
||||
dump_path = make_dump_path(location_config)
|
||||
extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
|
||||
dump_database_names = database_names_to_dump(
|
||||
database, extra_environment, log_prefix, dry_run_label
|
||||
database, extra_environment, log_prefix, dry_run
|
||||
)
|
||||
|
||||
if not dump_database_names:
|
||||
raise ValueError('Cannot find any MySQL databases to dump.')
|
||||
|
||||
dump_command = (
|
||||
('mysqldump',)
|
||||
+ ('--add-drop-database',)
|
||||
+ (('--host', database['hostname']) if 'hostname' in database else ())
|
||||
+ (('--port', str(database['port'])) if 'port' in database else ())
|
||||
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
|
||||
+ (('--user', database['username']) if 'username' in database else ())
|
||||
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
|
||||
+ ('--databases',)
|
||||
+ dump_database_names
|
||||
# Use shell redirection rather than execute_command(output_file=open(...)) to prevent
|
||||
# the open() call on a named pipe from hanging the main borgmatic process.
|
||||
+ ('>', dump_filename)
|
||||
)
|
||||
|
||||
logger.debug(
|
||||
'{}: Dumping MySQL database {} to {}{}'.format(
|
||||
log_prefix, requested_name, dump_filename, dry_run_label
|
||||
)
|
||||
)
|
||||
if dry_run:
|
||||
continue
|
||||
|
||||
dump.create_named_pipe_for_dump(dump_filename)
|
||||
raise ValueError('Cannot find any MySQL databases to dump.')
|
||||
|
||||
if database['name'] == 'all' and database.get('format'):
|
||||
for dump_name in dump_database_names:
|
||||
renamed_database = copy.copy(database)
|
||||
renamed_database['name'] = dump_name
|
||||
processes.append(
|
||||
execute_command(
|
||||
dump_command,
|
||||
shell=True,
|
||||
extra_environment=extra_environment,
|
||||
run_to_completion=False,
|
||||
execute_dump_command(
|
||||
renamed_database,
|
||||
log_prefix,
|
||||
dump_path,
|
||||
(dump_name,),
|
||||
extra_environment,
|
||||
dry_run,
|
||||
dry_run_label,
|
||||
)
|
||||
)
|
||||
else:
|
||||
processes.append(
|
||||
execute_dump_command(
|
||||
database,
|
||||
log_prefix,
|
||||
dump_path,
|
||||
dump_database_names,
|
||||
extra_environment,
|
||||
dry_run,
|
||||
dry_run_label,
|
||||
)
|
||||
)
|
||||
|
||||
return processes
|
||||
return [process for process in processes if process]
|
||||
|
||||
|
||||
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
|
||||
|
@ -151,7 +200,8 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
|
|||
|
||||
database = database_config[0]
|
||||
restore_command = (
|
||||
('mysql', '--batch', '--verbose')
|
||||
('mysql', '--batch')
|
||||
+ (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
|
||||
+ (('--host', database['hostname']) if 'hostname' in database else ())
|
||||
+ (('--port', str(database['port'])) if 'port' in database else ())
|
||||
+ (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
|
||||
|
@ -165,11 +215,12 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
|
|||
if dry_run:
|
||||
return
|
||||
|
||||
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
|
||||
# if the restore paths don't exist in the archive.
|
||||
execute_command_with_processes(
|
||||
restore_command,
|
||||
[extract_process],
|
||||
output_log_level=logging.DEBUG,
|
||||
input_file=extract_process.stdout,
|
||||
extra_environment=extra_environment,
|
||||
borg_local_path=location_config.get('local_path', 'borg'),
|
||||
)
|
||||
|
|
borgmatic/hooks/ntfy.py (new file, 83 lines added)
|
@ -0,0 +1,83 @@
|
|||
import logging
|
||||
|
||||
import requests
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def initialize_monitor(
|
||||
ping_url, config_filename, monitoring_log_level, dry_run
|
||||
): # pragma: no cover
|
||||
'''
|
||||
No initialization is necessary for this monitor.
|
||||
'''
|
||||
pass
|
||||
|
||||
|
||||
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
|
||||
'''
|
||||
Ping the configured Ntfy topic. Use the given configuration filename in any log entries.
|
||||
If this is a dry run, then don't actually ping anything.
|
||||
'''
|
||||
|
||||
run_states = hook_config.get('states', ['fail'])
|
||||
|
||||
if state.name.lower() in run_states:
|
||||
dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''
|
||||
|
||||
state_config = hook_config.get(
|
||||
state.name.lower(),
|
||||
{
|
||||
'title': f'A Borgmatic {state.name} event happened',
|
||||
'message': f'A Borgmatic {state.name} event happened',
|
||||
'priority': 'default',
|
||||
'tags': 'borgmatic',
|
||||
},
|
||||
)
|
||||
|
||||
base_url = hook_config.get('server', 'https://ntfy.sh')
|
||||
topic = hook_config.get('topic')
|
||||
|
||||
logger.info(f'{config_filename}: Pinging ntfy topic {topic}{dry_run_label}')
|
||||
logger.debug(f'{config_filename}: Using Ntfy ping URL {base_url}/{topic}')
|
||||
|
||||
headers = {
|
||||
'X-Title': state_config.get('title'),
|
||||
'X-Message': state_config.get('message'),
|
||||
'X-Priority': state_config.get('priority'),
|
||||
'X-Tags': state_config.get('tags'),
|
||||
}
|
||||
|
||||
username = hook_config.get('username')
|
||||
password = hook_config.get('password')
|
||||
|
||||
auth = None
|
||||
if (username and password) is not None:
|
||||
auth = requests.auth.HTTPBasicAuth(username, password)
|
||||
logger.info(f'{config_filename}: Using basic auth with user {username} for ntfy')
|
||||
elif username is not None:
|
||||
logger.warning(
|
||||
f'{config_filename}: Password missing for ntfy authentication, defaulting to no auth'
|
||||
)
|
||||
elif password is not None:
|
||||
logger.warning(
|
||||
f'{config_filename}: Username missing for ntfy authentication, defaulting to no auth'
|
||||
)
|
||||
|
||||
if not dry_run:
|
||||
logging.getLogger('urllib3').setLevel(logging.ERROR)
|
||||
try:
|
||||
response = requests.post(f'{base_url}/{topic}', headers=headers, auth=auth)
|
||||
if not response.ok:
|
||||
response.raise_for_status()
|
||||
except requests.exceptions.RequestException as error:
|
||||
logger.warning(f'{config_filename}: ntfy error: {error}')
|
||||
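A hypothetical ntfy hook configuration exercising the options read above; the topic, server, and credentials are placeholders:

    hook_config = {
        'topic': 'borgmatic-host1',
        'server': 'https://ntfy.sh',          # also the default when omitted
        'username': 'alice',                  # supply both username and password, or neither
        'password': 'hunter2',
        'states': ['finish', 'fail'],         # default is ['fail']
        'fail': {
            'title': 'Backup failed',
            'message': 'borgmatic reported a failure',
            'priority': 'max',
            'tags': 'rotating_light',
        },
    }
    # For a FAIL state, this posts to https://ntfy.sh/borgmatic-host1 with
    # X-Title/X-Message/X-Priority/X-Tags headers and HTTP basic auth.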
|
||||
|
||||
def destroy_monitor(
|
||||
ping_url_or_uuid, config_filename, monitoring_log_level, dry_run
|
||||
): # pragma: no cover
|
||||
'''
|
||||
No destruction is necessary for this monitor.
|
||||
'''
|
||||
pass
|
|
@ -21,10 +21,10 @@ def initialize_monitor(
|
|||
pass
|
||||
|
||||
|
||||
def ping_monitor(integration_key, config_filename, state, monitoring_log_level, dry_run):
|
||||
def ping_monitor(hook_config, config_filename, state, monitoring_log_level, dry_run):
|
||||
'''
|
||||
If this is an error state, create a PagerDuty event with the given integration key. Use the
|
||||
given configuration filename in any log entries. If this is a dry run, then don't actually
|
||||
If this is an error state, create a PagerDuty event with the configured integration key. Use
|
||||
the given configuration filename in any log entries. If this is a dry run, then don't actually
|
||||
create an event.
|
||||
'''
|
||||
if state != monitor.State.FAIL:
|
||||
|
@ -47,7 +47,7 @@ def ping_monitor(integration_key, config_filename, state, monitoring_log_level,
|
|||
)
|
||||
payload = json.dumps(
|
||||
{
|
||||
'routing_key': integration_key,
|
||||
'routing_key': hook_config['integration_key'],
|
||||
'event_action': 'trigger',
|
||||
'payload': {
|
||||
'summary': 'backup failed on {}'.format(hostname),
|
||||
|
@ -68,7 +68,12 @@ def ping_monitor(integration_key, config_filename, state, monitoring_log_level,
|
|||
logger.debug('{}: Using PagerDuty payload: {}'.format(config_filename, payload))
|
||||
|
||||
logging.getLogger('urllib3').setLevel(logging.ERROR)
|
||||
requests.post(EVENTS_API_URL, data=payload.encode('utf-8'))
|
||||
try:
|
||||
response = requests.post(EVENTS_API_URL, data=payload.encode('utf-8'))
|
||||
if not response.ok:
|
||||
response.raise_for_status()
|
||||
except requests.exceptions.RequestException as error:
|
||||
logger.warning(f'{config_filename}: PagerDuty error: {error}')
|
||||
|
||||
|
||||
def destroy_monitor(
|
||||
|
|
|
@ -1,6 +1,12 @@
|
|||
import csv
|
||||
import logging
|
||||
import os
|
||||
|
||||
from borgmatic.execute import execute_command, execute_command_with_processes
|
||||
from borgmatic.execute import (
|
||||
execute_command,
|
||||
execute_command_and_capture_output,
|
||||
execute_command_with_processes,
|
||||
)
|
||||
from borgmatic.hooks import dump
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
@ -34,6 +40,44 @@ def make_extra_environment(database):
|
|||
return extra
|
||||
|
||||
|
||||
EXCLUDED_DATABASE_NAMES = ('template0', 'template1')
|
||||
|
||||
|
||||
def database_names_to_dump(database, extra_environment, log_prefix, dry_run):
|
||||
'''
|
||||
Given a requested database config, return the corresponding sequence of database names to dump.
|
||||
In the case of "all" when a database format is given, query for the names of databases on the
|
||||
configured host and return them. For "all" without a database format, just return a sequence
|
||||
containing "all".
|
||||
'''
|
||||
requested_name = database['name']
|
||||
|
||||
if requested_name != 'all':
|
||||
return (requested_name,)
|
||||
if not database.get('format'):
|
||||
return ('all',)
|
||||
if dry_run:
|
||||
return ()
|
||||
|
||||
list_command = (
|
||||
('psql', '--list', '--no-password', '--csv', '--tuples-only')
|
||||
+ (('--host', database['hostname']) if 'hostname' in database else ())
|
||||
+ (('--port', str(database['port'])) if 'port' in database else ())
|
||||
+ (('--username', database['username']) if 'username' in database else ())
|
||||
+ (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
|
||||
)
|
||||
logger.debug(f'{log_prefix}: Querying for "all" PostgreSQL databases to dump')
|
||||
list_output = execute_command_and_capture_output(
|
||||
list_command, extra_environment=extra_environment
|
||||
)
|
||||
|
||||
return tuple(
|
||||
row[0]
|
||||
for row in csv.reader(list_output.splitlines(), delimiter=',', quotechar='"')
|
||||
if row[0] not in EXCLUDED_DATABASE_NAMES
|
||||
)
|
||||
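A hedged sketch of a postgresql_databases entry using the new command overrides and "all" handling; the docker wrappers are hypothetical:

    database = {
        'name': 'all',
        'format': 'custom',                                  # with a format, 'all' is expanded per database
        'pg_dump_command': 'docker exec postgres pg_dump',   # hypothetical override
        'psql_command': 'docker exec postgres psql',         # used for listing and the post-restore ANALYZE
    }
    # database_names_to_dump() then runs `psql --list --no-password --csv
    # --tuples-only`, filters out template0/template1, and each remaining name
    # is dumped separately; without a format, it simply returns ('all',).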
|
||||
|
||||
def dump_databases(databases, log_prefix, location_config, dry_run):
|
||||
'''
|
||||
Dump the given PostgreSQL databases to a named pipe. The databases are supplied as a sequence of
|
||||
|
@ -43,6 +87,8 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
|
|||
|
||||
Return a sequence of subprocess.Popen instances for the dump processes ready to spew to a named
|
||||
pipe. But if this is a dry run, then don't actually dump anything and return an empty sequence.
|
||||
|
||||
Raise ValueError if the databases to dump cannot be determined.
|
||||
'''
|
||||
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
|
||||
processes = []
|
||||
|
@ -50,49 +96,65 @@ def dump_databases(databases, log_prefix, location_config, dry_run):
|
|||
logger.info('{}: Dumping PostgreSQL databases{}'.format(log_prefix, dry_run_label))
|
||||
|
||||
for database in databases:
|
||||
name = database['name']
|
||||
extra_environment = make_extra_environment(database)
|
||||
dump_path = make_dump_path(location_config)
|
||||
dump_database_names = database_names_to_dump(
|
||||
database, extra_environment, log_prefix, dry_run
|
||||
)
|
||||
|
||||
if not dump_database_names:
|
||||
if dry_run:
|
||||
continue
|
||||
|
||||
raise ValueError('Cannot find any PostgreSQL databases to dump.')
|
||||
|
||||
for database_name in dump_database_names:
|
||||
dump_format = database.get('format', None if database_name == 'all' else 'custom')
|
||||
default_dump_command = 'pg_dumpall' if database_name == 'all' else 'pg_dump'
|
||||
dump_command = database.get('pg_dump_command') or default_dump_command
|
||||
dump_filename = dump.make_database_dump_filename(
|
||||
make_dump_path(location_config), name, database.get('hostname')
|
||||
dump_path, database_name, database.get('hostname')
|
||||
)
|
||||
all_databases = bool(name == 'all')
|
||||
dump_format = database.get('format', 'custom')
|
||||
if os.path.exists(dump_filename):
|
||||
logger.warning(
|
||||
f'{log_prefix}: Skipping duplicate dump of PostgreSQL database "{database_name}" to {dump_filename}'
|
||||
)
|
||||
continue
|
||||
|
||||
command = (
|
||||
(
|
||||
'pg_dumpall' if all_databases else 'pg_dump',
|
||||
'--no-password',
|
||||
'--clean',
|
||||
'--if-exists',
|
||||
)
|
||||
(dump_command, '--no-password', '--clean', '--if-exists',)
|
||||
+ (('--host', database['hostname']) if 'hostname' in database else ())
|
||||
+ (('--port', str(database['port'])) if 'port' in database else ())
|
||||
+ (('--username', database['username']) if 'username' in database else ())
|
||||
+ (() if all_databases else ('--format', dump_format))
|
||||
+ (('--format', dump_format) if dump_format else ())
|
||||
+ (('--file', dump_filename) if dump_format == 'directory' else ())
|
||||
+ (tuple(database['options'].split(' ')) if 'options' in database else ())
|
||||
+ (() if all_databases else (name,))
|
||||
+ (() if database_name == 'all' else (database_name,))
|
||||
# Use shell redirection rather than the --file flag to sidestep synchronization issues
|
||||
# when pg_dump/pg_dumpall tries to write to a named pipe. But for the directory dump
|
||||
# format in particular, a named destination is required, and redirection doesn't work.
|
||||
+ (('>', dump_filename) if dump_format != 'directory' else ())
|
||||
)
|
||||
extra_environment = make_extra_environment(database)
|
||||
|
||||
logger.debug(
|
||||
'{}: Dumping PostgreSQL database {} to {}{}'.format(
|
||||
log_prefix, name, dump_filename, dry_run_label
|
||||
)
|
||||
f'{log_prefix}: Dumping PostgreSQL database "{database_name}" to {dump_filename}{dry_run_label}'
|
||||
)
|
||||
if dry_run:
|
||||
continue
|
||||
|
||||
if dump_format == 'directory':
|
||||
dump.create_parent_directory_for_dump(dump_filename)
|
||||
execute_command(
|
||||
command, shell=True, extra_environment=extra_environment,
|
||||
)
|
||||
else:
|
||||
dump.create_named_pipe_for_dump(dump_filename)
|
||||
|
||||
processes.append(
|
||||
execute_command(
|
||||
command, shell=True, extra_environment=extra_environment, run_to_completion=False
|
||||
command,
|
||||
shell=True,
|
||||
extra_environment=extra_environment,
|
||||
run_to_completion=False,
|
||||
)
|
||||
)
|
||||
|
||||
|
@ -140,16 +202,19 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
|
|||
dump_filename = dump.make_database_dump_filename(
|
||||
make_dump_path(location_config), database['name'], database.get('hostname')
|
||||
)
|
||||
psql_command = database.get('psql_command') or 'psql'
|
||||
analyze_command = (
|
||||
('psql', '--no-password', '--quiet')
|
||||
(psql_command, '--no-password', '--quiet')
|
||||
+ (('--host', database['hostname']) if 'hostname' in database else ())
|
||||
+ (('--port', str(database['port'])) if 'port' in database else ())
|
||||
+ (('--username', database['username']) if 'username' in database else ())
|
||||
+ (('--dbname', database['name']) if not all_databases else ())
|
||||
+ (tuple(database['analyze_options'].split(' ')) if 'analyze_options' in database else ())
|
||||
+ ('--command', 'ANALYZE')
|
||||
)
|
||||
pg_restore_command = database.get('pg_restore_command') or 'pg_restore'
|
||||
restore_command = (
|
||||
('psql' if all_databases else 'pg_restore', '--no-password')
|
||||
(psql_command if all_databases else pg_restore_command, '--no-password')
|
||||
+ (
|
||||
('--if-exists', '--exit-on-error', '--clean', '--dbname', database['name'])
|
||||
if not all_databases
|
||||
|
@ -158,6 +223,7 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
|
|||
+ (('--host', database['hostname']) if 'hostname' in database else ())
|
||||
+ (('--port', str(database['port'])) if 'port' in database else ())
|
||||
+ (('--username', database['username']) if 'username' in database else ())
|
||||
+ (tuple(database['restore_options'].split(' ')) if 'restore_options' in database else ())
|
||||
+ (() if extract_process else (dump_filename,))
|
||||
)
|
||||
extra_environment = make_extra_environment(database)
|
||||
|
@ -168,12 +234,13 @@ def restore_database_dump(database_config, log_prefix, location_config, dry_run,
|
|||
if dry_run:
|
||||
return
|
||||
|
||||
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
|
||||
# if the restore paths don't exist in the archive.
|
||||
execute_command_with_processes(
|
||||
restore_command,
|
||||
[extract_process] if extract_process else [],
|
||||
output_log_level=logging.DEBUG,
|
||||
input_file=extract_process.stdout if extract_process else None,
|
||||
extra_environment=extra_environment,
|
||||
borg_local_path=location_config.get('local_path', 'borg'),
|
||||
)
|
||||
execute_command(analyze_command, extra_environment=extra_environment)
|
||||
|
|
borgmatic/hooks/sqlite.py (new file, 125 lines added)
|
@ -0,0 +1,125 @@
|
|||
import logging
|
||||
import os
|
||||
|
||||
from borgmatic.execute import execute_command, execute_command_with_processes
|
||||
from borgmatic.hooks import dump
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def make_dump_path(location_config): # pragma: no cover
|
||||
'''
|
||||
Make the dump path from the given location configuration and the name of this hook.
|
||||
'''
|
||||
return dump.make_database_dump_path(
|
||||
location_config.get('borgmatic_source_directory'), 'sqlite_databases'
|
||||
)
|
||||
|
||||
|
||||
def dump_databases(databases, log_prefix, location_config, dry_run):
|
||||
'''
|
||||
Dump the given SQLite3 databases to a file. The databases are supplied as a sequence of
|
||||
configuration dicts, as per the configuration schema. Use the given log prefix in any log
|
||||
entries. Use the given location configuration dict to construct the destination path. If this
|
||||
is a dry run, then don't actually dump anything.
|
||||
'''
|
||||
dry_run_label = ' (dry run; not actually dumping anything)' if dry_run else ''
|
||||
processes = []
|
||||
|
||||
logger.info('{}: Dumping SQLite databases{}'.format(log_prefix, dry_run_label))
|
||||
|
||||
for database in databases:
|
||||
database_path = database['path']
|
||||
|
||||
if database['name'] == 'all':
|
||||
logger.warning('The "all" database name has no meaning for SQLite3 databases')
|
||||
if not os.path.exists(database_path):
|
||||
logger.warning(
|
||||
f'{log_prefix}: No SQLite database at {database_path}; an empty database will be created and dumped'
|
||||
)
|
||||
|
||||
dump_path = make_dump_path(location_config)
|
||||
dump_filename = dump.make_database_dump_filename(dump_path, database['name'])
|
||||
if os.path.exists(dump_filename):
|
||||
logger.warning(
|
||||
f'{log_prefix}: Skipping duplicate dump of SQLite database at {database_path} to {dump_filename}'
|
||||
)
|
||||
continue
|
||||
|
||||
command = (
|
||||
'sqlite3',
|
||||
database_path,
|
||||
'.dump',
|
||||
'>',
|
||||
dump_filename,
|
||||
)
|
||||
logger.debug(
|
||||
f'{log_prefix}: Dumping SQLite database at {database_path} to {dump_filename}{dry_run_label}'
|
||||
)
|
||||
if dry_run:
|
||||
continue
|
||||
|
||||
dump.create_parent_directory_for_dump(dump_filename)
|
||||
processes.append(execute_command(command, shell=True, run_to_completion=False))
|
||||
|
||||
return processes
|
||||
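A hypothetical sqlite_databases entry for the new hook; the dump destination shown is only illustrative:

    databases = [
        {'name': 'users', 'path': '/var/lib/app/users.db'},
    ]
    # For each entry, dump_databases() effectively runs (via a shell):
    #   sqlite3 /var/lib/app/users.db .dump > <borgmatic dump path>/users
    # producing a plain-text SQL dump file for Borg to back up, rather than the
    # named pipes used by the other database hooks.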
|
||||
|
||||
def remove_database_dumps(databases, log_prefix, location_config, dry_run): # pragma: no cover
|
||||
'''
|
||||
Remove the given SQLite3 database dumps from the filesystem. The databases are supplied as a
|
||||
sequence of configuration dicts, as per the configuration schema. Use the given log prefix in
|
||||
any log entries. Use the given location configuration dict to construct the destination path.
|
||||
If this is a dry run, then don't actually remove anything.
|
||||
'''
|
||||
dump.remove_database_dumps(make_dump_path(location_config), 'SQLite', log_prefix, dry_run)
|
||||
|
||||
|
||||
def make_database_dump_pattern(
|
||||
databases, log_prefix, location_config, name=None
|
||||
): # pragma: no cover
|
||||
'''
|
||||
Make a pattern that matches the given SQLite3 databases. The databases are supplied as a
|
||||
sequence of configuration dicts, as per the configuration schema.
|
||||
'''
|
||||
return dump.make_database_dump_filename(make_dump_path(location_config), name)
|
||||
|
||||
|
||||
def restore_database_dump(database_config, log_prefix, location_config, dry_run, extract_process):
|
||||
'''
|
||||
Restore the given SQLite3 database from an extract stream. The database is supplied as a
|
||||
one-element sequence containing a dict describing the database, as per the configuration schema.
|
||||
Use the given log prefix in any log entries. If this is a dry run, then don't actually restore
|
||||
anything. Trigger the given active extract process (an instance of subprocess.Popen) to produce
|
||||
output to consume.
|
||||
'''
|
||||
dry_run_label = ' (dry run; not actually restoring anything)' if dry_run else ''
|
||||
|
||||
if len(database_config) != 1:
|
||||
raise ValueError('The database configuration value is invalid')
|
||||
|
||||
database_path = database_config[0]['path']
|
||||
|
||||
logger.debug(f'{log_prefix}: Restoring SQLite database at {database_path}{dry_run_label}')
|
||||
if dry_run:
|
||||
return
|
||||
|
||||
try:
|
||||
os.remove(database_path)
|
||||
logger.warning(f'{log_prefix}: Removed existing SQLite database at {database_path}')
|
||||
except FileNotFoundError: # pragma: no cover
|
||||
pass
|
||||
|
||||
restore_command = (
|
||||
'sqlite3',
|
||||
database_path,
|
||||
)
|
||||
|
||||
# Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
|
||||
# if the restore paths don't exist in the archive.
|
||||
execute_command_with_processes(
|
||||
restore_command,
|
||||
[extract_process],
|
||||
output_log_level=logging.DEBUG,
|
||||
input_file=extract_process.stdout,
|
||||
)
|
|
@ -1,4 +1,5 @@
|
|||
import logging
|
||||
import logging.handlers
|
||||
import os
|
||||
import sys
|
||||
|
||||
|
@ -84,18 +85,19 @@ class Multi_stream_handler(logging.Handler):
|
|||
handler.setLevel(level)
|
||||
|
||||
|
||||
LOG_LEVEL_TO_COLOR = {
|
||||
class Console_color_formatter(logging.Formatter):
|
||||
def format(self, record):
|
||||
add_custom_log_levels()
|
||||
|
||||
color = {
|
||||
logging.CRITICAL: colorama.Fore.RED,
|
||||
logging.ERROR: colorama.Fore.RED,
|
||||
logging.WARN: colorama.Fore.YELLOW,
|
||||
logging.ANSWER: colorama.Fore.MAGENTA,
|
||||
logging.INFO: colorama.Fore.GREEN,
|
||||
logging.DEBUG: colorama.Fore.CYAN,
|
||||
}
|
||||
}.get(record.levelno)
|
||||
|
||||
|
||||
class Console_color_formatter(logging.Formatter):
|
||||
def format(self, record):
|
||||
color = LOG_LEVEL_TO_COLOR.get(record.levelno)
|
||||
return color_text(color, record.msg)
|
||||
|
||||
|
||||
|
@ -109,6 +111,45 @@ def color_text(color, message):
|
|||
return '{}{}{}'.format(color, message, colorama.Style.RESET_ALL)
|
||||
|
||||
|
||||
def add_logging_level(level_name, level_number):
|
||||
'''
|
||||
Globally add a custom logging level based on the given (all uppercase) level name and number.
|
||||
Do this idempotently.
|
||||
|
||||
Inspired by https://stackoverflow.com/questions/2183233/how-to-add-a-custom-loglevel-to-pythons-logging-facility/35804945#35804945
|
||||
'''
|
||||
method_name = level_name.lower()
|
||||
|
||||
if not hasattr(logging, level_name):
|
||||
logging.addLevelName(level_number, level_name)
|
||||
setattr(logging, level_name, level_number)
|
||||
|
||||
if not hasattr(logging, method_name):
|
||||
|
||||
def log_for_level(self, message, *args, **kwargs): # pragma: no cover
|
||||
if self.isEnabledFor(level_number):
|
||||
self._log(level_number, message, args, **kwargs)
|
||||
|
||||
setattr(logging.getLoggerClass(), method_name, log_for_level)
|
||||
|
||||
if not hasattr(logging.getLoggerClass(), method_name):
|
||||
|
||||
def log_to_root(message, *args, **kwargs): # pragma: no cover
|
||||
logging.log(level_number, message, *args, **kwargs)
|
||||
|
||||
setattr(logging, method_name, log_to_root)
|
||||
|
||||
|
||||
ANSWER = logging.WARN - 5
|
||||
|
||||
|
||||
def add_custom_log_levels(): # pragma: no cover
|
||||
'''
|
||||
Add a custom log level between WARN and INFO for user-requested answers.
|
||||
'''
|
||||
add_logging_level('ANSWER', ANSWER)
|
||||
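A quick sketch of the new ANSWER level in use; the logger name is arbitrary:

    import logging
    from borgmatic.logger import add_custom_log_levels

    add_custom_log_levels()
    assert logging.ANSWER == logging.WARN - 5    # i.e. 25, between WARNING and INFO

    logger = logging.getLogger('example')
    logger.answer('archive listing requested by the user')   # method added by add_logging_level()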
|
||||
|
||||
def configure_logging(
|
||||
console_log_level,
|
||||
syslog_log_level=None,
|
||||
|
@ -129,6 +170,8 @@ def configure_logging(
|
|||
if monitoring_log_level is None:
|
||||
monitoring_log_level = console_log_level
|
||||
|
||||
add_custom_log_levels()
|
||||
|
||||
# Log certain log levels to console stderr and others to stdout. This supports use cases like
|
||||
# grepping (non-error) output.
|
||||
console_error_handler = logging.StreamHandler(sys.stderr)
|
||||
|
@ -137,7 +180,8 @@ def configure_logging(
|
|||
{
|
||||
logging.CRITICAL: console_error_handler,
|
||||
logging.ERROR: console_error_handler,
|
||||
logging.WARN: console_standard_handler,
|
||||
logging.WARN: console_error_handler,
|
||||
logging.ANSWER: console_standard_handler,
|
||||
logging.INFO: console_standard_handler,
|
||||
logging.DEBUG: console_standard_handler,
|
||||
}
|
||||
|
@ -151,6 +195,8 @@ def configure_logging(
|
|||
syslog_path = '/dev/log'
|
||||
elif os.path.exists('/var/run/syslog'):
|
||||
syslog_path = '/var/run/syslog'
|
||||
elif os.path.exists('/var/run/log'):
|
||||
syslog_path = '/var/run/log'
|
||||
|
||||
if syslog_path and not interactive_console():
|
||||
syslog_handler = logging.handlers.SysLogHandler(address=syslog_path)
|
||||
|
|
|
@ -1,23 +1,34 @@
|
|||
import logging
|
||||
import os
|
||||
import signal
|
||||
import sys
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _handle_signal(signal_number, frame): # pragma: no cover
|
||||
EXIT_CODE_FROM_SIGNAL = 128
|
||||
|
||||
|
||||
def handle_signal(signal_number, frame):
|
||||
'''
|
||||
Send the signal to all processes in borgmatic's process group, which includes child processes.
|
||||
'''
|
||||
# Prevent infinite signal handler recursion. If the parent frame is this very same handler
|
||||
# function, we know we're recursing.
|
||||
if frame.f_back.f_code.co_name == _handle_signal.__name__:
|
||||
if frame.f_back.f_code.co_name == handle_signal.__name__:
|
||||
return
|
||||
|
||||
os.killpg(os.getpgrp(), signal_number)
|
||||
|
||||
if signal_number == signal.SIGTERM:
|
||||
logger.critical('Exiting due to TERM signal')
|
||||
sys.exit(EXIT_CODE_FROM_SIGNAL + signal.SIGTERM)
|
||||
|
||||
def configure_signals(): # pragma: no cover
|
||||
|
||||
def configure_signals():
|
||||
'''
|
||||
Configure borgmatic's signal handlers to pass relevant signals through to any child processes
|
||||
like Borg. Note that SIGINT gets passed through even without these changes.
|
||||
'''
|
||||
for signal_number in (signal.SIGHUP, signal.SIGTERM, signal.SIGUSR1, signal.SIGUSR2):
|
||||
signal.signal(signal_number, _handle_signal)
|
||||
signal.signal(signal_number, handle_signal)
|
||||
|
|
|
@ -1,7 +1,9 @@
|
|||
import logging
|
||||
|
||||
import borgmatic.logger
|
||||
|
||||
VERBOSITY_ERROR = -1
|
||||
VERBOSITY_WARNING = 0
|
||||
VERBOSITY_ANSWER = 0
|
||||
VERBOSITY_SOME = 1
|
||||
VERBOSITY_LOTS = 2
|
||||
|
||||
|
@ -10,9 +12,11 @@ def verbosity_to_log_level(verbosity):
|
|||
'''
|
||||
Given a borgmatic verbosity value, return the corresponding Python log level.
|
||||
'''
|
||||
borgmatic.logger.add_custom_log_levels()
|
||||
|
||||
return {
|
||||
VERBOSITY_ERROR: logging.ERROR,
|
||||
VERBOSITY_WARNING: logging.WARNING,
|
||||
VERBOSITY_ANSWER: logging.ANSWER,
|
||||
VERBOSITY_SOME: logging.INFO,
|
||||
VERBOSITY_LOTS: logging.DEBUG,
|
||||
}.get(verbosity, logging.WARNING)
|
||||
|
|
|
@ -1,13 +1,14 @@
|
|||
FROM python:3.8.1-alpine3.11 as borgmatic
|
||||
FROM alpine:3.17.1 as borgmatic
|
||||
|
||||
COPY . /app
|
||||
RUN apk add --no-cache py3-pip py3-ruamel.yaml py3-ruamel.yaml.clib
|
||||
RUN pip install --no-cache /app && generate-borgmatic-config && chmod +r /etc/borgmatic/config.yaml
|
||||
RUN borgmatic --help > /command-line.txt \
|
||||
&& for action in init prune create check extract mount umount restore list info; do \
|
||||
&& for action in rcreate transfer create prune compact check extract export-tar mount umount restore rlist list rinfo info break-lock borg; do \
|
||||
echo -e "\n--------------------------------------------------------------------------------\n" >> /command-line.txt \
|
||||
&& borgmatic "$action" --help >> /command-line.txt; done
|
||||
|
||||
FROM node:13.7.0-alpine as html
|
||||
FROM node:19.5.0-alpine as html
|
||||
|
||||
ARG ENVIRONMENT=production
|
||||
|
||||
|
@ -26,7 +27,7 @@ COPY . /source
|
|||
RUN NODE_ENV=${ENVIRONMENT} npx eleventy --input=/source/docs --output=/output/docs \
|
||||
&& mv /output/docs/index.html /output/index.html
|
||||
|
||||
FROM nginx:1.16.1-alpine
|
||||
FROM nginx:1.22.1-alpine
|
||||
|
||||
COPY --from=html /output /usr/share/nginx/html
|
||||
COPY --from=borgmatic /etc/borgmatic/config.yaml /usr/share/nginx/html/docs/reference/config.yaml
|
||||
|
|
|
@ -63,11 +63,6 @@
|
|||
top: -2px;
|
||||
bottom: 2px;
|
||||
}
|
||||
@media (prefers-color-scheme: dark) {
|
||||
.inlinelist .inlinelist-item code:before {
|
||||
border-left-color: rgba(0,0,0,.8);
|
||||
}
|
||||
}
|
||||
}
|
||||
a.buzzword {
|
||||
text-decoration: underline;
|
||||
|
@ -91,26 +86,9 @@ a.buzzword {
|
|||
.buzzword {
|
||||
background-color: #f7f7f7;
|
||||
}
|
||||
@media (prefers-color-scheme: dark) {
|
||||
.buzzword-list li,
|
||||
.buzzword {
|
||||
background-color: #080808;
|
||||
}
|
||||
}
|
||||
.inlinelist .inlinelist-item {
|
||||
background-color: #e9e9e9;
|
||||
}
|
||||
@media (prefers-color-scheme: dark) {
|
||||
.inlinelist .inlinelist-item {
|
||||
background-color: #000;
|
||||
}
|
||||
.inlinelist .inlinelist-item a {
|
||||
color: #fff;
|
||||
}
|
||||
.inlinelist .inlinelist-item code {
|
||||
color: inherit;
|
||||
}
|
||||
}
|
||||
.inlinelist .inlinelist-item:hover,
|
||||
.inlinelist .inlinelist-item:focus,
|
||||
.buzzword-list li:hover,
|
||||
|
@ -217,12 +195,6 @@ main p a.buzzword {
|
|||
height: 1.75em;
|
||||
font-weight: 600;
|
||||
}
|
||||
@media (prefers-color-scheme: dark) {
|
||||
.numberflag {
|
||||
background-color: #00bcd4;
|
||||
color: #222;
|
||||
}
|
||||
}
|
||||
h1 .numberflag,
|
||||
h2 .numberflag,
|
||||
h3 .numberflag,
|
||||
|
@ -244,11 +216,6 @@ h2 .numberflag:after {
|
|||
background-color: #fff;
|
||||
width: calc(100% + 0.4em); /* 16px /40 */
|
||||
}
|
||||
@media (prefers-color-scheme: dark) {
|
||||
h2 .numberflag:after {
|
||||
background-color: #222;
|
||||
}
|
||||
}
|
||||
|
||||
/* Super featured list on home page */
|
||||
.list-superfeatured .avatar {
|
||||
|
|
|
@ -12,16 +12,6 @@
|
|||
line-height: 1.285714285714; /* 18px /14 */
|
||||
font-family: system-ui, -apple-system, sans-serif;
|
||||
}
|
||||
@media (prefers-color-scheme: dark) {
|
||||
.minilink {
|
||||
background-color: #222;
|
||||
/*
|
||||
!important to override .elv-callout a
|
||||
see _includes/components/callout.css
|
||||
*/
|
||||
color: #fff !important;
|
||||
}
|
||||
}
|
||||
table .minilink {
|
||||
margin-top: 6px;
|
||||
}
|
||||
|
@ -32,12 +22,6 @@ table .minilink {
|
|||
.minilink[href]:focus {
|
||||
background-color: #bbb;
|
||||
}
|
||||
@media (prefers-color-scheme: dark) {
|
||||
.minilink[href]:hover,
|
||||
.minilink[href]:focus {
|
||||
background-color: #444;
|
||||
}
|
||||
}
|
||||
pre + .minilink {
|
||||
color: #fff;
|
||||
border-radius: 0 0 0.2857142857143em 0.2857142857143em; /* 4px /14 */
|
||||
|
@ -74,11 +58,6 @@ h4 .minilink {
|
|||
text-transform: none;
|
||||
box-shadow: 0 0 0 1px rgba(0,0,0,0.3);
|
||||
}
|
||||
@media (prefers-color-scheme: dark) {
|
||||
.minilink-addedin {
|
||||
box-shadow: 0 0 0 1px rgba(255,255,255,0.3);
|
||||
}
|
||||
}
|
||||
.minilink-addedin:not(:first-child) {
|
||||
margin-left: .5em;
|
||||
}
|
||||
|
|
|
@ -1,18 +0,0 @@
|
|||
#suggestion-form textarea {
|
||||
font-family: sans-serif;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
#suggestion-form label {
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
#suggestion-form input[type=email] {
|
||||
font-size: 16px;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
#suggestion-form .form-error {
|
||||
color: red;
|
||||
}
|
||||
|
|
@ -1,33 +0,0 @@
|
|||
<h2>Improve this documentation</h2>
|
||||
|
||||
<p>Have an idea on how to make this documentation even better? Send your
|
||||
feedback below! But if you need help with borgmatic, or have an idea for a
|
||||
borgmatic feature, please use our <a href="https://torsion.org/borgmatic/#issues">issue
|
||||
tracker</a> instead.</p>
|
||||
|
||||
<form id="suggestion-form">
|
||||
<div><label for="suggestion">Documentation suggestion</label></div>
|
||||
<textarea id="suggestion" rows="8" cols="60" name="suggestion"></textarea>
|
||||
<div data-sk-error="suggestion" class="form-error"></div>
|
||||
<input id="_page" type="hidden" name="_page">
|
||||
<input id="_subject" type="hidden" name="_subject" value="borgmatic documentation suggestion">
|
||||
<br />
|
||||
<label for="email">Email address</label>
|
||||
<div><input id="email" type="email" name="email" placeholder="Only required if you want a response!"></div>
|
||||
<div data-sk-error="email" class="form-error"></div>
|
||||
<br />
|
||||
<div><button type="submit">Send</button></div>
|
||||
<br />
|
||||
</form>
|
||||
|
||||
<script>
|
||||
document.getElementById('_page').value = window.location.href;
|
||||
window.sk=window.sk||function(){(sk.q=sk.q||[]).push(arguments)};
|
||||
|
||||
sk('form', 'init', {
|
||||
id: '1d536680ab96',
|
||||
element: '#suggestion-form'
|
||||
});
|
||||
</script>
|
||||
|
||||
<script defer src="https://js.statickit.com/statickit.js"></script>
|
5
docs/_includes/components/suggestion-link.html
Normal file
|
@ -0,0 +1,5 @@
|
|||
<h2>Improve this documentation</h2>
|
||||
|
||||
<p>Have an idea on how to make this documentation even better? Use our <a
|
||||
href="https://projects.torsion.org/borgmatic-collective/borgmatic/issues">issue tracker</a> to send your
|
||||
feedback!</p>
|
|
@ -79,22 +79,11 @@
|
|||
border-bottom: 1px solid #ddd;
|
||||
margin-bottom: 0.25em; /* 4px /16 */
|
||||
}
|
||||
@media (prefers-color-scheme: dark) {
|
||||
.elv-toc-list > li > a {
|
||||
color: #fff;
|
||||
border-color: #444;
|
||||
}
|
||||
}
|
||||
|
||||
/* Active links */
|
||||
.elv-toc-list li.elv-toc-active > a {
|
||||
background-color: #dff7ff;
|
||||
}
|
||||
@media (prefers-color-scheme: dark) {
|
||||
.elv-toc-list li.elv-toc-active > a {
|
||||
background-color: #353535;
|
||||
}
|
||||
}
|
||||
.elv-toc-list ul .elv-toc-active > a:after {
|
||||
content: "";
|
||||
}
|
||||
|
|
|
@ -258,6 +258,7 @@ footer.elv-layout {
|
|||
/* Header */
|
||||
.elv-header {
|
||||
position: relative;
|
||||
text-align: center;
|
||||
}
|
||||
.elv-header-default {
|
||||
display: flex;
|
||||
|
@ -284,11 +285,6 @@ footer.elv-layout {
|
|||
.elv-hero {
|
||||
background-color: #222;
|
||||
}
|
||||
@media (prefers-color-scheme: dark) {
|
||||
.elv-hero {
|
||||
background-color: #292929;
|
||||
}
|
||||
}
|
||||
.elv-hero img,
|
||||
.elv-hero svg {
|
||||
width: 42.95774646vh;
|
||||
|
@ -529,3 +525,11 @@ main .elv-toc + h1 .direct-link {
|
|||
display: none ;
|
||||
}
|
||||
}
|
||||
|
||||
.header-anchor {
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
.header-anchor:hover::after {
|
||||
content: " 🔗";
|
||||
}
|
||||
|
|
|
@ -11,7 +11,6 @@
|
|||
{% include 'components/minilink.css' %}
|
||||
{% include 'components/toc.css' %}
|
||||
{% include 'components/info-blocks.css' %}
|
||||
{% include 'components/suggestion-form.css' %}
|
||||
{% include 'prism-theme.css' %}
|
||||
{% include 'asciinema.css' %}
|
||||
{% endset %}
|
||||
|
|
|
@ -28,5 +28,5 @@ headerClass: elv-header-default
|
|||
|
||||
{{ content | safe }}
|
||||
|
||||
{% include 'components/suggestion-form.html' %}
|
||||
{% include 'components/suggestion-link.html' %}
|
||||
</main>
|
||||
|
|
|
@ -1,17 +1,18 @@
|
|||
---
|
||||
title: How to add preparation and cleanup steps to backups
|
||||
eleventyNavigation:
|
||||
key: Add preparation and cleanup steps
|
||||
key: 🧹 Add preparation and cleanup steps
|
||||
parent: How-to guides
|
||||
order: 8
|
||||
order: 9
|
||||
---
|
||||
## Preparation and cleanup hooks
|
||||
|
||||
If you find yourself performing prepraration tasks before your backup runs, or
|
||||
If you find yourself performing preparation tasks before your backup runs, or
|
||||
cleanup work afterwards, borgmatic hooks may be of interest. Hooks are shell
|
||||
commands that borgmatic executes for you at various points, and they're
|
||||
configured in the `hooks` section of your configuration file. But if you're
|
||||
looking to backup a database, it's probably easier to use the [database backup
|
||||
commands that borgmatic executes for you at various points as it runs, and
|
||||
they're configured in the `hooks` section of your configuration file. But if
|
||||
you're looking to backup a database, it's probably easier to use the [database
|
||||
backup
|
||||
feature](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
|
||||
instead.
|
||||
|
||||
|
@ -27,15 +28,52 @@ hooks:
|
|||
- umount /some/filesystem
|
||||
```
|
||||
|
||||
The `before_backup` and `after_backup` hooks each run once per configuration
|
||||
file. `before_backup` hooks run prior to backups of all repositories in a
|
||||
configuration file, right before the `create` action. `after_backup` hooks run
|
||||
afterwards, but not if an error occurs in a previous hook or in the backups
|
||||
themselves.
|
||||
<span class="minilink minilink-addedin">New in version 1.6.0</span> The
|
||||
`before_backup` and `after_backup` hooks each run once per repository in a
|
||||
configuration file. `before_backup` hooks run right before the `create`
|
||||
action for a particular repository, and `after_backup` hooks run afterwards,
|
||||
but not if an error occurs in a previous hook or in the backups themselves.
|
||||
(Prior to borgmatic 1.6.0, these hooks instead ran once per configuration file
|
||||
rather than once per repository.)
|
||||
|
||||
There are additional hooks for the `prune` and `check` actions as well.
|
||||
`before_prune` and `after_prune` run if there are any `prune` actions, while
|
||||
`before_check` and `after_check` run if there are any `check` actions.
|
||||
There are additional hooks that run before/after other actions as well. For
|
||||
instance, `before_prune` runs before a `prune` action for a repository, while
|
||||
`after_prune` runs after it.
|
||||
|
||||
<span class="minilink minilink-addedin">New in version 1.7.0</span> The
|
||||
`before_actions` and `after_actions` hooks run before/after all the actions
|
||||
(like `create`, `prune`, etc.) for each repository. These hooks are a good
|
||||
place to run per-repository steps like mounting/unmounting a remote
|
||||
filesystem.
|
||||
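As a rough sketch, where the mount commands are just placeholders for whatever per-repository setup and teardown you need:

```yaml
hooks:
    before_actions:
        - mount /some/filesystem
    after_actions:
        - umount /some/filesystem
```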
|
||||
|
||||
## Variable interpolation
|
||||
|
||||
The before and after action hooks support interpolating particular runtime
|
||||
variables into the hook command. Here's an example that assumes you provide a
|
||||
separate shell script:
|
||||
|
||||
```yaml
|
||||
hooks:
|
||||
after_prune:
|
||||
- record-prune.sh "{configuration_filename}" "{repository}"
|
||||
```
|
||||
|
||||
In this example, when the hook is triggered, borgmatic interpolates runtime
|
||||
values into the hook command: the borgmatic configuration filename and the
|
||||
paths of the current Borg repository. Here's the full set of supported
|
||||
variables you can use here:
|
||||
|
||||
* `configuration_filename`: borgmatic configuration filename in which the
|
||||
hook was defined
|
||||
* `repository`: path of the current repository as configured in the current
|
||||
borgmatic configuration file
|
||||
|
||||
Note that you can also interpolate in [arbitrary environment
|
||||
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
|
||||
|
||||
|
||||
## Global hooks
|
||||
|
||||
You can also use `before_everything` and `after_everything` hooks to perform
|
||||
global setup or cleanup:
|
||||
|
@ -58,6 +96,8 @@ but only if there is a `create` action. It runs even if an error occurs during
|
|||
a backup or a backup hook, but not if an error occurs during a
|
||||
`before_everything` hook.
|
||||
|
||||
## Error hooks
|
||||
|
||||
borgmatic also runs `on_error` hooks if an error occurs, either when creating
|
||||
a backup or running a backup hook. See the [monitoring and alerting
|
||||
documentation](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/)
|
||||
|
|
|
@ -1,9 +1,9 @@
|
|||
---
|
||||
title: How to backup to a removable drive or an intermittent server
|
||||
eleventyNavigation:
|
||||
key: Backup to a removable drive or server
|
||||
key: 💾 Backup to a removable drive/server
|
||||
parent: How-to guides
|
||||
order: 9
|
||||
order: 10
|
||||
---
|
||||
## Occasional backups
|
||||
|
||||
|
@ -16,9 +16,14 @@ But if you run borgmatic and your hard drive isn't plugged in, or your buddy's
|
|||
server is offline, then you'll get an annoying error message and the overall
|
||||
borgmatic run will fail (even if individual repositories still complete).
|
||||
|
||||
Another variant is when the source machine is only sometimes available for
|
||||
backups, e.g. a laptop where you want to skip backups when the battery falls
|
||||
below a certain level.
|
||||
|
||||
So what if you want borgmatic to swallow the error of a missing drive
|
||||
or an offline server, and continue trucking along? That's where the concept of
|
||||
"soft failure" come in.
|
||||
or an offline server or a low battery—and exit gracefully? That's where the
|
||||
concept of "soft failure" come in.
|
||||
|
||||
|
||||
## Soft failure command hooks
|
||||
|
||||
|
@ -63,6 +68,9 @@ borgmatic. borgmatic logs the soft failure, skips all further actions in that
|
|||
configuration file, and proceeds onward to any other borgmatic configuration
|
||||
files you may have.
|
||||
|
||||
Note that `before_backup` only runs on the `create` action. See below about
|
||||
optionally using `before_actions` instead.
|
||||
|
||||
You can imagine a similar check for the sometimes-online server case:
|
||||
|
||||
```yaml
|
||||
|
@ -71,13 +79,30 @@ location:
|
|||
- /home
|
||||
|
||||
repositories:
|
||||
- me@buddys-server.org:backup.borg
|
||||
- ssh://me@buddys-server.org/./backup.borg
|
||||
|
||||
hooks:
|
||||
before_backup:
|
||||
- ping -q -c 1 buddys-server.org > /dev/null || exit 75
|
||||
```
|
||||
|
||||
Or to only run backups if the battery level is high enough:
|
||||
|
||||
```yaml
|
||||
hooks:
|
||||
before_backup:
|
||||
- is_battery_percent_at_least.sh 25
|
||||
```
|
||||
|
||||
(Writing the battery script is left as an exercise to the reader.)
|
||||
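If you'd like a starting point anyway, here's one possible sketch for Linux. It assumes a battery exposed at `/sys/class/power_supply/BAT0` and relies on exit code 75 to signal a soft failure to borgmatic:

```bash
#!/bin/sh
# Hypothetical is_battery_percent_at_least.sh: exit 75 (soft failure) if the
# battery level is below the given percentage.
minimum="${1:-25}"
capacity_file=/sys/class/power_supply/BAT0/capacity

# No battery present (e.g. a desktop)? Don't block the backup.
[ -r "$capacity_file" ] || exit 0

capacity="$(cat "$capacity_file")"
[ "$capacity" -ge "$minimum" ] || exit 75
```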
|
||||
<span class="minilink minilink-addedin">New in version 1.7.0</span> The
|
||||
`before_actions` and `after_actions` hooks run before/after all the actions
|
||||
(like `create`, `prune`, etc.) for each repository. So if you'd like your soft
|
||||
failure command hook to run regardless of action, consider using
|
||||
`before_actions` instead of `before_backup`.
|
||||
|
||||
|
||||
## Caveats and details
|
||||
|
||||
There are some caveats you should be aware of with this feature.
|
||||
|
@ -99,6 +124,6 @@ There are some caveats you should be aware of with this feature.
|
|||
* The soft failure doesn't have to apply to a repository. You can even perform
|
||||
a test to make sure that individual source directories are mounted and
|
||||
available. Use your imagination!
|
||||
* The soft failure feature also works for `before_prune`, `after_prune`,
|
||||
`before_check`, and `after_check` hooks. But it is not implemented for
|
||||
`before_everything` or `after_everything`.
|
||||
* The soft failure feature also works for before/after hooks for other
|
||||
actions as well. But it is not implemented for `before_everything` or
|
||||
`after_everything`.
|
||||
|
|
|
@ -1,9 +1,9 @@
|
|||
---
|
||||
title: How to backup your databases
|
||||
eleventyNavigation:
|
||||
key: Backup your databases
|
||||
key: 🗄️ Backup your databases
|
||||
parent: How-to guides
|
||||
order: 7
|
||||
order: 8
|
||||
---
|
||||
## Database dump hooks
|
||||
|
||||
|
@ -15,7 +15,7 @@ consistent snapshot that is more suited for backups.
|
|||
|
||||
Fortunately, borgmatic includes built-in support for creating database dumps
|
||||
prior to running backups. For example, here is everything you need to dump and
|
||||
backup a couple of local PostgreSQL databases and a MySQL/MariaDB database:
|
||||
backup a couple of local PostgreSQL databases and a MySQL/MariaDB database.
|
||||
|
||||
```yaml
|
||||
hooks:
|
||||
|
@ -26,10 +26,31 @@ hooks:
|
|||
- name: posts
|
||||
```
|
||||
|
||||
<span class="minilink minilink-addedin">New in version 1.5.22</span> You can
|
||||
also dump MongoDB databases. For example:
|
||||
|
||||
```yaml
|
||||
hooks:
|
||||
mongodb_databases:
|
||||
- name: messages
|
||||
```
|
||||
|
||||
<span class="minilink minilink-addedin">New in version 1.7.9</span>
|
||||
Additionally, you can dump SQLite databases. For example:
|
||||
|
||||
```yaml
|
||||
hooks:
|
||||
sqlite_databases:
|
||||
- name: mydb
|
||||
path: /var/lib/sqlite3/mydb.sqlite
|
||||
```
|
||||
|
||||
As part of each backup, borgmatic streams a database dump for each configured
|
||||
database directly to Borg, so it's included in the backup without consuming
|
||||
additional disk space. (The one exception is PostgreSQL's "directory" dump
|
||||
format, which can't stream and therefore does consume temporary disk space.)
|
||||
additional disk space. (The exceptions are the PostgreSQL/MongoDB "directory"
|
||||
dump formats, which can't stream and therefore do consume temporary disk
|
||||
space. Additionally, prior to borgmatic 1.5.3, all database dumps consumed
|
||||
temporary disk space.)
|
||||
|
||||
To support this, borgmatic creates temporary named pipes in `~/.borgmatic` by
|
||||
default. To customize this path, set the `borgmatic_source_directory` option
|
||||
|
@ -47,6 +68,8 @@ hooks:
|
|||
postgresql_databases:
|
||||
- name: users
|
||||
hostname: database1.example.org
|
||||
- name: orders
|
||||
hostname: database2.example.org
|
||||
port: 5433
|
||||
username: postgres
|
||||
password: trustsome1
|
||||
|
@ -54,13 +77,32 @@ hooks:
|
|||
options: "--role=someone"
|
||||
mysql_databases:
|
||||
- name: posts
|
||||
hostname: database2.example.org
|
||||
hostname: database3.example.org
|
||||
port: 3307
|
||||
username: root
|
||||
password: trustsome1
|
||||
options: "--skip-comments"
|
||||
mongodb_databases:
|
||||
- name: messages
|
||||
hostname: database4.example.org
|
||||
port: 27018
|
||||
username: dbuser
|
||||
password: trustsome1
|
||||
authentication_database: mongousers
|
||||
options: "--ssl"
|
||||
sqlite_databases:
|
||||
- name: mydb
|
||||
path: /var/lib/sqlite3/mydb.sqlite
|
||||
```
|
||||
|
||||
See your [borgmatic configuration
|
||||
file](https://torsion.org/borgmatic/docs/reference/configuration/) for
|
||||
additional customization of the options passed to database commands (when
|
||||
listing databases, restoring databases, etc.).
|
||||
|
||||
|
||||
### All databases
|
||||
|
||||
If you want to dump all databases on a host, use `all` for the database name:
|
||||
|
||||
```yaml
|
||||
|
@ -69,13 +111,39 @@ hooks:
|
|||
- name: all
|
||||
mysql_databases:
|
||||
- name: all
|
||||
mongodb_databases:
|
||||
- name: all
|
||||
```
|
||||
|
||||
Note that you may need to use a `username` of the `postgres` superuser for
|
||||
this to work with PostgreSQL.
|
||||
|
||||
If you would like to backup databases only and not source directories, you can
|
||||
specify an empty `source_directories` value because it is a mandatory field:
|
||||
The SQLite hook in particular does not consider "all" a special database name.
|
||||
|
||||
<span class="minilink minilink-addedin">New in version 1.7.6</span> With
|
||||
PostgreSQL and MySQL, you can optionally dump "all" databases to separate
|
||||
files instead of one combined dump file, allowing more convenient restores of
|
||||
individual databases. Enable this by specifying your desired database dump
|
||||
`format`:
|
||||
|
||||
```yaml
|
||||
hooks:
|
||||
postgresql_databases:
|
||||
- name: all
|
||||
format: custom
|
||||
mysql_databases:
|
||||
- name: all
|
||||
format: sql
|
||||
```
|
||||
|
||||
### No source directories
|
||||
|
||||
<span class="minilink minilink-addedin">New in version 1.7.1</span> If you
|
||||
would like to backup databases only and not source directories, you can omit
|
||||
`source_directories` entirely.
|
||||
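For example, a minimal databases-only configuration on borgmatic 1.7.1+ might look something like this sketch, where the repository path and database name are placeholders:

```yaml
location:
    repositories:
        - /var/lib/backups/local.borg

hooks:
    postgresql_databases:
        - name: users
```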
|
||||
In older versions of borgmatic, instead specify an empty `source_directories`
|
||||
value, as it is a mandatory option prior to version 1.7.1:
|
||||
|
||||
```yaml
|
||||
location:
|
||||
|
@ -86,6 +154,16 @@ hooks:
|
|||
```
|
||||
|
||||
|
||||
|
||||
### External passwords
|
||||
|
||||
If you don't want to keep your database passwords in your borgmatic
|
||||
configuration file, you can instead pass them in via [environment
|
||||
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/)
|
||||
or command-line [configuration
|
||||
overrides](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-overrides).
|
||||
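For instance, here's a sketch that pulls a password from an environment variable instead of hard-coding it, assuming you've exported a `POSTGRESQL_PASSWORD` variable to the environment borgmatic runs in:

```yaml
hooks:
    postgresql_databases:
        - name: users
          password: ${POSTGRESQL_PASSWORD}
```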
|
||||
|
||||
### Configuration backups
|
||||
|
||||
An important note about this database configuration: You'll need the
|
||||
|
@ -97,38 +175,37 @@ bring back any missing configuration files in order to restore a database.
|
|||
|
||||
## Supported databases
|
||||
|
||||
As of now, borgmatic supports PostgreSQL and MySQL/MariaDB databases
|
||||
directly. But see below about general-purpose preparation and cleanup hooks as
|
||||
a work-around with other database systems. Also, please [file a
|
||||
ticket](https://torsion.org/borgmatic/#issues) for additional database systems
|
||||
that you'd like supported.
|
||||
As of now, borgmatic supports PostgreSQL, MySQL/MariaDB, MongoDB, and SQLite
|
||||
databases directly. But see below about general-purpose preparation and
|
||||
cleanup hooks as a work-around with other database systems. Also, please [file
|
||||
a ticket](https://torsion.org/borgmatic/#issues) for additional database
|
||||
systems that you'd like supported.
|
||||
|
||||
|
||||
## Database restoration
|
||||
|
||||
To restore a database dump from an archive, use the `borgmatic restore`
|
||||
action. But the first step is to figure out which archive to restore from. A
|
||||
good way to do that is to use the `list` action:
|
||||
good way to do that is to use the `rlist` action:
|
||||
|
||||
```bash
|
||||
borgmatic list
|
||||
borgmatic rlist
|
||||
```
|
||||
|
||||
(No borgmatic `list` action? Try the old-style `--list`, or upgrade
|
||||
borgmatic!)
|
||||
(No borgmatic `rlist` action? Try `list` instead or upgrade borgmatic!)
|
||||
|
||||
That should yield output looking something like:
|
||||
|
||||
```text
|
||||
host-2019-01-01T04:05:06.070809 Tue, 2019-01-01 04:05:06 [...]
|
||||
host-2019-01-02T04:06:07.080910 Wed, 2019-01-02 04:06:07 [...]
|
||||
host-2023-01-01T04:05:06.070809 Tue, 2023-01-01 04:05:06 [...]
|
||||
host-2023-01-02T04:06:07.080910 Wed, 2023-01-02 04:06:07 [...]
|
||||
```
|
||||
|
||||
Assuming that you want to restore all database dumps from the archive with the
|
||||
most up-to-date files and therefore the latest timestamp, run a command like:
|
||||
|
||||
```bash
|
||||
borgmatic restore --archive host-2019-01-02T04:06:07.080910
|
||||
borgmatic restore --archive host-2023-01-02T04:06:07.080910
|
||||
```
|
||||
|
||||
(No borgmatic `restore` action? Upgrade borgmatic!)
|
||||
|
@ -157,7 +234,7 @@ But if you have multiple repositories configured, then you'll need to specify
|
|||
the repository path containing the archive to restore. Here's an example:
|
||||
|
||||
```bash
|
||||
borgmatic restore --repository repo.borg --archive host-2019-...
|
||||
borgmatic restore --repository repo.borg --archive host-2023-...
|
||||
```
|
||||
|
||||
### Restore particular databases
|
||||
|
@ -167,9 +244,39 @@ restore one of them, use the `--database` flag to select one or more
|
|||
databases. For instance:
|
||||
|
||||
```bash
|
||||
borgmatic restore --archive host-2019-... --database users
|
||||
borgmatic restore --archive host-2023-... --database users
|
||||
```
|
||||
|
||||
<span class="minilink minilink-addedin">New in version 1.7.6</span> You can
|
||||
also restore individual databases even if you dumped them as "all"—as long as
|
||||
you dumped them into separate files via use of the "format" option. See above
|
||||
for more information.
|
||||
|
||||
|
||||
### Restore all databases
|
||||
|
||||
To restore all databases:
|
||||
|
||||
```bash
|
||||
borgmatic restore --archive host-2023-... --database all
|
||||
```
|
||||
|
||||
Or omit the `--database` flag entirely:
|
||||
|
||||
|
||||
```bash
|
||||
borgmatic restore --archive host-2023-...
|
||||
```
|
||||
|
||||
Prior to borgmatic version 1.7.6, this restores a combined "all" database
|
||||
dump from the archive.
|
||||
|
||||
<span class="minilink minilink-addedin">New in version 1.7.6</span> Restoring
|
||||
"all" databases restores each database found in the selected archive. That
|
||||
includes any combined dump file named "all" and any other individual database
|
||||
dumps found in the archive.
|
||||
|
||||
|
||||
### Limitations
|
||||
|
||||
There are a few important limitations with borgmatic's current database
|
||||
|
@ -185,19 +292,34 @@ backups to avoid getting caught without a way to restore a database.
|
|||
databases that share the exact same name on different hosts.
|
||||
4. Because database hooks implicitly enable the `read_special` configuration
|
||||
setting to support dump and restore streaming, you'll need to ensure that any
|
||||
special files are excluded from backups (named pipes, block devices, and
|
||||
character devices) to prevent hanging. Try a command like `find / -type c,b,p`
|
||||
to find such files. Common directories to exclude are `/dev` and `/run`, but
|
||||
that may not be exhaustive.
|
||||
special files are excluded from backups (named pipes, block devices,
|
||||
character devices, and sockets) to prevent hanging. Try a command like
|
||||
`find /your/source/path -type b -or -type c -or -type p -or -type s` to find
|
||||
such files. Common directories to exclude are `/dev` and `/run`, but that may
|
||||
not be exhaustive. <span class="minilink minilink-addedin">New in version
|
||||
1.7.3</span> When database hooks are enabled, borgmatic automatically excludes
|
||||
special files that may cause Borg to hang, so you no longer need to manually
|
||||
exclude them. (This includes symlinks with special files as a destination.) You
|
||||
can override/prevent this behavior by explicitly setting `read_special` to true.
|
||||
|
||||
|
||||
### Manual restoration
|
||||
|
||||
If you prefer to restore a database without the help of borgmatic, first
|
||||
[extract](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/) an
|
||||
archive containing a database dump, and then manually restore the dump file
|
||||
found within the extracted `~/.borgmatic/` path (e.g. with `pg_restore` or
|
||||
`mysql` commands).
|
||||
archive containing a database dump.
|
||||
|
||||
borgmatic extracts the dump file into the *`username`*`/.borgmatic/` directory
|
||||
within the extraction destination path, where *`username`* is the user that
|
||||
created the backup. For example, if you created the backup with the `root`
|
||||
user and you're extracting to `/tmp`, then the dump will be in
|
||||
`/tmp/root/.borgmatic`.
|
||||
|
||||
After extraction, you can manually restore the dump file using native database
|
||||
commands like `pg_restore`, `mysql`, `mongorestore`, `sqlite3`, or similar.
|
||||
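For example, here's a rough sketch of manually restoring a PostgreSQL database named `users`, assuming the backup was created by the `root` user, the dump used the default "custom" format, and borgmatic's usual dump layout. Adjust the path to whatever you actually find under `.borgmatic` after extraction:

```bash
borgmatic extract --archive latest --destination /tmp
pg_restore --dbname=users /tmp/root/.borgmatic/postgresql_databases/localhost/users
```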
|
||||
Also see the documentation on [listing database
|
||||
dumps](https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#listing-database-dumps).
|
||||
|
||||
|
||||
## Preparation and cleanup hooks
|
||||
|
@ -230,5 +352,14 @@ hooks:
|
|||
### borgmatic hangs during backup
|
||||
|
||||
See Limitations above about `read_special`. You may need to exclude certain
|
||||
paths with named pipes, block devices, or character devices on which borgmatic
|
||||
is hanging.
|
||||
paths with named pipes, block devices, character devices, or sockets on which
|
||||
borgmatic is hanging.
|
||||
|
||||
Alternatively, if excluding special files is too onerous, you can create two
|
||||
separate borgmatic configuration files—one for your source files and a
|
||||
separate one for backing up databases. That way, the database `read_special`
|
||||
option will not be active when backing up special files.
|
||||
|
||||
<span class="minilink minilink-addedin">New in version 1.7.3</span> See
|
||||
Limitations above about borgmatic's automatic exclusion of special files to
|
||||
prevent Borg hangs.
|
||||
|
|
|
@ -1,86 +1,158 @@
|
|||
---
|
||||
title: How to deal with very large backups
|
||||
eleventyNavigation:
|
||||
key: Deal with very large backups
|
||||
key: 📏 Deal with very large backups
|
||||
parent: How-to guides
|
||||
order: 3
|
||||
order: 4
|
||||
---
|
||||
## Biggish data
|
||||
|
||||
Borg itself is great for efficiently de-duplicating data across successive
|
||||
backup archives, even when dealing with very large repositories. But you may
|
||||
find that while borgmatic's default mode of "prune, create, and check" works
|
||||
well on small repositories, it's not so great on larger ones. That's because
|
||||
running the default pruning and consistency checks take a long time on large
|
||||
repositories.
|
||||
find that while borgmatic's default actions of `create`, `prune`, `compact`,
|
||||
and `check` work well on small repositories, it's not so great on larger
|
||||
ones. That's because the default pruning, compaction, and consistency
|
||||
checks take a long time on large repositories.
|
||||
|
||||
<span class="minilink minilink-addedin">Prior to version 1.7.9</span> The
|
||||
default action ordering was `prune`, `compact`, `create`, and `check`.
|
||||
|
||||
### A la carte actions
|
||||
|
||||
If you find yourself in this situation, you have some options. First, you can
|
||||
run borgmatic's pruning, creating, or checking actions separately. For
|
||||
instance, the following optional actions are available:
|
||||
If you find yourself wanting to customize the actions, you have some options.
|
||||
First, you can run borgmatic's `prune`, `compact`, `create`, or `check`
|
||||
actions separately. For instance, the following optional actions are
|
||||
available (among others):
|
||||
|
||||
```bash
|
||||
borgmatic prune
|
||||
borgmatic create
|
||||
borgmatic prune
|
||||
borgmatic compact
|
||||
borgmatic check
|
||||
```
|
||||
|
||||
(No borgmatic `prune`, `create`, or `check` actions? Try the old-style
|
||||
`--prune`, `--create`, or `--check`. Or upgrade borgmatic!)
|
||||
|
||||
You can run with only one of these actions provided, or you can mix and match
|
||||
any number of them in a single borgmatic run. This supports approaches like
|
||||
skipping certain actions while running others. For instance, this skips
|
||||
`prune` and only runs `create` and `check`:
|
||||
You can run borgmatic with only one of these actions provided, or you can mix
|
||||
and match any number of them in a single borgmatic run. This supports
|
||||
approaches like skipping certain actions while running others. For instance,
|
||||
this skips `prune` and `compact` and only runs `create` and `check`:
|
||||
|
||||
```bash
|
||||
borgmatic create check
|
||||
```
|
||||
|
||||
Or, you can make backups with `create` on a frequent schedule (e.g. with
|
||||
`borgmatic create` called from one cron job), while only running expensive
|
||||
consistency checks with `check` on a much less frequent basis (e.g. with
|
||||
`borgmatic check` called from a separate cron job).
|
||||
<span class="minilink minilink-addedin">New in version 1.7.9</span> borgmatic
|
||||
now respects your specified command-line action order, running actions in the
|
||||
order you specify. In previous versions, borgmatic ran your specified actions
|
||||
in a fixed ordering regardless of the order they appeared on the command-line.
|
||||
|
||||
But instead of running actions together, another option is to run backups with
|
||||
`create` on a frequent schedule (e.g. with `borgmatic create` called from one
|
||||
cron job), while only running expensive consistency checks with `check` on a
|
||||
much less frequent basis (e.g. with `borgmatic check` called from a separate
|
||||
cron job).
|
||||
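For instance, that split might look something like the following crontab entries; the times here are arbitrary placeholders:

```text
# Back up nightly, but only run consistency checks once a week.
0 2 * * * borgmatic create
30 4 * * 0 borgmatic check
```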
|
||||
|
||||
### Consistency check configuration
|
||||
|
||||
Another option is to customize your consistency checks. The default
|
||||
consistency checks run both full-repository checks and per-archive checks
|
||||
within each repository.
|
||||
Another option is to customize your consistency checks. By default, if you
|
||||
omit consistency checks from configuration, borgmatic runs full-repository
|
||||
checks (`repository`) and per-archive checks (`archives`) within each
|
||||
repository, no more than once a month. This is equivalent to what `borg check`
|
||||
does if run without options.
|
||||
|
||||
But if you find that archive checks are too slow, for example, you can
|
||||
configure borgmatic to run repository checks only. Configure this in the
|
||||
`consistency` section of borgmatic configuration:
|
||||
|
||||
```yaml
|
||||
consistency:
|
||||
checks:
|
||||
- name: repository
|
||||
```
|
||||
|
||||
<span class="minilink minilink-addedin">Prior to version 1.6.2</span> `checks`
|
||||
was a plain list of strings without the `name:` part. For example:
|
||||
|
||||
```yaml
|
||||
consistency:
|
||||
checks:
|
||||
- repository
|
||||
```
|
||||
|
||||
|
||||
Here are the available checks from fastest to slowest:
|
||||
|
||||
* `repository`: Checks the consistency of the repository itself.
|
||||
* `archives`: Checks all of the archives in the repository.
|
||||
* `extract`: Performs an extraction dry-run of the most recent archive.
|
||||
* `data`: Verifies the data integrity of all archives contents, decrypting and decompressing all data (implies `archives` as well).
|
||||
* `data`: Verifies the data integrity of all archive contents, decrypting and decompressing all data.
|
||||
|
||||
See [Borg's check documentation](https://borgbackup.readthedocs.io/en/stable/usage/check.html) for more information.
|
||||
Note that the `data` check is a more thorough version of the `archives` check,
|
||||
so enabling the `data` check implicitly enables the `archives` check as well.
|
||||
|
||||
See [Borg's check
|
||||
documentation](https://borgbackup.readthedocs.io/en/stable/usage/check.html)
|
||||
for more information.
|
||||
|
||||
### Check frequency
|
||||
|
||||
<span class="minilink minilink-addedin">New in version 1.6.2</span> You can
|
||||
optionally configure checks to run on a periodic basis rather than every time
|
||||
borgmatic runs checks. For instance:
|
||||
|
||||
```yaml
|
||||
consistency:
|
||||
checks:
|
||||
- name: repository
|
||||
frequency: 2 weeks
|
||||
- name: archives
|
||||
frequency: 1 month
|
||||
```
|
||||
|
||||
This tells borgmatic to run the `repository` consistency check at most once
|
||||
every two weeks for a given repository and the `archives` check at most once a
|
||||
month. The `frequency` value is a number followed by a unit of time, e.g. "3
|
||||
days", "1 week", "2 months", etc. The `frequency` defaults to `always`, which
|
||||
means run this check every time checks run.
|
||||
|
||||
Unlike a real scheduler like cron, borgmatic only makes a best effort to run
|
||||
checks on the configured frequency. It compares that frequency with how long
|
||||
it's been since the last check for a given repository (as recorded in a file
|
||||
within `~/.borgmatic/checks`). If it hasn't been long enough, the check is
|
||||
skipped. And you still have to run `borgmatic check` (or `borgmatic` without
|
||||
actions) in order for checks to run, even when a `frequency` is configured!
|
||||
|
||||
This also applies *across* configuration files that have the same repository
|
||||
configured. Make sure you have the same check frequency configured in each
|
||||
though—or the most frequently configured check will apply.
|
||||
|
||||
If you want to temporarily ignore your configured frequencies, you can invoke
|
||||
`borgmatic check --force` to run checks unconditionally.
|
||||
|
||||
|
||||
### Disabling checks
|
||||
|
||||
If that's still too slow, you can disable consistency checks entirely,
|
||||
either for a single repository or for all repositories.
|
||||
|
||||
Disabling all consistency checks looks like this:
|
||||
|
||||
```yaml
|
||||
consistency:
|
||||
checks:
|
||||
- name: disabled
|
||||
```
|
||||
|
||||
<span class="minilink minilink-addedin">Prior to version 1.6.2</span> `checks`
|
||||
was a plain list of strings without the `name:` part. For instance:
|
||||
|
||||
```yaml
|
||||
consistency:
|
||||
checks:
|
||||
- disabled
|
||||
```
|
||||
|
||||
Or, if you have multiple repositories in your borgmatic configuration file,
|
||||
If you have multiple repositories in your borgmatic configuration file,
|
||||
you can keep running consistency checks, but only against a subset of the
|
||||
repositories:
|
||||
|
||||
|
@ -98,7 +170,8 @@ borgmatic check --only data --only extract
|
|||
```
|
||||
|
||||
This is useful for running slow consistency checks on an infrequent basis,
|
||||
separate from your regular checks.
|
||||
separate from your regular checks. It is still subject to any configured
|
||||
check frequencies unless the `--force` flag is used.
|
||||
|
||||
|
||||
## Troubleshooting
|
||||
|
|
|
@ -1,26 +1,26 @@
|
|||
---
|
||||
title: How to develop on borgmatic
|
||||
eleventyNavigation:
|
||||
key: Develop on borgmatic
|
||||
key: 🏗️ Develop on borgmatic
|
||||
parent: How-to guides
|
||||
order: 11
|
||||
order: 13
|
||||
---
|
||||
## Source code
|
||||
|
||||
To get set up to hack on borgmatic, first clone master via HTTPS or SSH:
|
||||
|
||||
```bash
|
||||
git clone https://projects.torsion.org/witten/borgmatic.git
|
||||
git clone https://projects.torsion.org/borgmatic-collective/borgmatic.git
|
||||
```
|
||||
|
||||
Or:
|
||||
|
||||
```bash
|
||||
git clone ssh://git@projects.torsion.org:3022/witten/borgmatic.git
|
||||
git clone ssh://git@projects.torsion.org:3022/borgmatic-collective/borgmatic.git
|
||||
```
|
||||
|
||||
Then, install borgmatic
|
||||
"[editable](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs)"
|
||||
"[editable](https://pip.pypa.io/en/stable/cli/pip_install/#editable-installs)"
|
||||
so that you can run borgmatic commands while you're hacking on them to
|
||||
make sure your changes work.
|
||||
|
||||
|
@ -66,8 +66,6 @@ following:
|
|||
tox -e black
|
||||
```
|
||||
|
||||
Note that Black requires at minimum Python 3.6.
|
||||
|
||||
And if you get a complaint from the
|
||||
[isort](https://github.com/timothycrosley/isort) Python import orderer, you
|
||||
can ask isort to order your imports for you:
|
||||
|
@ -118,7 +116,7 @@ See the Black, Flake8, and isort documentation for more information.
|
|||
|
||||
Each pull request triggers a continuous integration build which runs the test
|
||||
suite. You can view these builds on
|
||||
[build.torsion.org](https://build.torsion.org/witten/borgmatic), and they're
|
||||
[build.torsion.org](https://build.torsion.org/borgmatic-collective/borgmatic), and they're
|
||||
also linked from the commits list on each pull request.
|
||||
|
||||
## Documentation development
|
||||
|
|
|
@ -1,41 +1,39 @@
|
|||
---
|
||||
title: How to extract a backup
|
||||
eleventyNavigation:
|
||||
key: Extract a backup
|
||||
key: 📤 Extract a backup
|
||||
parent: How-to guides
|
||||
order: 6
|
||||
order: 7
|
||||
---
|
||||
## Extract
|
||||
|
||||
When the worst happens—or you want to test your backups—the first step is
|
||||
to figure out which archive to extract. A good way to do that is to use the
|
||||
`list` action:
|
||||
`rlist` action:
|
||||
|
||||
```bash
|
||||
borgmatic list
|
||||
borgmatic rlist
|
||||
```
|
||||
|
||||
(No borgmatic `list` action? Try the old-style `--list`, or upgrade
|
||||
borgmatic!)
|
||||
(No borgmatic `rlist` action? Try `list` instead or upgrade borgmatic!)
|
||||
|
||||
That should yield output looking something like:
|
||||
|
||||
```text
|
||||
host-2019-01-01T04:05:06.070809 Tue, 2019-01-01 04:05:06 [...]
|
||||
host-2019-01-02T04:06:07.080910 Wed, 2019-01-02 04:06:07 [...]
|
||||
host-2023-01-01T04:05:06.070809 Tue, 2023-01-01 04:05:06 [...]
|
||||
host-2023-01-02T04:06:07.080910 Wed, 2023-01-02 04:06:07 [...]
|
||||
```
|
||||
|
||||
Assuming that you want to extract the archive with the most up-to-date files
|
||||
and therefore the latest timestamp, run a command like:
|
||||
|
||||
```bash
|
||||
borgmatic extract --archive host-2019-01-02T04:06:07.080910
|
||||
borgmatic extract --archive host-2023-01-02T04:06:07.080910
|
||||
```
|
||||
|
||||
(No borgmatic `extract` action? Try the old-style `--extract`, or upgrade
|
||||
borgmatic!)
|
||||
(No borgmatic `extract` action? Upgrade borgmatic!)
|
||||
|
||||
With newer versions of borgmatic, you can simplify this to:
|
||||
Or simplify this to:
|
||||
|
||||
```bash
|
||||
borgmatic extract --archive latest
|
||||
|
@ -43,7 +41,8 @@ borgmatic extract --archive latest
|
|||
|
||||
The `--archive` value is the name of the archive to extract. This extracts the
|
||||
entire contents of the archive to the current directory, so make sure you're
|
||||
in the right place before running the command.
|
||||
in the right place before running the command—or see below about the
|
||||
`--destination` flag.
|
||||
|
||||
|
||||
## Repository selection
|
||||
|
@ -55,7 +54,7 @@ But if you have multiple repositories configured, then you'll need to specify
|
|||
the repository path containing the archive to extract. Here's an example:
|
||||
|
||||
```bash
|
||||
borgmatic extract --repository repo.borg --archive host-2019-...
|
||||
borgmatic extract --repository repo.borg --archive host-2023-...
|
||||
```
|
||||
|
||||
## Extract particular files
|
||||
|
@ -65,13 +64,22 @@ everything from an archive. To do that, tack on one or more `--path` values.
|
|||
For instance:
|
||||
|
||||
```bash
|
||||
borgmatic extract --archive host-2019-... --path path/1 path/2
|
||||
borgmatic extract --archive latest --path path/1 path/2
|
||||
```
|
||||
|
||||
Note that the specified restore paths should not have a leading slash. Like a
|
||||
whole-archive extract, this also extracts into the current directory. So for
|
||||
example, if you happen to be in the directory `/var` and you run the `extract`
|
||||
command above, borgmatic will extract `/var/path/1` and `/var/path/2`.
|
||||
whole-archive extract, this also extracts into the current directory by
|
||||
default. So for example, if you happen to be in the directory `/var` and you
|
||||
run the `extract` command above, borgmatic will extract `/var/path/1` and
|
||||
`/var/path/2`.
|
||||
|
||||
|
||||
### Searching for files
|
||||
|
||||
If you're not sure which archive contains the files you're looking for, you
|
||||
can [search across
|
||||
archives](https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#searching-for-a-file).
|
||||
|
||||
|
||||
## Extract to a particular destination
|
||||
|
||||
|
@ -80,7 +88,7 @@ extract files to a particular destination directory, use the `--destination`
|
|||
flag:
|
||||
|
||||
```bash
|
||||
borgmatic extract --archive host-2019-... --destination /tmp
|
||||
borgmatic extract --archive latest --destination /tmp
|
||||
```
|
||||
|
||||
When using the `--destination` flag, be careful not to overwrite your system's
|
||||
|
@ -104,7 +112,7 @@ archive as a [FUSE](https://en.wikipedia.org/wiki/Filesystem_in_Userspace)
|
|||
filesystem, you can use the `borgmatic mount` action. Here's an example:
|
||||
|
||||
```bash
|
||||
borgmatic mount --archive host-2019-... --mount-point /mnt
|
||||
borgmatic mount --archive latest --mount-point /mnt
|
||||
```
|
||||
|
||||
This mounts the entire archive on the given mount point `/mnt`, so that you
|
||||
|
@ -116,7 +124,7 @@ Omit the `--archive` flag to mount all archives (lazy-loaded):
|
|||
borgmatic mount --mount-point /mnt
|
||||
```
|
||||
|
||||
Or use the "latest" value for the archive to mount the latest successful archive:
|
||||
Or use the "latest" value for the archive to mount the latest archive:
|
||||
|
||||
```bash
|
||||
borgmatic mount --archive latest --mount-point /mnt
|
||||
|
@ -127,7 +135,7 @@ your archive, use the `--path` flag, similar to the `extract` action above.
|
|||
For instance:
|
||||
|
||||
```bash
|
||||
borgmatic mount --archive host-2019-... --mount-point /mnt --path var/lib
|
||||
borgmatic mount --archive latest --mount-point /mnt --path var/lib
|
||||
```
|
||||
|
||||
When you're all done exploring your files, unmount your mount point. No
|
||||
|
|
|
@ -1,9 +1,9 @@
|
|||
---
|
||||
title: How to inspect your backups
|
||||
eleventyNavigation:
|
||||
key: Inspect your backups
|
||||
key: 🔎 Inspect your backups
|
||||
parent: How-to guides
|
||||
order: 4
|
||||
order: 5
|
||||
---
|
||||
## Backup progress
|
||||
|
||||
|
@ -37,18 +37,72 @@ borgmatic --stats
|
|||
## Existing backups
|
||||
|
||||
borgmatic provides convenient actions for Borg's
|
||||
[list](https://borgbackup.readthedocs.io/en/stable/usage/list.html) and
|
||||
[info](https://borgbackup.readthedocs.io/en/stable/usage/info.html)
|
||||
[`list`](https://borgbackup.readthedocs.io/en/stable/usage/list.html) and
|
||||
[`info`](https://borgbackup.readthedocs.io/en/stable/usage/info.html)
|
||||
functionality:
|
||||
|
||||
|
||||
```bash
|
||||
borgmatic list
|
||||
borgmatic info
|
||||
```
|
||||
|
||||
(No borgmatic `list` or `info` actions? Try the old-style `--list` or
|
||||
`--info`. Or upgrade borgmatic!)
|
||||
You can change the output format of `borgmatic list` by specifying your own
|
||||
with `--format`. Refer to the [borg list --format
|
||||
documentation](https://borgbackup.readthedocs.io/en/stable/usage/list.html#the-format-specifier-syntax)
|
||||
for available values.
|
||||
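For example, this hypothetical invocation lists the contents of the latest archive showing just the mode, size, and path of each file:

```bash
borgmatic list --archive latest --format "{mode} {size} {path}{NL}"
```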
|
||||
*(No borgmatic `list` or `info` actions? Upgrade borgmatic!)*
|
||||
|
||||
<span class="minilink minilink-addedin">New in borgmatic version 1.7.0</span>
|
||||
There are also `rlist` and `rinfo` actions for displaying repository
|
||||
information with Borg 2.x:
|
||||
|
||||
```bash
|
||||
borgmatic rlist
|
||||
borgmatic rinfo
|
||||
```
|
||||
|
||||
See the [borgmatic command-line
|
||||
reference](https://torsion.org/borgmatic/docs/reference/command-line/) for
|
||||
more information.
|
||||
|
||||
|
||||
### Searching for a file
|
||||
|
||||
<span class="minilink minilink-addedin">New in version 1.6.3</span> Let's say
|
||||
you've accidentally deleted a file and want to find the backup archive(s)
|
||||
containing it. `borgmatic list` provides a `--find` flag for exactly this
|
||||
purpose. For instance, if you're looking for a `foo.txt`:
|
||||
|
||||
```bash
|
||||
borgmatic list --find foo.txt
|
||||
```
|
||||
|
||||
This will list your archives and indicate those with files matching
|
||||
`*foo.txt*` anywhere in the archive. The `--find` parameter can alternatively
|
||||
be a [Borg
|
||||
pattern](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-patterns).
|
||||
|
||||
To limit the archives searched, use the standard `list` parameters for
|
||||
filtering archives such as `--last`, `--archive`, `--match-archives`, etc. For
|
||||
example, to search only the last five archives:
|
||||
|
||||
```bash
|
||||
borgmatic list --find foo.txt --last 5
|
||||
```
|
||||
|
||||
## Listing database dumps
|
||||
|
||||
If you have enabled borgmatic's [database
|
||||
hooks](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/), you
|
||||
can list backed up database dumps via borgmatic. For example:
|
||||
|
||||
```bash
|
||||
borgmatic list --archive latest --find .borgmatic/*_databases
|
||||
```
|
||||
|
||||
This gives you a listing of all database dump files contained in the latest
|
||||
archive, complete with file sizes.
|
||||
|
||||
|
||||
## Logging
|
||||
|
|
|
@ -1,9 +1,9 @@
|
|||
---
|
||||
title: How to make backups redundant
|
||||
eleventyNavigation:
|
||||
key: Make backups redundant
|
||||
key: ☁️ Make backups redundant
|
||||
parent: How-to guides
|
||||
order: 2
|
||||
order: 3
|
||||
---
|
||||
## Multiple repositories
|
||||
|
||||
|
@ -20,9 +20,8 @@ location:
|
|||
|
||||
# Paths of local or remote repositories to backup to.
|
||||
repositories:
|
||||
- 1234@usw-s001.rsync.net:backups.borg
|
||||
- k8pDxu32@k8pDxu32.repo.borgbase.com:repo
|
||||
- user1@scp2.cdn.lima-labs.com:repo
|
||||
- ssh://1234@usw-s001.rsync.net/./backups.borg
|
||||
- ssh://k8pDxu32@k8pDxu32.repo.borgbase.com/./repo
|
||||
- /var/lib/backups/local.borg
|
||||
```
|
||||
|
||||
|
@ -35,8 +34,7 @@ Here's a way of visualizing what borgmatic does with the above configuration:
|
|||
|
||||
1. Backup `/home` and `/etc` to `1234@usw-s001.rsync.net:backups.borg`
|
||||
2. Backup `/home` and `/etc` to `k8pDxu32@k8pDxu32.repo.borgbase.com:repo`
|
||||
3. Backup `/home` and `/etc` to `user1@scp2.cdn.lima-labs.com:repo`
|
||||
4. Backup `/home` and `/etc` to `/var/lib/backups/local.borg`
|
||||
3. Backup `/home` and `/etc` to `/var/lib/backups/local.borg`
|
||||
|
||||
This gives you redundancy of your data across repositories and even
|
||||
potentially across providers.
|
||||
|
@ -44,3 +42,13 @@ potentially across providers.
|
|||
See [Borg repository URLs
|
||||
documentation](https://borgbackup.readthedocs.io/en/stable/usage/general.html#repository-urls)
|
||||
for more information on how to specify local and remote repository paths.
|
||||
|
||||
### Different options per repository
|
||||
|
||||
What if you want borgmatic to backup to multiple repositories—while also
|
||||
setting different options for each one? In that case, you'll need to use
|
||||
[a separate borgmatic configuration file for each
|
||||
repository](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/)
|
||||
instead of the multiple repositories in one configuration file as described
|
||||
above. That's because all of the repositories in a particular configuration
|
||||
file get the same options applied.
|
||||
|
|
|
@ -1,20 +1,22 @@
|
|||
---
|
||||
title: How to make per-application backups
|
||||
eleventyNavigation:
|
||||
key: Make per-application backups
|
||||
key: 🔀 Make per-application backups
|
||||
parent: How-to guides
|
||||
order: 1
|
||||
---
|
||||
## Multiple backup configurations
|
||||
|
||||
You may find yourself wanting to create different backup policies for
|
||||
different applications on your system. For instance, you may want one backup
|
||||
configuration for your database data directory, and a different configuration
|
||||
for your user home directories.
|
||||
different applications on your system or even for different backup
|
||||
repositories. For instance, you might want one backup configuration for your
|
||||
database data directory and a different configuration for your user home
|
||||
directories. Or one backup configuration for your local backups with a
|
||||
different configuration for your remote repository.
|
||||
|
||||
The way to accomplish that is pretty simple: Create multiple separate
|
||||
configuration files and place each one in a `/etc/borgmatic.d/` directory. For
|
||||
instance:
|
||||
instance, for applications:
|
||||
|
||||
```bash
|
||||
sudo mkdir /etc/borgmatic.d
|
||||
|
@ -22,6 +24,14 @@ sudo generate-borgmatic-config --destination /etc/borgmatic.d/app1.yaml
|
|||
sudo generate-borgmatic-config --destination /etc/borgmatic.d/app2.yaml
|
||||
```
|
||||
|
||||
Or, for repositories:
|
||||
|
||||
```bash
|
||||
sudo mkdir /etc/borgmatic.d
|
||||
sudo generate-borgmatic-config --destination /etc/borgmatic.d/repo1.yaml
|
||||
sudo generate-borgmatic-config --destination /etc/borgmatic.d/repo2.yaml
|
||||
```
|
||||
|
||||
When you set up multiple configuration files like this, borgmatic will run
|
||||
each one in turn from a single borgmatic invocation. This includes, by
|
||||
default, the traditional `/etc/borgmatic/config.yaml` as well.
|
||||
|
@ -29,13 +39,20 @@ default, the traditional `/etc/borgmatic/config.yaml` as well.
Each configuration file is interpreted independently, as if you ran borgmatic
for each configuration file one at a time. In other words, borgmatic does not
perform any merging of configuration files by default. If you'd like borgmatic
to merge your configuration files, see below about configuration includes.
to merge your configuration files, for instance to avoid duplication of
settings, see below about configuration includes.

Additionally, the `~/.config/borgmatic.d/` directory works the same way as
`/etc/borgmatic.d`. If you need even more customizability, you can specify
alternate configuration paths on the command-line with borgmatic's `--config`
flag. See `borgmatic --help` for more information.
`/etc/borgmatic.d`.

If you need even more customizability, you can specify alternate configuration
paths on the command-line with borgmatic's `--config` flag. (See `borgmatic
--help` for more information.) For instance, if you want to schedule your
various borgmatic backups to run at different times, you'll need multiple
entries in your [scheduling software of
choice](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#autopilot),
each entry using borgmatic's `--config` flag instead of relying on
`/etc/borgmatic.d`.
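For example, a pair of cron entries along these lines would run each
configuration on its own schedule (a sketch only; the schedules, install path,
and config filenames here are hypothetical):

```bash
# /etc/cron.d/borgmatic (hypothetical): one entry per configuration file.
0 1 * * * root /root/.local/bin/borgmatic --config /etc/borgmatic.d/app1.yaml
0 5 * * * root /root/.local/bin/borgmatic --config /etc/borgmatic.d/app2.yaml
```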
## Configuration includes
@ -69,6 +86,10 @@ themselves and complaining that they are not valid configuration files, you
should put them in a directory other than `/etc/borgmatic.d/`. (A subdirectory
is fine.)

When a configuration include is a relative path, borgmatic loads it from either
the current working directory or from the directory containing the file doing
the including.
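For instance, here's a sketch of a relative include (the filename is
hypothetical); borgmatic would resolve it against the current working directory
or against the directory of the including file:

```yaml
retention:
    !include common_retention.yaml
```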
Note that this form of include must be a YAML value rather than a key. For
example, this will not work:
@ -80,29 +101,80 @@ location:
    !include /etc/borgmatic/common_retention.yaml
```

But if you do want to merge in a YAML key and its values, keep reading!
But if you do want to merge in a YAML key *and* its values, keep reading!
## Include merging

If you need to get even fancier and pull in common configuration options while
potentially overriding individual options, you can perform a YAML merge of
included configuration using the YAML `<<` key. For instance, here's an
example of a main configuration file that pulls in two retention options via
an include, and then overrides one of them locally:
If you need to get even fancier and merge in common configuration options, you
can perform a YAML merge of included configuration using the YAML `<<` key.
For instance, here's an example of a main configuration file that pulls in
retention and consistency options via a single include:

```yaml
<<: !include /etc/borgmatic/common.yaml

location:
    ...
```

This is what `common.yaml` might look like:

```yaml
retention:
    keep_hourly: 24
    keep_daily: 7

consistency:
    checks:
        - name: repository
```

Once this include gets merged in, the resulting configuration would have all
of the `location` options from the original configuration file *and* the
`retention` and `consistency` options from the include.

Prior to borgmatic version 1.6.0, when there's a section collision between the
local file and the merged include, the local file's section takes precedence.
So if the `retention` section appears in both the local file and the include
file, the included `retention` is ignored in favor of the local `retention`.
But see below about deep merge in version 1.6.0+.

Note that this `<<` include merging syntax is only for merging in mappings
(configuration options and their values). But if you'd like to include a
single value directly, please see the section above about standard includes.

Additionally, there is a limitation preventing multiple `<<` include merges
per section. So for instance, that means you can do one `<<` merge at the
global level, another `<<` within each configuration section, etc. (This is a
YAML limitation.)
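In other words, something like this sketch stays within that limit, with one
merge at the global level and one inside the `retention` section (the include
filenames are hypothetical):

```yaml
<<: !include /etc/borgmatic/common.yaml

retention:
    keep_daily: 5
    <<: !include /etc/borgmatic/common_retention.yaml
```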
### Deep merge

<span class="minilink minilink-addedin">New in version 1.6.0</span> borgmatic
performs a deep merge of merged include files, meaning that values are merged
at all levels in the two configuration files. This allows you to include
common configuration—up to full borgmatic configuration files—while overriding
only the parts you want to customize.

For instance, here's an example of a main configuration file that pulls in two
retention options via an include and then overrides one of them locally:

```yaml
<<: !include /etc/borgmatic/common.yaml

location:
    ...

retention:
    keep_daily: 5
    <<: !include /etc/borgmatic/common_retention.yaml
```

This is what `common_retention.yaml` might look like:
This is what `common.yaml` might look like:

```yaml
retention:
    keep_hourly: 24
    keep_daily: 7
```
@ -110,14 +182,11 @@ keep_daily: 7
Once this include gets merged in, the resulting configuration would have a
`keep_hourly` value of `24` and an overridden `keep_daily` value of `5`.

When there is a collision of an option between the local file and the merged
include, the local file's option takes precedent. And note that this is a
shallow merge rather than a deep merge, so the merging does not descend into
nested values.
When there's an option collision between the local file and the merged
include, the local file's option takes precedence.

Note that this `<<` include merging syntax is only for merging in mappings
(keys/values). If you'd like to include other types like scalars or lists
directly, please see the section above about standard includes.
<span class="minilink minilink-addedin">New in version 1.6.1</span> Colliding
list values are appended together.
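For instance, here's a sketch of that list behavior with made-up values: if an
included file and the local file both set `source_directories`, the merged
result contains both entries rather than one list replacing the other.

```yaml
# Included file (hypothetical):
location:
    source_directories:
        - /etc

# Local file that merges the include:
location:
    source_directories:
        - /home
```

After the deep merge, `source_directories` would contain both `/etc` and `/home`.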
## Configuration overrides
@ -162,7 +231,14 @@ borgmatic create --override location.repositories=[test1.borg,test2.borg]
Or even a single list element:

```bash
borgmatic create --override location.repositories=[/root/test1.borg]
borgmatic create --override location.repositories=[/root/test.borg]
```

If your override value contains special YAML characters like colons, then
you'll need quotes for it to parse correctly:

```bash
borgmatic create --override location.repositories="['user@server:test.borg']"
```

There is not currently a way to override a single element of a list without

@ -177,3 +253,5 @@ indentation and a leading dash.)
Be sure to quote your overrides if they contain spaces or other characters
that your shell may interpret.
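For example (a sketch with a made-up path containing a space):

```bash
borgmatic create --override "location.source_directories=['/home/my user/Documents']"
```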
An alternative to command-line overrides is passing in your values via [environment variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
@ -1,9 +1,9 @@
---
title: How to monitor your backups
eleventyNavigation:
  key: Monitor your backups
  key: 🚨 Monitor your backups
  parent: How-to guides
  order: 5
  order: 6
---

## Monitoring and alerting
@ -38,17 +38,19 @@ below for how to configure this.
borgmatic integrates with monitoring services like
[Healthchecks](https://healthchecks.io/), [Cronitor](https://cronitor.io),
[Cronhub](https://cronhub.io), and [PagerDuty](https://www.pagerduty.com/) and
pings these services whenever borgmatic runs. That way, you'll receive an
alert when something goes wrong or (for certain hooks) the service doesn't
hear from borgmatic for a configured interval. See [Healthchecks
[Cronhub](https://cronhub.io), [PagerDuty](https://www.pagerduty.com/), and
[ntfy](https://ntfy.sh/) and pings these services whenever borgmatic runs.
That way, you'll receive an alert when something goes wrong or (for certain
hooks) the service doesn't hear from borgmatic for a configured interval. See
[Healthchecks
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#healthchecks-hook),
[Cronitor
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronitor-hook),
[Cronhub
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronhub-hook),
and [PagerDuty
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook)
[PagerDuty
hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook),
and [ntfy hook](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook)
below for how to configure this.

While these services offer different features, you probably only need to use
@ -59,8 +61,6 @@ one of them at most.
You can use traditional monitoring software to consume borgmatic JSON output
and track when the last successful backup occurred. See [scripting
borgmatic](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#scripting-borgmatic)
and [related
software](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#related-software)
below for how to configure this.

### Borg hosting providers
@ -83,10 +83,10 @@ tests](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/).

## Error hooks

When an error occurs during a `prune`, `create`, or `check` action, borgmatic
can run configurable shell commands to fire off custom error notifications or
take other actions, so you can get alerted as soon as something goes wrong.
Here's a not-so-useful example:
When an error occurs during a `create`, `prune`, `compact`, or `check` action,
borgmatic can run configurable shell commands to fire off custom error
notifications or take other actions, so you can get alerted as soon as
something goes wrong. Here's a not-so-useful example:

```yaml
hooks:
@ -104,10 +104,9 @@ hooks:
        - send-text-message.sh "{configuration_filename}" "{repository}"
```

In this example, when the error occurs, borgmatic interpolates a few runtime
values into the hook command: the borgmatic configuration filename, and the
path of the repository. Here's the full set of supported variables you can use
here:
In this example, when the error occurs, borgmatic interpolates runtime values
into the hook command: the borgmatic configuration filename, and the path of
the repository. Here's the full set of supported variables you can use here:

* `configuration_filename`: borgmatic configuration filename in which the
  error occurred
@ -117,9 +116,9 @@ here:
* `output`: output of the command that failed (may be blank if an error
  occurred without running a command)

Note that borgmatic runs the `on_error` hooks only for `prune`, `create`, or
`check` actions or hooks in which an error occurs, and not other actions.
borgmatic does not run `on_error` hooks if an error occurs within a
Note that borgmatic runs the `on_error` hooks only for `create`, `prune`,
`compact`, or `check` actions or hooks in which an error occurs, and not other
actions. borgmatic does not run `on_error` hooks if an error occurs within a
`before_everything` or `after_everything` hook. For more about hooks, see the
[borgmatic hooks
documentation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/),
@ -137,14 +136,15 @@ URL" for your project. Here's an example:

```yaml
hooks:
    healthchecks: https://hc-ping.com/addffa72-da17-40ae-be9c-ff591afb942a
    healthchecks:
        ping_url: https://hc-ping.com/addffa72-da17-40ae-be9c-ff591afb942a
```

With this hook in place, borgmatic pings your Healthchecks project when a
backup begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Healthchecks know that it has started if any of
the `prune`, `create`, or `check` actions are run.
the `create`, `prune`, `compact`, or `check` actions are run.

Then, if the actions complete successfully, borgmatic notifies Healthchecks of
the success after the `after_backup` hooks run, and includes borgmatic logs in
@ -154,12 +154,15 @@ in the Healthchecks UI, although be aware that Healthchecks currently has a
If an error occurs during any action or hook, borgmatic notifies Healthchecks
after the `on_error` hooks run, also tacking on logs including the error
itself. But the logs are only included for errors that occur when a `prune`,
`create`, or `check` action is run.
itself. But the logs are only included for errors that occur when a `create`,
`prune`, `compact`, or `check` action is run.

You can customize the verbosity of the logs that are sent to Healthchecks with
borgmatic's `--monitoring-verbosity` flag. The `--files` and `--stats` flags
may also be of use. See `borgmatic --help` for more information.
borgmatic's `--monitoring-verbosity` flag. The `--list` and `--stats` flags
may also be of use. See `borgmatic create --help` for more information.
Additionally, see the [borgmatic configuration
file](https://torsion.org/borgmatic/docs/reference/configuration/) for
additional Healthchecks options.
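For instance, a command along these lines sends info-level logs to Healthchecks
while also listing files and stats locally (the particular verbosity levels
here are just an illustration):

```bash
borgmatic create --verbosity 1 --monitoring-verbosity 1 --list --stats
```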
You can configure Healthchecks to notify you by a [variety of
mechanisms](https://healthchecks.io/#welcome-integrations) when backups fail
@ -177,15 +180,16 @@ API URL" for your monitor. Here's an example:

```yaml
hooks:
    cronitor: https://cronitor.link/d3x0c1
    cronitor:
        ping_url: https://cronitor.link/d3x0c1
```

With this hook in place, borgmatic pings your Cronitor monitor when a backup
begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Cronitor know that it has started if any of the
`prune`, `create`, or `check` actions are run. Then, if the actions complete
successfully, borgmatic notifies Cronitor of the success after the
`prune`, `compact`, `create`, or `check` actions are run. Then, if the actions
complete successfully, borgmatic notifies Cronitor of the success after the
`after_backup` hooks run. And if an error occurs during any action or hook,
borgmatic notifies Cronitor after the `on_error` hooks run.
@ -205,15 +209,16 @@ URL" for your monitor. Here's an example:

```yaml
hooks:
    cronhub: https://cronhub.io/start/1f5e3410-254c-11e8-b61d-55875966d031
    cronhub:
        ping_url: https://cronhub.io/start/1f5e3410-254c-11e8-b61d-55875966d031
```

With this hook in place, borgmatic pings your Cronhub monitor when a backup
begins, ends, or errors. Specifically, after the <a
href="https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/">`before_backup`
hooks</a> run, borgmatic lets Cronhub know that it has started if any of the
`prune`, `create`, or `check` actions are run. Then, if the actions complete
successfully, borgmatic notifies Cronhub of the success after the
`prune`, `compact`, `create`, or `check` actions are run. Then, if the actions
complete successfully, borgmatic notifies Cronhub of the success after the
`after_backup` hooks run. And if an error occurs during any action or hook,
borgmatic notifies Cronhub after the `on_error` hooks run.
@ -247,14 +252,15 @@ Here's an example:

```yaml
hooks:
    pagerduty: a177cad45bd374409f78906a810a3074
    pagerduty:
        integration_key: a177cad45bd374409f78906a810a3074
```

With this hook in place, borgmatic creates a PagerDuty event for your service
whenever backups fail. Specifically, if an error occurs during a `create`,
`prune`, or `check` action, borgmatic sends an event to PagerDuty before the
`on_error` hooks run. Note that borgmatic does not contact PagerDuty when a
backup starts or ends without error.
`prune`, `compact`, or `check` action, borgmatic sends an event to PagerDuty
before the `on_error` hooks run. Note that borgmatic does not contact
PagerDuty when a backup starts or ends without error.

You can configure PagerDuty to notify you by a [variety of
mechanisms](https://support.pagerduty.com/docs/notifications) when backups
@ -264,51 +270,69 @@ If you have any issues with the integration, [please contact
us](https://torsion.org/borgmatic/#support-and-contributing).


## ntfy hook

[ntfy](https://ntfy.sh) is a free, simple service (either hosted or self-hosted)
that offers pub/sub push notifications to multiple platforms including
[web](https://ntfy.sh/stats), [Android](https://play.google.com/store/apps/details?id=io.heckel.ntfy)
and [iOS](https://apps.apple.com/us/app/ntfy/id1625396347).

Since push notifications for regular events might soon become quite annoying,
this hook fires only on errors by default in order to instantly alert you to issues.
The `states` list can override this.

As ntfy is unauthenticated, it isn't a suitable channel for any private information,
so the default messages are intentionally generic. These can be overridden, depending
on your risk assessment. Each `state` can have its own custom messages, priorities, and
tags; if none are provided, the defaults are used.

An example configuration is shown here, with all the available options, including
[priorities](https://ntfy.sh/docs/publish/#message-priority) and
[tags](https://ntfy.sh/docs/publish/#tags-emojis):

```yaml
hooks:
    ntfy:
        topic: my-unique-topic
        server: https://ntfy.my-domain.com
        start:
            title: A Borgmatic backup started
            message: Watch this space...
            tags: borgmatic
            priority: min
        finish:
            title: A Borgmatic backup completed successfully
            message: Nice!
            tags: borgmatic,+1
            priority: min
        fail:
            title: A Borgmatic backup failed
            message: You should probably fix it
            tags: borgmatic,-1,skull
            priority: max
        states:
            - start
            - finish
            - fail
```
## Scripting borgmatic
To consume the output of borgmatic in other software, you can include an
optional `--json` flag with `create`, `list`, or `info` to get the output
formatted as JSON.
optional `--json` flag with `create`, `rlist`, `rinfo`, or `info` to get the
output formatted as JSON.

Note that when you specify the `--json` flag, Borg's other non-JSON output is
suppressed so as not to interfere with the captured JSON. Also note that JSON
output only shows up at the console, and not in syslog.
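For example, one quick way to eyeball that JSON is to pretty-print it (piping
to `jq` here is just an illustration; any JSON consumer works):

```bash
borgmatic info --json | jq .
```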
## Related software

* [Borgmacator GNOME AppIndicator](https://github.com/N-Coder/borgmacator/)
### Successful backups
`borgmatic list` includes support for a `--successful` flag that only lists
successful (non-checkpoint) backups. This flag works via a basic heuristic: It
assumes that non-checkpoint archive names end with a digit (e.g. from a
timestamp), while checkpoint archive names do not. This means that if you're
using custom archive names that do not end in a digit, the `--successful` flag
will not work as expected.

Combined with a built-in Borg flag like `--last`, you can list the last
successful backup for use in your monitoring scripts. Here's an example
combined with `--json`:

```bash
borgmatic list --successful --last 1 --json
```

Note that this particular combination will only work if you've got a single
backup "series" in your repository. If you're instead backing up, say, from
multiple different hosts into a single repository, then you'll need to get
fancier with your archive listing. See `borg list --help` for more flags.


### Latest backups

All borgmatic actions that accept an "--archive" flag allow you to specify an
archive name of "latest". This lets you get the latest successful archive
without having to first run "borgmatic list" manually, which can be handy in
automated scripts. Here's an example:
All borgmatic actions that accept an `--archive` flag allow you to specify an
archive name of `latest`. This lets you get the latest archive without having
to first run `borgmatic rlist` manually, which can be handy in automated
scripts. Here's an example:

```bash
borgmatic info --archive latest
```

docs/how-to/provide-your-passwords.md (new file, 90 lines)

@ -0,0 +1,90 @@
---
title: How to provide your passwords
eleventyNavigation:
  key: 🔒 Provide your passwords
  parent: How-to guides
  order: 2
---

## Environment variable interpolation

If you want to use a Borg repository passphrase or database passwords with
borgmatic, you can set them directly in your borgmatic configuration file,
treating those secrets like any other option value. But if you'd rather store
them outside of borgmatic, whether for convenience or security reasons, read
on.

<span class="minilink minilink-addedin">New in version 1.6.4</span> borgmatic
supports interpolating arbitrary environment variables directly into option
values in your configuration file. That means you can instruct borgmatic to
pull your repository passphrase, your database passwords, or any other option
values from environment variables. For instance:

```yaml
storage:
    encryption_passphrase: ${MY_PASSPHRASE}
```

This uses the `MY_PASSPHRASE` environment variable as your encryption
passphrase. Note that the `{` `}` brackets are required. `$MY_PASSPHRASE` by
itself will not work.

In the case of `encryption_passphrase` in particular, an alternate approach
is to use Borg's `BORG_PASSPHRASE` environment variable, which doesn't even
require setting an explicit `encryption_passphrase` value in borgmatic's
configuration file.

For [database
configuration](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/),
the same approach applies. For example:

```yaml
hooks:
    postgresql_databases:
        - name: users
          password: ${MY_DATABASE_PASSWORD}
```

This uses the `MY_DATABASE_PASSWORD` environment variable as your database
password.
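Keep in mind that the variable has to exist in the environment borgmatic
actually runs in. Here's a sketch (the variable value is made up); if borgmatic
runs from cron or a systemd unit, set the variable in that environment instead:

```bash
export MY_DATABASE_PASSWORD='hunter2'
borgmatic create --verbosity 1
```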
### Interpolation defaults

If you'd like to set a default for your environment variables, you can do so with the following syntax:

```yaml
storage:
    encryption_passphrase: ${MY_PASSPHRASE:-defaultpass}
```

Here, "`defaultpass`" is the default passphrase if the `MY_PASSPHRASE`
environment variable is not set. Without a default, if the environment
variable doesn't exist, borgmatic will error.


### Disabling interpolation

To disable this environment variable interpolation feature entirely, you can
pass the `--no-environment-interpolation` flag on the command-line.

Or if you'd like to disable interpolation within a single option value, you
can escape it with a backslash. For instance, if your password is literally
`${A}@!`:

```yaml
storage:
    encryption_passphrase: \${A}@!
```

### Related features

Another way to override particular options within a borgmatic configuration
file is to use a [configuration
override](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-overrides)
on the command-line. But please be aware of the security implications of
specifying secrets on the command-line.

Additionally, borgmatic action hooks support their own [variable
interpolation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation),
although in that case it's for particular borgmatic runtime values rather than
(only) environment variables.
docs/how-to/run-arbitrary-borg-commands.md (new file, 96 lines)

@ -0,0 +1,96 @@
---
title: How to run arbitrary Borg commands
eleventyNavigation:
  key: 🔧 Run arbitrary Borg commands
  parent: How-to guides
  order: 11
---

## Running Borg with borgmatic

Borg has several commands (and options) that borgmatic does not currently
support. Sometimes though, as a borgmatic user, you may find yourself wanting
to take advantage of these off-the-beaten-path Borg features. You could of
course drop down to running Borg directly. But then you'd give up all the
niceties of your borgmatic configuration. You could file a [borgmatic
ticket](https://torsion.org/borgmatic/#issues) or even a [pull
request](https://torsion.org/borgmatic/#contributing) to add the feature. But
what if you need it *now*?

That's where borgmatic's support for running "arbitrary" Borg commands comes
in. Running Borg commands with borgmatic takes advantage of the following, all
based on your borgmatic configuration files or command-line arguments:

* configured repositories (automatically runs your Borg command once for each
  one)
* local and remote Borg binary paths
* SSH settings and Borg environment variables
* lock wait settings
* verbosity


### borg action

<span class="minilink minilink-addedin">New in version 1.5.15</span> The way
you run Borg with borgmatic is via the `borg` action. Here's a simple example:

```bash
borgmatic borg break-lock
```

(No `borg` action in borgmatic? Time to upgrade!)

This runs Borg's `break-lock` command once on each configured borgmatic
repository. Notice how the repository isn't present in the specified Borg
options, as that part is provided by borgmatic.

You can also specify Borg options for relevant commands:

```bash
borgmatic borg rlist --short
```

This runs Borg's `rlist` command once on each configured borgmatic repository.
(The native `borgmatic rlist` action should be preferred for most use.)

What if you only want to run Borg on a single configured borgmatic repository
when you've got several configured? Not a problem.

```bash
borgmatic borg --repository repo.borg break-lock
```

And what about a single archive?

```bash
borgmatic borg --archive your-archive-name rlist
```

### Limitations

borgmatic's `borg` action is not without limitations:

* The Borg command you want to run (`create`, `list`, etc.) *must* come first
  after the `borg` action. If you have any other Borg options to specify,
  provide them after. For instance, `borgmatic borg list --progress` will work,
  but `borgmatic borg --progress list` will not.
* borgmatic supplies the repository/archive name to Borg for you (based on
  your borgmatic configuration or the `borgmatic borg --repository`/`--archive`
  arguments), so do not specify the repository/archive otherwise.
* The `borg` action will not currently work for any Borg commands like `borg
  serve` that do not accept a repository/archive name.
* Do not specify any global borgmatic arguments to the right of the `borg`
  action. (They will be passed to Borg instead of borgmatic.) If you have
  global borgmatic arguments, specify them *before* the `borg` action.
* Unlike other borgmatic actions, you cannot combine the `borg` action with
  other borgmatic actions. This is to prevent ambiguity in commands like
  `borgmatic borg list`, in which `list` is both a valid Borg command and a
  borgmatic action. In this case, only the Borg command is run.
* Unlike normal borgmatic actions that support JSON, the `borg` action will
  not disable certain borgmatic logs to avoid interfering with JSON output.
* Unlike other borgmatic actions, the `borg` action captures (and logs) all
  output, so interactive prompts or flags like `--progress` will not work as
  expected.

In general, this `borgmatic borg` feature should be considered an escape
valve—a feature of second resort. In the long run, it's preferable to wrap
Borg commands with borgmatic actions that can support them fully.
@ -1,7 +1,7 @@
---
title: How to set up backups
eleventyNavigation:
  key: Set up backups
  key: 📥 Set up backups
  parent: How-to guides
  order: 0
---
@ -28,7 +28,7 @@ sudo pip3 install --user --upgrade borgmatic
This installs borgmatic and its commands at the `/root/.local/bin` path.

Your pip binary may have a different name than "pip3". Make sure you're using
Python 3, as borgmatic does not support Python 2.
Python 3.7+, as borgmatic does not support older versions of Python.

The next step is to ensure that borgmatic's commands are available on your
system `PATH`, so that you can run borgmatic:
@ -51,6 +51,11 @@ sudo borgmatic --version

If borgmatic is properly installed, that should output your borgmatic version.

As an alternative to adding the path to the `~/.bashrc` file, if you're using sudo
to run borgmatic, you can configure [sudo's
`secure_path` option](https://man.archlinux.org/man/sudoers.5) to include
borgmatic's path.
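For instance, assuming borgmatic lives in `/root/.local/bin`, a sketch of that
`secure_path` change (edit via `visudo`; your distribution's existing value
will differ):

```bash
# Run visudo and extend the Defaults line, for example:
#   Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.local/bin"
sudo visudo
```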
### Global install option
@ -68,7 +73,7 @@ sudo pip3 install --upgrade borgmatic

The main downside of a global install is that borgmatic is less cleanly
separated from the rest of your Python software, and there's the theoretical
possibility of libary conflicts. But if you're okay with that, for instance
possibility of library conflicts. But if you're okay with that, for instance
on a relatively dedicated system, then a global install can work out fine.
@ -77,8 +82,8 @@ on a relatively dedicated system, then a global install can work out fine.
Besides the approaches described above, there are several other options for
installing borgmatic:

* [Docker image with scheduled backups](https://hub.docker.com/r/b3vis/borgmatic/)
* [Docker base image](https://hub.docker.com/r/monachus/borgmatic/)
* [Docker image with scheduled backups](https://hub.docker.com/r/b3vis/borgmatic/) (+ Docker Compose files)
* [Docker image with multi-arch and Docker CLI support](https://hub.docker.com/r/modem7/borgmatic-docker/)
* [Debian](https://tracker.debian.org/pkg/borgmatic)
* [Ubuntu](https://launchpad.net/ubuntu/+source/borgmatic)
* [Fedora official](https://bodhi.fedoraproject.org/updates/?search=borgmatic)
@ -87,23 +92,28 @@ installing borgmatic:
* [Alpine Linux](https://pkgs.alpinelinux.org/packages?name=borgmatic)
* [OpenBSD](http://ports.su/sysutils/borgmatic)
* [openSUSE](https://software.opensuse.org/package/borgmatic)
* [stand-alone binary](https://github.com/cmarquardt/borgmatic-binary)
* [macOS (via Homebrew)](https://formulae.brew.sh/formula/borgmatic)
* [macOS (via MacPorts)](https://ports.macports.org/port/borgmatic/)
* [Ansible role](https://github.com/borgbase/ansible-role-borgbackup)
* [virtualenv](https://virtualenv.pypa.io/en/stable/)


## Hosting providers

Need somewhere to store your encrypted offsite backups? The following hosting
providers include specific support for Borg/borgmatic. Using these links and
services helps support borgmatic development and hosting. (These are referral
links, but without any tracking scripts or cookies.)
Need somewhere to store your encrypted off-site backups? The following hosting
providers include specific support for Borg/borgmatic—and fund borgmatic
development and hosting when you use these links to sign up. (These are
referral links, but without any tracking scripts or cookies.)

<ul>
<li class="referral"><a href="https://www.rsync.net/cgi-bin/borg.cgi?campaign=borg&adgroup=borgmatic">rsync.net</a>: Cloud Storage provider with full support for borg and any other SSH/SFTP tool</li>
<li class="referral"><a href="https://www.borgbase.com/?utm_source=borgmatic">BorgBase</a>: Borg hosting service with support for monitoring, 2FA, and append-only repos</li>
<li class="referral"><a href="https://storage.lima-labs.com/special-pricing-offer-for-borgmatic-users/">Lima-Labs</a>: Affordable, reliable cloud data storage accessible via SSH/SCP/FTP for Borg backups or any other bulk storage needs</li>
</ul>

Additionally, [rsync.net](https://www.rsync.net/products/borg.html) and
[Hetzner](https://www.hetzner.com/storage/storage-box) have compatible storage
offerings, but do not currently fund borgmatic development or hosting.
## Configuration
After you install borgmatic, generate a sample configuration file:
@ -139,8 +149,8 @@ configuration](https://torsion.org/borgmatic/docs/how-to/upgrade/#upgrading-your

### Encryption

If you encrypt your Borg repository with a passphrase instead of a key file,
you'll either need to set the borgmatic `encryption_passphrase` configuration
If you encrypt your Borg repository with a passphrase or a key file, you'll
either need to set the borgmatic `encryption_passphrase` configuration
variable or set the `BORG_PASSPHRASE` environment variable. See the
[repository encryption
section](https://borgbackup.readthedocs.io/en/stable/quickstart.html#repository-encryption)
@ -177,32 +187,39 @@ files via configuration management, or you want to double check that your hand
edits are valid.

## Initialization
## Repository creation

Before you can create backups with borgmatic, you first need to initialize a
Borg repository so you have a destination for your backup archives. (But skip
this step if you already have a Borg repository.) To create a repository, run
a command like the following:
Before you can create backups with borgmatic, you first need to create a Borg
repository so you have a destination for your backup archives. (But skip this
step if you already have a Borg repository.) To create a repository, run a
command like the following with Borg 1.x:

```bash
sudo borgmatic init --encryption repokey
```

(No borgmatic `init` action? Try the old-style `--init` flag, or upgrade
borgmatic!)
<span class="minilink minilink-addedin">New in borgmatic version 1.7.0</span>
Or, with Borg 2.x:

```bash
sudo borgmatic rcreate --encryption repokey-aes-ocb
```

(Note that `repokey-chacha20-poly1305` may be faster than `repokey-aes-ocb` on
certain platforms like ARM64.)

This uses the borgmatic configuration file you created above to determine
which local or remote repository to create, and encrypts it with the
encryption passphrase specified there if one is provided. Read about [Borg
encryption
modes](https://borgbackup.readthedocs.io/en/stable/usage/init.html#encryption-modes)
modes](https://borgbackup.readthedocs.io/en/stable/usage/init.html#encryption-mode-tldr)
for the menu of available encryption modes.

Also, optionally check out the [Borg Quick
Start](https://borgbackup.readthedocs.org/en/stable/quickstart.html) for more
background about repository initialization.
background about repository creation.

Note that borgmatic skips repository initialization if the repository already
Note that borgmatic skips repository creation if the repository already
exists. This supports use cases like ensuring a repository exists prior to
performing a backup.
@ -212,29 +229,42 @@ key-based SSH access to the desired user account on the remote host.

## Backups

Now that you've configured borgmatic and initialized a repository, it's a
good idea to test that borgmatic is working. So to run borgmatic and start a
Now that you've configured borgmatic and created a repository, it's a good
idea to test that borgmatic is working. So to run borgmatic and start a
backup, you can invoke it like this:

```bash
sudo borgmatic --verbosity 1 --files
sudo borgmatic create --verbosity 1 --list --stats
```

(No borgmatic `--files` flag? It's only present in newer versions of
borgmatic. So try leaving it out, or upgrade borgmatic!)
(No borgmatic `--list` flag? Try `--files` instead, leave it out, or upgrade
borgmatic!)

By default, this will also prune any old backups as per the configured
retention policy, and check backups for consistency problems due to things
like file damage.
The `--verbosity` flag makes borgmatic show the steps it's performing. The
`--list` flag lists each file that's new or changed since the last backup. And
`--stats` shows summary information about the created archive. All of these
flags are optional.

The verbosity flag makes borgmatic show the steps it's performing. And the
files flag lists each file that's new or changed since the last backup.
Eyeball the list and see if it matches your expectations based on the
configuration.
As the command runs, you should eyeball the output to see if it matches your
expectations based on your configuration.

If you'd like to specify an alternate configuration file path, use the
`--config` flag. See `borgmatic --help` for more information.
`--config` flag.

See `borgmatic --help` and `borgmatic create --help` for more information.


## Default actions

If you omit `create` and other actions, borgmatic runs through a set of
default actions: `prune` any old backups as per the configured retention
policy, `compact` segments to free up space (with Borg 1.2+, borgmatic
1.5.23+), `create` a backup, *and* `check` backups for consistency problems
due to things like file damage. For instance:

```bash
sudo borgmatic --verbosity 1 --list --stats
```

## Autopilot
@ -245,7 +275,7 @@ that, you can configure a separate job runner to invoke it periodically.
### cron

If you're using cron, download the [sample cron
file](https://projects.torsion.org/witten/borgmatic/src/master/sample/cron/borgmatic).
file](https://projects.torsion.org/borgmatic-collective/borgmatic/src/master/sample/cron/borgmatic).
Then, from the directory where you downloaded it:

```bash
@ -253,15 +283,26 @@ sudo mv borgmatic /etc/cron.d/borgmatic
sudo chmod +x /etc/cron.d/borgmatic
```

You can modify the cron file if you'd like to run borgmatic more or less frequently.
If borgmatic is installed at a different location than
`/root/.local/bin/borgmatic`, edit the cron file with the correct path. You
can also modify the cron file if you'd like to run borgmatic more or less
frequently.

### systemd

If you're using systemd instead of cron to run jobs, download the [sample
systemd service
file](https://projects.torsion.org/witten/borgmatic/raw/branch/master/sample/systemd/borgmatic.service)
If you're using systemd instead of cron to run jobs, you can still configure
borgmatic to run automatically.

(If you installed borgmatic from [Other ways to
install](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#other-ways-to-install),
you may already have borgmatic systemd service and timer files. If so, you may
be able to skip some of the steps below.)

First, download the [sample systemd service
file](https://projects.torsion.org/borgmatic-collective/borgmatic/raw/branch/master/sample/systemd/borgmatic.service)
and the [sample systemd timer
file](https://projects.torsion.org/witten/borgmatic/raw/branch/master/sample/systemd/borgmatic.timer).
file](https://projects.torsion.org/borgmatic-collective/borgmatic/raw/branch/master/sample/systemd/borgmatic.timer).

Then, from the directory where you downloaded them:

```bash
@ -281,12 +322,46 @@ borgmatic to run.
If you run borgmatic in macOS with launchd, you may encounter permissions
issues when reading files to back up. If that happens to you, you may be
interested in an [unofficial work-around for Full Disk
Access](https://projects.torsion.org/witten/borgmatic/issues/293).
Access](https://projects.torsion.org/borgmatic-collective/borgmatic/issues/293).


## Colored output
## Niceties

Borgmatic produces colored terminal output by default. It is disabled when a

### Shell completion

borgmatic includes a shell completion script (currently only for Bash) to
support tab-completing borgmatic command-line actions and flags. Depending on
how you installed borgmatic, this may be enabled by default. But if it's not,
start by installing the `bash-completion` Linux package or the
[`bash-completion@2`](https://formulae.brew.sh/formula/bash-completion@2)
macOS Homebrew formula. Then, install the shell completion script globally:

```bash
sudo su -c "borgmatic --bash-completion > $(pkg-config --variable=completionsdir bash-completion)/borgmatic"
```

If you don't have `pkg-config` installed, you can try the following path
instead:

```bash
sudo su -c "borgmatic --bash-completion > /usr/share/bash-completion/completions/borgmatic"
```

Or, if you'd like to install the script for only the current user:

```bash
mkdir --parents ~/.local/share/bash-completion/completions
borgmatic --bash-completion > ~/.local/share/bash-completion/completions/borgmatic
```

Finally, restart your shell (`exit` and open a new shell) so the completions
take effect.


### Colored output

borgmatic produces colored terminal output by default. It is disabled when a
non-interactive terminal is detected (like a cron job), or when you use the
`--json` flag. Otherwise, you can disable it by passing the `--no-color` flag,
setting the environment variable `PY_COLORS=False`, or setting the `color`
@ -1,11 +1,11 @@
---
title: How to upgrade borgmatic
title: How to upgrade borgmatic and Borg
eleventyNavigation:
  key: Upgrade borgmatic
  key: 📦 Upgrade borgmatic/Borg
  parent: How-to guides
  order: 10
  order: 12
---
## Upgrading
## Upgrading borgmatic

In general, all you should need to do to upgrade borgmatic is run the
following:
@ -115,3 +115,98 @@ sudo pip3 install --user borgmatic

That's it! borgmatic will continue using your /etc/borgmatic configuration
files.


## Upgrading Borg

To upgrade to a new version of Borg, you can generally install a new version
the same way you installed the previous version, paying attention to any
instructions included with each Borg release changelog linked from the
[releases page](https://github.com/borgbackup/borg/releases). Some more major
Borg releases require additional steps that borgmatic can help with.


### Borg 1.2 to 2.0

<span class="minilink minilink-addedin">New in borgmatic version 1.7.0</span>
Upgrading Borg from 1.2 to 2.0 requires manually upgrading your existing Borg
1 repositories before use with Borg or borgmatic. Here's how you can
accomplish that.

Start by upgrading borgmatic as described above to at least version 1.7.0 and
Borg to 2.0. Then, rename your repository in borgmatic's configuration file to
a new repository path. The repository upgrade process does not occur
in-place; you'll create a new repository with a copy of your old repository's
data.

Let's say your original borgmatic repository configuration file looks something
like this:

```yaml
location:
    repositories:
        - original.borg
```

Change it to a new (not yet created) repository path:

```yaml
location:
    repositories:
        - upgraded.borg
```

Then, run the `rcreate` action (formerly `init`) to create that new Borg 2
repository:

```bash
borgmatic rcreate --verbosity 1 --encryption repokey-blake2-aes-ocb \
    --source-repository original.borg --repository upgraded.borg
```

This creates an empty repository and doesn't actually transfer any data yet.
The `--source-repository` flag is necessary to reuse key material from your
Borg 1 repository so that the subsequent data transfer can work.

The `--encryption` value above selects the same chunk ID algorithm (`blake2`)
commonly used in Borg 1, thereby making deduplication work across transferred
archives and new archives.

If you get an error about "You must keep the same ID hash" from Borg, that
means the encryption value you specified doesn't correspond to your source
repository's chunk ID algorithm. In that case, try not using `blake2`:

```bash
borgmatic rcreate --verbosity 1 --encryption repokey-aes-ocb \
    --source-repository original.borg --repository upgraded.borg
```

Read about [Borg encryption
modes](https://borgbackup.readthedocs.io/en/2.0.0b5/usage/rcreate.html#encryption-mode-tldr)
for more details.

To transfer data from your original Borg 1 repository to your newly created
Borg 2 repository:

```bash
borgmatic transfer --verbosity 1 --upgrader From12To20 --source-repository \
    original.borg --repository upgraded.borg --dry-run
borgmatic transfer --verbosity 1 --upgrader From12To20 --source-repository \
    original.borg --repository upgraded.borg
borgmatic transfer --verbosity 1 --upgrader From12To20 --source-repository \
    original.borg --repository upgraded.borg --dry-run
```

The first command with `--dry-run` tells you what Borg is going to do during
the transfer, the second command actually performs the transfer/upgrade (this
might take a while), and the final command with `--dry-run` again provides
confirmation of success—or tells you if something hasn't been transferred yet.

Note that by omitting the `--upgrader` flag, you can also do archive transfers
between related Borg 2 repositories without upgrading, even down to individual
archives. For more on that functionality, see the [Borg transfer
documentation](https://borgbackup.readthedocs.io/en/2.0.0b5/usage/transfer.html).

That's it! Now you can use your new Borg 2 repository as normal with
borgmatic. If you've got multiple repositories, repeat the above process for
each.
|
@ -1,7 +1,7 @@
---
title: Command-line reference
eleventyNavigation:
  key: Command-line reference
  key: ⌨️ Command-line reference
  parent: Reference guides
  order: 1
---

@ -13,9 +13,3 @@ each action sub-command:
```
{% include borgmatic/command-line.txt %}
```

## Related documentation

* [Set up backups with borgmatic](https://torsion.org/borgmatic/docs/how-to/set-up-backups/)
* [borgmatic configuration reference](https://torsion.org/borgmatic/docs/reference/configuration/)
@ -1,7 +1,7 @@
---
title: Configuration reference
eleventyNavigation:
  key: Configuration reference
  key: ⚙️ Configuration reference
  parent: Reference guides
  order: 0
---

@ -15,9 +15,3 @@ Here is a full sample borgmatic configuration file including all available optio

Note that you can also [download this configuration
file](https://torsion.org/borgmatic/docs/reference/config.yaml) for use locally.

## Related documentation

* [Set up backups with borgmatic](https://torsion.org/borgmatic/docs/how-to/set-up-backups/)
* [borgmatic command-line reference](https://torsion.org/borgmatic/docs/reference/command-line/)
Some files were not shown because too many files have changed in this diff.