LVM snapshots (#80). #949

NEWS (8)

@@ -1,8 +1,14 @@
1.9.4.dev0
* #80 (beta): Add an LVM hook for snapshotting and backing up LVM logical volumes. See the
  documentation for more information:
  https://torsion.org/borgmatic/docs/how-to/snapshot-your-filesystems/
* #251 (beta): Add a Btrfs hook for snapshotting and backing up Btrfs subvolumes. See the
  documentation for more information:
  https://torsion.org/borgmatic/docs/how-to/snapshot-your-filesystems/
* #926: Fix library error when running within a PyInstaller bundle.
* #926: Fix a library error when running within a PyInstaller bundle.
* #950: Fix a snapshot unmount error in the ZFS hook when using nested datasets.
* Update the ZFS hook to discover and snapshot ZFS datasets even if they are parent/grandparent
  directories of your source directories.
* Reorganize data source and monitoring hooks to make developing new hooks easier.

1.9.3
@@ -63,6 +63,7 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
<a href="https://sqlite.org/"><img src="docs/static/sqlite.png" alt="SQLite" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://openzfs.org/"><img src="docs/static/openzfs.png" alt="OpenZFS" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://btrfs.readthedocs.io/"><img src="docs/static/btrfs.png" alt="Btrfs" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://sourceware.org/lvm2/"><img src="docs/static/lvm.png" alt="LVM" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://rclone.org"><img src="docs/static/rclone.png" alt="rclone" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
<a href="https://uptime.kuma.pet/"><img src="docs/static/uptimekuma.png" alt="Uptime Kuma" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
@@ -2304,3 +2304,50 @@ properties:
                example: /usr/local/bin/findmnt
        description: |
            Configuration for integration with the Btrfs filesystem.
    lvm:
        type: ["object", "null"]
        additionalProperties: false
        properties:
            snapshot_size:
                type: string
                description: |
                    Size to allocate for each snapshot taken, including the
                    units to use for that size. Defaults to "10%ORIGIN" (10%
                    of the size of the logical volume being snapshotted). See the
                    lvcreate "--size" and "--extents" documentation for more
                    information:
                    https://www.man7.org/linux/man-pages/man8/lvcreate.8.html
                example: 5GB
            lvcreate_command:
                type: string
                description: |
                    Command to use instead of "lvcreate".
                example: /usr/local/bin/lvcreate
            lvremove_command:
                type: string
                description: |
                    Command to use instead of "lvremove".
                example: /usr/local/bin/lvremove
            lvs_command:
                type: string
                description: |
                    Command to use instead of "lvs".
                example: /usr/local/bin/lvs
            lsblk_command:
                type: string
                description: |
                    Command to use instead of "lsblk".
                example: /usr/local/bin/lsblk
            mount_command:
                type: string
                description: |
                    Command to use instead of "mount".
                example: /usr/local/bin/mount
            umount_command:
                type: string
                description: |
                    Command to use instead of "umount".
                example: /usr/local/bin/umount
        description: |
            Configuration for integration with Linux LVM (Logical Volume
            Manager).
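For reference, the way the hook options above resolve to actual commands can be sketched in a few lines of Python. This is a hypothetical illustration (not borgmatic's actual option parsing): each `*_command` option simply falls back to the bare command name when unset.

```python
# Hypothetical sketch: resolve each configured LVM hook command, falling back to
# the bare command name when an option isn't set in the "lvm:" config section.
def resolve_lvm_commands(hook_config):
    return {
        option: hook_config.get(option, default)
        for option, default in (
            ('lvcreate_command', 'lvcreate'),
            ('lvremove_command', 'lvremove'),
            ('lvs_command', 'lvs'),
            ('lsblk_command', 'lsblk'),
            ('mount_command', 'mount'),
            ('umount_command', 'umount'),
        )
    }

# Overriding one option leaves the rest at their defaults.
commands = resolve_lvm_commands({'lvs_command': '/usr/local/bin/lvs'})
```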
@@ -1,3 +1,4 @@
import collections
import glob
import logging
import os
@@ -6,6 +7,7 @@ import subprocess

import borgmatic.config.paths
import borgmatic.execute
import borgmatic.hooks.data_source.snapshot

logger = logging.getLogger(__name__)

@@ -22,9 +24,10 @@ def get_filesystem_mount_points(findmnt_command):
    Given a findmnt command to run, get all top-level Btrfs filesystem mount points.
    '''
    findmnt_output = borgmatic.execute.execute_command_and_capture_output(
        (
            findmnt_command,
            '-nt',
        tuple(findmnt_command.split(' '))
        + (
            '-n',  # No headings.
            '-t',  # Filesystem type.
            'btrfs',
        )
    )
@@ -34,55 +37,70 @@ def get_filesystem_mount_points(findmnt_command):

def get_subvolumes_for_filesystem(btrfs_command, filesystem_mount_point):
    '''
    Given a Btrfs command to run and a Btrfs filesystem mount point, get the subvolumes for that
    filesystem.
    Given a Btrfs command to run and a Btrfs filesystem mount point, get the sorted subvolumes for
    that filesystem. Include the filesystem itself.
    '''
    btrfs_output = borgmatic.execute.execute_command_and_capture_output(
        (
            btrfs_command,
        tuple(btrfs_command.split(' '))
        + (
            'subvolume',
            'list',
            filesystem_mount_point,
        )
    )

    return tuple(
        subvolume_path
        for line in btrfs_output.splitlines()
        for subvolume_subpath in (line.rstrip().split(' ')[-1],)
        for subvolume_path in (os.path.join(filesystem_mount_point, subvolume_subpath),)
        if subvolume_subpath.strip()
        if filesystem_mount_point.strip()
    if not filesystem_mount_point.strip():
        return ()

    return (filesystem_mount_point,) + tuple(
        sorted(
            subvolume_path
            for line in btrfs_output.splitlines()
            for subvolume_subpath in (line.rstrip().split(' ')[-1],)
            for subvolume_path in (os.path.join(filesystem_mount_point, subvolume_subpath),)
            if subvolume_subpath.strip()
        )
    )


Subvolume = collections.namedtuple(
    'Subvolume', ('path', 'contained_source_directories'), defaults=((),)
)


def get_subvolumes(btrfs_command, findmnt_command, source_directories=None):
    '''
    Given a Btrfs command to run and a sequence of configured source directories, find the
    intersection between the current Btrfs filesystem and subvolume mount points and the configured
    borgmatic source directories. The idea is that these are the requested subvolumes to snapshot.

    If the source directories is None, then return all subvolumes.
    If the source directories is None, then return all subvolumes, sorted by path.

    Return the result as a sequence of matching subvolume mount points.
    '''
    source_directories_lookup = set(source_directories or ())
    candidate_source_directories = set(source_directories or ())
    subvolumes = []

    # For each filesystem mount point, find its subvolumes and match them against the given source
    # directories to find the subvolumes to back up. Also try to match the filesystem mount point
    # itself (since it's implicitly a subvolume).
    # directories to find the subvolumes to back up. And within this loop, sort the subvolumes from
    # longest to shortest mount points, so longer mount points get a whack at the candidate source
    # directory piñata before their parents do. (Source directories are consumed during this
    # process, so no two subvolumes get the same contained source directories.)
    for mount_point in get_filesystem_mount_points(findmnt_command):
        if source_directories is None or mount_point in source_directories_lookup:
            subvolumes.append(mount_point)

        subvolumes.extend(
            subvolume_path
            for subvolume_path in get_subvolumes_for_filesystem(btrfs_command, mount_point)
            if source_directories is None or subvolume_path in source_directories_lookup
            Subvolume(subvolume_path, contained_source_directories)
            for subvolume_path in reversed(
                get_subvolumes_for_filesystem(btrfs_command, mount_point)
            )
            for contained_source_directories in (
                borgmatic.hooks.data_source.snapshot.get_contained_directories(
                    subvolume_path, candidate_source_directories
                ),
            )
            if source_directories is None or contained_source_directories
        )

    return tuple(subvolumes)
    return tuple(sorted(subvolumes, key=lambda subvolume: subvolume.path))
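The longest-mount-point-first matching strategy described in the comment above can be sketched in isolation. This is a simplified stand-in (the paths and the helper are illustrative, not borgmatic's actual code): child subvolumes claim contained source directories before their parents can, and the candidate set is consumed as it goes.

```python
import pathlib

# Simplified sketch: walk subvolume mount points from longest to shortest so a
# child subvolume (e.g. /mnt/subvolume) claims a contained source directory
# before its parent (/mnt) can. Paths below are made up for illustration.
def assign_source_directories(subvolume_paths, source_directories):
    candidates = set(source_directories)
    assignments = {}

    for subvolume_path in sorted(subvolume_paths, reverse=True):
        contained = tuple(
            candidate
            for candidate in candidates
            if pathlib.PurePath(subvolume_path) == pathlib.PurePath(candidate)
            or pathlib.PurePath(subvolume_path) in pathlib.PurePath(candidate).parents
        )
        candidates -= set(contained)  # Consume, so parents can't re-claim.

        if contained:
            assignments[subvolume_path] = contained

    return assignments

assignments = assign_source_directories(
    ('/mnt', '/mnt/subvolume'), ('/mnt/subvolume/data', '/mnt/other')
)
# → {'/mnt/subvolume': ('/mnt/subvolume/data',), '/mnt': ('/mnt/other',)}
```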

BORGMATIC_SNAPSHOT_PREFIX = '.borgmatic-snapshot-'
@@ -95,7 +113,6 @@ def make_snapshot_path(subvolume_path):  # pragma: no cover
    return os.path.join(
        subvolume_path,
        f'{BORGMATIC_SNAPSHOT_PREFIX}{os.getpid()}',
        '.',  # Borg 1.4+ "slashdot" hack.
        # Included so that the snapshot ends up in the Borg archive at the "original" subvolume
        # path.
        subvolume_path.lstrip(os.path.sep),
@@ -129,6 +146,20 @@ def make_snapshot_exclude_path(subvolume_path):  # pragma: no cover
    )


def make_borg_source_directory_path(subvolume_path, source_directory):  # pragma: no cover
    '''
    Given the path to a subvolume and a source directory inside it, make a corresponding path for
    the source directory within a snapshot path intended for giving to Borg.
    '''
    return os.path.join(
        subvolume_path,
        f'{BORGMATIC_SNAPSHOT_PREFIX}{os.getpid()}',
        '.',  # Borg 1.4+ "slashdot" hack.
        # Included so that the source directory ends up in the Borg archive at its "original" path.
        source_directory.lstrip(os.path.sep),
    )


def snapshot_subvolume(btrfs_command, subvolume_path, snapshot_path):  # pragma: no cover
    '''
    Given a Btrfs command to run, the path to a subvolume, and the path for a snapshot, create a new
@@ -137,8 +168,8 @@ def snapshot_subvolume(btrfs_command, subvolume_path, snapshot_path):  # pragma:
    os.makedirs(os.path.dirname(snapshot_path), mode=0o700, exist_ok=True)

    borgmatic.execute.execute_command(
        (
            btrfs_command,
        tuple(btrfs_command.split(' '))
        + (
            'subvolume',
            'snapshot',
            '-r',  # Read-only.
@@ -182,21 +213,27 @@ def dump_data_sources(
        logger.warning(f'{log_prefix}: No Btrfs subvolumes found to snapshot{dry_run_label}')

    # Snapshot each subvolume, rewriting source directories to use their snapshot paths.
    for subvolume_path in subvolumes:
        logger.debug(f'{log_prefix}: Creating Btrfs snapshot for {subvolume_path} subvolume')
    for subvolume in subvolumes:
        logger.debug(f'{log_prefix}: Creating Btrfs snapshot for {subvolume.path} subvolume')

        snapshot_path = make_snapshot_path(subvolume_path)
        snapshot_path = make_snapshot_path(subvolume.path)

        if dry_run:
            continue

        snapshot_subvolume(btrfs_command, subvolume_path, snapshot_path)
        snapshot_subvolume(btrfs_command, subvolume.path, snapshot_path)

        if subvolume_path in source_directories:
            source_directories.remove(subvolume_path)
        for source_directory in subvolume.contained_source_directories:
            try:
                source_directories.remove(source_directory)
            except ValueError:
                pass

        source_directories.append(snapshot_path)
    config.setdefault('exclude_patterns', []).append(make_snapshot_exclude_path(subvolume_path))
            source_directories.append(
                make_borg_source_directory_path(subvolume.path, source_directory)
            )

        config.setdefault('exclude_patterns', []).append(make_snapshot_exclude_path(subvolume.path))

    return []

@@ -206,8 +243,8 @@ def delete_snapshot(btrfs_command, snapshot_path):  # pragma: no cover
    Given a Btrfs command to run and the name of a snapshot path, delete it.
    '''
    borgmatic.execute.execute_command(
        (
            btrfs_command,
        tuple(btrfs_command.split(' '))
        + (
            'subvolume',
            'delete',
            snapshot_path,
@@ -228,7 +265,7 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
    findmnt_command = hook_config.get('findmnt_command', 'findmnt')

    try:
        all_subvolume_paths = get_subvolumes(btrfs_command, findmnt_command)
        all_subvolumes = get_subvolumes(btrfs_command, findmnt_command)
    except FileNotFoundError as error:
        logger.debug(f'{log_prefix}: Could not find "{error.filename}" command')
        return
@@ -236,9 +273,11 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
        logger.debug(f'{log_prefix}: {error}')
        return

    for subvolume_path in all_subvolume_paths:
    # Reversing the sorted subvolumes ensures that we remove longer mount point paths of child
    # subvolumes before the shorter mount point paths of parent subvolumes.
    for subvolume in reversed(all_subvolumes):
        subvolume_snapshots_glob = borgmatic.config.paths.replace_temporary_subdirectory_with_glob(
            os.path.normpath(make_snapshot_path(subvolume_path)),
            os.path.normpath(make_snapshot_path(subvolume.path)),
            temporary_directory_prefix=BORGMATIC_SNAPSHOT_PREFIX,
        )

@@ -266,7 +305,7 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_

        # Strip off the subvolume path from the end of the snapshot path and then delete the
        # resulting directory.
        shutil.rmtree(snapshot_path.rsplit(subvolume_path, 1)[0])
        shutil.rmtree(snapshot_path.rsplit(subvolume.path, 1)[0])


def make_data_source_dump_patterns(
borgmatic/hooks/data_source/lvm.py (new file, 400 lines)

@@ -0,0 +1,400 @@
import collections
import glob
import json
import logging
import os
import shutil
import subprocess

import borgmatic.config.paths
import borgmatic.execute
import borgmatic.hooks.data_source.snapshot

logger = logging.getLogger(__name__)


def use_streaming(hook_config, config, log_prefix):  # pragma: no cover
    '''
    Return whether dump streaming is used for this hook. (Spoiler: It isn't.)
    '''
    return False


BORGMATIC_SNAPSHOT_PREFIX = 'borgmatic-'
Logical_volume = collections.namedtuple(
    'Logical_volume', ('name', 'device_path', 'mount_point', 'contained_source_directories')
)


def get_logical_volumes(lsblk_command, source_directories=None):
    '''
    Given an lsblk command to run and a sequence of configured source directories, find the
    intersection between the current LVM logical volume mount points and the configured borgmatic
    source directories. The idea is that these are the requested logical volumes to snapshot.

    If source directories is None, include all logical volume mount points, not just those in
    source directories.

    Return the result as a sequence of Logical_volume instances.
    '''
    try:
        devices_info = json.loads(
            borgmatic.execute.execute_command_and_capture_output(
                # Use lsblk instead of lvs here because lvs can't show active mounts.
                tuple(lsblk_command.split(' '))
                + (
                    '--output',
                    'name,path,mountpoint,type',
                    '--json',
                    '--list',
                )
            )
        )
    except json.JSONDecodeError as error:
        raise ValueError(f'Invalid {lsblk_command} JSON output: {error}')

    candidate_source_directories = set(source_directories or ())

    try:
        return tuple(
            Logical_volume(
                device['name'], device['path'], device['mountpoint'], contained_source_directories
            )
            for device in devices_info['blockdevices']
            if device['mountpoint'] and device['type'] == 'lvm'
            for contained_source_directories in (
                borgmatic.hooks.data_source.snapshot.get_contained_directories(
                    device['mountpoint'], candidate_source_directories
                ),
            )
            if not source_directories or contained_source_directories
        )
    except KeyError as error:
        raise ValueError(f'Invalid {lsblk_command} output: Missing key "{error}"')

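The lsblk filtering above can be demonstrated against canned JSON rather than a real `lsblk --json` invocation. The sample devices below are made up; the filter mirrors the one in get_logical_volumes(): keep only devices that are both mounted and of type "lvm".

```python
import json

# Canned sample of `lsblk --output name,path,mountpoint,type --json --list`
# output (hypothetical devices, for illustration only).
lsblk_json = '''{
    "blockdevices": [
        {"name": "vgroup-lvolume", "path": "/dev/mapper/vgroup-lvolume",
         "mountpoint": "/mnt/lvolume", "type": "lvm"},
        {"name": "sda1", "path": "/dev/sda1", "mountpoint": "/boot", "type": "part"},
        {"name": "vgroup-swap", "path": "/dev/mapper/vgroup-swap",
         "mountpoint": null, "type": "lvm"}
    ]
}'''

devices_info = json.loads(lsblk_json)

# Keep only mounted LVM logical volumes, mirroring get_logical_volumes() above.
mounted_logical_volumes = tuple(
    (device['name'], device['mountpoint'])
    for device in devices_info['blockdevices']
    if device['mountpoint'] and device['type'] == 'lvm'
)
# → (('vgroup-lvolume', '/mnt/lvolume'),)
```

Note that the swap volume drops out because its `mountpoint` is null, and the partition drops out because its `type` isn't "lvm".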

def snapshot_logical_volume(
    lvcreate_command,
    snapshot_name,
    logical_volume_device,
    snapshot_size,
):
    '''
    Given an lvcreate command to run, a snapshot name, the path to the logical volume device to
    snapshot, and a snapshot size string, create a new LVM snapshot.
    '''
    borgmatic.execute.execute_command(
        tuple(lvcreate_command.split(' '))
        + (
            '--snapshot',
            ('--extents' if '%' in snapshot_size else '--size'),
            snapshot_size,
            '--name',
            snapshot_name,
            logical_volume_device,
        ),
        output_log_level=logging.DEBUG,
    )

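The size-flag choice in snapshot_logical_volume() above comes down to one expression, sketched here on its own: percentage sizes (like the "10%ORIGIN" default) go to lvcreate's `--extents` option, while absolute sizes go to `--size`.

```python
# Mirror of the flag choice above: lvcreate takes percentage-based sizes via
# --extents and absolute sizes via --size.
def size_flag(snapshot_size):
    return '--extents' if '%' in snapshot_size else '--size'

# The default "10%ORIGIN" is a percentage of the origin volume's size.
assert size_flag('10%ORIGIN') == '--extents'
assert size_flag('5GB') == '--size'
```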

def mount_snapshot(mount_command, snapshot_device, snapshot_mount_path):  # pragma: no cover
    '''
    Given a mount command to run, the device path for an existing snapshot, and the path where the
    snapshot should be mounted, mount the snapshot as read-only (making any necessary directories
    first).
    '''
    os.makedirs(snapshot_mount_path, mode=0o700, exist_ok=True)

    borgmatic.execute.execute_command(
        tuple(mount_command.split(' '))
        + (
            '-o',
            'ro',
            snapshot_device,
            snapshot_mount_path,
        ),
        output_log_level=logging.DEBUG,
    )


DEFAULT_SNAPSHOT_SIZE = '10%ORIGIN'


def dump_data_sources(
    hook_config,
    config,
    log_prefix,
    config_paths,
    borgmatic_runtime_directory,
    source_directories,
    dry_run,
):
    '''
    Given an LVM configuration dict, a configuration dict, a log prefix, the borgmatic configuration
    file paths, the borgmatic runtime directory, the configured source directories, and whether this
    is a dry run, auto-detect and snapshot any LVM logical volume mount points listed in the given
    source directories. Also update those source directories, replacing logical volume mount points
    with corresponding snapshot directories so they get stored in the Borg archive instead. Use the
    log prefix in any log entries.

    Return an empty sequence, since there are no ongoing dump processes from this hook.

    If this is a dry run, then don't actually snapshot anything.
    '''
    dry_run_label = ' (dry run; not actually snapshotting anything)' if dry_run else ''
    logger.info(f'{log_prefix}: Snapshotting LVM logical volumes{dry_run_label}')

    # List logical volumes to get their mount points.
    lsblk_command = hook_config.get('lsblk_command', 'lsblk')
    requested_logical_volumes = get_logical_volumes(lsblk_command, source_directories)

    # Snapshot each logical volume, rewriting source directories to use the snapshot paths.
    snapshot_suffix = f'{BORGMATIC_SNAPSHOT_PREFIX}{os.getpid()}'
    normalized_runtime_directory = os.path.normpath(borgmatic_runtime_directory)

    if not requested_logical_volumes:
        logger.warning(f'{log_prefix}: No LVM logical volumes found to snapshot{dry_run_label}')

    for logical_volume in requested_logical_volumes:
        snapshot_name = f'{logical_volume.name}_{snapshot_suffix}'
        logger.debug(
            f'{log_prefix}: Creating LVM snapshot {snapshot_name} of {logical_volume.mount_point}{dry_run_label}'
        )

        if not dry_run:
            snapshot_logical_volume(
                hook_config.get('lvcreate_command', 'lvcreate'),
                snapshot_name,
                logical_volume.device_path,
                hook_config.get('snapshot_size', DEFAULT_SNAPSHOT_SIZE),
            )

        # Get the device path for the snapshot we just created.
        try:
            snapshot = get_snapshots(
                hook_config.get('lvs_command', 'lvs'), snapshot_name=snapshot_name
            )[0]
        except IndexError:
            raise ValueError(f'Cannot find LVM snapshot {snapshot_name}')

        # Mount the snapshot into a particular named temporary directory so that the snapshot ends
        # up in the Borg archive at the "original" logical volume mount point path.
        snapshot_mount_path = os.path.join(
            normalized_runtime_directory,
            'lvm_snapshots',
            logical_volume.mount_point.lstrip(os.path.sep),
        )

        logger.debug(
            f'{log_prefix}: Mounting LVM snapshot {snapshot_name} at {snapshot_mount_path}{dry_run_label}'
        )

        if dry_run:
            continue

        mount_snapshot(
            hook_config.get('mount_command', 'mount'), snapshot.device_path, snapshot_mount_path
        )

        # Update the path for each contained source directory, so Borg sees it within the
        # mounted snapshot.
        for source_directory in logical_volume.contained_source_directories:
            try:
                source_directories.remove(source_directory)
            except ValueError:
                pass

            source_directories.append(
                os.path.join(
                    normalized_runtime_directory,
                    'lvm_snapshots',
                    '.',  # Borg 1.4+ "slashdot" hack.
                    source_directory.lstrip(os.path.sep),
                )
            )

    return []


def unmount_snapshot(umount_command, snapshot_mount_path):  # pragma: no cover
    '''
    Given a umount command to run and the mount path of a snapshot, unmount it.
    '''
    borgmatic.execute.execute_command(
        tuple(umount_command.split(' ')) + (snapshot_mount_path,),
        output_log_level=logging.DEBUG,
    )


def remove_snapshot(lvremove_command, snapshot_device_path):  # pragma: no cover
    '''
    Given an lvremove command to run and the device path of a snapshot, remove it.
    '''
    borgmatic.execute.execute_command(
        tuple(lvremove_command.split(' '))
        + (
            '--force',  # Suppress an interactive "are you sure?" type prompt.
            snapshot_device_path,
        ),
        output_log_level=logging.DEBUG,
    )


Snapshot = collections.namedtuple(
    'Snapshot',
    ('name', 'device_path'),
)


def get_snapshots(lvs_command, snapshot_name=None):
    '''
    Given an lvs command to run, return all LVM snapshots as a sequence of Snapshot instances.

    If a snapshot name is given, filter the results to that snapshot.
    '''
    try:
        snapshot_info = json.loads(
            borgmatic.execute.execute_command_and_capture_output(
                # Use lvs instead of lsblk here because lsblk can't filter to just snapshots.
                tuple(lvs_command.split(' '))
                + (
                    '--report-format',
                    'json',
                    '--options',
                    'lv_name,lv_path',
                    '--select',
                    'lv_attr =~ ^s',  # Filter to just snapshots.
                )
            )
        )
    except json.JSONDecodeError as error:
        raise ValueError(f'Invalid {lvs_command} JSON output: {error}')

    try:
        return tuple(
            Snapshot(snapshot['lv_name'], snapshot['lv_path'])
            for snapshot in snapshot_info['report'][0]['lv']
            if snapshot_name is None or snapshot['lv_name'] == snapshot_name
        )
    except IndexError:
        raise ValueError(f'Invalid {lvs_command} output: Missing report data')
    except KeyError as error:
        raise ValueError(f'Invalid {lvs_command} output: Missing key "{error}"')


def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_directory, dry_run):
    '''
    Given an LVM configuration dict, a configuration dict, a log prefix, the borgmatic runtime
    directory, and whether this is a dry run, unmount and delete any LVM snapshots created by
    borgmatic. Use the log prefix in any log entries. If this is a dry run, then don't actually
    remove anything.
    '''
    dry_run_label = ' (dry run; not actually removing anything)' if dry_run else ''

    # Unmount snapshots.
    try:
        logical_volumes = get_logical_volumes(hook_config.get('lsblk_command', 'lsblk'))
    except FileNotFoundError as error:
        logger.debug(f'{log_prefix}: Could not find "{error.filename}" command')
        return
    except subprocess.CalledProcessError as error:
        logger.debug(f'{log_prefix}: {error}')
        return

    snapshots_glob = os.path.join(
        borgmatic.config.paths.replace_temporary_subdirectory_with_glob(
            os.path.normpath(borgmatic_runtime_directory),
        ),
        'lvm_snapshots',
    )
    logger.debug(
        f'{log_prefix}: Looking for snapshots to remove in {snapshots_glob}{dry_run_label}'
    )
    umount_command = hook_config.get('umount_command', 'umount')

    for snapshots_directory in glob.glob(snapshots_glob):
        if not os.path.isdir(snapshots_directory):
            continue

        for logical_volume in logical_volumes:
            snapshot_mount_path = os.path.join(
                snapshots_directory, logical_volume.mount_point.lstrip(os.path.sep)
            )
            if not os.path.isdir(snapshot_mount_path):
                continue

            # This might fail if the directory is already mounted, but we swallow errors here since
            # we'll do another recursive delete below. The point of doing it here is that we don't
            # want to try to unmount a non-mounted directory (which *will* fail).
            if not dry_run:
                shutil.rmtree(snapshot_mount_path, ignore_errors=True)

                # If the delete was successful, that means there's nothing to unmount.
                if not os.path.isdir(snapshot_mount_path):
                    continue

            logger.debug(
                f'{log_prefix}: Unmounting LVM snapshot at {snapshot_mount_path}{dry_run_label}'
            )

            if dry_run:
                continue

            try:
                unmount_snapshot(umount_command, snapshot_mount_path)
            except FileNotFoundError:
                logger.debug(f'{log_prefix}: Could not find "{umount_command}" command')
                return
            except subprocess.CalledProcessError as error:
                logger.debug(f'{log_prefix}: {error}')
                return

        if not dry_run:
            shutil.rmtree(snapshots_directory)

    # Delete snapshots.
    lvremove_command = hook_config.get('lvremove_command', 'lvremove')

    try:
        snapshots = get_snapshots(hook_config.get('lvs_command', 'lvs'))
    except FileNotFoundError as error:
        logger.debug(f'{log_prefix}: Could not find "{error.filename}" command')
        return
    except subprocess.CalledProcessError as error:
        logger.debug(f'{log_prefix}: {error}')
        return

    for snapshot in snapshots:
        # Only delete snapshots that borgmatic actually created!
        if not snapshot.name.split('_')[-1].startswith(BORGMATIC_SNAPSHOT_PREFIX):
            continue

        logger.debug(f'{log_prefix}: Deleting LVM snapshot {snapshot.name}{dry_run_label}')

        if not dry_run:
            remove_snapshot(lvremove_command, snapshot.device_path)
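The ownership check in the deletion loop above is worth spelling out: a snapshot is considered borgmatic's only when the last underscore-separated chunk of its name starts with the "borgmatic-" prefix (followed, at creation time, by the creating process's PID). This sketch mirrors that check in isolation.

```python
BORGMATIC_SNAPSHOT_PREFIX = 'borgmatic-'

# Mirror of the check above: snapshots are named "{volume_name}_{prefix}{pid}",
# so splitting on '_' and inspecting the last chunk survives underscores in the
# logical volume name itself.
def created_by_borgmatic(snapshot_name):
    return snapshot_name.split('_')[-1].startswith(BORGMATIC_SNAPSHOT_PREFIX)

assert created_by_borgmatic('lvolume_borgmatic-1234')
assert created_by_borgmatic('my_lv_borgmatic-99')  # Underscores in the LV name are fine.
assert not created_by_borgmatic('lvolume_manual')  # Someone else's snapshot: left alone.
```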


def make_data_source_dump_patterns(
    hook_config, config, log_prefix, borgmatic_runtime_directory, name=None
):  # pragma: no cover
    '''
    Restores aren't implemented, because stored files can be extracted directly with "extract".
    '''
    return ()


def restore_data_source_dump(
    hook_config,
    config,
    log_prefix,
    data_source,
    dry_run,
    extract_process,
    connection_params,
    borgmatic_runtime_directory,
):  # pragma: no cover
    '''
    Restores aren't implemented, because stored files can be extracted directly with "extract".
    '''
    raise NotImplementedError()
borgmatic/hooks/data_source/snapshot.py (new file, 30 lines)

@@ -0,0 +1,30 @@
import pathlib

IS_A_HOOK = False


def get_contained_directories(parent_directory, candidate_contained_directories):
    '''
    Given a parent directory and a set of candidate directories potentially inside it, get the
    subset of contained directories for which the parent directory is actually the parent, a
    grandparent, the very same directory, etc. The idea is if, say, /var/log and /var/lib are
    candidate contained directories, but there's a parent directory (logical volume, dataset,
    subvolume, etc.) at /var, then /var is what we want to snapshot.

    Also mutate the given set of candidate contained directories to remove any actually contained
    directories from it. That way, this function can be called multiple times, successively
    processing candidate directories until none are left—and avoiding assigning any candidate
    directory to more than one parent directory.
    '''
    if not candidate_contained_directories:
        return ()

    contained = tuple(
        candidate
        for candidate in candidate_contained_directories
        if pathlib.PurePath(parent_directory) == pathlib.PurePath(candidate)
        or pathlib.PurePath(parent_directory) in pathlib.PurePath(candidate).parents
    )
    candidate_contained_directories -= set(contained)

    return contained
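A short usage sketch for get_contained_directories() above shows how the candidate set is consumed across successive calls; the function body here is copied from the hook, and the paths are illustrative.

```python
import pathlib

# Copied from the snapshot.py hook above, for a self-contained demo.
def get_contained_directories(parent_directory, candidate_contained_directories):
    if not candidate_contained_directories:
        return ()

    contained = tuple(
        candidate
        for candidate in candidate_contained_directories
        if pathlib.PurePath(parent_directory) == pathlib.PurePath(candidate)
        or pathlib.PurePath(parent_directory) in pathlib.PurePath(candidate).parents
    )
    # Mutate the candidate set so later calls can't re-claim these directories.
    candidate_contained_directories -= set(contained)

    return contained

candidates = {'/var/log', '/var/lib', '/home'}
var_contained = get_contained_directories('/var', candidates)    # Claims /var/log and /var/lib.
home_contained = get_contained_directories('/home', candidates)  # Claims /home itself.
# After both calls, the candidate set is empty: every directory was assigned exactly once.
```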
@ -1,3 +1,4 @@
|
||||
import collections
|
||||
import glob
|
||||
import logging
|
||||
import os
|
||||
@ -6,6 +7,7 @@ import subprocess
|
||||
|
||||
import borgmatic.config.paths
|
||||
import borgmatic.execute
|
||||
import borgmatic.hooks.data_source.snapshot
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@ -21,6 +23,13 @@ BORGMATIC_SNAPSHOT_PREFIX = 'borgmatic-'
|
||||
BORGMATIC_USER_PROPERTY = 'org.torsion.borgmatic:backup'
|
||||
|
||||
|
||||
Dataset = collections.namedtuple(
|
||||
'Dataset',
|
||||
('name', 'mount_point', 'auto_backup', 'contained_source_directories'),
|
||||
defaults=(False, ()),
|
||||
)
|
||||
|
||||
|
||||
def get_datasets_to_backup(zfs_command, source_directories):
|
||||
'''
|
||||
Given a ZFS command to run and a sequence of configured source directories, find the
|
||||
@ -29,11 +38,11 @@ def get_datasets_to_backup(zfs_command, source_directories):
|
||||
datasets tagged with a borgmatic-specific user property, whether or not they appear in source
|
||||
directories.
|
||||
|
||||
Return the result as a sequence of (dataset name, mount point) pairs.
|
||||
Return the result as a sequence of Dataset instances, sorted by mount point.
|
||||
'''
|
||||
list_output = borgmatic.execute.execute_command_and_capture_output(
|
||||
(
|
||||
zfs_command,
|
||||
tuple(zfs_command.split(' '))
|
||||
+ (
|
||||
'list',
|
||||
'-H',
|
||||
'-t',
|
||||
@@ -42,44 +51,69 @@ def get_datasets_to_backup(zfs_command, source_directories):
             f'name,mountpoint,{BORGMATIC_USER_PROPERTY}',
         )
     )
-    source_directories_set = set(source_directories)

     try:
-        return tuple(
-            (dataset_name, mount_point)
-            for line in list_output.splitlines()
-            for (dataset_name, mount_point, user_property_value) in (line.rstrip().split('\t'),)
-            if mount_point in source_directories_set or user_property_value == 'auto'
+        # Sort from longest to shortest mount points, so longer mount points get a whack at the
+        # candidate source directory piñata before their parents do. (Source directories are
+        # consumed during the second loop below, so no two datasets get the same contained source
+        # directories.)
+        datasets = sorted(
+            (
+                Dataset(dataset_name, mount_point, (user_property_value == 'auto'), ())
+                for line in list_output.splitlines()
+                for (dataset_name, mount_point, user_property_value) in (line.rstrip().split('\t'),)
+            ),
+            key=lambda dataset: dataset.mount_point,
+            reverse=True,
         )
     except ValueError:
-        raise ValueError('Invalid {zfs_command} list output')
+        raise ValueError(f'Invalid {zfs_command} list output')
+
+    candidate_source_directories = set(source_directories)
+
+    return tuple(
+        sorted(
+            (
+                Dataset(
+                    dataset.name,
+                    dataset.mount_point,
+                    dataset.auto_backup,
+                    contained_source_directories,
+                )
+                for dataset in datasets
+                for contained_source_directories in (
+                    (
+                        (dataset.mount_point,)
+                        if dataset.auto_backup
+                        else borgmatic.hooks.data_source.snapshot.get_contained_directories(
+                            dataset.mount_point, candidate_source_directories
+                        )
+                    ),
+                )
+                if contained_source_directories
+            ),
+            key=lambda dataset: dataset.mount_point,
+        )
+    )

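The two-pass logic above can be sketched in isolation. This is an illustrative stand-in for the real `get_contained_directories()` helper (an assumption about its behavior, not the actual implementation): each source directory is claimed by the deepest dataset whose mount point contains it, and claimed directories are consumed so shorter (parent) mount points can't grab them again.

```python
import os


def contained_directories(mount_point, candidates):
    # Illustrative stand-in for get_contained_directories(): claim (and
    # consume) every candidate directory at or under this mount point.
    claimed = tuple(
        candidate
        for candidate in tuple(candidates)
        if candidate == mount_point
        or candidate.startswith(mount_point.rstrip(os.path.sep) + os.path.sep)
    )
    candidates.difference_update(claimed)

    return claimed


# Longest mount points go first, so /var/log claims /var/log/nginx before /var can.
candidates = {'/var/log/nginx', '/var/lib', '/home'}

for mount_point in sorted(('/', '/var', '/var/log'), reverse=True):
    print(mount_point, contained_directories(mount_point, candidates))
```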
-def get_all_datasets(zfs_command):
+def get_all_dataset_mount_points(zfs_command):
     '''
-    Given a ZFS command to run, return all ZFS datasets as a sequence of (dataset name, mount point)
-    pairs.
+    Given a ZFS command to run, return all ZFS datasets as a sequence of sorted mount points.
     '''
     list_output = borgmatic.execute.execute_command_and_capture_output(
-        (
-            zfs_command,
+        tuple(zfs_command.split(' '))
+        + (
             'list',
             '-H',
             '-t',
             'filesystem',
             '-o',
-            'name,mountpoint',
+            'mountpoint',
         )
     )

-    try:
-        return tuple(
-            (dataset_name, mount_point)
-            for line in list_output.splitlines()
-            for (dataset_name, mount_point) in (line.rstrip().split('\t'),)
-        )
-    except ValueError:
-        raise ValueError('Invalid {zfs_command} list output')
+    return tuple(sorted(line.rstrip() for line in list_output.splitlines()))

 def snapshot_dataset(zfs_command, full_snapshot_name):  # pragma: no cover
@@ -88,8 +122,8 @@ def snapshot_dataset(zfs_command, full_snapshot_name):  # pragma: no cover
     snapshot.
     '''
     borgmatic.execute.execute_command(
-        (
-            zfs_command,
+        tuple(zfs_command.split(' '))
+        + (
             'snapshot',
             full_snapshot_name,
         ),
@@ -106,10 +140,12 @@ def mount_snapshot(mount_command, full_snapshot_name, snapshot_mount_path):  # p
     os.makedirs(snapshot_mount_path, mode=0o700, exist_ok=True)

     borgmatic.execute.execute_command(
-        (
-            mount_command,
+        tuple(mount_command.split(' '))
+        + (
             '-t',
             'zfs',
             '-o',
             'ro',
             full_snapshot_name,
             snapshot_mount_path,
         ),
@@ -147,40 +183,54 @@ def dump_data_sources(

     # Snapshot each dataset, rewriting source directories to use the snapshot paths.
     snapshot_name = f'{BORGMATIC_SNAPSHOT_PREFIX}{os.getpid()}'
+    normalized_runtime_directory = os.path.normpath(borgmatic_runtime_directory)

     if not requested_datasets:
         logger.warning(f'{log_prefix}: No ZFS datasets found to snapshot{dry_run_label}')

-    for dataset_name, mount_point in requested_datasets:
-        full_snapshot_name = f'{dataset_name}@{snapshot_name}'
-        logger.debug(f'{log_prefix}: Creating ZFS snapshot {full_snapshot_name}{dry_run_label}')
+    for dataset in requested_datasets:
+        full_snapshot_name = f'{dataset.name}@{snapshot_name}'
+        logger.debug(
+            f'{log_prefix}: Creating ZFS snapshot {full_snapshot_name} of {dataset.mount_point}{dry_run_label}'
+        )

         if not dry_run:
             snapshot_dataset(zfs_command, full_snapshot_name)

         # Mount the snapshot into a particular named temporary directory so that the snapshot ends
         # up in the Borg archive at the "original" dataset mount point path.
-        snapshot_mount_path_for_borg = os.path.join(
-            os.path.normpath(borgmatic_runtime_directory),
+        snapshot_mount_path = os.path.join(
+            normalized_runtime_directory,
             'zfs_snapshots',
             '.',  # Borg 1.4+ "slashdot" hack.
-            mount_point.lstrip(os.path.sep),
+            dataset.mount_point.lstrip(os.path.sep),
         )
-        snapshot_mount_path = os.path.normpath(snapshot_mount_path_for_borg)

         logger.debug(
             f'{log_prefix}: Mounting ZFS snapshot {full_snapshot_name} at {snapshot_mount_path}{dry_run_label}'
         )

-        if not dry_run:
-            mount_snapshot(
-                hook_config.get('mount_command', 'mount'), full_snapshot_name, snapshot_mount_path
-            )
+        if dry_run:
+            continue
+
+        mount_snapshot(
+            hook_config.get('mount_command', 'mount'), full_snapshot_name, snapshot_mount_path
+        )
+
+        for source_directory in dataset.contained_source_directories:
+            try:
+                source_directories.remove(source_directory)
+            except ValueError:
+                pass
+
+            source_directories.append(
+                os.path.join(
+                    normalized_runtime_directory,
+                    'zfs_snapshots',
+                    '.',  # Borg 1.4+ "slashdot" hack.
+                    source_directory.lstrip(os.path.sep),
+                )
+            )
-
-        if mount_point in source_directories:
-            source_directories.remove(mount_point)
-
-        source_directories.append(snapshot_mount_path_for_borg)

     return []

@@ -189,10 +239,7 @@ def unmount_snapshot(umount_command, snapshot_mount_path):  # pragma: no cover
     Given a umount command to run and the mount path of a snapshot, unmount it.
     '''
     borgmatic.execute.execute_command(
-        (
-            umount_command,
-            snapshot_mount_path,
-        ),
+        tuple(umount_command.split(' ')) + (snapshot_mount_path,),
         output_log_level=logging.DEBUG,
     )

@@ -203,8 +250,8 @@ def destroy_snapshot(zfs_command, full_snapshot_name):  # pragma: no cover
     it.
     '''
     borgmatic.execute.execute_command(
-        (
-            zfs_command,
+        tuple(zfs_command.split(' '))
+        + (
             'destroy',
             full_snapshot_name,
         ),
@@ -218,8 +265,8 @@ def get_all_snapshots(zfs_command):
     form "dataset@snapshot".
     '''
     list_output = borgmatic.execute.execute_command_and_capture_output(
-        (
-            zfs_command,
+        tuple(zfs_command.split(' '))
+        + (
             'list',
             '-H',
             '-t',
@@ -245,7 +292,7 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
     zfs_command = hook_config.get('zfs_command', 'zfs')

     try:
-        datasets = get_all_datasets(zfs_command)
+        dataset_mount_points = get_all_dataset_mount_points(zfs_command)
     except FileNotFoundError:
         logger.debug(f'{log_prefix}: Could not find "{zfs_command}" command')
         return
@@ -268,18 +315,24 @@ def remove_data_source_dumps(hook_config, config, log_prefix, borgmatic_runtime_
         if not os.path.isdir(snapshots_directory):
             continue

-        # This might fail if the directory is already mounted, but we swallow errors here since
-        # we'll try again below. The point of doing it here is that we don't want to try to unmount
-        # a non-mounted directory (which *will* fail), and probing for whether a directory is
-        # mounted is tough to do in a cross-platform way.
-        if not dry_run:
-            shutil.rmtree(snapshots_directory, ignore_errors=True)
-
-        for _, mount_point in datasets:
+        # Reversing the sorted datasets ensures that we unmount the longer mount point paths of
+        # child datasets before the shorter mount point paths of parent datasets.
+        for mount_point in reversed(dataset_mount_points):
             snapshot_mount_path = os.path.join(snapshots_directory, mount_point.lstrip(os.path.sep))
+
             if not os.path.isdir(snapshot_mount_path):
                 continue

+            # This might fail if the path is already mounted, but we swallow errors here since we'll
+            # do another recursive delete below. The point of doing it here is that we don't want to
+            # try to unmount a non-mounted directory (which *will* fail), and probing for whether a
+            # directory is mounted is tough to do in a cross-platform way.
+            if not dry_run:
+                shutil.rmtree(snapshot_mount_path, ignore_errors=True)
+
+                # If the delete was successful, that means there's nothing to unmount.
+                if not os.path.isdir(snapshot_mount_path):
+                    continue
+
             logger.debug(
                 f'{log_prefix}: Unmounting ZFS snapshot at {snapshot_mount_path}{dry_run_label}'
             )
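The unmount ordering described in the hunk above can be sketched on its own: sorting mount points lexicographically puts parents before their children, so iterating the reversed list unmounts nested (child) mount points first.

```python
# Minimal sketch of the unmount ordering used for nested datasets: reversing
# the sorted mount points yields children before parents.
mount_points = sorted(('/var', '/', '/var/log', '/var/lib'))

for mount_point in reversed(mount_points):
    print(mount_point)
```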
@@ -71,8 +71,18 @@ completes.

 Additionally, borgmatic rewrites the snapshot file paths so that they appear
 at their original dataset locations in a Borg archive. For instance, if your
-dataset is mounted at `/mnt/dataset`, then the snapshotted files will appear
-in an archive at `/mnt/dataset` as well.
+dataset is mounted at `/var/dataset`, then the snapshotted files will appear
+in an archive at `/var/dataset` as well—even if borgmatic has to mount the
+snapshot somewhere in `/run/user/1000/borgmatic/zfs_snapshots/` to perform the
+backup.
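The path rewriting can be illustrated with a short sketch. The runtime directory and mount point below are example values; the helper name is hypothetical:

```python
import os


def snapshot_path_for_borg(runtime_directory, mount_point):
    # Everything after the '.' component lands in the archive at its original
    # path; the '.' is the Borg 1.4+ "slashdot" marker.
    return os.path.join(
        os.path.normpath(runtime_directory),
        'zfs_snapshots',
        '.',
        mount_point.lstrip(os.path.sep),
    )


print(snapshot_path_for_borg('/run/user/1000/borgmatic', '/var/dataset'))
# → /run/user/1000/borgmatic/zfs_snapshots/./var/dataset
```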
+<span class="minilink minilink-addedin">New in version 1.9.4</span> borgmatic
+is smart enough to look at the parent (and grandparent, etc.) directories of
+each of your `source_directories` to discover any datasets. For instance,
+let's say you add `/var/log` and `/var/lib` to your source directories, but
+`/var` is a dataset. borgmatic will discover that and snapshot `/var`
+accordingly. This also works even with nested datasets; borgmatic selects
+the dataset that's the "closest" parent to your source directories.
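That "closest parent" selection amounts to walking upward from each source directory until a mount point matches. A minimal sketch (not borgmatic's actual code):

```python
import os


def closest_parent_dataset(source_directory, mount_points):
    # Walk upward from the source directory; the first hit is the deepest
    # ("closest") dataset mount point containing it.
    directory = source_directory

    while True:
        if directory in mount_points:
            return directory

        parent = os.path.dirname(directory)

        if parent == directory:
            return None

        directory = parent


print(closest_parent_dataset('/var/log', {'/', '/var'}))  # → /var
```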
 <span class="minilink minilink-addedin">With Borg version 1.2 and
 earlier</span>Snapshotted files are instead stored at a path dependent on the
@@ -128,10 +138,20 @@ subvolumes (non-recursively) and includes the snapshotted files in the paths
 sent to Borg. borgmatic is also responsible for cleaning up (deleting) these
 snapshots after a backup completes.

-Additionally, borgmatic rewrites the snapshot file paths so that they appear at
-their original subvolume locations in a Borg archive. For instance, if your
-subvolume exists at `/mnt/subvolume`, then the snapshotted files will appear in
-an archive at `/mnt/subvolume` as well.
+borgmatic is smart enough to look at the parent (and grandparent, etc.)
+directories of each of your `source_directories` to discover any subvolumes.
+For instance, let's say you add `/var/log` and `/var/lib` to your source
+directories, but `/var` is a subvolume. borgmatic will discover that and
+snapshot `/var` accordingly. This also works even with nested subvolumes;
+borgmatic selects the subvolume that's the "closest" parent to your source
+directories.
+
+Additionally, borgmatic rewrites the snapshot file paths so that they appear
+at their original subvolume locations in a Borg archive. For instance, if your
+subvolume exists at `/var/subvolume`, then the snapshotted files will appear
+in an archive at `/var/subvolume` as well—even if borgmatic has to mount the
+snapshot somewhere in `/var/subvolume/.borgmatic-snapshot-1234/` to perform
+the backup.
 <span class="minilink minilink-addedin">With Borg version 1.2 and
 earlier</span>Snapshotted files are instead stored at a path dependent on the
@@ -145,3 +165,100 @@ Subvolume snapshots are stored in a Borg archive as normal files, so you can use
 the standard [extract
 action](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/) to extract
 them.

### LVM

<span class="minilink minilink-addedin">New in version 1.9.4</span> <span
class="minilink minilink-addedin">Beta feature</span> borgmatic supports
taking snapshots with [LVM](https://sourceware.org/lvm2/) (Linux Logical
Volume Manager) and sending those snapshots to Borg for backup. LVM isn't
itself a filesystem, but it can take snapshots at the layer right below your
filesystem.

To use this feature, first you need one or more mounted LVM logical volumes.
Then, enable LVM within borgmatic by adding the following line to your
configuration file:

```yaml
lvm:
```

No other options are necessary to enable LVM support, but if desired you can
override some of the options used by the LVM hook. For instance:

```yaml
lvm:
    snapshot_size: 5GB  # See below for details.
    lvcreate_command: /usr/local/bin/lvcreate
    lvremove_command: /usr/local/bin/lvremove
    lvs_command: /usr/local/bin/lvs
    lsblk_command: /usr/local/bin/lsblk
    mount_command: /usr/local/bin/mount
    umount_command: /usr/local/bin/umount
```

As long as the LVM hook is in beta, it may be subject to breaking changes
and/or may not work well for your use cases. But feel free to use it in
production if you're okay with these caveats, and please [provide any
feedback](https://torsion.org/borgmatic/#issues) you have on this feature.


#### Snapshot size

The `snapshot_size` option is the size to allocate for each snapshot taken,
including the units to use for that size. While borgmatic's snapshots
themselves are read-only and don't change during backups, the logical volume
being snapshotted *can* change—therefore requiring additional snapshot storage
since LVM snapshots are copy-on-write. And if the configured snapshot size is
too small (and LVM isn't configured to grow snapshots automatically), then the
snapshots will fail to allocate enough space, resulting in a broken backup.

If not specified, the `snapshot_size` option defaults to `10%ORIGIN`, which
means 10% of the size of the logical volume being snapshotted. See the
[`lvcreate --size` and `--extents`
documentation](https://www.man7.org/linux/man-pages/man8/lvcreate.8.html) for
more information about possible values here. (Under the hood, borgmatic uses
[Review conversation, marked as resolved]

anarcat commented:

    i'm a bit confused by this. what are we rewriting here exactly? the files
    appear in `/mnt/lvolume`, exactly as they are in the logical volume...

    i would have expected this to say something like:

    ```
    Additionally, borgmatic rewrites the snapshot file paths so that they appear
    at their original logical volume locations in a Borg archive. For instance, if
    your logical volume is mounted at `/var`, then the snapshotted files
    will appear in an archive at `/var` as well, even if, to perform the backup,
    borgmatic will mount the volume in the runtime directory, in (for example)
    `/run/user/1000/lvm_snapshots/`.
    ```

    that would, at least, clarify things for me.

    part of the confusion i have, i think, comes from the use of `/mnt` in the
    example: that's a mountpoint i use for temporary things, not a real
    partition, so i thought this was where borgmatic would mount the snapshot,
    temporarily.

witten commented:

    Got it. I really appreciate the feedback. I will clarify!

`lvcreate --extents` if the `snapshot_size` is a percentage value, and
`lvcreate --size` otherwise.)


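Per that description, the flag selection can be sketched as follows. The volume device path and snapshot name here are placeholders, and the command is echoed rather than executed; this is an illustration of the documented behavior, not borgmatic's actual code:

```shell
#!/bin/sh
# Pick --extents for percentage sizes (e.g. 10%ORIGIN), --size otherwise.
snapshot_size="10%ORIGIN"

case "$snapshot_size" in
    *%*) size_flag="--extents" ;;
    *) size_flag="--size" ;;
esac

# Placeholder device and snapshot name; echoed instead of run.
echo lvcreate --snapshot "$size_flag" "$snapshot_size" --name borgmatic-snapshot /dev/vgroup/lvolume
```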
#### Logical volume discovery

For any logical volume you'd like backed up, add its mount point to
borgmatic's `source_directories` option.

During a backup, borgmatic automatically snapshots these discovered logical
volumes (non-recursively), temporarily mounts the snapshots within its [runtime
directory](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#runtime-directory),
and includes the snapshotted files in the paths sent to Borg. borgmatic is
also responsible for cleaning up (deleting) these snapshots after a backup
completes.

borgmatic is smart enough to look at the parent (and grandparent, etc.)
directories of each of your `source_directories` to discover any logical
volumes. For instance, let's say you add `/var/log` and `/var/lib` to your
source directories, but `/var` is a logical volume. borgmatic will discover
that and snapshot `/var` accordingly.

Additionally, borgmatic rewrites the snapshot file paths so that they appear
at their original logical volume locations in a Borg archive. For instance, if
your logical volume is mounted at `/var/lvolume`, then the snapshotted files
will appear in an archive at `/var/lvolume` as well—even if borgmatic has to
mount the snapshot somewhere in `/run/user/1000/borgmatic/lvm_snapshots/` to
perform the backup.

<span class="minilink minilink-addedin">With Borg version 1.2 and
earlier</span>Snapshotted files are instead stored at a path dependent on the
[runtime
directory](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#runtime-directory)
in use at the time the archive was created, as Borg 1.2 and earlier do not
support path rewriting.


#### Extract a logical volume

Logical volume snapshots are stored in a Borg archive as normal files, so
you can use the standard
[extract action](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/) to
extract them.

BIN docs/static/lvm.png (vendored, new file; binary not shown, 5.1 KiB)
0 tests/end-to-end/commands/__init__.py (new file)
90 tests/end-to-end/commands/fake_btrfs.py (new file)
@@ -0,0 +1,90 @@
import argparse
import json
import os
import shutil
import sys


def parse_arguments(*unparsed_arguments):
    global_parser = argparse.ArgumentParser(add_help=False)
    action_parsers = global_parser.add_subparsers(dest='action')

    subvolume_parser = action_parsers.add_parser('subvolume')
    subvolume_subparser = subvolume_parser.add_subparsers(dest='subaction')

    list_parser = subvolume_subparser.add_parser('list')
    list_parser.add_argument('-s', dest='snapshots_only', action='store_true')
    list_parser.add_argument('subvolume_path')

    snapshot_parser = subvolume_subparser.add_parser('snapshot')
    snapshot_parser.add_argument('-r', dest='read_only', action='store_true')
    snapshot_parser.add_argument('subvolume_path')
    snapshot_parser.add_argument('snapshot_path')

    delete_parser = subvolume_subparser.add_parser('delete')
    delete_parser.add_argument('snapshot_path')

    return global_parser.parse_args(unparsed_arguments)


BUILTIN_SUBVOLUME_LIST_LINES = (
    '261 gen 29 top level 5 path sub',
    '262 gen 29 top level 5 path other',
)
SUBVOLUME_LIST_LINE_PREFIX = '263 gen 29 top level 5 path '


def load_snapshots():
    try:
        return json.load(open('/tmp/fake_btrfs.json'))
    except FileNotFoundError:
        return []


def save_snapshots(snapshot_paths):
    json.dump(snapshot_paths, open('/tmp/fake_btrfs.json', 'w'))


def print_subvolume_list(arguments, snapshot_paths):
    assert arguments.subvolume_path == '/mnt/subvolume'

    if not arguments.snapshots_only:
        for line in BUILTIN_SUBVOLUME_LIST_LINES:
            print(line)

    for snapshot_path in snapshot_paths:
        print(
            SUBVOLUME_LIST_LINE_PREFIX
            + snapshot_path[snapshot_path.index('.borgmatic-snapshot-') :]
        )


def main():
    arguments = parse_arguments(*sys.argv[1:])
    snapshot_paths = load_snapshots()

    if arguments.subaction == 'list':
        print_subvolume_list(arguments, snapshot_paths)
    elif arguments.subaction == 'snapshot':
        snapshot_paths.append(arguments.snapshot_path)
        save_snapshots(snapshot_paths)

        subdirectory = os.path.join(arguments.snapshot_path, 'subdir')
        os.makedirs(subdirectory, mode=0o700, exist_ok=True)
        test_file = open(os.path.join(subdirectory, 'file.txt'), 'w')
        test_file.write('contents')
        test_file.close()
    elif arguments.subaction == 'delete':
        subdirectory = os.path.join(arguments.snapshot_path, 'subdir')
        shutil.rmtree(subdirectory)

        snapshot_paths = [
            snapshot_path
            for snapshot_path in snapshot_paths
            if snapshot_path.endswith('/' + arguments.snapshot_path)
        ]
        save_snapshots(snapshot_paths)


if __name__ == '__main__':
    main()
33 tests/end-to-end/commands/fake_findmnt.py (new file)
@@ -0,0 +1,33 @@
import argparse
import sys


def parse_arguments(*unparsed_arguments):
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument('-n', dest='headings', action='store_false', default=True)
    parser.add_argument('-t', dest='type')

    return parser.parse_args(unparsed_arguments)


BUILTIN_FILESYSTEM_MOUNT_LINES = (
    '/mnt/subvolume /dev/loop1 btrfs rw,relatime,ssd,space_cache=v2,subvolid=5,subvol=/',
)


def print_filesystem_mounts(arguments):
    for line in BUILTIN_FILESYSTEM_MOUNT_LINES:
        print(line)


def main():
    arguments = parse_arguments(*sys.argv[1:])

    assert not arguments.headings
    assert arguments.type == 'btrfs'

    print_filesystem_mounts(arguments)


if __name__ == '__main__':
    main()
70 tests/end-to-end/commands/fake_lsblk.py (new file)
@@ -0,0 +1,70 @@
import argparse
import json
import sys


def parse_arguments(*unparsed_arguments):
    parser = argparse.ArgumentParser(add_help=False)

    parser.add_argument('--output', required=True)
    parser.add_argument('--json', action='store_true', required=True)
    parser.add_argument('--list', action='store_true', required=True)

    return parser.parse_args(unparsed_arguments)


BUILTIN_BLOCK_DEVICES = {
    'blockdevices': [
        {'name': 'loop0', 'path': '/dev/loop0', 'mountpoint': None, 'type': 'loop'},
        {'name': 'cryptroot', 'path': '/dev/mapper/cryptroot', 'mountpoint': '/', 'type': 'crypt'},
        {
            'name': 'vgroup-lvolume',
            'path': '/dev/mapper/vgroup-lvolume',
            'mountpoint': '/mnt/lvolume',
            'type': 'lvm',
        },
        {
            'name': 'vgroup-lvolume-real',
            'path': '/dev/mapper/vgroup-lvolume-real',
            'mountpoint': None,
            'type': 'lvm',
        },
    ]
}


def load_snapshots():
    try:
        return json.load(open('/tmp/fake_lvm.json'))
    except FileNotFoundError:
        return []


def print_logical_volumes_json(arguments, snapshots):
    data = dict(BUILTIN_BLOCK_DEVICES)

    data['blockdevices'].extend(
        {
            'name': snapshot['lv_name'],
            'path': snapshot['lv_path'],
            'mountpoint': None,
            'type': 'lvm',
        }
        for snapshot in snapshots
    )

    print(json.dumps(data))


def main():
    arguments = parse_arguments(*sys.argv[1:])
    snapshots = load_snapshots()

    assert arguments.output == 'name,path,mountpoint,type'

    print_logical_volumes_json(arguments, snapshots)


if __name__ == '__main__':
    main()
43 tests/end-to-end/commands/fake_lvcreate.py (new file)
@@ -0,0 +1,43 @@
import argparse
import json
import sys


def parse_arguments(*unparsed_arguments):
    parser = argparse.ArgumentParser(add_help=False)

    parser.add_argument('--snapshot', action='store_true', required=True)
    parser.add_argument('--extents')
    parser.add_argument('--size')
    parser.add_argument('--name', dest='snapshot_name', required=True)
    parser.add_argument('logical_volume_device')

    return parser.parse_args(unparsed_arguments)


def load_snapshots():
    try:
        return json.load(open('/tmp/fake_lvm.json'))
    except FileNotFoundError:
        return []


def save_snapshots(snapshots):
    json.dump(snapshots, open('/tmp/fake_lvm.json', 'w'))


def main():
    arguments = parse_arguments(*sys.argv[1:])
    snapshots = load_snapshots()

    assert arguments.extents or arguments.size

    snapshots.append(
        {'lv_name': arguments.snapshot_name, 'lv_path': f'/dev/vgroup/{arguments.snapshot_name}'},
    )

    save_snapshots(snapshots)


if __name__ == '__main__':
    main()
39 tests/end-to-end/commands/fake_lvremove.py (new file)
@@ -0,0 +1,39 @@
import argparse
import json
import sys


def parse_arguments(*unparsed_arguments):
    parser = argparse.ArgumentParser(add_help=False)

    parser.add_argument('--force', action='store_true', required=True)
    parser.add_argument('snapshot_device')

    return parser.parse_args(unparsed_arguments)


def load_snapshots():
    try:
        return json.load(open('/tmp/fake_lvm.json'))
    except FileNotFoundError:
        return []


def save_snapshots(snapshots):
    json.dump(snapshots, open('/tmp/fake_lvm.json', 'w'))


def main():
    arguments = parse_arguments(*sys.argv[1:])

    snapshots = [
        snapshot
        for snapshot in load_snapshots()
        if snapshot['lv_path'] != arguments.snapshot_device
    ]

    save_snapshots(snapshots)


if __name__ == '__main__':
    main()
50 tests/end-to-end/commands/fake_lvs.py (new file)
@@ -0,0 +1,50 @@
import argparse
import json
import sys


def parse_arguments(*unparsed_arguments):
    parser = argparse.ArgumentParser(add_help=False)

    parser.add_argument('--report-format', required=True)
    parser.add_argument('--options', required=True)
    parser.add_argument('--select', required=True)

    return parser.parse_args(unparsed_arguments)


def load_snapshots():
    try:
        return json.load(open('/tmp/fake_lvm.json'))
    except FileNotFoundError:
        return []


def print_snapshots_json(arguments, snapshots):
    assert arguments.report_format == 'json'
    assert arguments.options == 'lv_name,lv_path'
    assert arguments.select == 'lv_attr =~ ^s'

    print(
        json.dumps(
            {
                'report': [
                    {
                        'lv': snapshots,
                    }
                ],
                'log': [],
            }
        )
    )


def main():
    arguments = parse_arguments(*sys.argv[1:])
    snapshots = load_snapshots()

    print_snapshots_json(arguments, snapshots)


if __name__ == '__main__':
    main()
29 tests/end-to-end/commands/fake_mount.py (new file)
@@ -0,0 +1,29 @@
import argparse
import os
import sys


def parse_arguments(*unparsed_arguments):
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument('-t', dest='type')
    parser.add_argument('-o', dest='options')
    parser.add_argument('snapshot_name')
    parser.add_argument('mount_point')

    return parser.parse_args(unparsed_arguments)


def main():
    arguments = parse_arguments(*sys.argv[1:])

    assert arguments.options == 'ro'

    subdirectory = os.path.join(arguments.mount_point, 'subdir')
    os.mkdir(subdirectory)
    test_file = open(os.path.join(subdirectory, 'file.txt'), 'w')
    test_file.write('contents')
    test_file.close()


if __name__ == '__main__':
    main()
22 tests/end-to-end/commands/fake_umount.py (new file)
@@ -0,0 +1,22 @@
import argparse
import os
import shutil
import sys


def parse_arguments(*unparsed_arguments):
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument('mount_point')

    return parser.parse_args(unparsed_arguments)


def main():
    arguments = parse_arguments(*sys.argv[1:])

    subdirectory = os.path.join(arguments.mount_point, 'subdir')
    shutil.rmtree(subdirectory)


if __name__ == '__main__':
    main()
92 tests/end-to-end/commands/fake_zfs.py (new file)
@@ -0,0 +1,92 @@
import argparse
|
||||
import json
|
||||
import sys
|
||||
|
||||
|
||||
def parse_arguments(*unparsed_arguments):
|
||||
global_parser = argparse.ArgumentParser(add_help=False)
|
||||
action_parsers = global_parser.add_subparsers(dest='action')
|
||||
|
||||
list_parser = action_parsers.add_parser('list')
|
||||
list_parser.add_argument('-H', dest='header', action='store_false', default=True)
|
||||
list_parser.add_argument('-t', dest='type', default='filesystem')
|
||||
list_parser.add_argument('-o', dest='properties', default='name,used,avail,refer,mountpoint')
|
||||
|
||||
snapshot_parser = action_parsers.add_parser('snapshot')
|
||||
snapshot_parser.add_argument('name')
|
||||
|
||||
destroy_parser = action_parsers.add_parser('destroy')
|
||||
destroy_parser.add_argument('name')
|
||||
|
||||
return global_parser.parse_args(unparsed_arguments)
|
||||
|
||||
|
||||
BUILTIN_DATASETS = (
|
||||
{
|
||||
'name': 'pool',
|
||||
'used': '256K',
|
||||
'avail': '23.7M',
|
||||
'refer': '25K',
|
||||
'mountpoint': '/pool',
|
||||
},
|
||||
{
|
||||
'name': 'pool/dataset',
|
||||
'used': '256K',
|
||||
'avail': '23.7M',
|
||||
'refer': '25K',
|
||||
'mountpoint': '/pool/dataset',
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
def load_snapshots():
|
||||
try:
|
||||
return json.load(open('/tmp/fake_zfs.json'))
|
||||
except FileNotFoundError:
|
||||
return []
|
||||
|
||||
|
||||
def save_snapshots(snapshots):
|
||||
json.dump(snapshots, open('/tmp/fake_zfs.json', 'w'))
|
||||
|
||||
|
||||
def print_dataset_list(arguments, datasets, snapshots):
|
||||
properties = arguments.properties.split(',')
|
||||
data = (
|
||||
(tuple(property_name.upper() for property_name in properties),) if arguments.header else ()
|
||||
) + tuple(
|
||||
tuple(dataset.get(property_name, '-') for property_name in properties)
|
||||
for dataset in (snapshots if arguments.type == 'snapshot' else datasets)
|
||||
)
|
||||
|
||||
if not data:
|
||||
return
|
||||
|
||||
for data_row in data:
|
||||
print('\t'.join(data_row))
|
||||
|
||||
|
||||
def main():
|
||||
arguments = parse_arguments(*sys.argv[1:])
|
||||
snapshots = load_snapshots()
|
||||
|
||||
if arguments.action == 'list':
|
||||
print_dataset_list(arguments, BUILTIN_DATASETS, snapshots)
|
||||
elif arguments.action == 'snapshot':
|
||||
snapshots.append(
|
||||
{
|
||||
'name': arguments.name,
|
||||
'used': '0B',
|
||||
'avail': '-',
|
||||
'refer': '25K',
|
||||
'mountpoint': '-',
|
||||
},
|
||||
)
|
||||
save_snapshots(snapshots)
|
||||
elif arguments.action == 'destroy':
|
||||
snapshots = [snapshot for snapshot in snapshots if snapshot['name'] != arguments.name]
|
||||
save_snapshots(snapshots)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
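The fake `zfs` script above persists snapshot state in a JSON file so that separate invocations (`snapshot`, `list`, `destroy`) can observe each other's effects, and renders datasets as the tab-separated table that the real `zfs list` emits. A minimal standalone sketch of that table rendering, mirroring the `print_dataset_list()` logic above (the sample datasets and the `render` helper are illustrative, not part of the PR):

```python
# Sample datasets standing in for BUILTIN_DATASETS, trimmed to two properties.
datasets = [
    {'name': 'pool', 'mountpoint': '/pool'},
    {'name': 'pool/dataset', 'mountpoint': '/pool/dataset'},
]
properties = ['name', 'mountpoint']


def render(datasets, properties, header=True):
    # Optional upper-cased header row, then one tab-joined row per dataset,
    # with '-' substituted for any missing property.
    rows = ([tuple(p.upper() for p in properties)] if header else []) + [
        tuple(d.get(p, '-') for p in properties) for d in datasets
    ]
    return '\n'.join('\t'.join(row) for row in rows)


print(render(datasets, properties))
# NAME	MOUNTPOINT
# pool	/pool
# pool/dataset	/pool/dataset
```

Running the fake script with `-H` suppresses the header row, just as it does for the real `zfs list`.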
tests/end-to-end/test_btrfs.py | 63 (new file)
@@ -0,0 +1,63 @@
import os
import shutil
import subprocess
import sys
import tempfile


def generate_configuration(config_path, repository_path):
    '''
    Generate borgmatic configuration into a file at the config path, and update the defaults so as
    to work for testing (including injecting the given repository path and tacking on an encryption
    passphrase).
    '''
    subprocess.check_call(f'borgmatic config generate --destination {config_path}'.split(' '))
    config = (
        open(config_path)
        .read()
        .replace('ssh://user@backupserver/./sourcehostname.borg', repository_path)
        .replace('- path: /mnt/backup', '')
        .replace('label: local', '')
        .replace('- /home', f'- {config_path}')
        .replace('- /etc', '- /mnt/subvolume/subdir')
        .replace('- /var/log/syslog*', '')
        + 'encryption_passphrase: "test"\n'
        + 'btrfs:\n'
        + '    btrfs_command: python3 /app/tests/end-to-end/commands/fake_btrfs.py\n'
        + '    findmnt_command: python3 /app/tests/end-to-end/commands/fake_findmnt.py\n'
    )
    config_file = open(config_path, 'w')
    config_file.write(config)
    config_file.close()


def test_btrfs_create_and_list():
    temporary_directory = tempfile.mkdtemp()
    repository_path = os.path.join(temporary_directory, 'test.borg')

    try:
        config_path = os.path.join(temporary_directory, 'test.yaml')
        generate_configuration(config_path, repository_path)

        subprocess.check_call(
            f'borgmatic -v 2 --config {config_path} repo-create --encryption repokey'.split(' ')
        )

        # Run a create action to exercise Btrfs snapshotting and backup.
        subprocess.check_call(f'borgmatic --config {config_path} create'.split(' '))

        # List the resulting archive and assert that the snapshotted files are there.
        output = subprocess.check_output(
            f'borgmatic --config {config_path} list --archive latest'.split(' ')
        ).decode(sys.stdout.encoding)

        assert 'mnt/subvolume/subdir/file.txt' in output

        # Assert that the snapshot has been deleted.
        assert not subprocess.check_output(
            'python3 /app/tests/end-to-end/commands/fake_btrfs.py subvolume list -s /mnt/subvolume'.split(
                ' '
            )
        )
    finally:
        shutil.rmtree(temporary_directory)
tests/end-to-end/test_lvm.py | 71 (new file)
@@ -0,0 +1,71 @@
import json
import os
import shutil
import subprocess
import sys
import tempfile


def generate_configuration(config_path, repository_path):
    '''
    Generate borgmatic configuration into a file at the config path, and update the defaults so as
    to work for testing (including injecting the given repository path and tacking on an encryption
    passphrase).
    '''
    subprocess.check_call(f'borgmatic config generate --destination {config_path}'.split(' '))
    config = (
        open(config_path)
        .read()
        .replace('ssh://user@backupserver/./sourcehostname.borg', repository_path)
        .replace('- path: /mnt/backup', '')
        .replace('label: local', '')
        .replace('- /home', f'- {config_path}')
        .replace('- /etc', '- /mnt/lvolume/subdir')
        .replace('- /var/log/syslog*', '')
        + 'encryption_passphrase: "test"\n'
        + 'lvm:\n'
        + '    lsblk_command: python3 /app/tests/end-to-end/commands/fake_lsblk.py\n'
        + '    lvcreate_command: python3 /app/tests/end-to-end/commands/fake_lvcreate.py\n'
        + '    lvremove_command: python3 /app/tests/end-to-end/commands/fake_lvremove.py\n'
        + '    lvs_command: python3 /app/tests/end-to-end/commands/fake_lvs.py\n'
        + '    mount_command: python3 /app/tests/end-to-end/commands/fake_mount.py\n'
        + '    umount_command: python3 /app/tests/end-to-end/commands/fake_umount.py\n'
    )
    config_file = open(config_path, 'w')
    config_file.write(config)
    config_file.close()


def test_lvm_create_and_list():
    temporary_directory = tempfile.mkdtemp()
    repository_path = os.path.join(temporary_directory, 'test.borg')

    try:
        config_path = os.path.join(temporary_directory, 'test.yaml')
        generate_configuration(config_path, repository_path)

        subprocess.check_call(
            f'borgmatic -v 2 --config {config_path} repo-create --encryption repokey'.split(' ')
        )

        # Run a create action to exercise LVM snapshotting and backup.
        subprocess.check_call(f'borgmatic --config {config_path} create'.split(' '))

        # List the resulting archive and assert that the snapshotted files are there.
        output = subprocess.check_output(
            f'borgmatic --config {config_path} list --archive latest'.split(' ')
        ).decode(sys.stdout.encoding)

        assert 'mnt/lvolume/subdir/file.txt' in output

        # Assert that the snapshot has been deleted.
        assert not json.loads(
            subprocess.check_output(
                'python3 /app/tests/end-to-end/commands/fake_lvs.py --report-format json --options lv_name,lv_path --select'.split(
                    ' '
                )
                + ['lv_attr =~ ^s']
            )
        )['report'][0]['lv']
    finally:
        shutil.rmtree(temporary_directory)
tests/end-to-end/test_zfs.py | 62 (new file)
@@ -0,0 +1,62 @@
import os
import shutil
import subprocess
import sys
import tempfile


def generate_configuration(config_path, repository_path):
    '''
    Generate borgmatic configuration into a file at the config path, and update the defaults so as
    to work for testing (including injecting the given repository path and tacking on an encryption
    passphrase).
    '''
    subprocess.check_call(f'borgmatic config generate --destination {config_path}'.split(' '))
    config = (
        open(config_path)
        .read()
        .replace('ssh://user@backupserver/./sourcehostname.borg', repository_path)
        .replace('- path: /mnt/backup', '')
        .replace('label: local', '')
        .replace('- /home', f'- {config_path}')
        .replace('- /etc', '- /pool/dataset/subdir')
        .replace('- /var/log/syslog*', '')
        + 'encryption_passphrase: "test"\n'
        + 'zfs:\n'
        + '    zfs_command: python3 /app/tests/end-to-end/commands/fake_zfs.py\n'
        + '    mount_command: python3 /app/tests/end-to-end/commands/fake_mount.py\n'
        + '    umount_command: python3 /app/tests/end-to-end/commands/fake_umount.py'
    )
    config_file = open(config_path, 'w')
    config_file.write(config)
    config_file.close()


def test_zfs_create_and_list():
    temporary_directory = tempfile.mkdtemp()
    repository_path = os.path.join(temporary_directory, 'test.borg')

    try:
        config_path = os.path.join(temporary_directory, 'test.yaml')
        generate_configuration(config_path, repository_path)

        subprocess.check_call(
            f'borgmatic -v 2 --config {config_path} repo-create --encryption repokey'.split(' ')
        )

        # Run a create action to exercise ZFS snapshotting and backup.
        subprocess.check_call(f'borgmatic --config {config_path} create'.split(' '))

        # List the resulting archive and assert that the snapshotted files are there.
        output = subprocess.check_output(
            f'borgmatic --config {config_path} list --archive latest'.split(' ')
        ).decode(sys.stdout.encoding)

        assert 'pool/dataset/subdir/file.txt' in output

        # Assert that the snapshot has been deleted.
        assert not subprocess.check_output(
            'python3 /app/tests/end-to-end/commands/fake_zfs.py list -H -t snapshot'.split(' ')
        )
    finally:
        shutil.rmtree(temporary_directory)
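All three end-to-end tests retarget borgmatic's generated sample configuration with chained `str.replace()` calls before appending hook options. A minimal sketch of that rewrite approach, using hypothetical stand-in config text rather than borgmatic's real generated file:

```python
# Hypothetical stand-in for the sample config that `borgmatic config generate` writes.
generated = '''repositories:
    - path: ssh://user@backupserver/./sourcehostname.borg
source_directories:
    - /home
    - /etc
'''

# Chain replacements to point the config at test paths, then append extra options.
config = (
    generated
    .replace('ssh://user@backupserver/./sourcehostname.borg', '/tmp/test.borg')
    .replace('- /etc', '- /pool/dataset/subdir')
    + 'encryption_passphrase: "test"\n'
)

print(config)
```

Plain string replacement works here because each replaced fragment appears exactly once in the generated file; it avoids parsing and re-serializing the YAML (which would drop its comments).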
tests/unit/hooks/data_source/test_btrfs.py
@@ -21,7 +21,11 @@ def test_get_subvolumes_for_filesystem_parses_subvolume_list_output():
         'ID 270 gen 107 top level 5 path subvol1\nID 272 gen 74 top level 5 path subvol2\n'
     )
 
-    assert module.get_subvolumes_for_filesystem('btrfs', '/mnt') == ('/mnt/subvol1', '/mnt/subvol2')
+    assert module.get_subvolumes_for_filesystem('btrfs', '/mnt') == (
+        '/mnt',
+        '/mnt/subvol1',
+        '/mnt/subvol2',
+    )
 
 
 def test_get_subvolumes_for_filesystem_skips_empty_subvolume_paths():
@@ -29,7 +33,7 @@ def test_get_subvolumes_for_filesystem_skips_empty_subvolume_paths():
         'execute_command_and_capture_output'
     ).and_return('\n \nID 272 gen 74 top level 5 path subvol2\n')
 
-    assert module.get_subvolumes_for_filesystem('btrfs', '/mnt') == ('/mnt/subvol2',)
+    assert module.get_subvolumes_for_filesystem('btrfs', '/mnt') == ('/mnt', '/mnt/subvol2')
 
 
 def test_get_subvolumes_for_filesystem_skips_empty_filesystem_mount_points():
@@ -51,9 +55,21 @@ def test_get_subvolumes_collects_subvolumes_matching_source_directories_from_all
         'btrfs', '/mnt2'
     ).and_return(('/three', '/four'))
 
+    for path in ('/one', '/four'):
+        flexmock(module.borgmatic.hooks.data_source.snapshot).should_receive(
+            'get_contained_directories'
+        ).with_args(path, object).and_return((path,))
+    for path in ('/two', '/three'):
+        flexmock(module.borgmatic.hooks.data_source.snapshot).should_receive(
+            'get_contained_directories'
+        ).with_args(path, object).and_return(())
+
     assert module.get_subvolumes(
         'btrfs', 'findmnt', source_directories=['/one', '/four', '/five', '/six', '/mnt2', '/mnt3']
-    ) == ('/one', '/mnt2', '/four')
+    ) == (
+        module.Subvolume('/four', contained_source_directories=('/four',)),
+        module.Subvolume('/one', contained_source_directories=('/one',)),
+    )
 
 
 def test_get_subvolumes_without_source_directories_collects_all_subvolumes_from_all_filesystems():
@@ -65,20 +81,28 @@ def test_get_subvolumes_without_source_directories_collects_all_subvolumes_from_
         'btrfs', '/mnt2'
     ).and_return(('/three', '/four'))
 
+    for path in ('/one', '/two', '/three', '/four'):
+        flexmock(module.borgmatic.hooks.data_source.snapshot).should_receive(
+            'get_contained_directories'
+        ).with_args(path, object).and_return((path,))
+
     assert module.get_subvolumes('btrfs', 'findmnt') == (
-        '/mnt1',
-        '/one',
-        '/two',
-        '/mnt2',
-        '/three',
-        '/four',
+        module.Subvolume('/four', contained_source_directories=('/four',)),
+        module.Subvolume('/one', contained_source_directories=('/one',)),
+        module.Subvolume('/three', contained_source_directories=('/three',)),
+        module.Subvolume('/two', contained_source_directories=('/two',)),
     )
 
 
 def test_dump_data_sources_snapshots_each_subvolume_and_updates_source_directories():
     source_directories = ['/foo', '/mnt/subvol1']
     config = {'btrfs': {}}
-    flexmock(module).should_receive('get_subvolumes').and_return(('/mnt/subvol1', '/mnt/subvol2'))
+    flexmock(module).should_receive('get_subvolumes').and_return(
+        (
+            module.Subvolume('/mnt/subvol1', contained_source_directories=('/mnt/subvol1',)),
+            module.Subvolume('/mnt/subvol2', contained_source_directories=('/mnt/subvol2',)),
+        )
+    )
     flexmock(module).should_receive('make_snapshot_path').with_args('/mnt/subvol1').and_return(
         '/mnt/subvol1/.borgmatic-1234/mnt/subvol1'
     )
@@ -97,6 +121,12 @@ def test_dump_data_sources_snapshots_each_subvolume_and_updates_source_directori
     flexmock(module).should_receive('make_snapshot_exclude_path').with_args(
         '/mnt/subvol2'
     ).and_return('/mnt/subvol2/.borgmatic-1234/mnt/subvol2/.borgmatic-1234')
+    flexmock(module).should_receive('make_borg_source_directory_path').with_args(
+        '/mnt/subvol1', object
+    ).and_return('/mnt/subvol1/.borgmatic-1234/mnt/subvol1')
+    flexmock(module).should_receive('make_borg_source_directory_path').with_args(
+        '/mnt/subvol2', object
+    ).and_return('/mnt/subvol2/.borgmatic-1234/mnt/subvol2')
 
     assert (
         module.dump_data_sources(
@@ -128,7 +158,9 @@ def test_dump_data_sources_snapshots_each_subvolume_and_updates_source_directori
 def test_dump_data_sources_uses_custom_btrfs_command_in_commands():
     source_directories = ['/foo', '/mnt/subvol1']
     config = {'btrfs': {'btrfs_command': '/usr/local/bin/btrfs'}}
-    flexmock(module).should_receive('get_subvolumes').and_return(('/mnt/subvol1',))
+    flexmock(module).should_receive('get_subvolumes').and_return(
+        (module.Subvolume('/mnt/subvol1', contained_source_directories=('/mnt/subvol1',)),)
+    )
     flexmock(module).should_receive('make_snapshot_path').with_args('/mnt/subvol1').and_return(
         '/mnt/subvol1/.borgmatic-1234/mnt/subvol1'
     )
@@ -138,6 +170,9 @@ def test_dump_data_sources_uses_custom_btrfs_command_in_commands():
     flexmock(module).should_receive('make_snapshot_exclude_path').with_args(
         '/mnt/subvol1'
     ).and_return('/mnt/subvol1/.borgmatic-1234/mnt/subvol1/.borgmatic-1234')
+    flexmock(module).should_receive('make_borg_source_directory_path').with_args(
+        '/mnt/subvol1', object
+    ).and_return('/mnt/subvol1/.borgmatic-1234/mnt/subvol1')
 
     assert (
         module.dump_data_sources(
@@ -171,7 +206,9 @@ def test_dump_data_sources_uses_custom_findmnt_command_in_commands():
     config = {'btrfs': {'findmnt_command': '/usr/local/bin/findmnt'}}
     flexmock(module).should_receive('get_subvolumes').with_args(
         'btrfs', '/usr/local/bin/findmnt', source_directories
-    ).and_return(('/mnt/subvol1',)).once()
+    ).and_return(
+        (module.Subvolume('/mnt/subvol1', contained_source_directories=('/mnt/subvol1',)),)
+    ).once()
     flexmock(module).should_receive('make_snapshot_path').with_args('/mnt/subvol1').and_return(
         '/mnt/subvol1/.borgmatic-1234/mnt/subvol1'
     )
@@ -181,6 +218,9 @@ def test_dump_data_sources_uses_custom_findmnt_command_in_commands():
     flexmock(module).should_receive('make_snapshot_exclude_path').with_args(
         '/mnt/subvol1'
     ).and_return('/mnt/subvol1/.borgmatic-1234/mnt/subvol1/.borgmatic-1234')
+    flexmock(module).should_receive('make_borg_source_directory_path').with_args(
+        '/mnt/subvol1', object
+    ).and_return('/mnt/subvol1/.borgmatic-1234/mnt/subvol1')
 
     assert (
         module.dump_data_sources(
@@ -212,7 +252,9 @@ def test_dump_data_sources_uses_custom_findmnt_command_in_commands():
 def test_dump_data_sources_with_dry_run_skips_snapshot_and_source_directories_update():
     source_directories = ['/foo', '/mnt/subvol1']
     config = {'btrfs': {}}
-    flexmock(module).should_receive('get_subvolumes').and_return(('/mnt/subvol1',))
+    flexmock(module).should_receive('get_subvolumes').and_return(
+        (module.Subvolume('/mnt/subvol1', contained_source_directories=('/mnt/subvol1',)),)
+    )
     flexmock(module).should_receive('make_snapshot_path').with_args('/mnt/subvol1').and_return(
         '/mnt/subvol1/.borgmatic-1234/mnt/subvol1'
     )
@@ -264,7 +306,12 @@ def test_dump_data_sources_without_matching_subvolumes_skips_snapshot_and_source
 def test_dump_data_sources_snapshots_adds_to_existing_exclude_patterns():
     source_directories = ['/foo', '/mnt/subvol1']
     config = {'btrfs': {}, 'exclude_patterns': ['/bar']}
-    flexmock(module).should_receive('get_subvolumes').and_return(('/mnt/subvol1', '/mnt/subvol2'))
+    flexmock(module).should_receive('get_subvolumes').and_return(
+        (
+            module.Subvolume('/mnt/subvol1', contained_source_directories=('/mnt/subvol1',)),
+            module.Subvolume('/mnt/subvol2', contained_source_directories=('/mnt/subvol2',)),
+        )
+    )
     flexmock(module).should_receive('make_snapshot_path').with_args('/mnt/subvol1').and_return(
         '/mnt/subvol1/.borgmatic-1234/mnt/subvol1'
     )
@@ -283,6 +330,12 @@ def test_dump_data_sources_snapshots_adds_to_existing_exclude_patterns():
     flexmock(module).should_receive('make_snapshot_exclude_path').with_args(
         '/mnt/subvol2'
     ).and_return('/mnt/subvol2/.borgmatic-1234/mnt/subvol2/.borgmatic-1234')
+    flexmock(module).should_receive('make_borg_source_directory_path').with_args(
+        '/mnt/subvol1', object
+    ).and_return('/mnt/subvol1/.borgmatic-1234/mnt/subvol1')
+    flexmock(module).should_receive('make_borg_source_directory_path').with_args(
+        '/mnt/subvol2', object
+    ).and_return('/mnt/subvol2/.borgmatic-1234/mnt/subvol2')
 
     assert (
         module.dump_data_sources(
@@ -314,7 +367,12 @@ def test_dump_data_sources_snapshots_adds_to_existing_exclude_patterns():
 
 def test_remove_data_source_dumps_deletes_snapshots():
     config = {'btrfs': {}}
-    flexmock(module).should_receive('get_subvolumes').and_return(('/mnt/subvol1', '/mnt/subvol2'))
+    flexmock(module).should_receive('get_subvolumes').and_return(
+        (
+            module.Subvolume('/mnt/subvol1', contained_source_directories=('/mnt/subvol1',)),
+            module.Subvolume('/mnt/subvol2', contained_source_directories=('/mnt/subvol2',)),
+        )
+    )
     flexmock(module).should_receive('make_snapshot_path').with_args('/mnt/subvol1').and_return(
         '/mnt/subvol1/.borgmatic-1234/./mnt/subvol1'
     )
@@ -435,7 +493,12 @@ def test_remove_data_source_dumps_with_get_subvolumes_called_process_error_bails
 
 def test_remove_data_source_dumps_with_dry_run_skips_deletes():
     config = {'btrfs': {}}
-    flexmock(module).should_receive('get_subvolumes').and_return(('/mnt/subvol1', '/mnt/subvol2'))
+    flexmock(module).should_receive('get_subvolumes').and_return(
+        (
+            module.Subvolume('/mnt/subvol1', contained_source_directories=('/mnt/subvol1',)),
+            module.Subvolume('/mnt/subvol2', contained_source_directories=('/mnt/subvol2',)),
+        )
+    )
     flexmock(module).should_receive('make_snapshot_path').with_args('/mnt/subvol1').and_return(
         '/mnt/subvol1/.borgmatic-1234/./mnt/subvol1'
     )
@@ -513,7 +576,12 @@ def test_remove_data_source_dumps_without_subvolumes_skips_deletes():
 
 def test_remove_data_source_without_snapshots_skips_deletes():
     config = {'btrfs': {}}
-    flexmock(module).should_receive('get_subvolumes').and_return(('/mnt/subvol1', '/mnt/subvol2'))
+    flexmock(module).should_receive('get_subvolumes').and_return(
+        (
+            module.Subvolume('/mnt/subvol1', contained_source_directories=('/mnt/subvol1',)),
+            module.Subvolume('/mnt/subvol2', contained_source_directories=('/mnt/subvol2',)),
+        )
+    )
     flexmock(module).should_receive('make_snapshot_path').with_args('/mnt/subvol1').and_return(
         '/mnt/subvol1/.borgmatic-1234/./mnt/subvol1'
     )
@@ -552,7 +620,12 @@ def test_remove_data_source_without_snapshots_skips_deletes():
 
 def test_remove_data_source_dumps_with_delete_snapshot_file_not_found_error_bails():
     config = {'btrfs': {}}
-    flexmock(module).should_receive('get_subvolumes').and_return(('/mnt/subvol1', '/mnt/subvol2'))
+    flexmock(module).should_receive('get_subvolumes').and_return(
+        (
+            module.Subvolume('/mnt/subvol1', contained_source_directories=('/mnt/subvol1',)),
+            module.Subvolume('/mnt/subvol2', contained_source_directories=('/mnt/subvol2',)),
+        )
+    )
     flexmock(module).should_receive('make_snapshot_path').with_args('/mnt/subvol1').and_return(
         '/mnt/subvol1/.borgmatic-1234/./mnt/subvol1'
     )
@@ -611,7 +684,12 @@ def test_remove_data_source_dumps_with_delete_snapshot_file_not_found_error_bail
 
 def test_remove_data_source_dumps_with_delete_snapshot_called_process_error_bails():
     config = {'btrfs': {}}
-    flexmock(module).should_receive('get_subvolumes').and_return(('/mnt/subvol1', '/mnt/subvol2'))
+    flexmock(module).should_receive('get_subvolumes').and_return(
+        (
+            module.Subvolume('/mnt/subvol1', contained_source_directories=('/mnt/subvol1',)),
+            module.Subvolume('/mnt/subvol2', contained_source_directories=('/mnt/subvol2',)),
+        )
+    )
     flexmock(module).should_receive('make_snapshot_path').with_args('/mnt/subvol1').and_return(
         '/mnt/subvol1/.borgmatic-1234/./mnt/subvol1'
     )
tests/unit/hooks/data_source/test_lvm.py | 1089 (new file; diff suppressed because it is too large)
tests/unit/hooks/data_source/test_snapshot.py | 26 (new file)
@@ -0,0 +1,26 @@
from borgmatic.hooks.data_source import snapshot as module


def test_get_contained_directories_without_candidates_returns_empty():
    assert module.get_contained_directories('/mnt', {}) == ()


def test_get_contained_directories_with_self_candidate_returns_self():
    candidates = {'/foo', '/mnt', '/bar'}

    assert module.get_contained_directories('/mnt', candidates) == ('/mnt',)
    assert candidates == {'/foo', '/bar'}


def test_get_contained_directories_with_child_candidate_returns_child():
    candidates = {'/foo', '/mnt/subdir', '/bar'}

    assert module.get_contained_directories('/mnt', candidates) == ('/mnt/subdir',)
    assert candidates == {'/foo', '/bar'}


def test_get_contained_directories_with_grandchild_candidate_returns_child():
    candidates = {'/foo', '/mnt/sub/dir', '/bar'}

    assert module.get_contained_directories('/mnt', candidates) == ('/mnt/sub/dir',)
    assert candidates == {'/foo', '/bar'}
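These tests pin down the contract of `get_contained_directories`: it returns the candidate directories that sit at or anywhere under the given parent directory, and removes the matches from the candidate set as a side effect so that later parents don't claim them again. A minimal sketch of a function satisfying that contract (an illustrative stand-in, not borgmatic's actual implementation in `borgmatic.hooks.data_source.snapshot`):

```python
import os


def get_contained_directories(parent_directory, candidate_directories):
    # Collect candidates equal to the parent or nested anywhere beneath it.
    # os.path.commonpath() compares whole path components, so '/mntx/dir' is
    # correctly not considered contained in '/mnt'.
    contained = tuple(
        candidate
        for candidate in sorted(candidate_directories)
        if os.path.commonpath((parent_directory, candidate)) == parent_directory
    )

    # Remove matched candidates from the set as a side effect.
    candidate_directories.difference_update(contained)

    return contained


candidates = {'/foo', '/mnt/sub/dir', '/bar'}
print(get_contained_directories('/mnt', candidates))  # → ('/mnt/sub/dir',)
# candidates is now {'/foo', '/bar'}
```

The mutate-the-set design lets callers such as the Btrfs and ZFS hooks walk a list of mount points and partition the configured source directories among them in a single pass.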
@ -1,3 +1,5 @@
|
||||
import os
|
||||
|
||||
import pytest
|
||||
from flexmock import flexmock
|
||||
|
||||
@ -10,10 +12,20 @@ def test_get_datasets_to_backup_filters_datasets_by_source_directories():
|
||||
).and_return(
|
||||
'dataset\t/dataset\t-\nother\t/other\t-',
|
||||
)
|
||||
flexmock(module.borgmatic.hooks.data_source.snapshot).should_receive(
|
||||
'get_contained_directories'
|
||||
).with_args('/dataset', object).and_return(('/dataset',))
|
||||
flexmock(module.borgmatic.hooks.data_source.snapshot).should_receive(
|
||||
'get_contained_directories'
|
||||
).with_args('/other', object).and_return(())
|
||||
|
||||
assert module.get_datasets_to_backup(
|
||||
'zfs', source_directories=('/foo', '/dataset', '/bar')
|
||||
) == (('dataset', '/dataset'),)
|
||||
) == (
|
||||
module.Dataset(
|
||||
name='dataset', mount_point='/dataset', contained_source_directories=('/dataset',)
|
||||
),
|
||||
)
|
||||
|
||||
|
||||
def test_get_datasets_to_backup_filters_datasets_by_user_property():
|
||||
@ -22,9 +34,20 @@ def test_get_datasets_to_backup_filters_datasets_by_user_property():
|
||||
).and_return(
|
||||
'dataset\t/dataset\tauto\nother\t/other\t-',
|
||||
)
|
||||
flexmock(module.borgmatic.hooks.data_source.snapshot).should_receive(
|
||||
'get_contained_directories'
|
||||
).with_args('/dataset', object).never()
|
||||
flexmock(module.borgmatic.hooks.data_source.snapshot).should_receive(
|
||||
'get_contained_directories'
|
||||
).with_args('/other', object).and_return(())
|
||||
|
||||
assert module.get_datasets_to_backup('zfs', source_directories=('/foo', '/bar')) == (
|
||||
('dataset', '/dataset'),
|
||||
module.Dataset(
|
||||
name='dataset',
|
||||
mount_point='/dataset',
|
||||
auto_backup=True,
|
||||
contained_source_directories=('/dataset',),
|
||||
),
|
||||
)
|
||||
|
||||
|
||||
@ -34,38 +57,39 @@ def test_get_datasets_to_backup_with_invalid_list_output_raises():
|
||||
).and_return(
|
||||
'dataset',
|
||||
)
|
||||
flexmock(module.borgmatic.hooks.data_source.snapshot).should_receive(
|
||||
'get_contained_directories'
|
||||
).never()
|
||||
|
||||
with pytest.raises(ValueError, match='zfs'):
|
||||
module.get_datasets_to_backup('zfs', source_directories=('/foo', '/bar'))
|
||||
|
||||
|
||||
def test_get_get_all_datasets_does_not_filter_datasets():
|
||||
def test_get_all_dataset_mount_points_does_not_filter_datasets():
|
||||
flexmock(module.borgmatic.execute).should_receive(
|
||||
'execute_command_and_capture_output'
|
||||
).and_return(
|
||||
'dataset\t/dataset\nother\t/other',
|
||||
'/dataset\n/other',
|
||||
)
|
||||
flexmock(module.borgmatic.hooks.data_source.snapshot).should_receive(
|
||||
'get_contained_directories'
|
||||
).and_return(('/dataset',))
|
||||
|
||||
assert module.get_all_datasets('zfs') == (
|
||||
('dataset', '/dataset'),
|
||||
('other', '/other'),
|
||||
assert module.get_all_dataset_mount_points('zfs') == (
|
||||
('/dataset'),
|
||||
('/other'),
|
||||
)
|
||||
|
||||
|
||||
def test_get_all_datasets_with_invalid_list_output_raises():
|
||||
flexmock(module.borgmatic.execute).should_receive(
|
||||
'execute_command_and_capture_output'
|
||||
).and_return(
|
||||
'dataset',
|
||||
)
|
||||
|
||||
with pytest.raises(ValueError, match='zfs'):
|
||||
module.get_all_datasets('zfs')
|
||||
|
||||
|
||||
def test_dump_data_sources_snapshots_and_mounts_and_updates_source_directories():
|
||||
flexmock(module).should_receive('get_datasets_to_backup').and_return(
|
||||
(('dataset', '/mnt/dataset'),)
|
||||
(
|
||||
flexmock(
|
||||
name='dataset',
|
||||
mount_point='/mnt/dataset',
|
||||
contained_source_directories=('/mnt/dataset/subdir',),
|
||||
)
|
||||
)
|
||||
)
|
||||
flexmock(module.os).should_receive('getpid').and_return(1234)
|
||||
full_snapshot_name = 'dataset@borgmatic-1234'
|
||||
@ -79,7 +103,7 @@ def test_dump_data_sources_snapshots_and_mounts_and_updates_source_directories()
|
||||
full_snapshot_name,
|
||||
module.os.path.normpath(snapshot_mount_path),
|
||||
).once()
|
||||
source_directories = ['/mnt/dataset']
|
||||
source_directories = ['/mnt/dataset/subdir']
|
||||
|
||||
assert (
|
||||
module.dump_data_sources(
|
||||
@ -94,10 +118,10 @@ def test_dump_data_sources_snapshots_and_mounts_and_updates_source_directories()
|
||||
== []
|
||||
)
|
||||
|
||||
assert source_directories == [snapshot_mount_path]
|
||||
assert source_directories == [os.path.join(snapshot_mount_path, 'subdir')]
|
||||
|
||||
|
||||
def test_dump_data_sources_snapshots_with_no_datasets_skips_snapshots():
|
||||
def test_dump_data_sources_with_no_datasets_skips_snapshots():
|
||||
flexmock(module).should_receive('get_datasets_to_backup').and_return(())
|
||||
flexmock(module.os).should_receive('getpid').and_return(1234)
|
||||
flexmock(module).should_receive('snapshot_dataset').never()
|
||||
@ -122,7 +146,13 @@ def test_dump_data_sources_snapshots_with_no_datasets_skips_snapshots():
|
||||
|
||||
def test_dump_data_sources_uses_custom_commands():
|
||||
flexmock(module).should_receive('get_datasets_to_backup').and_return(
|
||||
(('dataset', '/mnt/dataset'),)
|
||||
(
|
||||
flexmock(
|
||||
name='dataset',
|
||||
mount_point='/mnt/dataset',
|
||||
contained_source_directories=('/mnt/dataset/subdir',),
|
||||
)
|
||||
)
|
||||
)
|
||||
flexmock(module.os).should_receive('getpid').and_return(1234)
|
||||
full_snapshot_name = 'dataset@borgmatic-1234'
|
||||
@ -136,7 +166,7 @@ def test_dump_data_sources_uses_custom_commands():
|
||||
full_snapshot_name,
|
||||
module.os.path.normpath(snapshot_mount_path),
|
||||
).once()
|
||||
source_directories = ['/mnt/dataset']
|
||||
source_directories = ['/mnt/dataset/subdir']
|
||||
hook_config = {
|
||||
'zfs_command': '/usr/local/bin/zfs',
|
||||
'mount_command': '/usr/local/bin/mount',
|
||||
@ -158,12 +188,12 @@ def test_dump_data_sources_uses_custom_commands():
|
||||
== []
|
||||
)
|
||||
|
||||
assert source_directories == [snapshot_mount_path]
|
||||
assert source_directories == [os.path.join(snapshot_mount_path, 'subdir')]
|
||||
|
||||
|
||||
def test_dump_data_sources_with_dry_run_skips_commands_and_does_not_touch_source_directories():
|
||||
flexmock(module).should_receive('get_datasets_to_backup').and_return(
|
||||
(('dataset', '/mnt/dataset'),)
|
||||
(flexmock(name='dataset', mount_point='/mnt/dataset'),)
|
||||
)
|
||||
flexmock(module.os).should_receive('getpid').and_return(1234)
|
||||
flexmock(module).should_receive('snapshot_dataset').never()
|
||||
@ -186,6 +216,46 @@ def test_dump_data_sources_with_dry_run_skips_commands_and_does_not_touch_source
|
||||
assert source_directories == ['/mnt/dataset']
|
||||
|
||||
|
||||
def test_dump_data_sources_ignores_mismatch_between_source_directories_and_contained_source_directories():
|
||||
flexmock(module).should_receive('get_datasets_to_backup').and_return(
|
||||
(
|
||||
flexmock(
|
||||
name='dataset',
|
||||
mount_point='/mnt/dataset',
|
||||
contained_source_directories=('/mnt/dataset/subdir',),
|
||||
)
|
||||
)
|
||||
)
|
||||
flexmock(module.os).should_receive('getpid').and_return(1234)
|
||||
full_snapshot_name = 'dataset@borgmatic-1234'
|
||||
flexmock(module).should_receive('snapshot_dataset').with_args(
|
||||
'zfs',
|
||||
full_snapshot_name,
|
||||
).once()
|
||||
snapshot_mount_path = '/run/borgmatic/zfs_snapshots/./mnt/dataset'
|
||||
flexmock(module).should_receive('mount_snapshot').with_args(
|
||||
'mount',
|
||||
full_snapshot_name,
|
||||
module.os.path.normpath(snapshot_mount_path),
|
||||
).once()
|
||||
source_directories = ['/hmm']
|
||||
|
||||
assert (
|
||||
module.dump_data_sources(
|
||||
hook_config={},
|
||||
config={'source_directories': '/mnt/dataset', 'zfs': {}},
|
||||
log_prefix='test',
|
||||
config_paths=('test.yaml',),
|
||||
borgmatic_runtime_directory='/run/borgmatic',
|
||||
source_directories=source_directories,
|
||||
dry_run=False,
|
||||
)
|
||||
== []
|
||||
)
|
||||
|
||||
assert source_directories == ['/hmm', os.path.join(snapshot_mount_path, 'subdir')]
|
||||
|
||||
|
||||
def test_get_all_snapshots_parses_list_output():
|
||||
flexmock(module.borgmatic.execute).should_receive(
|
||||
'execute_command_and_capture_output'
|
||||
@@ -197,7 +267,7 @@ def test_get_all_snapshots_parses_list_output():
 
 
 def test_remove_data_source_dumps_unmounts_and_destroys_snapshots():
-    flexmock(module).should_receive('get_all_datasets').and_return((('dataset', '/mnt/dataset'),))
+    flexmock(module).should_receive('get_all_dataset_mount_points').and_return(('/mnt/dataset',))
     flexmock(module.borgmatic.config.paths).should_receive(
         'replace_temporary_subdirectory_with_glob'
     ).and_return('/run/borgmatic')
@@ -224,7 +294,7 @@ def test_remove_data_source_dumps_unmounts_and_destroys_snapshots():
 
 
 def test_remove_data_source_dumps_use_custom_commands():
-    flexmock(module).should_receive('get_all_datasets').and_return((('dataset', '/mnt/dataset'),))
+    flexmock(module).should_receive('get_all_dataset_mount_points').and_return(('/mnt/dataset',))
     flexmock(module.borgmatic.config.paths).should_receive(
         'replace_temporary_subdirectory_with_glob'
     ).and_return('/run/borgmatic')
@@ -252,7 +322,7 @@ def test_remove_data_source_dumps_use_custom_commands():
 
 
 def test_remove_data_source_dumps_bails_for_missing_zfs_command():
-    flexmock(module).should_receive('get_all_datasets').and_raise(FileNotFoundError)
+    flexmock(module).should_receive('get_all_dataset_mount_points').and_raise(FileNotFoundError)
     flexmock(module.borgmatic.config.paths).should_receive(
         'replace_temporary_subdirectory_with_glob'
     ).never()
@@ -268,7 +338,7 @@ def test_remove_data_source_dumps_bails_for_missing_zfs_command():
 
 
 def test_remove_data_source_dumps_bails_for_zfs_command_error():
-    flexmock(module).should_receive('get_all_datasets').and_raise(
+    flexmock(module).should_receive('get_all_dataset_mount_points').and_raise(
         module.subprocess.CalledProcessError(1, 'wtf')
     )
     flexmock(module.borgmatic.config.paths).should_receive(
@@ -286,7 +356,7 @@ def test_remove_data_source_dumps_bails_for_zfs_command_error():
 
 
 def test_remove_data_source_dumps_bails_for_missing_umount_command():
-    flexmock(module).should_receive('get_all_datasets').and_return((('dataset', '/mnt/dataset'),))
+    flexmock(module).should_receive('get_all_dataset_mount_points').and_return(('/mnt/dataset',))
    flexmock(module.borgmatic.config.paths).should_receive(
         'replace_temporary_subdirectory_with_glob'
     ).and_return('/run/borgmatic')
@@ -310,7 +380,7 @@ def test_remove_data_source_dumps_bails_for_missing_umount_command():
 
 
 def test_remove_data_source_dumps_bails_for_umount_command_error():
-    flexmock(module).should_receive('get_all_datasets').and_return((('dataset', '/mnt/dataset'),))
+    flexmock(module).should_receive('get_all_dataset_mount_points').and_return(('/mnt/dataset',))
     flexmock(module.borgmatic.config.paths).should_receive(
         'replace_temporary_subdirectory_with_glob'
     ).and_return('/run/borgmatic')
@@ -334,7 +404,7 @@ def test_remove_data_source_dumps_bails_for_umount_command_error():
 
 
 def test_remove_data_source_dumps_skips_unmount_snapshot_directories_that_are_not_actually_directories():
-    flexmock(module).should_receive('get_all_datasets').and_return((('dataset', '/mnt/dataset'),))
+    flexmock(module).should_receive('get_all_dataset_mount_points').and_return(('/mnt/dataset',))
     flexmock(module.borgmatic.config.paths).should_receive(
         'replace_temporary_subdirectory_with_glob'
     ).and_return('/run/borgmatic')
@@ -359,7 +429,7 @@ def test_remove_data_source_dumps_skips_unmount_snapshot_directories_that_are_not_actually_directories():
 
 
 def test_remove_data_source_dumps_skips_unmount_snapshot_mount_paths_that_are_not_actually_directories():
-    flexmock(module).should_receive('get_all_datasets').and_return((('dataset', '/mnt/dataset'),))
+    flexmock(module).should_receive('get_all_dataset_mount_points').and_return(('/mnt/dataset',))
     flexmock(module.borgmatic.config.paths).should_receive(
         'replace_temporary_subdirectory_with_glob'
     ).and_return('/run/borgmatic')
@@ -388,8 +458,38 @@ def test_remove_data_source_dumps_skips_unmount_snapshot_mount_paths_that_are_not_actually_directories():
     )
 
 
+def test_remove_data_source_dumps_skips_unmount_snapshot_mount_paths_after_rmtree_succeeds():
+    flexmock(module).should_receive('get_all_dataset_mount_points').and_return(('/mnt/dataset',))
+    flexmock(module.borgmatic.config.paths).should_receive(
+        'replace_temporary_subdirectory_with_glob'
+    ).and_return('/run/borgmatic')
+    flexmock(module.glob).should_receive('glob').replace_with(lambda path: [path])
+    flexmock(module.os.path).should_receive('isdir').with_args(
+        '/run/borgmatic/zfs_snapshots'
+    ).and_return(True)
+    flexmock(module.os.path).should_receive('isdir').with_args(
+        '/run/borgmatic/zfs_snapshots/mnt/dataset'
+    ).and_return(True).and_return(False)
+    flexmock(module.shutil).should_receive('rmtree')
+    flexmock(module).should_receive('unmount_snapshot').never()
+    flexmock(module).should_receive('get_all_snapshots').and_return(
+        ('dataset@borgmatic-1234', 'dataset@other', 'other@other', 'invalid')
+    )
+    flexmock(module).should_receive('destroy_snapshot').with_args(
+        'zfs', 'dataset@borgmatic-1234'
+    ).once()
+
+    module.remove_data_source_dumps(
+        hook_config={},
+        config={'source_directories': '/mnt/dataset', 'zfs': {}},
+        log_prefix='test',
+        borgmatic_runtime_directory='/run/borgmatic',
+        dry_run=False,
+    )
+
+
 def test_remove_data_source_dumps_with_dry_run_skips_unmount_and_destroy():
-    flexmock(module).should_receive('get_all_datasets').and_return((('dataset', '/mnt/dataset'),))
+    flexmock(module).should_receive('get_all_dataset_mount_points').and_return(('/mnt/dataset',))
     flexmock(module.borgmatic.config.paths).should_receive(
         'replace_temporary_subdirectory_with_glob'
     ).and_return('/run/borgmatic')
Reviewer: I don't think that works. Snapshots take up space even if you don't write to them: writes to the parent volume will also write to the snapshot (counter-intuitively, I admit) to keep a copy of the old extent. At least, that's how I understand it. Did you test this to confirm it actually works in production? I suspect you'll have to make that number of extents configurable, and in fact I would use the --size parameter instead of --extents, unless you want to start parsing units for the user and converting them into extents...

Author: Works in production? No. Works on my test machine? Yes. So are you thinking of just a configuration option for setting the size of snapshots in MB? Would that really work globally for all logical volumes? Or do you think, at the risk of complicating this code, a percentage of the logical volume's size would work better? And thanks for taking the time to look at this and weigh in!
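The copy-on-write behavior the reviewer describes — each first write to an origin extent copies the old data into the snapshot, and a full snapshot becomes invalid — can be sketched as a toy model. This is purely illustrative; the function and its names are not part of borgmatic or LVM:

```python
def snapshot_usage_after_writes(snapshot_extents, written_origin_extents):
    '''
    Toy model of LVM copy-on-write accounting: the first write to each
    origin extent copies the old data into the snapshot, consuming one
    snapshot extent. Once writes touch more distinct origin extents than
    the snapshot has capacity for, LVM invalidates the snapshot.

    Returns (used snapshot extents, whether the snapshot is still valid).
    '''
    distinct_writes = len(set(written_origin_extents))

    return (min(distinct_writes, snapshot_extents), distinct_writes <= snapshot_extents)


# A one-extent snapshot survives repeated writes to a single origin extent...
print(snapshot_usage_after_writes(1, [0, 0, 0]))  # (1, True)
# ...but a write to a second origin extent fills and invalidates it.
print(snapshot_usage_after_writes(1, [0, 1]))  # (1, False)
```

This is why a tiny fixed-extent snapshot can appear to work on a lightly written test machine yet fail in production.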
Reviewer: I suspect it works for you because you never write enough to fill that one extent. Essentially, yes, a configuration option, though it's not necessarily in MB: you specify the unit or percentage yourself, so you can do -L 10%ORIGIN to get a snapshot 10% of the original size, or -L 1G to get a 1G snapshot. I'd pass that verbatim to lvcreate, essentially. You could even use 10% as a default, but then the problem is that you need enough free space to cover it, which is why I think this needs to be configured by the user. No problem, and thanks for taking this one on!

Author: Sounds good. I'll add that option as you describe, potentially with a default.

Author: Okay, implemented!
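The "pass the user's size verbatim to lvcreate" approach agreed on above might look something like the sketch below. The function name and the snapshot_size option are hypothetical, not borgmatic's actual implementation; the flag semantics follow lvcreate(8), where percentage values (e.g. 10%ORIGIN) belong to -l/--extents and absolute sizes (e.g. 1G) to -L/--size:

```python
import shlex


def build_lvcreate_command(logical_volume_path, snapshot_name, snapshot_size='10%ORIGIN'):
    '''
    Build (but don't run) an lvcreate command for snapshotting the given
    logical volume, passing the user-configured size string through
    verbatim. Percentages go to --extents and absolute sizes to --size,
    so the user never needs units parsed or converted on their behalf.
    '''
    size_flag = '--extents' if '%' in snapshot_size else '--size'

    return (
        'lvcreate',
        '--snapshot',
        size_flag,
        snapshot_size,
        '--name',
        snapshot_name,
        logical_volume_path,
    )


# With the assumed 10%ORIGIN default, the size lands on --extents:
print(shlex.join(build_lvcreate_command('/dev/vg0/root', 'borgmatic-1234')))
# An absolute size like 1G lands on --size instead:
print(shlex.join(build_lvcreate_command('/dev/vg0/home', 'borgmatic-1234', '1G')))
```

Keeping the size string opaque to the tool sidesteps unit parsing entirely, at the cost of surfacing lvcreate's own error message when the user supplies something invalid.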