Compare commits


40 Commits
main ... main

Author SHA1 Message Date
ebde88ccaa Fix the Healthchecks ping body size limit, restoring it to the documented 100,000 bytes (#889).
All checks were successful
build / test (push) Successful in 6m0s
build / docs (push) Successful in 1m28s
2024-06-25 12:45:44 -07:00
cc402487d9 Minor development documentation clarifications.
All checks were successful
build / test (push) Successful in 5m55s
build / docs (push) Successful in 1m29s
2024-06-24 10:48:13 -07:00
f5a1dd31c8 Fix PagerDuty hook traceback with Python < 3.10 (#886).
All checks were successful
build / test (push) Successful in 4m17s
build / docs (push) Successful in 50s
2024-06-23 18:28:41 -07:00
c41000a4b1 Bump version for release.
Some checks failed
build / test (push) Failing after 1m19s
build / docs (push) Has been skipped
2024-06-23 17:02:13 -07:00
c3f8b05a68 Fix test warning in PagerDuty hook. 2024-06-23 17:01:54 -07:00
f4fcf92bd6 Add an "upload_buffer_size" option to set the size of the upload buffer used in the "create" action (#865).
All checks were successful
build / test (push) Successful in 6m17s
build / docs (push) Successful in 1m28s
2024-06-23 16:26:22 -07:00
a2c139245d Add a "--max-duration" flag to the "check" action and a "max_duration" option to the repository check configuration (#817).
All checks were successful
build / test (push) Successful in 6m14s
build / docs (push) Successful in 1m29s
2024-06-22 16:19:06 -07:00
45a9e3bfc3 Document that "borgmatic borg create" bypasses monitoring hooks (#882).
All checks were successful
build / test (push) Successful in 4m23s
build / docs (push) Successful in 53s
2024-06-20 14:25:20 -07:00
bd40015e1c Added missing word to NEWS entry (#881).
All checks were successful
build / test (push) Successful in 4m37s
build / docs (push) Successful in 1m29s
2024-06-20 13:33:34 -07:00
7894600408 Fix "Unrecognized argument" error when the same value is used with different command-line flags (#881).
Some checks failed
build / docs (push) Blocked by required conditions
build / test (push) Has been cancelled
2024-06-20 11:53:52 -07:00
df4668754d Fix "Argument list too long" error in the "spot" check when checking 100k+ files (#866).
All checks were successful
build / test (push) Successful in 6m20s
build / docs (push) Successful in 1m29s
2024-06-09 22:53:56 -07:00
08d6f83b2e In the "spot" check, don't try to hash symlinked directories.
All checks were successful
build / test (push) Successful in 4m25s
build / docs (push) Successful in 51s
2024-06-09 15:58:16 -07:00
c58f510054 Minor spot check documentation clarification (#868).
All checks were successful
build / test (push) Successful in 6m7s
build / docs (push) Successful in 1m30s
2024-06-09 15:28:28 -07:00
c2879d054a Alpha ordering in docs (#874).
All checks were successful
build / test (push) Successful in 4m29s
build / docs (push) Successful in 53s
2024-06-05 14:58:43 -07:00
f821d2c909 Call the interpolated variable "repository_label" instead of "label" for clarity (#874).
Some checks failed
build / docs (push) Blocked by required conditions
build / test (push) Has been cancelled
2024-06-05 14:56:21 -07:00
1ef2218919 Remove obsolete "version:" from Docker Compose files. 2024-06-05 14:50:52 -07:00
177c958572 Add configured repository "label" to the interpolated variables passed to command hooks (#874).
All checks were successful
build / test (push) Successful in 4m26s
build / docs (push) Successful in 52s
2024-06-05 14:47:37 -07:00
b5ab1ff0cd Use (current) default action order whenever actions are mentioned (#873).
All checks were successful
build / test (push) Successful in 6m9s
build / docs (push) Successful in 1m30s
2024-06-05 11:21:51 -07:00
70a978b83d Upgrade test requirements.
All checks were successful
build / test (push) Successful in 5m46s
build / docs (push) Successful in 1m15s
2024-05-21 13:57:06 -07:00
2037810c6b Avoid requiring network in test_healthchecks.py (#869).
All checks were successful
build / test (push) Successful in 7m26s
build / docs (push) Successful in 2m27s
Reviewed-on: borgmatic-collective/borgmatic#869
2024-05-21 20:33:21 +00:00
de304f83de Avoid requiring network in test_healthchecks.py
Some test environments (e.g., that of the Nix build system) don't
allow network requests while building and testing.
2024-05-16 16:11:40 +02:00
5752373009 When color output is disabled (explicitly or implicitly), don't prefix each log line with the log level (#863).
All checks were successful
build / test (push) Successful in 7m59s
build / docs (push) Successful in 2m26s
2024-05-11 22:40:13 -07:00
fecae39fcd To avoid duplicate install, update docs to uninstall borgmatic before re-installing with Apprise (#862).
All checks were successful
build / test (push) Successful in 7m57s
build / docs (push) Successful in 2m23s
2024-05-03 16:48:35 -07:00
38bc4fbfe2 Fix interaction between environment variable interpolation in constants and shell escaping (#860).
All checks were successful
build / test (push) Successful in 7m52s
build / docs (push) Successful in 2m19s
2024-04-30 09:36:26 -07:00
92ed7573d4 Fix NEWS formatting.
All checks were successful
build / test (push) Successful in 7m22s
build / docs (push) Successful in 2m7s
2024-04-29 09:39:40 -07:00
80f0e92462 Bump version for release. 2024-04-29 09:38:02 -07:00
5f10b1b2ca Clarify database limitations.
All checks were successful
build / test (push) Successful in 6m10s
build / docs (push) Successful in 1m23s
2024-04-28 16:55:24 -07:00
4f83b1e6b3 [Documentation] Add compression level explanation and example.
All checks were successful
build / test (push) Successful in 7m24s
build / docs (push) Successful in 2m27s
Reviewed-on: borgmatic-collective/borgmatic#859
2024-04-28 16:50:09 +00:00
15d5a687fb make parenthetical its own sentence 2024-04-28 18:41:05 +02:00
eb1fce3787 documentation: add compression level explanation and example 2024-04-28 18:24:23 +02:00
7f735cbe59 Fix a traceback with "check --only spot" when the "spot" check is unconfigured (#857).
All checks were successful
build / test (push) Successful in 7m42s
build / docs (push) Successful in 2m10s
2024-04-24 16:12:58 -07:00
a690ea4016 Add Healthchecks auto-provisioning to NEWS (#815).
All checks were successful
build / test (push) Successful in 5m49s
build / docs (push) Successful in 2m16s
2024-04-23 09:25:29 -07:00
7a110c7acd Add Healthchecks auto-provisioning (#815).
Some checks failed
build / docs (push) Blocked by required conditions
build / test (push) Has been cancelled
Reviewed-on: borgmatic-collective/borgmatic#852
Reviewed-by: Dan Helfman <witten@torsion.org>
2024-04-23 16:23:26 +00:00
407bb33359 Fix schema.yaml to comply with maximum line length 2024-04-22 20:47:03 +02:00
4b7f7bba04 Issue warning if using UUID URL scheme with create_slug 2024-04-22 20:45:36 +02:00
cfdc0a1f2a Fix Healthchecks UUID regex 2024-04-22 20:44:31 +02:00
f926055e67 Fix a traceback when the "data" consistency check is used (#854).
All checks were successful
build / test (push) Successful in 7m36s
build / docs (push) Successful in 2m26s
2024-04-21 14:55:02 -07:00
058af95d70 Document limitation about using database hooks and "one_file_system" (#853).
All checks were successful
build / test (push) Successful in 4m20s
build / docs (push) Successful in 52s
2024-04-20 14:53:41 -07:00
54facdc391 Clarify Apprise states configuration.
All checks were successful
build / test (push) Successful in 6m2s
build / docs (push) Successful in 1m29s
2024-04-20 08:26:06 -07:00
2e4c0cc7e7 Support for Healthchecks auto-provisioning 2024-04-19 10:43:45 +02:00
29 changed files with 984 additions and 194 deletions

NEWS
View File

@ -1,6 +1,30 @@
1.8.11.dev0
1.8.13.dev0
* #886: Fix a PagerDuty hook traceback with Python < 3.10.
* #889: Fix the Healthchecks ping body size limit, restoring it to the documented 100,000 bytes.
1.8.12
* #817: Add a "--max-duration" flag to the "check" action and a "max_duration" option to the
repository check configuration. This tells Borg to interrupt a repository check after a certain
duration.
* #860: Fix interaction between environment variable interpolation in constants and shell escaping.
* #863: When color output is disabled (explicitly or implicitly), don't prefix each log line with
the log level.
* #865: Add an "upload_buffer_size" option to set the size of the upload buffer used in "create"
action.
* #866: Fix "Argument list too long" error in the "spot" check when checking hundreds of thousands
of files at once.
* #874: Add the configured repository label as "repository_label" to the interpolated variables
passed to before/after command hooks.
* #881: Fix "Unrecognized argument" error when the same value is used with different command-line
flags.
* In the "spot" check, don't try to hash symlinked directories.
1.8.11
* #815: Add optional Healthchecks auto-provisioning via "create_slug" option.
* #851: Fix lack of file extraction when using "extract --strip-components all" on a path with a
leading slash.
* #854: Fix a traceback when the "data" consistency check is used.
* #857: Fix a traceback with "check --only spot" when the "spot" check is unconfigured.
1.8.10
* #656 (beta): Add a "spot" consistency check that compares file counts and contents between your

View File

@ -300,8 +300,7 @@ def collect_spot_check_source_paths(
'''
Given a repository configuration dict, a configuration dict, the local Borg version, global
arguments as an argparse.Namespace instance, the local Borg path, and the remote Borg path,
collect the source paths that Borg would use in an actual create (but only include files and
symlinks).
collect the source paths that Borg would use in an actual create (but only include files).
'''
stream_processes = any(
borgmatic.hooks.dispatch.call_hooks(
@ -349,7 +348,7 @@ def collect_spot_check_source_paths(
if path_line and path_line.startswith('- ') or path_line.startswith('+ ')
)
return tuple(path for path in paths if os.path.isfile(path) or os.path.islink(path))
return tuple(path for path in paths if os.path.isfile(path))
BORG_DIRECTORY_FILE_TYPE = 'd'
@ -388,6 +387,9 @@ def collect_spot_check_archive_paths(
)
SAMPLE_PATHS_SUBSET_COUNT = 10000
def compare_spot_check_hashes(
repository,
archive,
@ -420,32 +422,57 @@ def compare_spot_check_hashes(
f'{log_label}: Sampling {sample_count} source paths (~{spot_check_config["data_sample_percentage"]}%) for spot check'
)
# Hash each file in the sample paths (if it exists).
hash_output = borgmatic.execute.execute_command_and_capture_output(
(spot_check_config.get('xxh64sum_command', 'xxh64sum'),)
+ tuple(path for path in source_sample_paths if path in existing_source_sample_paths)
)
source_sample_paths_iterator = iter(source_sample_paths)
source_hashes = {}
archive_hashes = {}
source_hashes = dict(
(reversed(line.split(' ', 1)) for line in hash_output.splitlines()),
**{path: '' for path in source_sample_paths if path not in existing_source_sample_paths},
)
archive_hashes = dict(
reversed(line.split(' ', 1))
for line in borgmatic.borg.list.capture_archive_listing(
repository['path'],
archive,
config,
local_borg_version,
global_arguments,
list_paths=source_sample_paths,
path_format='{xxh64} /{path}{NL}', # noqa: FS003
local_path=local_path,
remote_path=remote_path,
# Only hash a few thousand files at a time (a subset of the total paths) to avoid an "Argument
# list too long" shell error.
while True:
# Hash each file in the sample paths (if it exists).
source_sample_paths_subset = tuple(
itertools.islice(source_sample_paths_iterator, SAMPLE_PATHS_SUBSET_COUNT)
)
if not source_sample_paths_subset:
break
hash_output = borgmatic.execute.execute_command_and_capture_output(
(spot_check_config.get('xxh64sum_command', 'xxh64sum'),)
+ tuple(
path for path in source_sample_paths_subset if path in existing_source_sample_paths
)
)
source_hashes.update(
**dict(
(reversed(line.split(' ', 1)) for line in hash_output.splitlines()),
# Represent non-existent files as having empty hashes so the comparison below still works.
**{
path: ''
for path in source_sample_paths_subset
if path not in existing_source_sample_paths
},
)
)
# Get the hash for each file in the archive.
archive_hashes.update(
**dict(
reversed(line.split(' ', 1))
for line in borgmatic.borg.list.capture_archive_listing(
repository['path'],
archive,
config,
local_borg_version,
global_arguments,
list_paths=source_sample_paths_subset,
path_format='{xxh64} /{path}{NL}', # noqa: FS003
local_path=local_path,
remote_path=remote_path,
)
if line
)
)
if line
)
# Compare the source hashes with the archive hashes to see how many match.
failing_paths = []
@ -480,7 +507,13 @@ def spot_check(
'''
log_label = f'{repository.get("label", repository["path"])}'
logger.debug(f'{log_label}: Running spot check')
spot_check_config = next(check for check in config['checks'] if check['name'] == 'spot')
try:
spot_check_config = next(
check for check in config.get('checks', ()) if check.get('name') == 'spot'
)
except StopIteration:
raise ValueError('Cannot run spot check because it is unconfigured')
if spot_check_config['data_tolerance_percentage'] > spot_check_config['data_sample_percentage']:
raise ValueError(

View File

@ -50,10 +50,10 @@ def make_archive_filter_flags(local_borg_version, config, checks, check_argument
return ()
def make_check_flags(checks, archive_filter_flags):
def make_check_name_flags(checks, archive_filter_flags):
'''
Given a parsed sequence of checks and a sequence of flags to filter archives, transform the
checks into tuple of command-line check flags.
Given a parsed set of checks and a sequence of flags to filter archives, transform the checks
into a tuple of command-line check flags.
For example, given parsed checks of:
@ -68,13 +68,13 @@ def make_check_flags(checks, archive_filter_flags):
'''
if 'data' in checks:
data_flags = ('--verify-data',)
checks += ('archives',)
checks.update({'archives'})
else:
data_flags = ()
common_flags = (archive_filter_flags if 'archives' in checks else ()) + data_flags
if {'repository', 'archives'}.issubset(set(checks)):
if {'repository', 'archives'}.issubset(checks):
return common_flags
return (
@ -134,10 +134,30 @@ def check_archives(
if logger.isEnabledFor(logging.DEBUG):
verbosity_flags = ('--debug', '--show-rc')
try:
repository_check_config = next(
check for check in config.get('checks', ()) if check.get('name') == 'repository'
)
except StopIteration:
repository_check_config = {}
if check_arguments.max_duration and 'archives' in checks:
raise ValueError('The archives check cannot run when the --max-duration flag is used')
if repository_check_config.get('max_duration') and 'archives' in checks:
raise ValueError(
'The archives check cannot run when the repository check has the max_duration option set'
)
max_duration = check_arguments.max_duration or repository_check_config.get('max_duration')
borg_environment = environment.make_environment(config)
borg_exit_codes = config.get('borg_exit_codes')
full_command = (
(local_path, 'check')
+ (('--repair',) if check_arguments.repair else ())
+ make_check_flags(checks, archive_filter_flags)
+ (('--max-duration', str(max_duration)) if max_duration else ())
+ make_check_name_flags(checks, archive_filter_flags)
+ (('--remote-path', remote_path) if remote_path else ())
+ (('--log-json',) if global_arguments.log_json else ())
+ (('--lock-wait', str(lock_wait)) if lock_wait else ())
@ -147,9 +167,6 @@ def check_archives(
+ flags.make_repository_flags(repository_path, local_borg_version)
)
borg_environment = environment.make_environment(config)
borg_exit_codes = config.get('borg_exit_codes')
# The Borg repair option triggers an interactive prompt, which won't work when output is
# captured. And progress messes with the terminal directly.
if check_arguments.repair or check_arguments.progress:

View File

@ -371,6 +371,7 @@ def make_base_create_command(
chunker_params = config.get('chunker_params', None)
compression = config.get('compression', None)
upload_rate_limit = config.get('upload_rate_limit', None)
upload_buffer_size = config.get('upload_buffer_size', None)
umask = config.get('umask', None)
lock_wait = config.get('lock_wait', None)
list_filter_flags = make_list_filter_flags(local_borg_version, dry_run)
@ -412,6 +413,7 @@ def make_base_create_command(
+ (('--chunker-params', chunker_params) if chunker_params else ())
+ (('--compression', compression) if compression else ())
+ upload_ratelimit_flags
+ (('--upload-buffer', str(upload_buffer_size)) if upload_buffer_size else ())
+ (('--one-file-system',) if config.get('one_file_system') or stream_processes else ())
+ numeric_ids_flags
+ atime_flags

View File

@ -113,6 +113,54 @@ def parse_and_record_action_arguments(
return tuple(argument for argument in remaining if argument != action_name)
def argument_is_flag(argument):
'''
Return True if the given argument looks like a flag, e.g. '--some-flag', as opposed to a
non-flag value.
'''
return isinstance(argument, str) and argument.startswith('--')
def group_arguments_with_values(arguments):
'''
Given a sequence of arguments, return a sequence of tuples where each one contains either a
single argument (such as for a stand-alone flag) or a flag argument and its corresponding value.
For instance, given the following arguments sequence as input:
('--foo', '--bar', '33', '--baz')
... return the following output:
(('--foo',), ('--bar', '33'), ('--baz',))
'''
grouped_arguments = []
index = 0
while index < len(arguments):
this_argument = arguments[index]
try:
next_argument = arguments[index + 1]
except IndexError:
grouped_arguments.append((this_argument,))
break
if (
argument_is_flag(this_argument)
and not argument_is_flag(next_argument)
and next_argument not in ACTION_ALIASES
):
grouped_arguments.append((this_argument, next_argument))
index += 2
continue
grouped_arguments.append((this_argument,))
index += 1
return tuple(grouped_arguments)
def get_unparsable_arguments(remaining_action_arguments):
'''
Given a sequence of argument tuples (one per action parser that parsed arguments), determine the
@ -121,12 +169,21 @@ def get_unparsable_arguments(remaining_action_arguments):
if not remaining_action_arguments:
return ()
grouped_action_arguments = tuple(
group_arguments_with_values(action_arguments)
for action_arguments in remaining_action_arguments
)
return tuple(
argument
for argument in dict.fromkeys(
itertools.chain.from_iterable(remaining_action_arguments)
).keys()
if all(argument in action_arguments for action_arguments in remaining_action_arguments)
itertools.chain.from_iterable(
argument_group
for argument_group in dict.fromkeys(
itertools.chain.from_iterable(grouped_action_arguments)
).keys()
if all(
argument_group in action_arguments for action_arguments in grouped_action_arguments
)
)
)
@ -604,6 +661,11 @@ def make_parsers():
action='store_true',
help='Attempt to repair any inconsistencies found (for interactive use)',
)
check_group.add_argument(
'--max-duration',
metavar='SECONDS',
help='How long to check the repository before interrupting the check, defaults to no interruption',
)
check_group.add_argument(
'-a',
'--match-archives',

View File

@ -286,10 +286,11 @@ def run_actions(
global_arguments = arguments['global']
dry_run_label = ' (dry run; not making any changes)' if global_arguments.dry_run else ''
hook_context = {
'repository': repository_path,
'repository_label': repository.get('label', ''),
'log_file': global_arguments.log_file if global_arguments.log_file else '',
# Deprecated: For backwards compatibility with borgmatic < 1.6.0.
'repositories': ','.join([repo['path'] for repo in config['repositories']]),
'log_file': global_arguments.log_file if global_arguments.log_file else '',
'repository': repository_path,
}
skip_actions = set(get_skip_actions(config, arguments))

View File

@ -50,12 +50,15 @@ def apply_constants(value, constants, shell_escape=False):
value[index] = apply_constants(list_value, constants, shell_escape)
elif isinstance(value, dict):
for option_name, option_value in value.items():
shell_escape = (
shell_escape
or option_name.startswith('before_')
or option_name.startswith('after_')
or option_name == 'on_error'
value[option_name] = apply_constants(
option_value,
constants,
shell_escape=(
shell_escape
or option_name.startswith('before_')
or option_name.startswith('after_')
or option_name == 'on_error'
),
)
value[option_name] = apply_constants(option_value, constants, shell_escape)
return value

View File

@ -269,7 +269,8 @@ properties:
compression:
type: string
description: |
Type of compression to use when creating archives. See
Type of compression to use when creating archives. (A compression
level can be appended after a comma, like "zstd,7".) See
http://borgbackup.readthedocs.io/en/stable/usage/create.html for
details. Defaults to "lz4".
example: lz4
@ -279,6 +280,11 @@ properties:
Remote network upload rate limit in kiBytes/second. Defaults to
unlimited.
example: 100
upload_buffer_size:
type: integer
description: |
Size of network upload buffer in MiB. Defaults to no buffer.
example: 160
retries:
type: integer
description: |
@ -510,7 +516,6 @@ properties:
name:
type: string
enum:
- repository
- archives
- data
- extract
@ -541,6 +546,50 @@ properties:
"always": running this check every time checks
are run.
example: 2 weeks
- required: [name]
additionalProperties: false
properties:
name:
type: string
enum:
- repository
description: |
Name of consistency check to run: "repository",
"archives", "data", "spot", and/or "extract".
"repository" checks the consistency of the
repository, "archives" checks all of the
archives, "data" verifies the integrity of the
data within the archives, "spot" checks that
some percentage of source files are found in the
most recent archive (with identical contents),
and "extract" does an extraction dry-run of the
most recent archive. Note that "data" implies
"archives". See "skip_actions" for disabling
checks altogether.
example: spot
frequency:
type: string
description: |
How frequently to run this type of consistency
check (as a best effort). The value is a number
followed by a unit of time. E.g., "2 weeks" to
run this consistency check no more than every
two weeks for a given repository or "1 month" to
run it no more than monthly. Defaults to
"always": running this check every time checks
are run.
example: 2 weeks
max_duration:
type: integer
description: |
How many seconds to check the repository before
interrupting the check. Useful for splitting a
long-running repository check into multiple
partial checks. Defaults to no interruption. Only
applies to the "repository" check, does not check
the repository index, and is not compatible with a
simultaneous "archives" check or "--repair" flag.
example: 3600
- required:
- name
- count_tolerance_percentage
@ -1662,6 +1711,14 @@ properties:
states.
example:
- finish
create_slug:
type: boolean
description: |
Create the check if it does not exist. Only works with
the slug URL scheme (https://hc-ping.com/<ping-key>/<slug>
as opposed to https://hc-ping.com/<uuid>).
Defaults to false.
example: true
description: |
Configuration for a monitoring integration with Healthchecks. Create
an account at https://healthchecks.io (or self-host Healthchecks) if

View File

@ -1,4 +1,5 @@
import logging
import re
import requests
@ -14,7 +15,7 @@ MONITOR_STATE_TO_HEALTHCHECKS = {
monitor.State.LOG: 'log',
}
DEFAULT_PING_BODY_LIMIT_BYTES = 1500
DEFAULT_PING_BODY_LIMIT_BYTES = 100000
HANDLER_IDENTIFIER = 'healthchecks'
@ -59,10 +60,20 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
)
return
ping_url_is_uuid = re.search(r'\w{8}-\w{4}-\w{4}-\w{4}-\w{12}$', ping_url)
healthchecks_state = MONITOR_STATE_TO_HEALTHCHECKS.get(state)
if healthchecks_state:
ping_url = f'{ping_url}/{healthchecks_state}'
if hook_config.get('create_slug'):
if ping_url_is_uuid:
logger.warning(
f'{config_filename}: Healthchecks UUIDs do not support auto-provisioning; ignoring'
)
else:
ping_url = f'{ping_url}?create=1'
logger.info(f'{config_filename}: Pinging Healthchecks {state.name.lower()}{dry_run_label}')
logger.debug(f'{config_filename}: Using Healthchecks ping URL {ping_url}')
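For context, a minimal sketch of how this auto-provisioning behavior might be configured (the ping key and slug are placeholders, and option nesting may vary by borgmatic version):

```yaml
healthchecks:
    # Slug-based ping URL (https://hc-ping.com/<ping-key>/<slug>); auto-provisioning
    # is skipped with a warning when a UUID-based URL is used instead.
    ping_url: https://hc-ping.com/your-ping-key/nightly-backups
    # Ask Healthchecks to create the check on first ping (appends ?create=1).
    create_slug: true
```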

View File

@ -40,9 +40,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
return
hostname = platform.node()
local_timestamp = (
datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).astimezone().isoformat()
)
local_timestamp = datetime.datetime.now(datetime.timezone.utc).astimezone().isoformat()
payload = json.dumps(
{
'routing_key': hook_config['integration_key'],

View File

@ -88,6 +88,11 @@ class Multi_stream_handler(logging.Handler):
handler.setLevel(level)
class Console_no_color_formatter(logging.Formatter):
def format(self, record): # pragma: no cover
return record.msg
class Console_color_formatter(logging.Formatter):
def format(self, record):
add_custom_log_levels()
@ -198,6 +203,8 @@ def configure_logging(
if color_enabled:
console_handler.setFormatter(Console_color_formatter())
else:
console_handler.setFormatter(Console_no_color_formatter())
console_handler.setLevel(console_log_level)

View File

@ -1,4 +1,3 @@
version: '3'
services:
docs:
image: borgmatic-docs

View File

@ -84,6 +84,9 @@ variables you can use here:
path of the borgmatic log file, only set when the `--log-file` flag is used
* `repository`: path of the current repository as configured in the current
borgmatic configuration file
* `repository_label` <span class="minilink minilink-addedin">New in version
1.8.12</span>: label of the current repository as configured in the current
borgmatic configuration file
Note that you can also interpolate in [arbitrary environment
variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
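As an illustration, a hedged example of a command hook interpolating the new variable (the echo command is purely illustrative):

```yaml
after_backup:
    # {repository_label} is replaced with the configured label (or an empty string
    # if none is set); {repository} is replaced with the repository path.
    - echo "Backed up {repository_label} at {repository}"
```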

View File

@ -437,19 +437,28 @@ borgmatic's own configuration file. So include your configuration file in
backups to avoid getting caught without a way to restore a database.
3. borgmatic does not currently support backing up or restoring multiple
databases that share the exact same name on different hosts.
4. Because database hooks implicitly enable the `read_special` configuration,
any special files are excluded from backups (named pipes, block devices,
character devices, and sockets) to prevent hanging. Try a command like `find
/your/source/path -type b -or -type c -or -type p -or -type s` to find such
files. Common directories to exclude are `/dev` and `/run`, but that may not
be exhaustive. <span class="minilink minilink-addedin">New in version
1.7.3</span> When database hooks are enabled, borgmatic automatically excludes
special files (and symlinks to special files) that may cause Borg to hang, so
generally you no longer need to manually exclude them. There are potential
edge cases though in which applications on your system create new special files
*after* borgmatic constructs its exclude list, resulting in Borg hangs. If that
occurs, you can resort to the manual excludes described above. And to opt out
of the auto-exclude feature entirely, explicitly set `read_special` to true.
4. When database hooks are enabled, borgmatic instructs Borg to consume
special files (via `--read-special`) to support database dump
streaming—regardless of the value of your `read_special` configuration option.
And because this can cause Borg to hang, borgmatic also automatically excludes
special files (and symlinks to them) that Borg may get stuck on. Even so,
there are still potential edge cases in which applications on your system
create new special files *after* borgmatic constructs its exclude list,
resulting in Borg hangs. If that occurs, you can resort to manually excluding
those files. And if you explicitly set the `read_special` option to `true`,
borgmatic will opt you out of the auto-exclude feature entirely, but will
still instruct Borg to consume special files—you will just be on your own to
exclude them. <span class="minilink minilink-addedin">Prior to version
1.7.3</span> Special files were not auto-excluded, and you were responsible for
excluding them yourself. Common directories to exclude are `/dev` and `/run`,
but that may not be exhaustive.
5. Database hooks also implicitly enable the `one_file_system` option, which
means Borg won't cross filesystem boundaries when looking for files to back up.
This is especially important when running borgmatic in a container, as
container volumes are mounted as separate filesystems. One work-around is to
explicitly add each mounted volume you'd like to back up to
`source_directories` instead of relying on Borg to include them implicitly via
a parent directory, as sketched below.
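A rough sketch of that work-around, assuming hypothetical container volume mount points:

```yaml
source_directories:
    - /etc
    # Each mounted container volume is a separate filesystem, so list it explicitly
    # rather than relying on a parent directory such as /mnt.
    - /mnt/app-data
    - /mnt/uploads
```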
### Manual restoration

View File

@ -20,7 +20,7 @@ default action ordering was `prune`, `compact`, `create`, and `check`.
### A la carte actions
If you find yourself wanting to customize the actions, you have some options.
First, you can run borgmatic's `prune`, `compact`, `create`, or `check`
First, you can run borgmatic's `create`, `prune`, `compact`, or `check`
actions separately. For instance, the following optional actions are
available (among others):
@ -158,7 +158,8 @@ selected randomly each time, so in effect the spot check is probabilistic.
The `data_tolerance_percentage` is the percentage of total files in the source
directories that can fail a spot check data comparison without failing the
entire consistency check. The value must be lower than or equal to the
`contents_sample_percentage`.
`data_sample_percentage`, because `data_tolerance_percentage` only looks at
the sampled files as determined by `data_sample_percentage`.
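To make the relationship concrete, a hedged example spot check configuration (the percentages are illustrative, not recommendations):

```yaml
checks:
    - name: spot
      # Allow up to a 10% difference in file counts between source and archive.
      count_tolerance_percentage: 10
      # Hash and compare roughly 1% of source files...
      data_sample_percentage: 1
      # ...and tolerate at most 0.5% of all source files failing that comparison;
      # this value must not exceed data_sample_percentage.
      data_tolerance_percentage: 0.5
```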
All three options are required when using the spot check. And because the
check relies on these configured tolerances, it may not be a

View File

@ -102,9 +102,9 @@ and depend on containers for runtime dependencies. These tests do run on the
continuous integration (CI) server, and running them on your developer machine
is the closest thing to dev-CI parity.
If you would like to run the full test suite, first install Docker (or Podman;
see below) and [Docker Compose](https://docs.docker.com/compose/install/).
Then run:
If you would like to run the end-to-end tests, first install Docker (or
Podman; see below) and [Docker
Compose](https://docs.docker.com/compose/install/). Then run:
```bash
scripts/run-end-to-end-tests
@ -152,12 +152,14 @@ the following deviations from it:
* In general, spell out words in variable names instead of shortening them.
So, think `index` instead of `idx`. There are some notable exceptions to
this though (like `config`).
* Favor blank lines around `if` statements, `return`s, logical code groupings,
etc. Readability is more important than packing the code tightly.
borgmatic code uses the [Black](https://black.readthedocs.io/en/stable/) code
formatter, the [Flake8](http://flake8.pycqa.org/en/latest/) code checker, and
the [isort](https://github.com/timothycrosley/isort) import orderer, so
certain code style requirements will be enforced when running automated tests.
See the Black, Flake8, and isort documentation for more information.
certain code style requirements are enforced when running automated tests. See
the Black, Flake8, and isort documentation for more information.
## Continuous integration

View File

@ -208,8 +208,8 @@ cronitor:
this option in the `hooks:` section of your configuration.
With this configuration, borgmatic pings your Cronitor monitor when a backup
begins, ends, or errors, but only when any of the `prune`, `compact`,
`create`, or `check` actions are run. Then, if the actions complete
begins, ends, or errors, but only when any of the `create`, `prune`,
`compact`, or `check` actions are run. Then, if the actions complete
successfully or errors, borgmatic notifies Cronitor accordingly.
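For reference, a minimal sketch of the kind of Cronitor configuration this paragraph assumes (the ping URL is a placeholder):

```yaml
cronitor:
    # Placeholder ping URL copied from the Cronitor monitor's settings.
    ping_url: https://cronitor.link/d3x0c1
```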
You can configure Cronitor to notify you by a [variety of
@ -235,8 +235,8 @@ cronhub:
this option in the `hooks:` section of your configuration.
With this configuration, borgmatic pings your Cronhub monitor when a backup
begins, ends, or errors, but only when any of the `prune`, `compact`,
`create`, or `check` actions are run. Then, if the actions complete
begins, ends, or errors, but only when any of the `create`, `prune`,
`compact`, or `check` actions are run. Then, if the actions complete
successfully or errors, borgmatic notifies Cronhub accordingly.
Note that even though you configure borgmatic with the "start" variant of the
@ -368,7 +368,7 @@ loki:
```
With this configuration, borgmatic sends its logs to your Loki instance as any
of the `prune`, `compact`, `create`, or `check` actions are run. Then, after
of the `create`, `prune`, `compact`, or `check` actions are run. Then, after
the actions complete, borgmatic notifies Loki of success or failure.
This hook supports sending arbitrary labels to Loki. For instance:
@ -420,7 +420,8 @@ pipx](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#installation),
run the following to install Apprise so borgmatic can use it:
```bash
sudo pipx install --force borgmatic[Apprise]
sudo pipx uninstall borgmatic
sudo pipx install borgmatic[Apprise]
```
Omit `sudo` if borgmatic is installed as a non-root user.
@ -435,11 +436,16 @@ apprise:
label: gotify
- url: mastodons://access_key@hostname/@user
label: mastodon
states:
- start
- finish
- fail
```
With this configuration, borgmatic pings each of the configured Apprise
services when a backup begins, ends, or errors, but only when any of the
`prune`, `compact`, `create`, or `check` actions are run.
`create`, `prune`, `compact`, or `check` actions are run. (By default, if
`states` is not specified, Apprise services are only pinged on error.)
You can optionally customize the contents of the default messages sent to
these services:

View File

@ -133,6 +133,8 @@ borgmatic's `borg` action is not without limitations:
borgmatic action. In this case, only the Borg command is run.
* Unlike normal borgmatic actions that support JSON, the `borg` action will
not disable certain borgmatic logs to avoid interfering with JSON output.
* The `borg` action bypasses most of borgmatic's machinery, so for instance
monitoring hooks will not get triggered when running `borgmatic borg create`.
* <span class="minilink minilink-addedin">Prior to version 1.8.0</span>
borgmatic implicitly injected the repository/archive arguments on the Borg
command-line for you (based on your borgmatic configuration or the

View File

@ -1,6 +1,6 @@
from setuptools import find_packages, setup
VERSION = '1.8.11.dev0'
VERSION = '1.8.13.dev0'
setup(

View File

@ -1,34 +1,34 @@
appdirs==1.4.4
apprise==1.3.0
attrs==22.2.0
black==24.3.0
certifi==2023.7.22
chardet==5.1.0
click==8.1.3
codespell==2.2.4
apprise==1.8.0
attrs==23.2.0
black==24.4.2
certifi==2024.2.2
chardet==5.2.0
click==8.1.7
codespell==2.2.6
colorama==0.4.6
coverage==7.2.3
flake8==6.0.0
flake8-quotes==3.3.2
coverage==7.5.1
flake8==7.0.0
flake8-quotes==3.4.0
flake8-use-fstring==1.4
flake8-variables-names==0.0.5
flexmock==0.11.3
flake8-variables-names==0.0.6
flexmock==0.12.1
idna==3.7
isort==5.12.0
jsonschema==4.17.3
Markdown==3.4.1
isort==5.13.2
jsonschema==4.22.0
Markdown==3.6
mccabe==0.7.0
packaging==23.1
pathspec==0.11.1
pluggy==1.0.0
packaging==24.0
pathspec==0.12.1
pluggy==1.5.0
py==1.11.0
pycodestyle==2.10.0
pyflakes==3.0.1
pytest==7.3.0
pytest-cov==4.0.0
pycodestyle==2.11.1
pyflakes==3.2.0
pytest==8.2.1
pytest-cov==5.0.0
PyYAML>5.0.0
regex
requests==2.31.0
requests==2.32.2
ruamel.yaml>0.15.0
toml==0.10.2
typed-ast

View File

@ -1,4 +1,3 @@
version: '3'
services:
postgresql:
image: docker.io/postgres:13.1-alpine

View File

@ -520,7 +520,7 @@ def test_collect_spot_check_source_paths_without_working_directory_parses_borg_o
) == ('/etc/path', '/etc/other')
def test_collect_spot_check_source_paths_includes_symlinks_but_skips_directories():
def test_collect_spot_check_source_paths_skips_directories():
flexmock(module.borgmatic.hooks.dispatch).should_receive('call_hooks').and_return(
{'hook1': False, 'hook2': True}
)
@ -546,18 +546,19 @@ def test_collect_spot_check_source_paths_includes_symlinks_but_skips_directories
'warning: stuff\n- /etc/path\n+ /etc/dir\n? /nope',
)
flexmock(module.os.path).should_receive('isfile').with_args('/etc/path').and_return(False)
flexmock(module.os.path).should_receive('islink').with_args('/etc/path').and_return(True)
flexmock(module.os.path).should_receive('isfile').with_args('/etc/dir').and_return(False)
flexmock(module.os.path).should_receive('islink').with_args('/etc/dir').and_return(False)
assert module.collect_spot_check_source_paths(
repository={'path': 'repo'},
config={'working_directory': '/'},
local_borg_version=flexmock(),
global_arguments=flexmock(),
local_path=flexmock(),
remote_path=flexmock(),
) == ('/etc/path',)
assert (
module.collect_spot_check_source_paths(
repository={'path': 'repo'},
config={'working_directory': '/'},
local_borg_version=flexmock(),
global_arguments=flexmock(),
local_path=flexmock(),
remote_path=flexmock(),
)
== ()
)
def test_collect_spot_check_archive_paths_excludes_directories():
@ -769,6 +770,76 @@ def test_compare_spot_check_hashes_considers_non_existent_path_as_not_matching()
) == ('/bar',)
def test_compare_spot_check_hashes_with_too_many_paths_feeds_them_to_commands_in_chunks():
flexmock(module).SAMPLE_PATHS_SUBSET_COUNT = 2
flexmock(module.random).should_receive('sample').replace_with(
lambda population, count: population[:count]
)
flexmock(module.os.path).should_receive('exists').and_return(True)
flexmock(module.borgmatic.execute).should_receive(
'execute_command_and_capture_output'
).with_args(('xxh64sum', '/foo', '/bar')).and_return('hash1 /foo\nhash2 /bar')
flexmock(module.borgmatic.execute).should_receive(
'execute_command_and_capture_output'
).with_args(('xxh64sum', '/baz', '/quux')).and_return('hash3 /baz\nhash4 /quux')
flexmock(module.borgmatic.borg.list).should_receive('capture_archive_listing').and_return(
['hash1 /foo', 'hash2 /bar']
).and_return(['hash3 /baz', 'nothash4 /quux'])
assert module.compare_spot_check_hashes(
repository={'path': 'repo'},
archive='archive',
config={
'checks': [
{
'name': 'archives',
'frequency': '2 weeks',
},
{
'name': 'spot',
'data_sample_percentage': 100,
},
]
},
local_borg_version=flexmock(),
global_arguments=flexmock(),
local_path=flexmock(),
remote_path=flexmock(),
log_label='repo',
source_paths=('/foo', '/bar', '/baz', '/quux'),
) == ('/quux',)
def test_spot_check_without_spot_configuration_errors():
with pytest.raises(ValueError):
module.spot_check(
repository={'path': 'repo'},
config={
'checks': [
{
'name': 'archives',
},
]
},
local_borg_version=flexmock(),
global_arguments=flexmock(),
local_path=flexmock(),
remote_path=flexmock(),
)
def test_spot_check_without_any_configuration_errors():
with pytest.raises(ValueError):
module.spot_check(
repository={'path': 'repo'},
config={},
local_borg_version=flexmock(),
global_arguments=flexmock(),
local_path=flexmock(),
remote_path=flexmock(),
)
def test_spot_check_data_tolerance_percenatge_greater_than_data_sample_percentage_errors():
with pytest.raises(ValueError):
module.spot_check(

View File

@ -222,35 +222,35 @@ def test_make_archive_filter_flags_with_default_checks_and_prefix_includes_match
assert flags == ('--match-archives', 'sh:foo-*')
def test_make_check_flags_with_repository_check_returns_flag():
flags = module.make_check_flags(('repository',), ())
def test_make_check_name_flags_with_repository_check_returns_flag():
flags = module.make_check_name_flags({'repository'}, ())
assert flags == ('--repository-only',)
def test_make_check_flags_with_archives_check_returns_flag():
flags = module.make_check_flags(('archives',), ())
def test_make_check_name_flags_with_archives_check_returns_flag():
flags = module.make_check_name_flags({'archives'}, ())
assert flags == ('--archives-only',)
def test_make_check_flags_with_archives_check_and_archive_filter_flags_includes_those_flags():
flags = module.make_check_flags(('archives',), ('--match-archives', 'sh:foo-*'))
def test_make_check_name_flags_with_archives_check_and_archive_filter_flags_includes_those_flags():
flags = module.make_check_name_flags({'archives'}, ('--match-archives', 'sh:foo-*'))
assert flags == ('--archives-only', '--match-archives', 'sh:foo-*')
def test_make_check_flags_without_archives_check_and_with_archive_filter_flags_includes_those_flags():
flags = module.make_check_flags(('repository',), ('--match-archives', 'sh:foo-*'))
def test_make_check_name_flags_without_archives_check_and_with_archive_filter_flags_includes_those_flags():
flags = module.make_check_name_flags({'repository'}, ('--match-archives', 'sh:foo-*'))
assert flags == ('--repository-only',)
def test_make_check_flags_with_data_check_returns_flag_and_implies_archives():
def test_make_check_name_flags_with_data_check_returns_flag_and_implies_archives():
flexmock(module.feature).should_receive('available').and_return(True)
flexmock(module.flags).should_receive('make_match_archives_flags').and_return(())
flags = module.make_check_flags(('data',), ())
flags = module.make_check_name_flags({'data'}, ())
assert flags == (
'--archives-only',
@ -258,24 +258,24 @@ def test_make_check_flags_with_data_check_returns_flag_and_implies_archives():
)
def test_make_check_flags_with_extract_omits_extract_flag():
def test_make_check_name_flags_with_extract_omits_extract_flag():
flexmock(module.feature).should_receive('available').and_return(True)
flexmock(module.flags).should_receive('make_match_archives_flags').and_return(())
flags = module.make_check_flags(('extract',), ())
flags = module.make_check_name_flags({'extract'}, ())
assert flags == ()
def test_make_check_flags_with_repository_and_data_checks_does_not_return_repository_only():
def test_make_check_name_flags_with_repository_and_data_checks_does_not_return_repository_only():
flexmock(module.feature).should_receive('available').and_return(True)
flexmock(module.flags).should_receive('make_match_archives_flags').and_return(())
flags = module.make_check_flags(
(
flags = module.make_check_name_flags(
{
'repository',
'data',
),
},
(),
)
@ -332,8 +332,7 @@ def test_get_repository_id_with_missing_json_keys_raises():
def test_check_archives_with_progress_passes_through_to_borg():
config = {}
flexmock(module).should_receive('make_check_flags').and_return(())
flexmock(module).should_receive('execute_command').never()
flexmock(module).should_receive('make_check_name_flags').and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
flexmock(module.environment).should_receive('make_environment')
flexmock(module).should_receive('execute_command').with_args(
@ -349,7 +348,12 @@ def test_check_archives_with_progress_passes_through_to_borg():
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=True, repair=None, only_checks=None, force=None, match_archives=None
progress=True,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks={'repository'},
@ -359,8 +363,7 @@ def test_check_archives_with_progress_passes_through_to_borg():
def test_check_archives_with_repair_passes_through_to_borg():
config = {}
flexmock(module).should_receive('make_check_flags').and_return(())
flexmock(module).should_receive('execute_command').never()
flexmock(module).should_receive('make_check_name_flags').and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
flexmock(module.environment).should_receive('make_environment')
flexmock(module).should_receive('execute_command').with_args(
@ -376,7 +379,148 @@ def test_check_archives_with_repair_passes_through_to_borg():
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None, repair=True, only_checks=None, force=None, match_archives=None
progress=None,
repair=True,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks={'repository'},
archive_filter_flags=(),
)
def test_check_archives_with_max_duration_flag_passes_through_to_borg():
config = {}
flexmock(module).should_receive('make_check_name_flags').and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
flexmock(module.environment).should_receive('make_environment')
flexmock(module).should_receive('execute_command').with_args(
('borg', 'check', '--max-duration', '33', 'repo'),
extra_environment=None,
borg_local_path='borg',
borg_exit_codes=None,
).once()
module.check_archives(
repository_path='repo',
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=33,
),
global_arguments=flexmock(log_json=False),
checks={'repository'},
archive_filter_flags=(),
)
def test_check_archives_with_max_duration_flag_and_archives_check_errors():
config = {}
flexmock(module).should_receive('execute_command').never()
with pytest.raises(ValueError):
module.check_archives(
repository_path='repo',
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=33,
),
global_arguments=flexmock(log_json=False),
checks={'repository', 'archives'},
archive_filter_flags=(),
)
def test_check_archives_with_max_duration_option_passes_through_to_borg():
config = {'checks': [{'name': 'repository', 'max_duration': 33}]}
flexmock(module).should_receive('make_check_name_flags').and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
flexmock(module.environment).should_receive('make_environment')
flexmock(module).should_receive('execute_command').with_args(
('borg', 'check', '--max-duration', '33', 'repo'),
extra_environment=None,
borg_local_path='borg',
borg_exit_codes=None,
).once()
module.check_archives(
repository_path='repo',
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks={'repository'},
archive_filter_flags=(),
)
def test_check_archives_with_max_duration_option_and_archives_check_errors():
config = {'checks': [{'name': 'repository', 'max_duration': 33}]}
flexmock(module).should_receive('execute_command').never()
with pytest.raises(ValueError):
module.check_archives(
repository_path='repo',
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks={'repository', 'archives'},
archive_filter_flags=(),
)
def test_check_archives_with_max_duration_flag_overrides_max_duration_option():
config = {'checks': [{'name': 'repository', 'max_duration': 33}]}
flexmock(module).should_receive('make_check_name_flags').and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
flexmock(module.environment).should_receive('make_environment')
flexmock(module).should_receive('execute_command').with_args(
('borg', 'check', '--max-duration', '44', 'repo'),
extra_environment=None,
borg_local_path='borg',
borg_exit_codes=None,
).once()
module.check_archives(
repository_path='repo',
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=44,
),
global_arguments=flexmock(log_json=False),
checks={'repository'},
@ -395,7 +539,7 @@ def test_check_archives_with_repair_passes_through_to_borg():
)
def test_check_archives_calls_borg_with_parameters(checks):
config = {}
flexmock(module).should_receive('make_check_flags').with_args(checks, ()).and_return(())
flexmock(module).should_receive('make_check_name_flags').with_args(checks, ()).and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
insert_execute_command_mock(('borg', 'check', 'repo'))
@ -404,7 +548,12 @@ def test_check_archives_calls_borg_with_parameters(checks):
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None, repair=None, only_checks=None, force=None, match_archives=None
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks=checks,
@ -414,7 +563,7 @@ def test_check_archives_calls_borg_with_parameters(checks):
def test_check_archives_with_log_info_passes_through_to_borg():
config = {}
flexmock(module).should_receive('make_check_flags').and_return(())
flexmock(module).should_receive('make_check_name_flags').and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
insert_logging_mock(logging.INFO)
insert_execute_command_mock(('borg', 'check', '--info', 'repo'))
@ -424,7 +573,12 @@ def test_check_archives_with_log_info_passes_through_to_borg():
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None, repair=None, only_checks=None, force=None, match_archives=None
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks={'repository'},
@ -434,7 +588,7 @@ def test_check_archives_with_log_info_passes_through_to_borg():
def test_check_archives_with_log_debug_passes_through_to_borg():
config = {}
flexmock(module).should_receive('make_check_flags').and_return(())
flexmock(module).should_receive('make_check_name_flags').and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
insert_logging_mock(logging.DEBUG)
insert_execute_command_mock(('borg', 'check', '--debug', '--show-rc', 'repo'))
@ -444,7 +598,12 @@ def test_check_archives_with_log_debug_passes_through_to_borg():
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None, repair=None, only_checks=None, force=None, match_archives=None
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks={'repository'},
@ -455,7 +614,7 @@ def test_check_archives_with_log_debug_passes_through_to_borg():
def test_check_archives_with_local_path_calls_borg_via_local_path():
checks = {'repository'}
config = {}
flexmock(module).should_receive('make_check_flags').with_args(checks, ()).and_return(())
flexmock(module).should_receive('make_check_name_flags').with_args(checks, ()).and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
insert_execute_command_mock(('borg1', 'check', 'repo'))
@ -464,7 +623,12 @@ def test_check_archives_with_local_path_calls_borg_via_local_path():
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None, repair=None, only_checks=None, force=None, match_archives=None
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks=checks,
@ -477,7 +641,7 @@ def test_check_archives_with_exit_codes_calls_borg_using_them():
checks = {'repository'}
borg_exit_codes = flexmock()
config = {'borg_exit_codes': borg_exit_codes}
flexmock(module).should_receive('make_check_flags').with_args(checks, ()).and_return(())
flexmock(module).should_receive('make_check_name_flags').with_args(checks, ()).and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
insert_execute_command_mock(('borg', 'check', 'repo'), borg_exit_codes=borg_exit_codes)
@ -486,7 +650,12 @@ def test_check_archives_with_exit_codes_calls_borg_using_them():
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None, repair=None, only_checks=None, force=None, match_archives=None
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks=checks,
@ -497,7 +666,7 @@ def test_check_archives_with_exit_codes_calls_borg_using_them():
def test_check_archives_with_remote_path_passes_through_to_borg():
checks = {'repository'}
config = {}
flexmock(module).should_receive('make_check_flags').with_args(checks, ()).and_return(())
flexmock(module).should_receive('make_check_name_flags').with_args(checks, ()).and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
insert_execute_command_mock(('borg', 'check', '--remote-path', 'borg1', 'repo'))
@ -506,7 +675,12 @@ def test_check_archives_with_remote_path_passes_through_to_borg():
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None, repair=None, only_checks=None, force=None, match_archives=None
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks=checks,
@ -518,7 +692,7 @@ def test_check_archives_with_remote_path_passes_through_to_borg():
def test_check_archives_with_log_json_passes_through_to_borg():
checks = {'repository'}
config = {}
flexmock(module).should_receive('make_check_flags').with_args(checks, ()).and_return(())
flexmock(module).should_receive('make_check_name_flags').with_args(checks, ()).and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
insert_execute_command_mock(('borg', 'check', '--log-json', 'repo'))
@ -527,7 +701,12 @@ def test_check_archives_with_log_json_passes_through_to_borg():
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None, repair=None, only_checks=None, force=None, match_archives=None
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=True),
checks=checks,
@ -538,7 +717,7 @@ def test_check_archives_with_log_json_passes_through_to_borg():
def test_check_archives_with_lock_wait_passes_through_to_borg():
checks = {'repository'}
config = {'lock_wait': 5}
flexmock(module).should_receive('make_check_flags').with_args(checks, ()).and_return(())
flexmock(module).should_receive('make_check_name_flags').with_args(checks, ()).and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
insert_execute_command_mock(('borg', 'check', '--lock-wait', '5', 'repo'))
@ -547,7 +726,12 @@ def test_check_archives_with_lock_wait_passes_through_to_borg():
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None, repair=None, only_checks=None, force=None, match_archives=None
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks=checks,
@ -559,7 +743,7 @@ def test_check_archives_with_retention_prefix():
checks = {'repository'}
prefix = 'foo-'
config = {'prefix': prefix}
flexmock(module).should_receive('make_check_flags').with_args(checks, ()).and_return(())
flexmock(module).should_receive('make_check_name_flags').with_args(checks, ()).and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
insert_execute_command_mock(('borg', 'check', 'repo'))
@ -568,7 +752,12 @@ def test_check_archives_with_retention_prefix():
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None, repair=None, only_checks=None, force=None, match_archives=None
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks=checks,
@ -578,7 +767,7 @@ def test_check_archives_with_retention_prefix():
def test_check_archives_with_extra_borg_options_passes_through_to_borg():
config = {'extra_borg_options': {'check': '--extra --options'}}
flexmock(module).should_receive('make_check_flags').and_return(())
flexmock(module).should_receive('make_check_name_flags').and_return(())
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
insert_execute_command_mock(('borg', 'check', '--extra', '--options', 'repo'))
@ -587,7 +776,12 @@ def test_check_archives_with_extra_borg_options_passes_through_to_borg():
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None, repair=None, only_checks=None, force=None, match_archives=None
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives=None,
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks={'repository'},
@ -597,7 +791,9 @@ def test_check_archives_with_extra_borg_options_passes_through_to_borg():
def test_check_archives_with_match_archives_passes_through_to_borg():
config = {}
flexmock(module).should_receive('make_check_flags').and_return(('--match-archives', 'foo-*'))
flexmock(module).should_receive('make_check_name_flags').and_return(
('--match-archives', 'foo-*')
)
flexmock(module.flags).should_receive('make_repository_flags').and_return(('repo',))
flexmock(module.environment).should_receive('make_environment')
flexmock(module).should_receive('execute_command').with_args(
@@ -612,7 +808,12 @@ def test_check_archives_with_match_archives_passes_through_to_borg():
config=config,
local_borg_version='1.2.3',
check_arguments=flexmock(
progress=None, repair=None, only_checks=None, force=None, match_archives='foo-*'
progress=None,
repair=None,
only_checks=None,
force=None,
match_archives='foo-*',
max_duration=None,
),
global_arguments=flexmock(log_json=False),
checks={'archives'},
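
These check tests all lean on the same flexmock idiom: stub a collaborator on the module under test, pin the exact arguments, and have it return a canned value. A minimal, self-contained illustration of that pattern follows; the FakeBorg class is made up for this sketch and is not borgmatic code.

from flexmock import flexmock

class FakeBorg:
    def check(self, *flags):
        raise NotImplementedError('replaced by the test double below')

borg = FakeBorg()

# Expect exactly one call with these flags and make it report success.
flexmock(borg).should_receive('check').with_args('--max-duration', '3600').and_return(0).once()

assert borg.check('--max-duration', '3600') == 0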

@@ -693,6 +693,7 @@ def test_make_base_create_command_includes_exclude_patterns_in_borg_command():
('one_file_system', True, True, ('--one-file-system',)),
('upload_rate_limit', 100, True, ('--upload-ratelimit', '100')),
('upload_rate_limit', 100, False, ('--remote-ratelimit', '100')),
('upload_buffer_size', 160, True, ('--upload-buffer', '160')),
('numeric_ids', True, True, ('--numeric-ids',)),
('numeric_ids', True, False, ('--numeric-owner',)),
('read_special', True, True, ('--read-special',)),
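
The new 'upload_buffer_size' row follows the same shape as the rate-limit rows above it: a config option name, a value, whether the newer Borg flag spelling is available, and the flag tuple expected in the resulting create command. Conceptually the mapping works like the hypothetical helper below, which exists only for illustration and is not the function under test.

# Hypothetical lookup table and helper, mirroring the parametrized rows above.
OPTION_TO_FLAG = {
    ('upload_rate_limit', True): '--upload-ratelimit',   # newer Borg flag spelling
    ('upload_rate_limit', False): '--remote-ratelimit',  # older Borg flag spelling
    ('upload_buffer_size', True): '--upload-buffer',
}

def make_numeric_flag(option_name, value, new_flag_available):
    return (OPTION_TO_FLAG[(option_name, new_flag_available)], str(value))

assert make_numeric_flag('upload_buffer_size', 160, True) == ('--upload-buffer', '160')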

@@ -130,36 +130,175 @@ def test_parse_and_record_action_arguments_with_borg_action_consumes_arguments_a
assert borg_parsed_arguments.options == ('list',)
@pytest.mark.parametrize(
'argument, expected',
[
('--foo', True),
('foo', False),
(33, False),
],
)
def test_argument_is_flag_only_for_string_starting_with_double_dash(argument, expected):
assert module.argument_is_flag(argument) == expected
@pytest.mark.parametrize(
'arguments, expected',
[
# A global flag remaining from each parsed action.
# Ending with a valueless flag.
(
('--foo', '--bar', 33, '--baz'),
(
('--latest', 'archive', 'prune', 'extract', 'list', '--test-flag'),
('--latest', 'archive', 'check', 'extract', 'list', '--test-flag'),
('prune', 'check', 'list', '--test-flag'),
('prune', 'check', 'extract', '--test-flag'),
('--foo',),
('--bar', 33),
('--baz',),
),
('--test-flag',),
),
# No global flags remaining.
# Ending with a flag and its corresponding value.
(
('--foo', '--bar', 33, '--baz', '--quux', 'thing'),
(('--foo',), ('--bar', 33), ('--baz',), ('--quux', 'thing')),
),
# Starting with an action name.
(
('check', '--foo', '--bar', 33, '--baz'),
(
('check',),
('--foo',),
('--bar', 33),
('--baz',),
),
),
# Action name that one could mistake for a flag value.
(('--progress', 'list'), (('--progress',), ('list',))),
# No arguments.
((), ()),
],
)
def test_group_arguments_with_values_returns_flags_with_corresponding_values(arguments, expected):
flexmock(module).should_receive('argument_is_flag').with_args('--foo').and_return(True)
flexmock(module).should_receive('argument_is_flag').with_args('--bar').and_return(True)
flexmock(module).should_receive('argument_is_flag').with_args('--baz').and_return(True)
flexmock(module).should_receive('argument_is_flag').with_args('--quux').and_return(True)
flexmock(module).should_receive('argument_is_flag').with_args('--progress').and_return(True)
flexmock(module).should_receive('argument_is_flag').with_args(33).and_return(False)
flexmock(module).should_receive('argument_is_flag').with_args('thing').and_return(False)
flexmock(module).should_receive('argument_is_flag').with_args('check').and_return(False)
flexmock(module).should_receive('argument_is_flag').with_args('list').and_return(False)
assert module.group_arguments_with_values(arguments) == expected
@pytest.mark.parametrize(
'arguments, grouped_arguments, expected',
[
# An unparsable flag remaining from each parsed action.
(
(
('--latest', 'archive', 'prune', 'extract', 'list'),
('--latest', 'archive', 'check', 'extract', 'list'),
('--latest', 'archive', 'prune', 'extract', 'list', '--flag'),
('--latest', 'archive', 'check', 'extract', 'list', '--flag'),
('prune', 'check', 'list', '--flag'),
('prune', 'check', 'extract', '--flag'),
),
(
(
('--latest',),
('archive',),
('prune',),
('extract',),
('list',),
('--flag',),
),
(
('--latest',),
('archive',),
('check',),
('extract',),
('list',),
('--flag',),
),
(('prune',), ('check',), ('list',), ('--flag',)),
(('prune',), ('check',), ('extract',), ('--flag',)),
),
('--flag',),
),
# No unparsable flags remaining.
(
(
('--archive', 'archive', 'prune', 'extract', 'list'),
('--archive', 'archive', 'check', 'extract', 'list'),
('prune', 'check', 'list'),
('prune', 'check', 'extract'),
),
(
(
(
'--archive',
'archive',
),
('prune',),
('extract',),
('list',),
),
(
(
'--archive',
'archive',
),
('check',),
('extract',),
('list',),
),
(('prune',), ('check',), ('list',)),
(('prune',), ('check',), ('extract',)),
),
(),
),
# No unparsable flags remaining, but some values in common.
(
(
('--verbosity', '5', 'archive', 'prune', 'extract', 'list'),
('--last', '5', 'archive', 'check', 'extract', 'list'),
('prune', 'check', 'list', '--last', '5'),
('prune', 'check', '--verbosity', '5', 'extract'),
),
(
(('--verbosity', '5'), ('archive',), ('prune',), ('extract',), ('list',)),
(
(
'--last',
'5',
),
('archive',),
('check',),
('extract',),
('list',),
),
(('prune',), ('check',), ('list',), ('--last', '5')),
(
('prune',),
('check',),
(
'--verbosity',
'5',
),
('extract',),
),
),
(),
),
# No flags.
((), ()),
((), (), ()),
],
)
def test_get_unparsable_arguments_returns_remaining_arguments_that_no_action_can_parse(
arguments, expected
arguments, grouped_arguments, expected
):
for action_arguments, grouped_action_arguments in zip(arguments, grouped_arguments):
flexmock(module).should_receive('group_arguments_with_values').with_args(
action_arguments
).and_return(grouped_action_arguments)
assert module.get_unparsable_arguments(arguments) == expected
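
Taken together, these parametrized expectations pin down the helpers' behavior: argument_is_flag() treats only strings beginning with '--' as flags, and get_unparsable_arguments() keeps just the argument groups that every action's parser left over. A rough reconstruction consistent with those expectations follows; it is hypothetical, and the real borgmatic implementations may differ (group_arguments_with_values() is assumed to be the module's own helper exercised above).

def argument_is_flag(argument):
    # Only strings starting with '--' count; bare words and non-strings (e.g. 33) do not.
    return isinstance(argument, str) and argument.startswith('--')

def get_unparsable_arguments(remaining_action_arguments):
    # Group each action's leftover arguments, keep only the groups common to every
    # action (nothing could parse them), and flatten the survivors back into a tuple.
    if not remaining_action_arguments:
        return ()
    grouped = [
        set(group_arguments_with_values(action_arguments))
        for action_arguments in remaining_action_arguments
    ]
    return tuple(
        argument for group in set.intersection(*grouped) for argument in group
    )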

@@ -487,6 +487,45 @@ def test_run_actions_runs_rcreate():
)
def test_run_actions_adds_label_file_to_hook_context():
flexmock(module).should_receive('add_custom_log_levels')
flexmock(module).should_receive('get_skip_actions').and_return([])
flexmock(module.command).should_receive('execute_hook')
expected = flexmock()
flexmock(borgmatic.actions.create).should_receive('run_create').with_args(
config_filename=object,
repository={'path': 'repo', 'label': 'my repo'},
config={'repositories': []},
config_paths=[],
hook_context={
'repository_label': 'my repo',
'log_file': '',
'repositories': '',
'repository': 'repo',
},
local_borg_version=object,
create_arguments=object,
global_arguments=object,
dry_run_label='',
local_path=object,
remote_path=object,
).once().and_return(expected)
result = tuple(
module.run_actions(
arguments={'global': flexmock(dry_run=False, log_file=None), 'create': flexmock()},
config_filename=flexmock(),
config={'repositories': []},
config_paths=[],
local_path=flexmock(),
remote_path=flexmock(),
local_borg_version=flexmock(),
repository={'path': 'repo', 'label': 'my repo'},
)
)
assert result == (expected,)
def test_run_actions_adds_log_file_to_hook_context():
flexmock(module).should_receive('add_custom_log_levels')
flexmock(module).should_receive('get_skip_actions').and_return([])
@@ -497,7 +536,12 @@ def test_run_actions_adds_log_file_to_hook_context():
repository={'path': 'repo'},
config={'repositories': []},
config_paths=[],
hook_context={'repository': 'repo', 'repositories': '', 'log_file': 'foo'},
hook_context={
'repository_label': '',
'log_file': 'foo',
'repositories': '',
'repository': 'repo',
},
local_borg_version=object,
create_arguments=object,
global_arguments=object,
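
The context keys asserted in these tests ('repository', 'repository_label', 'repositories', 'log_file') are what get interpolated into command hooks. The snippet below only illustrates the idea; the str.format-style substitution is an assumption for this sketch rather than borgmatic's exact mechanism.

hook_context = {
    'repository_label': 'my repo',
    'log_file': '',
    'repositories': '',
    'repository': 'repo',
}
before_backup = 'echo "starting backup of {repository} ({repository_label})"'
print(before_backup.format(**hook_context))
# -> echo "starting backup of repo (my repo)"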

@@ -50,6 +50,16 @@ def test_apply_constants_with_empty_constants_passes_through_value():
({'before_backup': '{inject}'}, {'before_backup': "'echo hi; naughty-command'"}),
({'after_backup': '{inject}'}, {'after_backup': "'echo hi; naughty-command'"}),
({'on_error': '{inject}'}, {'on_error': "'echo hi; naughty-command'"}),
(
{
'before_backup': '{env_pass}',
'postgresql_databases': [{'name': 'users', 'password': '{env_pass}'}],
},
{
'before_backup': "'${PASS}'",
'postgresql_databases': [{'name': 'users', 'password': '${PASS}'}],
},
),
(3, 3),
(True, True),
(False, False),
@@ -63,6 +73,7 @@ def test_apply_constants_makes_string_substitutions(value, expected_value):
'int': 3,
'bool': True,
'inject': 'echo hi; naughty-command',
'env_pass': '${PASS}',
}
assert module.apply_constants(value, constants) == expected_value
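
The new cases pin down two behaviors at once: '{name}' placeholders are replaced everywhere, but only values destined for command hooks (before_backup and friends) are shell-quoted, so the database password keeps its literal '${PASS}'. The following is a hypothetical reconstruction that satisfies these cases, not the real borgmatic function.

import shlex

# Assumption for illustration: the option names whose values get shell-quoted.
COMMAND_OPTIONS = {'before_backup', 'after_backup', 'on_error'}

def apply_constants(value, constants, shell_quote=False):
    if isinstance(value, str):
        for name, constant in constants.items():
            replacement = shlex.quote(str(constant)) if shell_quote else str(constant)
            value = value.replace('{' + name + '}', replacement)
        return value
    if isinstance(value, list):
        return [apply_constants(item, constants, shell_quote) for item in value]
    if isinstance(value, dict):
        return {
            option: apply_constants(item, constants, shell_quote or option in COMMAND_OPTIONS)
            for option, item in value.items()
        }
    return value

constants = {'env_pass': '${PASS}', 'inject': 'echo hi; naughty-command'}
assert apply_constants({'before_backup': '{inject}'}, constants) == {
    'before_backup': "'echo hi; naughty-command'"
}
assert apply_constants({'postgresql_databases': [{'password': '{env_pass}'}]}, constants) == {
    'postgresql_databases': [{'password': '${PASS}'}]
}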

@@ -264,6 +264,75 @@ def test_ping_monitor_hits_ping_url_when_states_matching():
)
def test_ping_monitor_adds_create_query_parameter_when_create_slug_true():
flexmock(module.borgmatic.hooks.logs).should_receive('Forgetful_buffering_handler').never()
hook_config = {'ping_url': 'https://example.com', 'create_slug': True}
flexmock(module.requests).should_receive('post').with_args(
'https://example.com/start?create=1', data=''.encode('utf-8'), verify=True
).and_return(flexmock(ok=True))
module.ping_monitor(
hook_config,
{},
'config.yaml',
state=module.monitor.State.START,
monitoring_log_level=1,
dry_run=False,
)
def test_ping_monitor_does_not_add_create_query_parameter_when_create_slug_false():
flexmock(module.borgmatic.hooks.logs).should_receive('Forgetful_buffering_handler').never()
hook_config = {'ping_url': 'https://example.com', 'create_slug': False}
flexmock(module.requests).should_receive('post').with_args(
'https://example.com/start', data=''.encode('utf-8'), verify=True
).and_return(flexmock(ok=True))
module.ping_monitor(
hook_config,
{},
'config.yaml',
state=module.monitor.State.START,
monitoring_log_level=1,
dry_run=False,
)
def test_ping_monitor_does_not_add_create_query_parameter_when_ping_url_is_uuid():
hook_config = {'ping_url': 'b3611b24-df9c-4d36-9203-fa292820bf2a', 'create_slug': True}
flexmock(module.requests).should_receive('post').with_args(
f"https://hc-ping.com/{hook_config['ping_url']}",
data=''.encode('utf-8'),
verify=True,
).and_return(flexmock(ok=True))
module.ping_monitor(
hook_config,
{},
'config.yaml',
state=module.monitor.State.FINISH,
monitoring_log_level=1,
dry_run=False,
)
def test_ping_monitor_issues_warning_when_ping_url_is_uuid_and_create_slug_true():
hook_config = {'ping_url': 'b3611b24-df9c-4d36-9203-fa292820bf2a', 'create_slug': True}
flexmock(module.requests).should_receive('post').and_return(flexmock(ok=True))
flexmock(module.logger).should_receive('warning').once()
module.ping_monitor(
hook_config,
{},
'config.yaml',
state=module.monitor.State.FINISH,
monitoring_log_level=1,
dry_run=False,
)
def test_ping_monitor_with_connection_error_logs_warning():
flexmock(module.borgmatic.hooks.logs).should_receive('Forgetful_buffering_handler').never()
hook_config = {'ping_url': 'https://example.com'}
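
These cases fix the Healthchecks URL-building rules: a start ping posts to '<ping_url>/start', '?create=1' is appended only when create_slug is enabled and the ping URL is a full URL, and a bare UUID is expanded to the hosted hc-ping.com endpoint, where slug auto-creation does not apply (hence the warning). The sketch below uses hypothetical names and exists only to illustrate that logic.

def make_ping_url(ping_url, state, create_slug=False):
    # Hypothetical helper for illustration; not borgmatic's actual hook code.
    if '://' not in ping_url:
        # A bare UUID means the hosted service; slug auto-creation does not apply there.
        return f'https://hc-ping.com/{ping_url}'
    url = ping_url if state == 'finish' else f'{ping_url}/{state}'
    if create_slug:
        url += '?create=1'
    return url

assert make_ping_url('https://example.com', 'start', create_slug=True) == (
    'https://example.com/start?create=1'
)
assert make_ping_url('https://example.com', 'start', create_slug=False) == 'https://example.com/start'
assert make_ping_url('b3611b24-df9c-4d36-9203-fa292820bf2a', 'finish') == (
    'https://hc-ping.com/b3611b24-df9c-4d36-9203-fa292820bf2a'
)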

@@ -217,10 +217,11 @@ def test_add_logging_level_skips_global_setting_if_already_set():
def test_configure_logging_with_syslog_log_level_probes_for_log_socket_on_linux():
flexmock(module).should_receive('add_custom_log_levels')
flexmock(module.logging).ANSWER = module.ANSWER
fake_formatter = flexmock()
flexmock(module).should_receive('Console_color_formatter').and_return(fake_formatter)
multi_stream_handler = flexmock(setLevel=lambda level: None, level=logging.INFO)
multi_stream_handler.should_receive('setFormatter').once()
multi_stream_handler.should_receive('setFormatter').with_args(fake_formatter).once()
flexmock(module).should_receive('Multi_stream_handler').and_return(multi_stream_handler)
flexmock(module).should_receive('Console_color_formatter')
flexmock(module).should_receive('interactive_console').and_return(False)
flexmock(module.logging).should_receive('basicConfig').with_args(
level=logging.DEBUG, handlers=list
@@ -237,10 +238,11 @@ def test_configure_logging_with_syslog_log_level_probes_for_log_socket_on_linux(
def test_configure_logging_with_syslog_log_level_probes_for_log_socket_on_macos():
flexmock(module).should_receive('add_custom_log_levels')
flexmock(module.logging).ANSWER = module.ANSWER
fake_formatter = flexmock()
flexmock(module).should_receive('Console_color_formatter').and_return(fake_formatter)
multi_stream_handler = flexmock(setLevel=lambda level: None, level=logging.INFO)
multi_stream_handler.should_receive('setFormatter').once()
multi_stream_handler.should_receive('setFormatter').with_args(fake_formatter).once()
flexmock(module).should_receive('Multi_stream_handler').and_return(multi_stream_handler)
flexmock(module).should_receive('Console_color_formatter')
flexmock(module).should_receive('interactive_console').and_return(False)
flexmock(module.logging).should_receive('basicConfig').with_args(
level=logging.DEBUG, handlers=list
@@ -258,10 +260,11 @@ def test_configure_logging_with_syslog_log_level_probes_for_log_socket_on_macos(
def test_configure_logging_with_syslog_log_level_probes_for_log_socket_on_freebsd():
flexmock(module).should_receive('add_custom_log_levels')
flexmock(module.logging).ANSWER = module.ANSWER
fake_formatter = flexmock()
flexmock(module).should_receive('Console_color_formatter').and_return(fake_formatter)
multi_stream_handler = flexmock(setLevel=lambda level: None, level=logging.INFO)
multi_stream_handler.should_receive('setFormatter').once()
multi_stream_handler.should_receive('setFormatter').with_args(fake_formatter).once()
flexmock(module).should_receive('Multi_stream_handler').and_return(multi_stream_handler)
flexmock(module).should_receive('Console_color_formatter')
flexmock(module).should_receive('interactive_console').and_return(False)
flexmock(module.logging).should_receive('basicConfig').with_args(
level=logging.DEBUG, handlers=list
@@ -280,10 +283,11 @@ def test_configure_logging_with_syslog_log_level_probes_for_log_socket_on_freebs
def test_configure_logging_without_syslog_log_level_skips_syslog():
flexmock(module).should_receive('add_custom_log_levels')
flexmock(module.logging).ANSWER = module.ANSWER
fake_formatter = flexmock()
flexmock(module).should_receive('Console_color_formatter').and_return(fake_formatter)
multi_stream_handler = flexmock(setLevel=lambda level: None, level=logging.INFO)
multi_stream_handler.should_receive('setFormatter').once()
multi_stream_handler.should_receive('setFormatter').with_args(fake_formatter).once()
flexmock(module).should_receive('Multi_stream_handler').and_return(multi_stream_handler)
flexmock(module).should_receive('Console_color_formatter')
flexmock(module.logging).should_receive('basicConfig').with_args(
level=logging.INFO, handlers=list
)
@@ -296,10 +300,11 @@ def test_configure_logging_without_syslog_log_level_skips_syslog():
def test_configure_logging_skips_syslog_if_not_found():
flexmock(module).should_receive('add_custom_log_levels')
flexmock(module.logging).ANSWER = module.ANSWER
fake_formatter = flexmock()
flexmock(module).should_receive('Console_color_formatter').and_return(fake_formatter)
multi_stream_handler = flexmock(setLevel=lambda level: None, level=logging.INFO)
multi_stream_handler.should_receive('setFormatter').once()
multi_stream_handler.should_receive('setFormatter').with_args(fake_formatter).once()
flexmock(module).should_receive('Multi_stream_handler').and_return(multi_stream_handler)
flexmock(module).should_receive('Console_color_formatter')
flexmock(module.logging).should_receive('basicConfig').with_args(
level=logging.INFO, handlers=list
)
@@ -312,8 +317,10 @@ def test_configure_logging_skips_syslog_if_not_found():
def test_configure_logging_skips_log_file_if_log_file_logging_is_disabled():
flexmock(module).should_receive('add_custom_log_levels')
flexmock(module.logging).DISABLED = module.DISABLED
fake_formatter = flexmock()
flexmock(module).should_receive('Console_color_formatter').and_return(fake_formatter)
multi_stream_handler = flexmock(setLevel=lambda level: None, level=logging.INFO)
multi_stream_handler.should_receive('setFormatter').once()
multi_stream_handler.should_receive('setFormatter').with_args(fake_formatter).once()
flexmock(module).should_receive('Multi_stream_handler').and_return(multi_stream_handler)
flexmock(module.logging).should_receive('basicConfig').with_args(
@@ -331,8 +338,10 @@ def test_configure_logging_skips_log_file_if_log_file_logging_is_disabled():
def test_configure_logging_to_log_file_instead_of_syslog():
flexmock(module).should_receive('add_custom_log_levels')
flexmock(module.logging).ANSWER = module.ANSWER
fake_formatter = flexmock()
flexmock(module).should_receive('Console_color_formatter').and_return(fake_formatter)
multi_stream_handler = flexmock(setLevel=lambda level: None, level=logging.INFO)
multi_stream_handler.should_receive('setFormatter').once()
multi_stream_handler.should_receive('setFormatter').with_args(fake_formatter).once()
flexmock(module).should_receive('Multi_stream_handler').and_return(multi_stream_handler)
flexmock(module.logging).should_receive('basicConfig').with_args(
@@ -356,8 +365,10 @@ def test_configure_logging_to_log_file_instead_of_syslog():
def test_configure_logging_to_both_log_file_and_syslog():
flexmock(module).should_receive('add_custom_log_levels')
flexmock(module.logging).ANSWER = module.ANSWER
fake_formatter = flexmock()
flexmock(module).should_receive('Console_color_formatter').and_return(fake_formatter)
multi_stream_handler = flexmock(setLevel=lambda level: None, level=logging.INFO)
multi_stream_handler.should_receive('setFormatter').once()
multi_stream_handler.should_receive('setFormatter').with_args(fake_formatter).once()
flexmock(module).should_receive('Multi_stream_handler').and_return(multi_stream_handler)
flexmock(module.logging).should_receive('basicConfig').with_args(
@@ -387,8 +398,10 @@ def test_configure_logging_to_log_file_formats_with_custom_log_format():
flexmock(module.logging).should_receive('Formatter').with_args(
'{message}', style='{' # noqa: FS003
).once()
fake_formatter = flexmock()
flexmock(module).should_receive('Console_color_formatter').and_return(fake_formatter)
multi_stream_handler = flexmock(setLevel=lambda level: None, level=logging.INFO)
multi_stream_handler.should_receive('setFormatter').once()
multi_stream_handler.should_receive('setFormatter').with_args(fake_formatter).once()
flexmock(module).should_receive('Multi_stream_handler').and_return(multi_stream_handler)
flexmock(module).should_receive('interactive_console').and_return(False)
@@ -413,8 +426,10 @@ def test_configure_logging_to_log_file_formats_with_custom_log_format():
def test_configure_logging_skips_log_file_if_argument_is_none():
flexmock(module).should_receive('add_custom_log_levels')
flexmock(module.logging).ANSWER = module.ANSWER
fake_formatter = flexmock()
flexmock(module).should_receive('Console_color_formatter').and_return(fake_formatter)
multi_stream_handler = flexmock(setLevel=lambda level: None, level=logging.INFO)
multi_stream_handler.should_receive('setFormatter').once()
multi_stream_handler.should_receive('setFormatter').with_args(fake_formatter).once()
flexmock(module).should_receive('Multi_stream_handler').and_return(multi_stream_handler)
flexmock(module.logging).should_receive('basicConfig').with_args(
@@ -426,11 +441,14 @@ def test_configure_logging_skips_log_file_if_argument_is_none():
module.configure_logging(console_log_level=logging.INFO, log_file=None)
def test_configure_logging_skips_console_color_formatter_if_color_disabled():
def test_configure_logging_uses_console_no_color_formatter_if_color_disabled():
flexmock(module).should_receive('add_custom_log_levels')
flexmock(module.logging).ANSWER = module.ANSWER
fake_formatter = flexmock()
flexmock(module).should_receive('Console_color_formatter').never()
flexmock(module).should_receive('Console_no_color_formatter').and_return(fake_formatter)
multi_stream_handler = flexmock(setLevel=lambda level: None, level=logging.INFO)
multi_stream_handler.should_receive('setFormatter').never()
multi_stream_handler.should_receive('setFormatter').with_args(fake_formatter).once()
flexmock(module).should_receive('Multi_stream_handler').and_return(multi_stream_handler)
flexmock(module.logging).should_receive('basicConfig').with_args(