error but not really #202
Reference: borgmatic-collective/borgmatic#202
What I'm trying to do and why
borgmatic via crontab with "on_error" and "after_backup" hooks
Steps to reproduce (if a bug)
In crontab:

30 4 * * * root /usr/local/bin/borgmatic > /dev/null 2>&1

In config.yaml: error.sh is just a curl -X POST ... to a Slack channel.

I got a notification on Slack that there was an error; note that ok.sh didn't execute. But when I SSH-ed in and ran borgmatic info --last 1, everything seemed fine: time (start), time (end), duration, normal output... I need help figuring out what happened :)
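For reference, the two scripts described above would typically be wired up in the hooks section of borgmatic's config.yaml along these lines (the paths here are placeholders, not taken from the original report):

```yaml
hooks:
    # Runs after a successful backup.
    after_backup:
        - /path/to/ok.sh
    # Runs when borgmatic encounters an error.
    on_error:
        - /path/to/error.sh
```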
Environment
borgmatic version: 1.3.12
borgmatic installation method: pip3
Borg version: borg 1.1.9
Python version: Python 3.6.8
operating system and version: CentOS Linux release 7.6.1810 (Core)
EDIT: after inspecting /var/log/messages, it seems that borgmatic finished successfully, but ok.sh failed (something with /var/tmp/systemd-private... and mariadb), and when that failed, borgmatic ran error.sh...

Thanks for filing the detailed report! Based on the behavior you described, it sounds like borgmatic is running the on_error hook only once the after_backup hook fails. That sounds like correct behavior to me: any cleanup steps you might run in an after_backup hook are probably important to do, and should trigger an error hook if they fail. However, if you have other expectations, please talk about what you'd like to see instead! Or perhaps some additional logging would make it clearer what's going on? Thanks.

I was just under the impression that "on_error" would run if something with the backup fails, not if the backup was fine but "after_backup" fails.
This behavior is fine now that I'm aware of it; maybe just add a note in that section of the .yaml?
Sure, I will update the comments/docs accordingly. Thanks again for reporting!
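The hook ordering discussed in this thread can be illustrated with a short sketch. This is a hypothetical illustration, not borgmatic's actual implementation: the point is that a failure in either the backup itself or an after_backup hook triggers the on_error hooks.

```python
import subprocess

def run_backup_with_hooks(backup_cmd, after_backup_cmds, on_error_cmds):
    """Run a backup command, then after_backup hooks; on any failure,
    run the on_error hooks and re-raise. (Illustrative sketch only.)"""
    try:
        subprocess.run(backup_cmd, check=True)
        for cmd in after_backup_cmds:
            # A failing after_backup hook also lands in the except
            # branch below, which matches the behavior reported here.
            subprocess.run(cmd, check=True)
    except subprocess.CalledProcessError:
        for cmd in on_error_cmds:
            subprocess.run(cmd)  # best-effort: error hooks still run
        raise
```

So even when the backup itself succeeds, a non-zero exit from an after_backup script (like ok.sh above) propagates to the on_error hooks.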