When running multiple configs, do not fail at first error #116

Closed
opened 2018-12-09 19:27:43 +00:00 by nicoulaj · 6 comments
Contributor

I put multiple configs in `/etc/borgmatic.d`, and my systemd service runs them all at once (by just calling borgmatic).

The issue is, if one failure occurs, borgmatic just stops there. I would expect borgmatic to try them all.

Ideally it could also display a summary of which failed and which succeeded at the end.

nicoulaj changed title from When running multiple configs, do not fail at first to When running multiple configs, do not fail at first error 2018-12-09 19:28:11 +00:00
Owner

Thanks for reporting. Right now, any exception due to calling Borg gets immediately bubbled up to main() and printed to the console. So one way I could see to implement your request would be: Catch any exception that occurs when calling Borg on a given configuration file, add the exception to a list, and then proceed to the next configuration file. Finally, upon exit, print out any accumulated exceptions from that list to the console.
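
In very rough sketch form, that might look something like the following. (None of these function names are borgmatic's actual internals, and the caught exception types are just placeholders for whatever the Borg-calling code raises.)

```python
import logging

logger = logging.getLogger(__name__)


def run_configuration(config_filename):
    '''Parse one configuration file and invoke Borg with it (stubbed out here).'''
    ...


def main(config_filenames):
    encountered_errors = []

    for config_filename in config_filenames:
        try:
            run_configuration(config_filename)
        except (ValueError, OSError) as error:
            # Remember the error and continue with the next configuration file
            # instead of bubbling it straight up.
            encountered_errors.append((config_filename, error))

    # Upon exit, print any accumulated errors as a summary.
    for config_filename, error in encountered_errors:
        logger.error('%s: error running configuration: %s', config_filename, error)

    return 1 if encountered_errors else 0
```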

Anyway, could you say a little more about your use case? For instance, what are your different borgmatic config files used for? Why does one of them sometimes fail? And why do you want the other configurations to continue running even if the one fails? Thanks.

Author
Contributor

> Anyway, could you say a little more about your use case? For instance, what are your different borgmatic config files used for?

It's just for separating different data volumes (eg: work documents, personal documents, system config). I want to keep them isolated for several reasons:

  • minimize risk if borg repo somehow gets corrupted
  • use different retention policies for each repo
  • repos can have different lifecycles (eg: system can be reinstalled, documents continue forever)
  • smaller archives for browsing/extracting borg repos

> Why does one of them sometimes fail? And why do you want the other configurations to continue running even if the one fails?

In this case it was just a bad configuration, but I upload the borg repo to Backblaze B2 as a post-backup step, so you could imagine it can fail for various reasons (connectivity issues, storage quota reached...).

Anyway, maybe I am doing it wrong; I could have one systemd timer+service per repo, but I want them to run sequentially, so it's a little more annoying to manage than just being able to drop configs in `/etc/borgmatic.d` and have them run automatically.
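
For now I could probably work around it with a small wrapper along these lines (just a sketch, not something I actually run; the glob pattern and paths are only examples), looping over the configs myself and summarizing failures at the end:

```python
#!/usr/bin/env python3
'''Rough workaround sketch: run borgmatic once per config file and report failures at the end.'''
import glob
import subprocess
import sys


def main():
    failed = []

    for config in sorted(glob.glob('/etc/borgmatic.d/*.yaml')):
        # --config points borgmatic at a single configuration file.
        result = subprocess.run(['borgmatic', '--config', config])

        if result.returncode != 0:
            failed.append(config)

    for config in failed:
        print(f'FAILED: {config}', file=sys.stderr)

    return 1 if failed else 0


if __name__ == '__main__':
    sys.exit(main())
```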

Author
Contributor

Unrelated: I think you have a config issue with your Gitea. I tried to edit my previous comment and cannot save it; the Firefox JavaScript console shows:

```
ReferenceError: issuesTribute is not defined[Learn More] index.js:690:17
initRepository/<https://projects.torsion.org/js/index.js:690:17
dispatch https://projects.torsion.org/vendor/plugins/jquery/jquery.min.js:3:12392
add/r.handle https://projects.torsion.org/vendor/plugins/jquery/jquery.min.js:3:9156
Source map error: request failed with status 404
Resource URL: https://projects.torsion.org/vendor/plugins/tribute/tribute.min.js
Source Map URL: tribute.min.js.map[Learn More]
```
Owner

Nope, your use case makes sense. I just wanted to make sure I understood it before trying to implement anything.

As for the Gitea error, it sounds like you're hitting this: https://github.com/go-gitea/gitea/issues/4755 :(

witten added the design finalized label 2018-12-11 01:44:01 +00:00
Owner

Implemented in master. This enhancement will go out as part of the next release.

Owner

Just released in borgmatic 1.2.14.

Reference: borgmatic-collective/borgmatic#116