From 18ae91ea6edc20215617dd26b4eba0a856f7e9a9 Mon Sep 17 00:00:00 2001
From: Dan Helfman
Date: Mon, 4 Feb 2019 20:58:27 -0800
Subject: [PATCH] Strike some unnecessary words from docs.

---
 docs/how-to/deal-with-very-large-backups.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/how-to/deal-with-very-large-backups.md b/docs/how-to/deal-with-very-large-backups.md
index f820d039..5f02010e 100644
--- a/docs/how-to/deal-with-very-large-backups.md
+++ b/docs/how-to/deal-with-very-large-backups.md
@@ -4,10 +4,10 @@ title: How to deal with very large backups
 ## Biggish data
 
 Borg itself is great for efficiently de-duplicating data across successive
-backup archives, even when dealing with very large repositories. However, you
-may find that while borgmatic's default mode of "prune, create, and check"
-works well on small repositories, it's not so great on larger ones. That's
-because running the default consistency checks just takes a long time on large
+backup archives, even when dealing with very large repositories. But you may
+find that while borgmatic's default mode of "prune, create, and check" works
+well on small repositories, it's not so great on larger ones. That's because
+running the default consistency checks takes a long time on large
 repositories.
 
 ### A la carte actions
@@ -34,7 +34,7 @@ Another option is to customize your consistency checks. The default consistency
 checks run both full-repository checks and per-archive checks within each
 repository.
 
-But if you find that archive checks are just too slow, for example, you can
+But if you find that archive checks are too slow, for example, you can
 configure borgmatic to run repository checks only. Configure this in the
 `consistency` section of borgmatic configuration:
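
For context on what the patched text refers to: the second hunk ends just before the doc's configuration example, which is not part of this patch. A borgmatic `consistency` section limited to repository-only checks would look roughly like this sketch (based on borgmatic's documented YAML schema of this era, not on anything in the patch itself):

```yaml
consistency:
    # Run only the full-repository check, skipping the slower
    # per-archive checks that make "check" take so long on large
    # repositories.
    checks:
        - repository
```

With this in place, `borgmatic` still verifies repository integrity on each run but no longer walks every archive.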