Remote: Host key verification failed | Automating host key verification #601
What I'm trying to do and why
Hi! I am trying to automate the borgmatic docker container in order to deploy it in our k8s cluster for backing up pod databases and required data.
Steps to reproduce (if a bug)
Actual behavior (if a bug)
The borgmatic process obviously cannot connect to the remote side because of "Remote: Host key verification failed". What I do not want to do is manually type "yes" to confirm the host key so that it gets put into the known_hosts file. It would be nice to automate this step, presumably with ssh-keyscan -H host >> ~/.ssh/known_hosts, but I cannot find where to put this step.
Expected behavior (if a bug)
The borgmatic process should automatically verify the key and put the entries into the known_hosts file.
Other notes / implementation ideas
Are there any considerations on deploying borgmatic to k8s? If so, are there any Helm Charts out there?
Environment
borgmatic version: 1.7.4 (latest version from the Docker container)
borgmatic installation method: Docker container from here
Borg version: borgbackup 1.2.2
Python version: Docker Python Alpine image (FROM python:3.10.6-alpine3.16)
Database version (if applicable): MySQL 5.1.7
operating system and version: Alpine Linux 3.16
The borgmatic Docker container is a separate project with its own issue tracker: https://github.com/borgmatic-collective/docker-borgmatic/issues ... so I'd recommend filing a ticket there.
However, I can address a couple points. If you mount your existing SSH keys at /root/.ssh in the container, I believe that should work. However, that won't auto-scan/auto-generate the keys for you AFAIK.
And while some folks do run borgmatic in K8s, as far as I know, it's not officially supported. Here's some relevant discussion on that: #568.
Hope that helps!
Hi witten!
Thanks for your fast answer.
Yes, I had already submitted an issue there but closed it moments later, because I was thinking that this problem is not necessarily connected to the Docker container but rather to borgmatic itself.
I have tested mounting the ~/.ssh folder into the Docker container, but as I have mentioned, it will not bypass the message with the fingerprint. Is there no library or mechanism in Python to use ssh-keyscan -H? I hope we are on the same page, but I didn't mean to auto-generate the SSH keys themselves; I rather mean that you can bypass the message that appears when first sshing into another machine, like:

Regarding the k8s setup:
I can provide all the setup I have made myself to deploy borgmatic using FluxCD with GitOps!
Thanks for your help!
Cheers
I believe that you can bypass that "are you sure?" message by first populating ~/.ssh/known_hosts with the appropriate public key for your SSH server. You can even lay this down with your provisioning mechanism. I'm not familiar with GitOps, but I do this on my own infrastructure with Ansible: on each machine that pushes backups via borgmatic to another server by SSH, I lay down an ~/.ssh/known_hosts file with an entry for that server. And if I'm using borgmatic in Docker, I mount that ~/.ssh directory into the container.

As for borgmatic taking on that responsibility and automatically accepting public keys from a server: it would potentially introduce a security vulnerability (unless I'm missing something). A malicious user could compromise a server or introduce a MITM attack using a different SSH key, and nobody would be the wiser. In contrast, if you are responsible for explicitly accepting that key (or laying it down via provisioning), that pushes the validation responsibility onto you, the user. Which is where it belongs, IMO.
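A rough sketch of that provisioning flow (the hostname backup-server, the files/ path, and the image name are placeholders, not anything borgmatic itself provides):

```sh
# 1. Obtain the server's host key once over a channel you already trust, e.g. by
#    reading /etc/ssh/ssh_host_ed25519_key.pub on the server itself, and commit
#    the resulting known_hosts line to your provisioning repo.
# 2. Provisioning (Ansible, GitOps, etc.) lays it down on each client that
#    pushes backups:
install -d -m 700 ~/.ssh
install -m 600 files/known_hosts.backup-server ~/.ssh/known_hosts

# 3. When borgmatic runs in Docker, mount that directory into the container:
docker run --rm -v ~/.ssh:/root/.ssh:ro <borgmatic-image> borgmatic --verbosity 1
```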
I'd be happy to take a look at your K8s setup, although I can't make any promises at this time on official support. Thanks for offering to share it!
EDIT: Fixed authorized_keys -> known_hosts.

Hi!
Very happy to always get your fast answers, thank you!
I think I have a completely different understanding of SSH with key-based authentication. From my understanding, the procedure and its steps are:
1. Generate the key pair (id_rsa / id_rsa.pub) on the machine
2. ssh-copy-id -i id_rsa.pub from the machine to the remote server -> then a new entry with the pub key from the machine is put into the authorized_keys file on my server

When using ssh-copy-id you will manually ssh into the server and create the entry in known_hosts anyway: the "Are you sure?" prompt appears and known_hosts will be created on my machine.

EDIT: I think I have answered those questions myself.
As far as I have understood your answer, you said to lay down the authorized_keys on the machine. But the machine actually holds the private key, and the server has the authorized_keys to verify the private key. In other words: the Docker container would have the private/public key pair + known_hosts ("I know this host already"), and the server holds the authorized_keys file which says that private key XY is already authorized.

I understand when you say that the responsibility to accept the keys and everything belongs to the user, but then it is going to be hard to automate this step. There would always be at least one manual step needed to run the backups (of course only initially). And as soon as the server host would change, this step would be needed again and the backup could not run anymore.
Do I understand something completely wrong or is my knowledge about ssh not correct? :-D
Thanks in advance and happy to help with my setup :-)
EDIT 2:
From my understanding, when using the command ssh-keyscan -H hostname >> ~/.ssh/known_hosts (if there is something similar in Python), the command would only output anything if a public key from this specified host is already authorized, i.e. an entry with the public key from the machine is present in authorized_keys on the remote server. Therefore, there would be no possibility to create any MITM attack (if I am not mistaken).

EDIT 3:
After discussing this topic with some colleagues of mine, we came to the conclusion that automating this step is crucial for our environment.
However, I will manually add this command in some stage where the Docker container is built, to ensure that the known_hosts file exists. If something could be provided on your side (such as an option or something), that would be awesome! If not, I will apply the previously mentioned step.
Thank you very much for your help!
Yes, my bad. I meant to say known_hosts instead of authorized_keys! Thanks for pointing that out.

I'm not suggesting there's a manual step. I agree that a user having to manually SSH in and confirm an interactive prompt would be a huge pain! However, I do believe it's up to the administrator to set up provisioning (GitOps, Ansible, etc.) to automatically lay down a valid ~/.ssh/known_hosts file that securely allows the interactive prompt to be bypassed. Meaning that the administrator will probably need to extract the public key from the server via a secure mechanism and put it into provisioning.

Yes, and my layman's understanding is that this is part of the security design of SSH. If the server's SSH public key changes, that could be a MITM attack. So it would require the administrator to update the server's public key in provisioning and then re-provision ~/.ssh/known_hosts files as appropriate.

An initial run of ssh-keyscan can still be MITM attacked, too. For instance, you spin up the server and something's already sitting on the network intercepting traffic. Now I totally understand if this kind of attack is not part of your organization's threat model, but failing against that kind of attack is not behavior I'd want to encode into borgmatic.

Automation makes sense! I only suggest that it be done in a way where the public keys are retrieved securely or otherwise verified. From the ssh-keyscan documentation: "If a ssh_known_hosts file is constructed using ssh-keyscan without verifying the keys, users will be vulnerable to man in the middle attacks."

Hi witten!
Thanks for having this interesting conversation here - really like it :)
Okay yes, now I am aware of your concerns - that makes total sense.
Yes, it also seems correct that if the public key were to change, some malicious attacker might be sniffing the traffic and could hack into the system - I have learned a lot about SSH today!
In the end, as we strictly depend on this step being automated, we will still use the ssh-keyscan approach. The "good" thing is that we only run the environment in an internal network and therefore might not be as concerned as others.
Thank you for all your expertise and help!
I will soon let you know about the k8s setup :-)
Cheers
Also, since you're using Kubernetes already, have you considered using a built-in K8s mechanism for distributing SSH public keys securely? E.g., a ConfigMap published along with the server's deployment?

I am using a Secret which injects the id_rsa and id_rsa.pub securely - yes, thanks for the hint!
For the required MySQL credentials I am using a Secret as well.
The pod (container) definition is as follows:
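Roughly, it looks like the sketch below (the image, Secret names, and the backup-server hostname are placeholders rather than the real values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: borgmatic
spec:
  restartPolicy: Never
  containers:
    - name: borgmatic
      image: borgmatic:latest              # placeholder image
      command:
        - /bin/sh
        - -c
        # Populate known_hosts before borgmatic runs -- the ssh-keyscan step
        # mentioned above; "backup-server" is a placeholder hostname.
        - |
          mkdir -p /root/.ssh
          ssh-keyscan -H backup-server >> /root/.ssh/known_hosts
          exec borgmatic --verbosity 1
      env:
        # MySQL credentials injected from a Secret (the exact variable depends
        # on how your configuration consumes it).
        - name: MYSQL_PWD
          valueFrom:
            secretKeyRef:
              name: mysql-credentials      # placeholder Secret name
              key: password
      volumeMounts:
        # SSH key pair (id_rsa / id_rsa.pub) injected from a Secret.
        - name: ssh-keys
          mountPath: /root/.ssh/id_rsa
          subPath: id_rsa
          readOnly: true
        - name: ssh-keys
          mountPath: /root/.ssh/id_rsa.pub
          subPath: id_rsa.pub
          readOnly: true
        - name: borgmatic-config
          mountPath: /etc/borgmatic.d
          readOnly: true
  volumes:
    - name: ssh-keys
      secret:
        secretName: borgmatic-ssh          # placeholder Secret name
        defaultMode: 0400
    - name: borgmatic-config
      configMap:
        name: borgmatic-config             # placeholder ConfigMap name
```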
You can also see, at command:, the step I am using right now to automate the mentioned ssh-keyscan step.
Later, I will also inject the BORG_PASSPHRASE with a Secret.
Cheers
Cool! You might be able to skip the ssh-keyscan command and similarly mount/copy the server's SSH public key directly into the borgmatic pod in the appropriate known_hosts location, assuming the pod shown here has access to the server's SSH key Secret.
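Something along these lines, perhaps (assuming a hypothetical Secret named backup-server-hostkey that already contains a ready-made known_hosts line):

```yaml
# Sketch only: instead of running ssh-keyscan at startup, mount a pre-verified
# known_hosts entry from a Secret into the pod shown above.
spec:
  containers:
    - name: borgmatic
      # ...image, command, env, and the other volumeMounts as in the pod above...
      volumeMounts:
        - name: server-hostkey
          mountPath: /root/.ssh/known_hosts
          subPath: known_hosts
          readOnly: true
  volumes:
    - name: server-hostkey
      secret:
        secretName: backup-server-hostkey  # hypothetical Secret with a known_hosts entry
```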
Thanks!
I think I have understood something wrong.
I have generated the keys on the machine, e.g. id_rsa and id_rsa.pub. You mean to now mount the content of id_rsa.pub directly into the known_hosts file? When I compare the two contents I see a major difference:

Content of id_rsa.pub:

Content of the known_hosts file when manually sshing to the server:

Or, by that, do you mean the actual ~/.ssh directory? In other words, to copy the public key directly to ~/.ssh? If so, I have already done that.
Possibly I am thinking about this the wrong way or didn't completely understand your suggestion, but IMO this won't work.
Thanks for helping!
Yeah, I was suggesting you mount the public key of your server in the known_hosts format at ~/.ssh/known_hosts. If you don't already have it published within a Secret in that format, you'd either have to add that data to your Secret (e.g. as produced by ssh-keyscan locally on your server) or massage your existing public key into that format.

But this is just an idea. You know best the design and limitations of your current system!
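Concretely, the two formats differ roughly like this (keys shortened to placeholders; ssh_host_ed25519_key.pub is assumed to be a copy of the server's host public key, and backup-server stands in for the real hostname):

```sh
# id_rsa.pub -- the *client's* identity key; ssh-copy-id puts this into the
# server's authorized_keys:
#   ssh-rsa AAAAB3Nza...<client public key>... user@machine
#
# ~/.ssh/known_hosts -- the *server's* host key, prefixed by the hostname
# (or a hash of it when HashKnownHosts is enabled):
#   backup-server ssh-ed25519 AAAAC3Nza...<server host key>...
#
# So "massaging" the server's host public key into known_hosts format is just
# prefixing the hostname borgmatic connects to:
printf '%s %s\n' backup-server "$(cat ssh_host_ed25519_key.pub)" > known_hosts
```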
I'm closing this for now, but please feel free to continue the discussion or open another ticket.