Remote: Host key verification failed | Automating host key verification #601

Closed
opened 2022-10-17 09:44:51 +00:00 by andim202 · 12 comments

What I'm trying to do and why

Hi! I am trying to automate the borgmatic docker container in order to deploy it in our k8s cluster for backing up pod databases and required data.

Steps to reproduce (if a bug)

  1. create borg remote repository
  2. create passwordless ssh keys (private / public)
  3. mount ssh keys to docker container via volume bind / mount
  4. start docker container and backup any local directory or dump database

Actual behavior (if a bug)

The borgmatic process can obviously not connect to the remote side because of Remote: Host key verification failed.

What I do not want to do is to manually type "yes" to confirm the host key to be put in the known_hosts file. It would be nice to automate this step, presumably with ssh-keyscan -H host >> ~/.ssh/known_hosts but I cannot find where to put this step.
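For what it's worth, one idea I was also considering is relaxing host key checking through SSH options rather than pre-filling known_hosts. Assuming borgmatic's ssh_command option (which, if I'm not mistaken, lives in the storage section and maps to Borg's --rsh) can carry such flags, a sketch could look like this:

storage:
    # Illustrative sketch only: accept-new (OpenSSH 7.6+) records an unknown host
    # key on first contact but still fails if a previously recorded key changes.
    ssh_command: ssh -o StrictHostKeyChecking=accept-new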

Expected behavior (if a bug)

The borgmatic process should automatically verify the key and put the entries into the known_hosts file.

Other notes / implementation ideas

Are there any considerations on how to deploy borgmatic to k8s? If so, are there any Helm Charts out there?

Environment

borgmatic version: 1.7.4 (latest version from the Docker container at https://github.com/borgmatic-collective/docker-borgmatic/tree/master/base)

borgmatic installation method: Docker container from https://github.com/borgmatic-collective/docker-borgmatic/tree/master/base

Borg version: borgbackup 1.2.2

Python version: Docker Python Alpine image FROM python:3.10.6-alpine3.16

Database version (if applicable): MySQL 5.1.7

operating system and version: Alpine Linux 3.16

Owner

The borgmatic Docker container is a separate project with their own issue tracker: https://github.com/borgmatic-collective/docker-borgmatic/issues ... So I'd recommend filing a ticket there.

I can address a couple of points here, though. If you mount your existing SSH keys at /root/.ssh in the container (see https://github.com/borgmatic-collective/docker-borgmatic/tree/master/base#example-run-command), I believe that should work. However, that won't auto-scan/auto-generate the keys for you, AFAIK.
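For instance, with a docker-compose setup for that image, a read-only bind mount along these lines should do it (paths here are just illustrative):

services:
    borgmatic:
        image: b3vis/borgmatic:latest
        volumes:
            # Reuse the host's SSH key pair and known_hosts inside the container
            # so the ssh that Borg invokes already trusts the backup server.
            - /root/.ssh:/root/.ssh:ro
            - /etc/borgmatic.d:/etc/borgmatic.d:ro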

And while some folks do run borgmatic in K8s, as far as I know, it's not officially supported. Here's some relevant discussion on that: #568.

Hope that helps!

witten added the question / support label 2022-10-17 17:01:19 +00:00
Author

Hi witten!

Thanks for your fast answer.

Yes, I have already submitted one issue there but closed it moments after, because I was thinking that this problem is not necessarily connected to the Docker container but rather to borgmatic itself.

I have tested mounting the ~/.ssh folder into the Docker container but, as I have mentioned, it will not bypass the message with the fingerprint. Is there no library or mechanism in Python to use ssh-keyscan -H? I hope we are on the same page: I didn't mean to auto-generate the SSH keys themselves, I rather mean bypassing the message that appears when first SSHing into another machine, like:

The authenticity of host 'remote (remote)' can't be established.
ED25519 key fingerprint is SHA256:GeXf5SW9kVCoFzNxhOk+cpR1mvYUqe+VVrfNbQeBFuc.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])?

Regarding the k8s setup:

I can provide all the setup I have made myself to deploy borgmatic using FluxCD with GitOps!

Thanks for your help!

Cheers

Owner

I believe that you can bypass that "are you sure?" message by first populating ~/.ssh/known_hosts with the appropriate public key for your SSH server. You can even lay this down with your provisioning mechanism. I'm not familiar with GitOps, but I do this on my own infrastructure with Ansible: On each machine that pushes backups via borgmatic to another server by SSH, I lay down an ~/.ssh/known_hosts file with an entry for that server. And if I'm using borgmatic in Docker, I mount that ~/.ssh directory into the container.
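For illustration, such a provisioning step can be as small as a single task with Ansible's known_hosts module (the hostname and key value below are placeholders):

- name: Trust the backup server's SSH host key
  ansible.builtin.known_hosts:
    path: /root/.ssh/known_hosts
    name: backup.example.org
    # The key line comes from a trusted source (for example read off the
    # server's console), not from scanning it over the network.
    key: "backup.example.org ssh-ed25519 AAAA...placeholder"
    state: present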

As for borgmatic taking on that responsibility and automatically accepting public keys from a server, it would potentially introduce a security vulnerability (unless I'm missing something): a malicious user could compromise a server or mount a MITM attack using a different SSH key, and nobody would be the wiser. In contrast, if you are responsible for explicitly accepting that key (or laying it down via provisioning), that pushes the validation responsibility onto you, the user. Which is where it belongs, IMO.

I'd be happy to take a look at your K8s setup, although I can't make any promises at this time on official support. Thanks for offering to share it!

EDIT: Fixed authorized_keys -> known_hosts.

Author

Hi!

Very happy to always get your fast answers, thank you!

I think I have a completely different understanding of SSH with key-based authentication. From my understanding, the procedure and the corresponding steps are:

  1. generate a private/public key pair without passphrase
  2. ~~ssh-copy-id -i id_rsa.pub from machine to remote server -> then in the authorized_keys file on my server a new entry with the pub key from the machine will be put in~~ (when using ssh-copy-id you will manually ssh into the server and create the entry in known_hosts anyway)
  3. If I am trying to ssh from my machine to the server, the message with "Are you sure?" appears
  4. If I am accepting this, a new entry in a new file known_hosts will be created on my machine

EDIT: I think I have answered those questions myself.

As far as I have understood your answer, you said to lay down the authorized_keys on the machine. But the machine is actually holding the private key and the server has the authorized_keys to verify the private key. In other words: the Docker container would have the private/public key pair + known_hosts ("I know this host already"), and the server holds the authorized_keys file which says that private key XY is already authorized.

I understand when you are saying that the responsibility belongs to the user itself to accept the keys and everything, but then it is going to be hard to automate this step. Then there is always at least 1 manual step needed to run the backups (of course only initially). As soon as the server host would change, this step would be needed again and the backup cannot run anymore.

Do I understand something completely wrong or is my knowledge about ssh not correct? :-D

Thanks in advance and happy to help with my setup :-)

EDIT 2:

From my understanding when using the command ssh-keyscan -H hostname >> ~/.ssh/known_hosts (if there is something similar in python) the command would only output anything if from this specified host a public key is already authorized. Thus, an entry with the public key from the machine is present in authorized_keys on the remote server. Therefore, there would be no possibility to create any MITM attack (if I am not mistaken)

EDIT 3:

After discussing this topic with some colleagues of mine, we came to the conclusion that automating this step would be crucial for our environment.
However, I will manually add this command in some stage where the Docker container will be built to ensure that the known_hosts file exists. If something from you could be provided (such as an option or smth) would be awesome! If not, I will apply the previously mentioned step.
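(In a K8s pod spec, another possible place for this is an initContainer that fills known_hosts on a shared volume before the borgmatic container starts; a rough, illustrative sketch with made-up volume names:)

initContainers:
  - name: scan-host-key
    image: b3vis/borgmatic:latest
    command: ["/bin/sh", "-c"]
    # Illustrative only: writes the scanned key onto a volume that the
    # borgmatic container also mounts at /root/.ssh (e.g. an emptyDir).
    args: ["mkdir -p /root/.ssh && ssh-keyscan -H some-hostname >> /root/.ssh/known_hosts"]
    volumeMounts:
      - mountPath: /root/.ssh
        name: ssh-dir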

Thank you very much for your help!

Owner

Yes, my bad. I meant to say known_hosts instead of authorized_keys! Thanks for pointing that out.

> I understand when you are saying that the responsibility belongs to the user itself to accept the keys and everything, but then it is going to be hard to automate this step. Then there is always at least 1 manual step needed to run the backups (of course only initially).

I'm not suggesting there's a manual step. I agree that a user having to manually SSH in and confirm an interactive prompt would be a huge pain! However, I do believe it's up to the administrator to set up provisioning (GitOps, Ansible, etc.) to automatically lay down a valid ~/.ssh/known_hosts file that will securely allow the interactive prompt to be bypassed. Meaning that the administrator will probably need to extract the public key from the server via a secure mechanism and put it into provisioning.

> As soon as the server host would change, this step would be needed again and the backup cannot run anymore.

Yes, and my layman's understanding is that this is part of the security design of SSH. If the server's SSH public key changes, that could be a MITM attack. So it would require the administrator to update the server's public key in provisioning and then re-provision ~/.ssh/known_hosts files as appropriate.

> From my understanding when using the command ssh-keyscan -H hostname >> ~/.ssh/known_hosts (if there is something similar in python) the command would only output anything if from this specified host a public key is already authorized. Thus, an entry with the public key from the machine is present in authorized_keys on the remote server. Therefore, there would be no possibility to create any MITM attack (if I am not mistaken)

An initial run of ssh-keyscan can still be MITM attacked, too. For instance, you spin up the server and something's already sitting on the network intercepting traffic. Now I totally understand if this kind of attack is not part of your organization's threat model, but failing against that kind of attack is not behavior I'd want to encode into borgmatic.

> After discussing this topic with some colleagues of mine, we came to the conclusion that automating this step would be crucial for our environment.
> However, I will manually add this command in some stage where the Docker container will be built to ensure that the known_hosts file exists. If something from you could be provided (such as an option or smth) would be awesome! If not, I will apply the previously mentioned step.

Automation makes sense! I only suggest that it be done in a way where the public keys are retrieved securely or otherwise verified. From the ssh-keyscan documentation: "If a ssh_known_hosts file is constructed using ssh-keyscan without verifying the keys, users will be vulnerable to man in the middle attacks."

Author

Hi witten!

Thanks for having this interesting conversation here - really like it :)

Okay yes, now I am aware of your concerns - that makes total sense.

Yes, it also seems right that if the public key were to change, some malicious attacker might be sniffing the traffic and trying to hack into the system - I have learned a lot today about ssh!

In the end, as we are strictly dependent on this step being automated, we will still use the approach of using ssh-keyscan. The "good" thing is that we only run the environment in an internal network and therefore might not be as concerned as others.

Thank you for all your expertise and help!

I will soon let you know about the k8s setup :-)

Cheers

Owner

Also, since you're using Kubernetes already, have you considered using a built-in K8s mechanism for distributing SSH public keys securely? E.g., a ConfigMap published along with the server's deployment?
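For illustration (all names here are made up), the server's host key could be published in known_hosts format like this and then mounted into the borgmatic pod:

apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-server-host-key
  namespace: backup
data:
  # One known_hosts-format line per host key; the value shown is a placeholder.
  known_hosts: "some-hostname ssh-ed25519 AAAA...placeholder"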

Author

I am using a secret which injects the id_rsa and id_rsa.pub securely - yes thanks for the hint!

For the required MySQL credentials I am using a secret as well.

The pod (container) definition is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: borgmatic
  namespace: backup
spec:
  hostAliases:
    - ip: "192.168.0.0"
      hostnames:
        - "some-hostname"
  containers:
    - name: borgmatic
      image: b3vis/borgmatic:latest
      command: ["/bin/sh", "-c"]
      args: ["cp /root/keys/* /root/.ssh; ssh-keyscan -H some-hostname >> /root/.ssh/known_hosts; /entry.sh"]
      ports:
        - containerPort: 6379
      env:
        - name: BORG_PASSPHRASE
          value: "123"
        - name: TZ
          value: "Europe/Berlin"
      volumeMounts:
        - mountPath: "/root/keys"
          name: secret-volume
        - mountPath: "/etc/borgmatic.d/"
          name: borg-config
        - mountPath: "/etc/mysql/"
          name: mysql-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: ssh-key-secret
        defaultMode: 0600
    - name: borg-config
      configMap:
        name: borg-config
    - name: mysql-volume
      secret:
        secretName: mysql-secret
        defaultMode: 0600

At the command:/args: lines you can also see the step I am using right now to automate the mentioned ssh-keyscan step.

Later, I will also inject the BORG_PASSPHRASE with a secret.
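(Presumably with something along these lines, assuming a Secret named borg-passphrase that holds a passphrase key:)

env:
  - name: BORG_PASSPHRASE
    valueFrom:
      secretKeyRef:
        # Assumed Secret name and key; replaces the plaintext value above.
        name: borg-passphrase
        key: passphrase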

Cheers

Owner

Cool! You might be able to skip the ssh-keyscan command and similarly mount/copy the server's SSH public key directly into the borgmatic pod in the appropriate known_hosts location, assuming the pod shown here has access to the server's SSH key Secret.
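As a rough sketch (volume and ConfigMap names are illustrative, e.g. the ConfigMap idea sketched above), a subPath mount can drop that single file exactly where SSH expects it:

# In the borgmatic container:
volumeMounts:
  - mountPath: /root/.ssh/known_hosts
    subPath: known_hosts
    name: host-key-volume
    readOnly: true
# In the pod spec:
volumes:
  - name: host-key-volume
    configMap:
      name: backup-server-host-key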

Author

Thanks!

I think I have understood something wrong.

I have generated the keys on the machine, e.g. id_rsa and id_rsa.pub.
Do you mean to now mount the content of id_rsa.pub directly into the known_hosts file?

When I compare the two, I see a major difference:

Content of id_rsa.pub:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXv3Fq3Phzi9BgZ5UOy+dt2BR+aOPEnz6xFrTAdbHtD4V6FUsq3bZoxk7gniR4lWc2ZRO/PVb4RZM1O+Nu9lWFKrXEtJp222rlUdqP5FyIwXACzeCFwuy3JXuACLaS/BdUdijHVOs9dP3lsVIUiVELL4FyemdtvHeThZteqbsTKFkEgZbnP9j3UGqDLi3xY5RUZq2JfQwo4As8O1jtJS41Qm5Z/u6vIazSaclUIwDw27rJy9sTxhopVc5WASrNMzN4RQJl/kMgXBqg+mLWVikOD38QrYQ7aVk5HXho/fk4NXFXikgQb9FzMXOW7ve0yUp4i/00OscCgSAF93Qlmbsjjtsipmu2Lu4Dn/hgNwjd8qpTV7BDKu+3yZQ9uyfvEm9LgcnmJ8dakZ3TyC9SoCXb6rNcq/42pCwj7R+QpyXhjdgZ1mRqHOk6JBgVj4NlOQxW5f6fQ9ARrWY+gzefX5w49bMVfZ0i9IMWWCUZXbbVOlk4CAZPNOGQPeyohlHEHeE= borg-local

Content of known_hosts file when manually sshing to Server:

some-host ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMu3A+XGiwGU3QsDQxVUwIm1ChzFBb3+7pS+7osk8OgbdOgQw88OfmLnFXjgqijV/ip8/IcIWbKQ37Kx/pAUQxc=

Or do you mean by

> in the appropriate known_hosts location

the actual ~/.ssh directory? In other words, to copy the public key directly to ~/.ssh? If so, I have already done that.

Maybe I am thinking about this the wrong way or didn't completely understand your suggestion, but IMO this won't work.

Thanks for helping!

Owner

Yeah, I was suggesting you mount the public key of your server in the known_hosts format at ~/.ssh/known_hosts. If you don't already have it published within a Secret in that format, you'd either have to add that data to your Secret (e.g. as produced by ssh-keyscan run locally on your server) or massage your existing public key into that format (see https://superuser.com/questions/1586151/how-to-programmatically-populate-the-known-hosts).

But this is just an idea. You know best the design and limitations of your current system!

Owner

I'm closing this for now, but please feel free to continue the discussion or open another ticket.
