Ansible SSH Setup Playbook

It is best practice to use Ansible with SSH keys to create the SSH connections to your servers. This does require a little bit of extra setup beforehand to ensure that the server can be reached by Ansible via SSH keys alone. As I have been doing this quite a lot recently, I decided to package the setup steps into an Ansible playbook.

When you first set up a Linux server you will usually find that you are given root access, and it is up to you to configure an administrator user with the correct access after the fact. Using this root user, we will have Ansible log into the host, create a new user, set up SSH key access and then alter the sudoers file so that the new user can perform Ansible tasks.

Assuming we know the IP address of the host we want to configure, we can create an inventory file that looks like the following.

[hosts]
your.host.ip.address ansible_connection=ssh ansible_ssh_user=root ansible_ssh_pass=myrootpassword

As we don't have SSH key access yet, we need to tell Ansible to use an alternative method. The ansible_ssh_user parameter tells Ansible which user to log in as and ansible_ssh_pass tells Ansible what that user's password is.

Those of you who know something about Ansible might wonder why you would pass the password to Ansible using the ansible_ssh_pass parameter and not the --ask-pass flag on the command line. The reason is that --ask-pass will only take in a single password and pass this to all hosts. This is fine if every host in your inventory file has the same root password, but I'm guessing that this is probably not the case. The idea here is that you can spend some time setting up your virtual machines and then plug them into this setup playbook in order to get them to a minimal level for Ansible provisioning. Once done you can continue on to your other playbooks to provision the Ansible hosts accordingly.

In order to use the ansible_ssh_pass parameter you first need to install the sshpass program. This allows you to send passwords to SSH commands, and Ansible utilises this program to send passwords to its own connections. If you are on Ubuntu you can install sshpass like this.

sudo apt-get install sshpass

If you try to connect with Ansible now you might get an authentication failed message, even if the password is correct. This happens because your local system is asking whether you want to store the host key of the machine you are connecting to, which gets in the way of Ansible's connection attempt. To disable the host key check you need to create a file called ansible.cfg (in the same folder as your inventory file) and add the following.

[defaults]
host_key_checking = False
The ansible.cfg file is automatically picked up by Ansible and is used to set certain Ansible configuration options. In this case we are turning off host key checking and allowing Ansible to connect to the host without asking if it should add the host key to the list of known hosts.

Before setting up the playbook you first need to create an ssh key that will be used to set up the connection. This can be done with the ssh-keygen command in the usual manner. Once created, place the key into the same directory as the Ansible project. Just remember not to commit it into any source control systems, especially if they are public repositories.
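As a sketch, the key pair could be generated like this (the file name ansible_setup_key is just an example, and the empty passphrase keeps the playbook run non-interactive):

```shell
# Generate a dedicated key pair for Ansible in the current directory.
# The file name and the empty passphrase (-N '') are illustrative choices.
ssh-keygen -t rsa -b 4096 -N '' -f ./ansible_setup_key
```

This produces ansible_setup_key (the private key) and ansible_setup_key.pub (the public key); the .pub file is the one the playbook uploads to the server.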

After this you are ready to create a setup playbook that will put the ssh key on the server. It runs using the connection details in the hosts.ini file created above and performs the following actions.

  • Create a user on the remote host. The name of this user is defined at the top of the playbook as a variable.
  • Set the password for the user created. This is mainly so this user has a full presence on the server and can also be used to test commands on the server before porting them back into Ansible playbooks. Again, the password is set using a variable at the top of the playbook.
  • Use the authorized_key Ansible module to copy the public ssh key (kept in the same folder as the Ansible project) and place it on the server in the .ssh/authorized_keys file. After this step it is possible to connect to the server using the ssh keys alone. There is still one step left to do though.
  • The final step is to allow the 'ansibleremote' user to complete 'sudo' actions on the remote host without needing to enter a password. We do this by adding a line to the /etc/sudoers file.

Here is the setup playbook in full.

- hosts: all
  user: root
  vars:
    createuser: 'ansibleremote'
    createpassword: 'myamazingpassword'
  tasks:
    - name: Setup | create user
      command: useradd -m {{ createuser }} creates=/home/{{ createuser }}
      sudo: true

    - name: Setup | set user password
      shell: usermod -p $(echo '{{ createpassword }}' | openssl passwd -1 -stdin) {{ createuser }}
      sudo: true

    - name: Setup | authorized key upload
      authorized_key: user={{ createuser }}
                      key="{{ lookup('file', '') }}"
                      path='/home/{{ createuser }}/.ssh/authorized_keys'
      sudo: true

    - name: Sudoers | update sudoers file and validate
      lineinfile: "dest=/etc/sudoers
                   line='{{ createuser }} ALL=(ALL) NOPASSWD: ALL'
                   regexp='^{{ createuser }} ALL'
                   validate='visudo -cf %s'"
      sudo: true
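The "set user password" task relies on openssl to produce a crypt-compatible hash before handing it to usermod -p. The hashing step can be tried on its own; 'myamazingpassword' here simply mirrors the createpassword variable:

```shell
# Hash a password the same way the playbook's shell task does.
# -1 selects the MD5-based crypt format, producing a string like $1$salt$hash.
hash=$(echo 'myamazingpassword' | openssl passwd -1 -stdin)
echo "$hash"
```

usermod -p expects exactly this pre-hashed form; passing a plain-text password there would silently create an unusable login.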

You can run this setup playbook using the following command.

ansible-playbook --inventory-file=hosts.ini setup.yml

Once complete, you can use the 'ansibleremote' user to run other Ansible playbooks and perform actions on hosts over the secure ssh key connection. There are a few ways of using the ssh key connection, but one way is by referencing it in your hosts.ini file in the following way.

[default]
your.host.ip.address ansible_ssh_user=ansibleremote ansible_ssh_private_key_file=privatekey
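Alternatively, the user and key can live in ansible.cfg rather than in the inventory; this is a sketch assuming the standard [defaults] options remote_user and private_key_file:

```ini
[defaults]
remote_user = ansibleremote
private_key_file = privatekey
```

With this in place, the inventory file only needs to list the hosts themselves.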


The "-n" flag to useradd doesn't seem to exist. Maybe you meant "-N"?
you should use "-m" not "-n" there
Could you explain why I should use -m and not -n here?
Philip Norton
Worked perfectly except the Sudoers part, which generated:

TASK: [Sudoers | update sudoers file and validate] ****************************
fatal: [XX.XXX.X.X] => a duplicate parameter was found in the argument string (ALL)
FATAL: all hosts have already failed -- aborting

Not been able to work out why so far. Thanks, Mike
On RHEL 5, useradd's options do not have -n. Maybe you are on another flavor that has it? There is "-N, --no-user-group Do not create a group with the same name as the user, but add the user to the group specified by the -g option or by the GROUP". Since a home path is given, it seems like that is not the intent, whereas there is another option: "-m, --create-home Create the user´s home directory if it does not exist." which seems to be what you want.
Thanks :) I've changed it to -m.
Philip Norton
Thanks, this is useful. Ansible has a module for setting up users, which will resolve some or all of the Linux flavour issues you're having doing this by hand. Also, you wouldn't want to check this playbook into version control with passwords specified. Among other solutions, you could prompt the user for the password when the playbook runs using vars_prompt.
I did this on my laptop running Fedora 21 and applied this playbook on a CentOS 6 machine. I didn't need this 'sshpass' tool.
Maybe it's just for osx and Ubuntu machines as that is what I've tried this on.
Philip Norton
Hi Philip, I am having this error: fatal: [] => Missing become password FATAL: all hosts have already failed -- aborting. What should I do?
Looks like you aren't supplying the sudo password. 'become' is the new 'sudo' feature in Ansible that elevates your user permissions so maybe it isn't getting the right variables?
Philip Norton
authorized_key: Use manage_dir=yes instead of manage_dir=no. If set, the module will create the directory, as well as set the owner and permissions of an existing directory. With strict sshd servers ansibleremote can't log in passwordless because the keys aren't accepted:

sshd debug3: secure_filename: checking '/home/ansibleremote/.ssh'
Authentication refused: bad ownership or modes for directory /home/ansibleremote/.ssh
debug1: restore_uid: 0/0
Failed publickey for ansibleremote from
I love this, thank you for posting it! @wekker: the directory /home/ansibleremote/.ssh should have 700 permissions and be owned by ansibleremote:ansibleremote. An easy way to fix that is to add a file resource to the playbook that sets the correct ownership and mode.

A few notes: I would not set a password for the ansibleremote user - it's one less way for somebody to break in. The password is not actually necessary for your purpose. If you really do need to log on as the ansibleremote user, you can instead log on as somebody else, then su - to become root, and then su - ansibleremote to become the ansibleremote user. Or you can SSH in from the Ansible server with Ansible's private key.

I also prefer not to turn off host key checking. And lastly, instead of editing /etc/sudoers I prefer to drop a new configuration file into /etc/sudoers.d (but not all distros support that). Overall, a great post. Thanks!

Really like your stuff man, thanks for sharing

