
Ansible SSH Authentication And Privilege Escalation

By Karthick

In this article, we are going to focus on two important Ansible concepts. The first is how SSH key-based and password-based authentication work in Ansible. The second is how to elevate privileges when working with ad hoc commands and playbooks.

I have a three-node lab setup running Ubuntu 20.04 LTS machines using VirtualBox and Vagrant. There is a detailed article about the lab setup which you can read from the link below.

Key Based Authentication in Ansible

The first thing to understand when learning Ansible is how communication happens between the controller and the managed nodes. Ansible uses the SSH protocol to connect to the managed nodes and run tasks.

Every time you run a playbook or an ad hoc command, you have to provide the SSH password for Ansible to authenticate to the managed nodes over SSH.

To eliminate this, it is recommended to create an SSH key pair and share the public key with all the nodes so Ansible can authenticate using the key pair.

I have created two key pairs named first_key and second_key using the below script for demonstration.

Create a shell script called create_keypair.sh with the following contents.

#!/usr/bin/env bash

# THIS SCRIPT WILL CREATE SSH KEY PAIR AND DISTRIBUTE ACROSS ALL NODES

read -p "Enter the name for the key : " KEY_NAME
ssh-keygen -b 2048 -t rsa -f /home/vagrant/.ssh/${KEY_NAME} -q -N ""

# LOOPING THROUGH AND DISTRIBUTING THE KEY

for val in controller managed1 managed2; do
    echo "-------------------- COPYING KEY TO ${val^^} NODE ------------------------------"
    sshpass -p 'vagrant' ssh-copy-id -f -i /home/vagrant/.ssh/${KEY_NAME}.pub -o "StrictHostKeyChecking=no" vagrant@$val
done

Give execute permission to the script and run it.

$ chmod +x path/to/create_keypair.sh
$ ./create_keypair.sh

I have created this script for my lab setup. You can edit the for loop and add your own managed node names accordingly. After running the script for both key names, my .ssh directory looks like this:

$ tree .ssh/
.ssh/
├── authorized_keys
├── first_key
├── first_key.pub
├── known_hosts
├── second_key
└── second_key.pub

When your keys are created with names other than the defaults (id_rsa, id_ed25519, and so on), ssh will only try the default key names and, if none of them work, it will fall back to password-based authentication.

debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Trying private key: /home/vagrant/.ssh/id_rsa
debug1: Trying private key: /home/vagrant/.ssh/id_dsa
debug1: Trying private key: /home/vagrant/.ssh/id_ecdsa
debug1: Trying private key: /home/vagrant/.ssh/id_ecdsa_sk
debug1: Trying private key: /home/vagrant/.ssh/id_ed25519
debug1: Trying private key: /home/vagrant/.ssh/id_ed25519_sk
debug1: Trying private key: /home/vagrant/.ssh/id_xmss
debug1: Next authentication method: password
vagrant@managed1's password:

In this case, you have to mention the private key file explicitly using the -i flag.

$ ssh -v -i /home/vagrant/.ssh/first_key vagrant@managed1

When you run an ad hoc command or a playbook, you can use the --key-file or --private-key flag and pass the private key file as the argument. In the examples below, you can see I have used both keys (first_key and second_key) to successfully communicate with the managed nodes.

# USING --key-file FLAG

$ ansible managed1 -m ping --key-file /home/vagrant/.ssh/second_key

managed1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
# USING --private-key FLAG

$ ansible managed1 -m ping --private-key /home/vagrant/.ssh/first_key

managed1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

You can also set the "private_key_file" parameter in the ansible.cfg configuration file, which will be applied to ad hoc commands and all playbook tasks.

$ vim ansible.cfg

Add the following line under the [defaults] section:

private_key_file = /home/vagrant/.ssh/first_key

Replace /home/vagrant/.ssh/first_key with your own.

[Image: Private Key File]

You can also add "ansible_ssh_private_key" parameter in your inventory file which will take higher precedence over ansible.cfg file. Below is how my inventory is now set up. Node managed1 will use "first_key" and managed2 will use "second_key".

[ubuntu1]
managed1 ansible_ssh_private_key_file=/home/vagrant/.ssh/first_key

[ubuntu2]
managed2 ansible_ssh_private_key_file=/home/vagrant/.ssh/second_key

Now if you run the ad hoc command or playbook again, both keys will authenticate successfully. You can increase the verbosity to verify that the appropriate key is used for each node.

$ ansible -vvv all -m ping
[Image: Different Key Pairs]

Now you should have a good understanding of how key-based authentication works in Ansible. It is important to understand the precedence when setting the parameter in different places: the command-line option takes the highest precedence, followed by the inventory file and then the ansible.cfg configuration file.

SSH Password-Based Authentication in Ansible

When you run any task, Ansible uses the current user on the controller node to communicate with the managed nodes over SSH. You can use the shell module to run the "whoami" command and verify the user name on the managed nodes. In my case, the user name is "vagrant". The vagrant user authenticates using the keys I set up in the previous section.

$ whoami
vagrant
$ ansible all -m shell -a "whoami"
managed2 | CHANGED | rc=0 >>
vagrant
managed1 | CHANGED | rc=0 >>
vagrant

If you wish to connect to the managed nodes as a different user, you can use the -u or --user flag and pass the user name as the argument. In the image below, I try to use the user "karthick", which has no SSH key set up and no keys distributed to the managed nodes, so the connection fails.

$ ansible all -m shell -a "whoami" -u karthick
$ ansible all -m shell -a "whoami" --user karthick
[Image: Connectivity Error]

To use password-based authentication, you can use the -k or --ask-pass flag. It will prompt you to enter the SSH password for the user(karthick). Make sure the password is the same across all the nodes for the user.

$ ansible all -m shell -a "whoami" -u karthick -k
$ ansible all -m shell -a "whoami" -u karthick --ask-pass
[Image: SSH Password Prompt]

You can also store the password in a file and pass the file name as an argument to the --connection-password-file or --conn-pass-file flag. This is not a recommended approach since you are storing the password in a plain text file. You can use Ansible Vault to encrypt the password file, but that is a topic for another article.

$ ansible all -m shell -a "whoami" -u karthick --connection-password-file pass.txt
$ ansible all -m shell -a "whoami" -u karthick --conn-pass-file pass.txt
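
If you go down this route, the file should contain nothing but the password on a single line. A minimal sketch (the password shown is a placeholder; keep the file permissions tight):

$ echo 'your-ssh-password' > pass.txt
$ chmod 600 pass.txt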

You can also pass the user name and password as parameters in the inventory file. Again, this is not the best way to store a password. Below is how my inventory file is now set up.

[ubuntu1]
managed1 ansible_ssh_private_key_file=/home/vagrant/.ssh/first_key ansible_user=vagrant

[ubuntu2]
managed2 ansible_user=karthick ansible_ssh_pass=password

$ ansible all -m shell -a "whoami" -u karthick

managed1 | CHANGED | rc=0 >>
vagrant
managed2 | CHANGED | rc=0 >>
karthick

Heads Up: I am running the examples as ad hoc commands, but the same flags apply to playbooks too.
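
For example, assuming a playbook named site.yml (a hypothetical name), the same flags carry over unchanged:

$ ansible-playbook site.yml -u karthick --ask-pass
$ ansible-playbook site.yml --private-key /home/vagrant/.ssh/first_key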

Privilege Escalation in Ansible

There are times when a task requires elevated privileges (root) to run successfully, package management being a classic example. You can only install, remove, or upgrade packages as the root user or with sudo privileges.

When you run ansible or ansible-playbook with the --help flag, you will find a privilege escalation section, as shown in the image below.

$ ansible --help
$ ansible-playbook --help
[Image: Privilege Escalation Options]

When you want to run any task with root privilege, you should use -b or --become flag.

$ ansible ubuntu1 -m service -a "name=sshd state=restarted" -b

managed1 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": true,
    "name": "sshd",
    "state": "started",

By default, sudo will be used as a privilege escalation method. You can change the method by setting the --become-method flag. You can get the list of supported methods by running the following command.

$ ansible-doc -t become -l

ansible.netcommon.enable     Switch to elevated permissions on a network device                                                             
community.general.doas       Do As user                                                                                                     
community.general.dzdo       Centrify's Direct Authorize                                                                                    
community.general.ksu        Kerberos substitute user                                                                                       
community.general.machinectl Systemd's machinectl privilege escalation                                                                      
community.general.pbrun      PowerBroker run                                                                                                
community.general.pfexec     profile based execution                                                                                        
community.general.pmrun      Privilege Manager run                                                                                          
community.general.sesu       CA Privileged Access Manager                                                                                   
community.general.sudosu     Run tasks using sudo su -                                                                                  
runas                        Run As user                                                                                                   
su                           Substitute User                                                                                               
sudo                         Substitute User DO 
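
For instance, assuming a managed node that is set up for su and whose root password you know (a hypothetical setup, not part of my lab), the method could be switched like this, with -K prompting for the root password:

$ ansible managed1 -m shell -a "whoami" -b --become-method su -K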

You may or may not need to give a sudo password for the managed nodes, depending on how the user is set up. In my case, the vagrant user is set up to run sudo without being prompted for a password.
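
For reference, Vagrant base boxes typically achieve this with a sudoers drop-in along the following lines (check your own distribution's file before relying on it):

# /etc/sudoers.d/vagrant
vagrant ALL=(ALL) NOPASSWD: ALL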

If your sudo user requires a password to work, then you should use the -K or --ask-become-pass flag which will prompt for the sudo password.

As you can see from the error below, when I try to run the task without providing a sudo password for the user "karthick", it fails with the message "Missing sudo password".

$ ansible ubuntu1 -m service -a "name=sshd state=restarted" -i inventory -u karthick -k -b

SSH password:
managed1 | FAILED! => {
    "msg": "Missing sudo password"
}
$ ansible ubuntu1 -m service -a "name=sshd state=restarted" -i inventory -u karthick -k -b -K
[Image: Use -K Flag For Sudo Password Prompt]

The sudo password can also be stored in a file and passed as an argument to the --become-password-file or --become-pass-file flag. Storing the password in a plain text file is not a recommended practice; you can use Ansible Vault to encrypt it.

$ ansible ubuntu1 -m service -a "name=sshd state=restarted" -i inventory -u karthick -k -b --become-password-file pass.txt

$ ansible ubuntu1 -m service -a "name=sshd state=restarted" -i inventory -u karthick -k -b --become-pass-file pass.txt

You can also include the "become" directive in the playbook at the task level or play level.

The below image represents how the "become" directive is used at the play level.

The "become" Directive At The Play Level
The "become" Directive At The Play Level

The below image represents how the "become" directive is used at the task level.

[Image: The "become" Directive At The Task Level]
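
And a minimal sketch of the same playbook with "become" moved down to the task level:

---
- name: Restart SSHD service
  hosts: managed1

  tasks:
    - name: Restart SSHD in managed1.anslab.com
      ansible.builtin.service:
        name: sshd
        state: restarted
      become: true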

As you can see from the output, the sshd service is restarted fine on the managed1 node.

$ ansible-playbook restart_service.yml

PLAY [Restart SSHD service] ***************************************************************************

TASK [Restart SSHD in managed1.anslab.com] ************************************************************
changed: [managed1]

PLAY RECAP ********************************************************************************************
managed1 : ok=1 changed=1    unreachable=0     failed=0 skipped=0    rescued=0 ignored=0   

The "become" directive can also be set at ansible.cfg configuration file and the inventory file. But it is recommended to set the directive in the playbook. You can get the different parameters for ansible.cfg file from the link below.

If you want to run a task as a different user after connecting to the managed nodes, you should use the --become-user flag. By default, it is set to the root user.
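
For example, assuming a user named "deploy" exists on the managed node (a hypothetical user) and the connecting user is allowed to sudo to it:

$ ansible managed1 -m shell -a "whoami" -b --become-user deploy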

Conclusion

In this article, we have seen how key-based and password-based authentication work in Ansible and the different flags available for authentication. We have also seen how privilege escalation works in Ansible.

To dive deeper, you should get a fair understanding of how the different privilege escalation methods work and set up your environment according to your needs without compromising security.
