Automating Your Homelab with Ansible: A Getting Started Guide
By Alex Torres | March 4, 2026 | Infrastructure & Automation
A practical Ansible tutorial covering installation, inventory files, your first playbook, real-world homelab playbooks for Docker, user management, and container deployment, plus tips on roles, Ansible Galaxy, and keeping your homelab under control.
Why I Stopped SSHing into Every Box Manually
I have a confession. For the first year of running my homelab, I managed every server by hand. Need to update packages? SSH into each machine, run apt update && apt upgrade, confirm, wait, move on to the next one. New user account? SSH in, useradd, copy SSH keys, set permissions. Repeat for every host. It was tedious, error-prone, and honestly a little embarrassing for someone who writes about technology for a living.
Then I discovered Ansible, and everything changed. Tasks that used to take me thirty minutes of repetitive typing now take about ten seconds of running a single command. In this Ansible tutorial, I am going to walk you through the exact path I followed, from zero Ansible knowledge to a fully automated homelab. If you have been thinking about infrastructure automation but felt intimidated, this is the guide I wish I had when I started.
What Is Ansible (and Why It Is Perfect for Homelabs)
Ansible is an open-source configuration management and automation tool originally developed by Michael DeHaan and now maintained by Red Hat. What makes it different from tools like Puppet or Chef comes down to two key design decisions:
- Agentless architecture — Ansible does not require any software to be installed on the machines it manages. It connects over plain SSH (or WinRM for Windows hosts), runs its tasks, and disconnects. No daemons, no agents, no extra attack surface.
- Declarative YAML playbooks — you describe the desired state of your systems in simple YAML files called playbooks. Ansible figures out what needs to change and only makes the necessary adjustments. This concept is called idempotency, and it means you can run the same playbook ten times without breaking anything.
For a homelab, these properties are a perfect fit. You probably have a handful of machines — maybe a Proxmox host, a couple of Ubuntu VMs, a Raspberry Pi running Pi-hole, a NAS. You do not want to install and maintain agent software on each of them. You just want to point Ansible at your inventory, run a playbook, and walk away. That is exactly how it works.
Installing Ansible on Your Control Node
Ansible runs from a single control node — the machine where you write and execute your playbooks. This can be your daily-driver laptop, a dedicated management VM, or even a container. The managed nodes (the servers Ansible configures) need nothing beyond an SSH server and Python, both of which are present on virtually every Linux installation by default.
Ubuntu / Debian
sudo apt update
sudo apt install -y ansible
If you want the latest version rather than what ships in your distro’s repositories, use the official PPA:
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install -y ansible
Fedora / RHEL / Rocky Linux
sudo dnf install -y ansible
macOS
brew install ansible
pip (Any Platform)
python3 -m pip install --user ansible
Verify the installation by running:
ansible --version
You should see output showing Ansible core 2.17 or newer (as of March 2026). That is all you need on the control node. No services to start, no configuration files to create yet. Ansible is ready to go.
Setting Up Your Inventory File
Before Ansible can manage your homelab, it needs to know what machines exist. This is where the inventory file comes in. An inventory is simply a list of hosts organized into groups. You can write it in INI format or YAML. I prefer INI for small homelabs because it is dead simple to read.
Create a project directory and an inventory file:
mkdir -p ~/ansible-homelab
cd ~/ansible-homelab
nano inventory.ini
Here is an example inventory that reflects a typical homelab setup:
[proxmox]
pve01 ansible_host=192.168.1.10
[docker_hosts]
docker01 ansible_host=192.168.1.20
docker02 ansible_host=192.168.1.21
[pihole]
pihole01 ansible_host=192.168.1.5
[raspberry_pis]
pihole01
[all:vars]
ansible_user=alex
ansible_ssh_private_key_file=~/.ssh/id_ed25519
A few things to notice here. Each section in square brackets is a group. A host can belong to multiple groups (notice pihole01 is in both pihole and raspberry_pis). The [all:vars] section defines variables that apply to every host, like the SSH user and key file. You can also set per-host or per-group variables to override these defaults.
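As a sketch of a per-group override: if the Raspberry Pi runs under the stock `pi` account (an assumption — substitute whatever user you actually created), a group-vars section overrides the `[all:vars]` defaults for just that group:

```ini
# Applies only to hosts in the raspberry_pis group,
# overriding the ansible_user set in [all:vars]
[raspberry_pis:vars]
ansible_user=pi
```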
Test connectivity to all your hosts with a quick ad-hoc command:
ansible all -i inventory.ini -m ping
If everything is set up correctly (SSH keys distributed, Python installed on each host), you will see green SUCCESS output for every machine. If a host fails, the error message will tell you exactly what went wrong — usually a missing SSH key or a firewall blocking port 22.
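Ad-hoc commands are useful for more than connectivity tests — any module can be invoked this way. A couple of examples using the group names from the inventory above:

```shell
# Check disk usage on every Docker host
ansible docker_hosts -i inventory.ini -m ansible.builtin.command -a "df -h /"

# Dump gathered facts (OS, IPs, hardware) for a single host
ansible pihole01 -i inventory.ini -m ansible.builtin.setup
```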
Your First Ansible Playbook: Update Everything
Let us start with the most universally useful playbook: updating all packages on every server. This is the task that first sold me on Ansible, because it replaced my painful SSH-into-each-box routine with a single command.
Create a file called update-all.yml:
---
- name: Update all packages on Debian/Ubuntu hosts
  hosts: all
  become: true
  tasks:
    - name: Update apt cache
      ansible.builtin.apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Upgrade all packages
      ansible.builtin.apt:
        upgrade: dist
        autoremove: yes
        autoclean: yes

    - name: Check if reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot if required
      ansible.builtin.reboot:
        msg: "Rebooting after package updates"
        reboot_timeout: 300
      when: reboot_required.stat.exists
Run it with:
ansible-playbook -i inventory.ini update-all.yml
Let me break down what is happening. The hosts: all line tells Ansible to run this playbook against every host in your inventory. The become: true directive means Ansible will use sudo to escalate privileges, since package management requires root. Each task uses a module — apt for package management, stat to check for a file, and reboot to restart the machine. The when conditional on the last task ensures Ansible only reboots if the system actually needs it.
This is the beauty of Ansible playbooks. You describe the desired end state, not a sequence of shell commands. Ansible handles the logic of checking current state and making only the necessary changes.
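You also do not have to run a playbook against everything it targets. The `--limit` flag restricts the run to a subset of hosts, which is handy for trying a change on one machine before rolling it out everywhere:

```shell
# Dry-run the update playbook against a single host first
ansible-playbook -i inventory.ini update-all.yml --limit docker01 --check
```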
Practical Playbook: Install Docker on Your Servers
Almost every homelab eventually needs Docker. Here is a playbook that installs Docker Engine the official way, adds your user to the docker group, and starts the service. Save it as install-docker.yml:
---
- name: Install Docker Engine on Ubuntu hosts
  hosts: docker_hosts
  become: true
  vars:
    docker_user: alex
  tasks:
    - name: Install prerequisite packages
      ansible.builtin.apt:
        name:
          - ca-certificates
          - curl
          - gnupg
          - lsb-release
        state: present
        update_cache: yes

    - name: Create keyrings directory
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: "0755"

    - name: Add Docker GPG key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg
        dest: /etc/apt/keyrings/docker.asc
        mode: "0644"

    - name: Add Docker repository
      ansible.builtin.apt_repository:
        repo: >-
          deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc]
          https://download.docker.com/linux/ubuntu
          {{ ansible_distribution_release }} stable
        state: present

    - name: Install Docker packages
      ansible.builtin.apt:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
          - docker-compose-plugin
        state: present
        update_cache: yes

    - name: Add user to docker group
      ansible.builtin.user:
        name: "{{ docker_user }}"
        groups: docker
        append: yes

    - name: Enable and start Docker service
      ansible.builtin.systemd:
        name: docker
        enabled: yes
        state: started
Notice how the playbook targets only the docker_hosts group from our inventory, not every machine. This is the power of inventory groups — you can selectively apply configurations to exactly the hosts that need them.
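After the playbook finishes, a quick ad-hoc check confirms the install succeeded on every host in the group:

```shell
ansible docker_hosts -i inventory.ini -m ansible.builtin.command -a "docker --version"
```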
Practical Playbook: Configure Users and SSH Hardening
Security matters even in a homelab. Here is a playbook that creates a standard admin user, deploys your SSH public key, and locks down the SSH daemon configuration. Save it as configure-users.yml:
---
- name: Configure admin users and harden SSH
  hosts: all
  become: true
  vars:
    admin_users:
      - name: alex
        ssh_key: "ssh-ed25519 AAAAC3Nz... alex@workstation"
      - name: backup-operator
        ssh_key: "ssh-ed25519 AAAAC3Nz... backup@mgmt"
  tasks:
    - name: Create admin user accounts
      ansible.builtin.user:
        name: "{{ item.name }}"
        shell: /bin/bash
        groups: sudo
        append: yes
        create_home: yes
      loop: "{{ admin_users }}"

    - name: Deploy authorized SSH keys
      ansible.posix.authorized_key:
        user: "{{ item.name }}"
        key: "{{ item.ssh_key }}"
        state: present
        exclusive: no
      loop: "{{ admin_users }}"

    - name: Harden SSH daemon configuration
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
        validate: "sshd -t -f %s"
      loop:
        - { regexp: '^#?PermitRootLogin', line: 'PermitRootLogin no' }
        - { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication no' }
        - { regexp: '^#?X11Forwarding', line: 'X11Forwarding no' }
        - { regexp: '^#?MaxAuthTries', line: 'MaxAuthTries 3' }
      notify: Restart SSH

  handlers:
    - name: Restart SSH
      ansible.builtin.systemd:
        name: sshd
        state: restarted
This playbook introduces two important concepts. First, loops — the loop keyword lets you iterate over a list, so you can create multiple users without duplicating tasks. Second, handlers — the notify directive tells Ansible to trigger the “Restart SSH” handler, but only if the task actually changed something. If the SSH config is already correct, Ansible skips the restart. Efficient and safe.
Practical Playbook: Deploy Containers with Ansible
Once Docker is installed, you can use Ansible to deploy and manage your containers. Here is a playbook that deploys an Uptime Kuma monitoring stack and a Portainer instance. Save it as deploy-containers.yml:
---
- name: Deploy homelab containers
  hosts: docker01
  become: true
  tasks:
    - name: Create application directories
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        owner: "1000"
        group: "1000"
        mode: "0755"
      loop:
        - /opt/uptime-kuma
        - /opt/portainer

    - name: Deploy Uptime Kuma
      community.docker.docker_container:
        name: uptime-kuma
        image: louislam/uptime-kuma:1
        state: started
        restart_policy: unless-stopped
        ports:
          - "3001:3001"
        volumes:
          - /opt/uptime-kuma:/app/data

    - name: Deploy Portainer
      community.docker.docker_container:
        name: portainer
        image: portainer/portainer-ce:latest
        state: started
        restart_policy: unless-stopped
        ports:
          - "9443:9443"
          - "8000:8000"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - /opt/portainer:/data
This playbook uses the community.docker.docker_container module, which is part of the community.docker collection. You will need to install it first:
ansible-galaxy collection install community.docker
The container module is idempotent, just like everything else in Ansible. If Uptime Kuma is already running with the exact configuration you specified, Ansible will report “ok” and move on. If you change a port mapping or volume, it will recreate the container with the new settings.
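If you want image updates to be part of the same run, a task using the `community.docker.docker_image` module can pull the image before the container task runs — a sketch, and note that pinning an explicit version tag is generally the safer habit than chasing `latest`:

```yaml
# Pull (or refresh) the image ahead of the container task
- name: Pull the Uptime Kuma image
  community.docker.docker_image:
    name: louislam/uptime-kuma:1
    source: pull
```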
Organizing with Roles
Once you have more than a handful of playbooks, things start to get messy. This is where roles come in. A role is a structured way to package related tasks, variables, files, templates, and handlers into a reusable unit.
Create a role skeleton with:
ansible-galaxy role init roles/docker
This generates a standard directory structure:
roles/docker/
  tasks/
    main.yml        # The actual tasks
  handlers/
    main.yml        # Handler definitions
  vars/
    main.yml        # Role variables
  defaults/
    main.yml        # Default variable values (lowest priority)
  templates/        # Jinja2 templates
  files/            # Static files to copy
  meta/
    main.yml        # Role metadata and dependencies
You would move the Docker installation tasks from our earlier playbook into roles/docker/tasks/main.yml, put default variable values into roles/docker/defaults/main.yml, and then reference the role in a clean, high-level playbook:
---
- name: Set up Docker hosts
  hosts: docker_hosts
  become: true
  roles:
    - docker
That is it. The playbook becomes a single clear declaration of intent, and all the implementation details live inside the role. When your homelab grows from five machines to fifteen, this organizational pattern keeps everything manageable.
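Role defaults also pair naturally with `group_vars`: a variable the role declares in `defaults/main.yml` can be overridden per group without touching the role itself. For example, in `group_vars/docker_hosts.yml` (the `docker_user` variable comes from the earlier playbook; any other names here are hypothetical):

```yaml
# group_vars/docker_hosts.yml — overrides role defaults
# for every host in the docker_hosts group
docker_user: alex
```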
My current homelab Ansible project looks like this:
ansible-homelab/
  inventory.ini
  site.yml              # Master playbook that imports everything
  group_vars/
    all.yml             # Variables shared across all hosts
    docker_hosts.yml    # Variables specific to Docker hosts
  roles/
    common/             # Base packages, NTP, timezone, MOTD
    docker/             # Docker Engine installation
    monitoring/         # Prometheus node exporter, Uptime Kuma
    security/           # SSH hardening, fail2ban, UFW rules
    containers/         # Application container deployments
Ansible Galaxy: Do Not Reinvent the Wheel
Ansible Galaxy is a public repository of community-contributed roles and collections. Before you write a role from scratch, check if someone has already built and battle-tested one. Chances are good, especially for common tasks.
Search for roles on the command line:
ansible-galaxy search docker --platforms Ubuntu
Or install a well-known role directly:
ansible-galaxy install geerlingguy.docker
Jeff Geerling’s roles are particularly popular in the homelab community and are an excellent reference for best practices. You can also define your role dependencies in a requirements.yml file:
---
roles:
  - name: geerlingguy.docker
    version: "7.5.0"
  - name: geerlingguy.security
    version: "3.1.0"
collections:
  - name: community.docker
    version: ">=3.12.0"
  - name: ansible.posix
    version: ">=1.6.0"
Install everything at once with:
ansible-galaxy install -r requirements.yml
This is the Ansible equivalent of a package.json or requirements.txt — it pins your dependencies so that anyone (including future you) can reproduce your exact setup.
Tips for Ansible in a Homelab
After running Ansible in my homelab for over a year, here are the lessons I have picked up that I wish someone had told me on day one.
1. Use ansible.cfg to Save Typing
Create an ansible.cfg file in your project root so you do not have to pass -i inventory.ini every single time. (One caveat: host_key_checking = False trades away SSH host verification for convenience — a reasonable call on a trusted home LAN, but not something to copy into a production setup.)
[defaults]
inventory = inventory.ini
remote_user = alex
host_key_checking = False
retry_files_enabled = False
stdout_callback = yaml
[privilege_escalation]
become = True
become_method = sudo
become_ask_pass = False
Now you can just run ansible-playbook update-all.yml without any extra flags.
2. Use --check and --diff Before You Commit
Ansible’s dry-run mode is your safety net. The --check flag simulates the playbook without making any changes, and --diff shows you exactly what would change in configuration files:
ansible-playbook configure-users.yml --check --diff
I run every new playbook in check mode at least once before letting it touch production. It has saved me from several self-inflicted outages.
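Check mode also pairs well with `--syntax-check`, which validates the YAML and playbook structure without contacting any hosts at all — a cheap first gate before a dry run:

```shell
ansible-playbook configure-users.yml --syntax-check
```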
3. Use Tags to Run Subsets of Tasks
When you only want to run part of a playbook, tags are invaluable:
---
- name: Common server setup
  hosts: all
  become: true
  tasks:
    - name: Install base packages
      ansible.builtin.apt:
        name:
          - htop
          - vim
          - tmux
          - curl
          - git
        state: present
      tags: packages

    - name: Set timezone
      community.general.timezone:
        name: America/New_York
      tags: timezone
Run only the packages task:
ansible-playbook common.yml --tags packages
4. Encrypt Sensitive Data with Ansible Vault
Never put passwords, API keys, or secrets in plain-text YAML files, even in a private homelab repository. Ansible Vault encrypts variables so they can live safely in version control:
ansible-vault create group_vars/all/vault.yml
Inside the vault file, define your secrets:
vault_pihole_password: "supersecretpassword"
vault_smtp_api_key: "SG.xxxxxxxxxxxx"
Reference them in your playbooks like any other variable, and pass the vault password at runtime:
ansible-playbook site.yml --ask-vault-pass
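A common pattern is to keep the encrypted file limited to `vault_`-prefixed variables and reference them from an unencrypted vars file, so you can grep for variable names without decrypting anything. For example, alongside the vault file above, a plain `group_vars/all/main.yml` might contain:

```yaml
# group_vars/all/main.yml — unencrypted, safe to read and grep;
# the actual values live in the encrypted vault.yml
pihole_password: "{{ vault_pihole_password }}"
smtp_api_key: "{{ vault_smtp_api_key }}"
```

You can also store the vault password in a git-ignored file and pass `--vault-password-file .vault_pass` instead of typing it on every run.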
5. Keep Everything in Git
Your Ansible project directory is infrastructure as code. Treat it that way. Initialize a Git repository, commit every change, and write meaningful commit messages. When something breaks at 2 AM, you will be grateful you can git log to see exactly what changed and git revert to undo it.
cd ~/ansible-homelab
git init
echo "*.retry" >> .gitignore
echo ".vault_pass" >> .gitignore
git add -A
git commit -m "Initial homelab Ansible setup"
6. Start Small, Automate Incrementally
You do not need to automate your entire homelab in one weekend. Start with a single playbook that handles package updates. Then add user management. Then Docker installation. Each time you find yourself SSHing into a box and running commands manually, ask yourself: could this be a playbook? If the answer is yes, write it. Over time, your collection of playbooks becomes a complete, reproducible blueprint of your entire homelab infrastructure.
Putting It All Together: The Master Playbook
Once you have roles and playbooks for each concern, tie them together with a master site.yml that you can run to configure your entire homelab from scratch:
---
- name: Apply common configuration to all hosts
  hosts: all
  become: true
  roles:
    - common
    - security

- name: Set up Docker hosts
  hosts: docker_hosts
  become: true
  roles:
    - docker
    - containers

- name: Configure monitoring
  hosts: all
  become: true
  roles:
    - monitoring
Run the whole thing:
ansible-playbook site.yml
This single command will bring every machine in your homelab to its desired state. New server? Add it to the inventory, run site.yml, and it is configured identically to the rest. Need to rebuild after a disk failure? Same playbook, same result. That is the real promise of infrastructure automation — your servers become cattle, not pets.
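When you do add that new server, you can scope the run to just the new host (the hostname here is a placeholder — use whatever you added to the inventory):

```shell
ansible-playbook site.yml --limit newserver01
```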
Where to Go from Here
This Ansible getting started guide has covered the core concepts, but there is a lot more to explore as your homelab grows:
- Dynamic inventory — if you use Proxmox or Terraform, Ansible can automatically discover your VMs instead of relying on a static inventory file.
- Ansible Semaphore — a web UI for running playbooks on a schedule, with logging, notifications, and team access control. Think of it as a self-hosted Ansible Tower alternative.
- Molecule — a testing framework for Ansible roles. It spins up containers, applies your role, and verifies the result automatically. Overkill for a small homelab, but invaluable if you publish roles to Galaxy.
- AWX / Ansible Automation Platform — the upstream open-source project behind Red Hat’s commercial Ansible Tower. Full-featured but heavy for homelab use.
- Event-driven Ansible — trigger playbooks automatically based on events like a webhook, a monitoring alert, or a file change.
My honest advice is to resist the urge to over-engineer. The simple setup I described in this guide — a flat directory of roles, a static inventory, and a master site.yml — will serve a homelab of ten to twenty machines without breaking a sweat. Add complexity only when you feel the pain that justifies it.
Ansible turned my homelab from a collection of snowflake servers into a reproducible, version-controlled system that I can rebuild from scratch in minutes. It will do the same for yours. Start with that update playbook, get comfortable with the YAML syntax, and build from there. Your future self will thank you the first time a drive dies and you realize that rebuilding the server is just one command away.