Getting Started with Docker: A Complete Beginner's Guide

I remember the first time I tried to set up a self-hosted application on my home server. I spent an entire weekend
wrestling with dependency conflicts, broken Python versions, and configuration files that seemed to have a personal
vendetta against me. Then a friend said three words that changed everything: “Just use Docker.”

If you’ve been hearing about Docker and containerization but haven’t taken the plunge yet, this guide is for you.
I’m going to walk you through everything from installing Docker to running your first container, writing a
Dockerfile, and even setting up multi-container apps with Docker Compose. By the end of this tutorial,
you’ll have a solid foundation to start containerizing your own projects.

Why Containers Matter (Especially for Homelabbers)

Before we dive into the technical stuff, let me explain why Docker containers have become so essential, particularly
if you’re running a homelab or self-hosting applications.

Think of a container as a lightweight, self-contained package that includes everything an application needs to run:
the code, the runtime, libraries, and system tools. Unlike a virtual machine, a container doesn’t need its own
operating system, which makes it incredibly efficient. You can run dozens of containers on hardware that would
struggle with a handful of VMs.

For homelabbers and self-hosters, containerization solves some very real headaches:

  • No more dependency hell. Each container is isolated. Your media server’s Python 3.9 requirement won’t conflict with your home automation tool that needs Python 3.12.
  • Easy deployment. Most popular self-hosted apps (Nextcloud, Plex, Home Assistant, Pi-hole) offer official Docker images. Getting them running takes minutes, not hours.
  • Portability. Moving your entire setup to a new machine? Copy your Docker Compose files and volumes, and you’re done.
  • Clean rollbacks. Something broke after an update? Just pull the previous image version and restart the container.

Convinced? Good. Let’s get Docker installed.

Installing Docker

Docker runs on Linux, macOS, and Windows. I’ll focus on Linux here since that’s what most homelab servers run, but
the concepts apply everywhere.

Installing Docker on Ubuntu / Debian

The most reliable way to install Docker on Ubuntu or Debian is from Docker’s official apt repository. Open a terminal and run:

# Update your package index
sudo apt update

# Install prerequisites
sudo apt install -y ca-certificates curl gnupg

# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the Docker repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Once the installation finishes, verify it’s working:

sudo docker --version
# Docker version 27.x.x, build xxxxxxx

Running Docker Without sudo

Typing sudo before every Docker command gets old fast. Add your user to the docker group
so you can run Docker commands directly:

sudo usermod -aG docker $USER

Log out and back in (or run newgrp docker) for the change to take effect. Now you can run
docker commands without sudo.

Running Your First Container

Let’s not overthink this. The traditional first step in any beginner’s Docker guide is to run the hello-world
container:

docker run hello-world

You should see a message confirming that Docker pulled the image, created a container, and ran it successfully.
That single command did a lot behind the scenes:

  1. Docker checked if the hello-world image existed locally.
  2. It didn’t, so Docker pulled it from Docker Hub (the default image registry).
  3. Docker created a new container from that image.
  4. Docker started the container, which printed a message and exited.

Now let’s do something more interesting. Let’s spin up an Nginx web server:

docker run -d -p 8080:80 --name my-webserver nginx

Let me break down those flags:

  • -d runs the container in detached mode (in the background).
  • -p 8080:80 maps port 8080 on your host to port 80 inside the container.
  • --name my-webserver gives the container a friendly name instead of a random one.
  • nginx is the image to use.

Open your browser and navigate to http://localhost:8080. You should see the Nginx welcome page.
Congratulations, you just deployed a web server in about three seconds.

Essential Docker Commands

Here are the commands you’ll use constantly. Bookmark this section:

# List running containers
docker ps

# List ALL containers (including stopped ones)
docker ps -a

# Stop a running container
docker stop my-webserver

# Start a stopped container
docker start my-webserver

# Remove a container (must be stopped first)
docker rm my-webserver

# List downloaded images
docker images

# Remove an image
docker rmi nginx

# View container logs
docker logs my-webserver

# Execute a command inside a running container
docker exec -it my-webserver bash

That last command is especially useful. It opens an interactive shell inside the container, letting you poke
around, check config files, or debug issues.

Writing Your First Dockerfile

Running pre-built images is great, but the real power of Docker comes when you build your own images. A
Dockerfile is a simple text file that tells Docker how to assemble an image, step by step.

Let’s build a custom image for a basic Node.js application. First, create a project directory and add a simple
app:

mkdir my-docker-app && cd my-docker-app

Create a file called app.js:

const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from my Docker container!\n');
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});

Create a package.json:

{
  "name": "my-docker-app",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  }
}

Now create a file called Dockerfile (no extension) in the same directory:

# Start from the official Node.js 22 image
FROM node:22-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy package.json first (for better layer caching)
COPY package.json .

# Install dependencies
RUN npm install

# Copy the rest of the application code
COPY . .

# Tell Docker this container listens on port 3000
EXPOSE 3000

# Define the command to run when the container starts
CMD ["npm", "start"]

A few things to notice about this Dockerfile:

  • We use node:22-alpine as our base image. The alpine variant is much smaller than the full image, keeping our container lean.
  • We copy package.json separately before the rest of the code. This takes advantage of Docker’s layer caching — if your dependencies haven’t changed, Docker won’t reinstall them on every build.
  • EXPOSE is documentation; it doesn’t actually publish the port. You still need -p when running the container.
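If your project has a package-lock.json (our tiny example doesn’t, since it has no dependencies), you can make that caching layer even more effective by copying both manifest files and using npm ci, which installs exactly what the lockfile specifies. A sketch of the variant:

```dockerfile
# Copy both manifests so the dependency layer stays cached until they change
COPY package.json package-lock.json ./

# npm ci installs the exact locked versions; --omit=dev skips dev dependencies
RUN npm ci --omit=dev
```

This replaces the COPY package.json and RUN npm install lines; everything else in the Dockerfile stays the same.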

Build and run your custom image:

# Build the image and tag it as "my-app"
docker build -t my-app .

# Run it
docker run -d -p 3000:3000 --name my-app-container my-app

Visit http://localhost:3000 and you should see “Hello from my Docker container!” Simple, clean, and
reproducible.

Persistent Data with Volumes

Here’s something that trips up a lot of Docker beginners: containers are ephemeral. When you remove a container,
any data stored inside it is gone. That’s a problem if you’re running a database or any app that needs to keep
data between restarts.

The solution is volumes. Volumes let you persist data outside the container’s filesystem.

# Create a named volume
docker volume create my-data

# Run a container with the volume mounted
docker run -d \
  -p 5432:5432 \
  --name my-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v my-data:/var/lib/postgresql/data \
  postgres:16

Now even if you stop and remove the my-postgres container, the database files are safely stored in
the my-data volume. Spin up a new container with the same volume, and all your data is right where
you left it.

You can also bind-mount a host directory directly:

docker run -d \
  -p 8080:80 \
  -v /home/alex/my-website:/usr/share/nginx/html \
  nginx

This mounts a folder from your host machine into the container. Edit files on your host, and the changes are
immediately reflected inside the container. I use this all the time during development.
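If you want to try this yourself, set up a minimal site directory first. The /tmp path below is just an example; use any directory you like:

```shell
# Create a minimal site to serve through the bind mount
mkdir -p /tmp/my-website

# A one-line index page; nginx serves index.html by default
cat > /tmp/my-website/index.html <<'EOF'
<h1>Served from a bind mount</h1>
EOF
```

Then point the -v flag at that directory instead: -v /tmp/my-website:/usr/share/nginx/html.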

Docker Compose: Managing Multi-Container Apps

Running a single container with a long docker run command is manageable. But what happens when your
application needs a web server, a database, and a cache? Typing out three long commands with all the right flags
every time you want to start your stack is painful and error-prone.

Enter Docker Compose. It lets you define your entire application stack in a single YAML file and
manage it with simple commands. If you installed Docker using the steps above, you already have the
docker compose plugin.

Let’s set up a practical example: a WordPress site with a MySQL database. Create a file called
docker-compose.yml:

services:
  wordpress:
    image: wordpress:latest
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wpuser
      WORDPRESS_DB_PASSWORD: wppassword
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp-data:/var/www/html
    depends_on:
      - db

  db:
    image: mysql:8.0
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wpuser
      MYSQL_PASSWORD: wppassword
      MYSQL_ROOT_PASSWORD: rootpassword
    volumes:
      - db-data:/var/lib/mysql

volumes:
  wp-data:
  db-data:

Now bring the whole stack up with a single command:

# Start all services in the background
docker compose up -d

# Check the status of your services
docker compose ps

# View logs from all services
docker compose logs

# Follow logs in real time
docker compose logs -f

# Stop everything
docker compose down

# Stop everything AND delete the volumes (careful with this one)
docker compose down -v

That’s it. With one file and one command, you have a fully functional WordPress site with a MySQL backend. The two
containers can talk to each other using their service names (wordpress can reach the database at
db:3306) because Docker Compose creates a shared network for them automatically.

This is where Docker Compose really shines for homelabbers. You can define your entire self-hosted stack —
reverse proxy, media server, password manager, monitoring tools — in a set of Compose files and bring
everything up or down with a single command.

Best Practices for Docker Beginners

Before I send you off into the container wilderness, here are some lessons I’ve learned (sometimes the hard way):

1. Always Use Specific Image Tags

Don’t just use nginx or postgres. Use nginx:1.27 or
postgres:16-alpine. The latest tag can change without warning, and suddenly your
application breaks because a new version introduced a breaking change.
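Applied to our WordPress stack, the pinned version might look like this (the exact version numbers here are illustrative; check the tags published on Docker Hub before copying them):

```yaml
services:
  wordpress:
    image: wordpress:6.7-php8.3-apache   # exact tag instead of "latest"
  db:
    image: mysql:8.0.40                  # pinned down to the patch release
```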

2. Keep Images Small

Use alpine-based images when possible. They’re typically 5-10x smaller than their full counterparts.
Smaller images mean faster pulls, less disk usage, and a smaller attack surface.

3. Don’t Store Sensitive Data in Images

Never put passwords, API keys, or certificates directly in your Dockerfile. Use environment variables or Docker
secrets instead. Anyone who can pull your image can inspect its layers and find hardcoded secrets.
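With Docker Compose, one common pattern is keeping secrets in a .env file next to the compose file (and listing .env in both .gitignore and .dockerignore). Compose reads .env automatically and substitutes the variables at startup. A sketch with placeholder values:

```yaml
# docker-compose.yml excerpt
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_PASSWORD: ${DB_PASSWORD}   # substituted from .env when you run "docker compose up"
```

The matching .env file would contain a single line like DB_PASSWORD=change-me, and never gets baked into any image.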

4. Use .dockerignore

Just like .gitignore, a .dockerignore file prevents unnecessary files from being
included in your build context. This speeds up builds and keeps your images clean.

node_modules
.git
.env
*.md
Dockerfile
docker-compose.yml

5. One Process Per Container

Resist the temptation to run multiple services in a single container. Need a web server and a database? That’s two
containers. This makes your setup more modular, easier to debug, and simpler to scale.

Troubleshooting Common Issues

You will hit snags. Everyone does. Here are the most common ones I see from beginners:

  • “Permission denied” when running docker commands.
    You probably forgot to add your user to the docker group or haven’t logged out and back in yet.
  • “Port is already allocated.”
    Another process (or container) is using that port. Check with docker ps or
    sudo lsof -i :8080 and either stop the conflicting service or use a different port mapping.
  • Container starts and immediately exits.
    Check the logs with docker logs container-name. The application inside is probably crashing. Nine
    times out of ten, it’s a missing environment variable or a misconfigured path.
  • “No space left on device.”
    Docker images and containers accumulate over time. Run docker system prune -a to clean up unused
    images, containers, and networks. Be aware this removes everything not currently in use.

What’s Next?

You’ve covered a lot of ground in this guide. You can install Docker, run containers, write Dockerfiles, and
orchestrate multi-container applications with Docker Compose. That’s a genuinely solid foundation.

When you’re ready to level up, here are the topics I’d explore next:

  • Docker networking in depth — custom bridge networks, host networking, and connecting containers across hosts.
  • Reverse proxies with Traefik or Nginx Proxy Manager — essential for exposing multiple services on a single host with SSL.
  • Multi-stage builds — dramatically reduce image sizes by separating the build environment from the runtime environment.
  • Container orchestration with Docker Swarm or Kubernetes — for when a single host isn’t enough.
  • Health checks and restart policies — make your containers self-healing and production-ready.
  • CI/CD pipelines with Docker — automate building, testing, and deploying your containerized applications.
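As a small taste of the health-check topic from the list above, here is what one looks like in a Compose file. The endpoint, timings, and the assumption that curl exists inside the image are all illustrative; adjust them to whatever the image actually ships:

```yaml
services:
  web:
    image: nginx:1.27
    restart: unless-stopped              # bring the container back after crashes or reboots
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 30s                      # how often to probe
      timeout: 5s                        # how long each probe may take
      retries: 3                         # failures before the container is marked unhealthy
```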

Containerization has fundamentally changed how I manage my homelab and development workflow. What used to take
hours of manual setup now takes minutes. What used to break every time I updated my OS now runs in perfect
isolation. Once you start thinking in containers, there’s really no going back.

Got questions or ran into an issue? Drop a comment below, and I’ll do my best to help. Happy containerizing!

— Alex Torres, IGNA Online
