
Docker Demystified: The Practical Guide for 2022

8 min read · docker · devops · tutorial · containers

I put off learning Docker for way too long. It felt like this big DevOps thing that I didn't need to worry about as a developer. Then I actually sat down and learned it — and realized it's not that complicated. The core concepts are surprisingly simple once you strip away the jargon.

This is everything I learned, organized into the guide I wish existed when I started.

What Even Is Docker?

Docker lets you package your app and all its dependencies into a container — a lightweight, isolated environment that runs the same everywhere. Your laptop, your teammate's machine, production. No more "works on my machine."

The two key concepts:

  • Image — a blueprint. Think of it like a class in OOP.
  • Container — a running instance of an image. Think of it like an object.

You find pre-built images on Docker Hub, like the official nginx image. You can use them directly or build on top of them.

Your First Container

Pulling an Image

docker pull nginx

This downloads the nginx image from Docker Hub. You can see all your local images with:

docker images

Running It

# Don't do this — it hangs your terminal
docker run nginx:latest

# Do this instead — "-d" detaches the process
docker run -d nginx:latest

The -d flag runs the container in the background and returns the container ID. But you still can't access it from your browser — containers are isolated by default.

Port Mapping

To expose the container to your host machine, map the ports:

docker run -d -p 8080:80 nginx:latest

This maps host port 8080 to container port 80. Now hit http://localhost:8080/ and you'll see the nginx welcome page. 🎉
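If you'd rather check from the terminal, curl works too (assuming the container above is still running):

```shell
# request just the response headers from the mapped host port
curl -I http://localhost:8080/

# a "200 OK" status line with a "Server: nginx" header means the mapping works
```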

Naming Containers

Docker auto-generates names like quirky_darwin. Give your containers real names:

docker run --name website -d -p 8080:80 nginx:latest

Container Lifecycle Commands

These are the commands you'll use every day:

# List running containers
docker ps

# Stop a container (keeps it around)
docker stop <container_id or name>

# Start a stopped container
docker start <container_id or name>

# Remove a container
docker rm <container_id or name>

# Nuclear option: remove ALL stopped containers
docker rm $(docker container ls -aq)

# Force remove everything (including running)
docker rm -f $(docker container ls -aq)

docker ps and docker container ls do the same thing. ps stands for "process status" — it's the one you'll actually type.

Volumes: Sharing Files with Containers

Volumes mount a directory from your host into the container (when the source is a host directory like this, it's technically called a bind mount, but everyone calls it a volume). This is huge for development — you edit files locally and see changes instantly.

docker run --name web1 \
  -v /path/to/your/site:/usr/share/nginx/html:ro \
  -d -p 8080:80 nginx

Breaking down the -v flag: <host_path>:<container_path>:ro

  • The host path is your local directory
  • The container path is where it gets mounted inside the container
  • :ro means read-only — the container can read but not modify your files

Remove :ro if you want two-way syncing — changes inside the container will reflect on your host too.
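Here's a concrete sketch from start to finish (the `~/docker-site` directory and its contents are made up for illustration):

```shell
# create a tiny site on the host
mkdir -p ~/docker-site
echo '<h1>Hello from a volume</h1>' > ~/docker-site/index.html

# serve it with nginx, mounted read-only
docker run --name web1 -d -p 8080:80 \
  -v ~/docker-site:/usr/share/nginx/html:ro \
  nginx

# now edit index.html locally and refresh http://localhost:8080/
# to see the change, with no rebuild or restart
```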

Poking Around Inside a Container

You can open a shell inside any running container:

docker exec -it <container_name> bash

The -it flag gives you an interactive terminal. You can browse files, check configs, debug issues — everything you'd do in a normal terminal.
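For example, inside the nginx container from earlier you might try:

```shell
ls /usr/share/nginx/html              # confirm your files are where you expect
cat /etc/nginx/conf.d/default.conf    # check the active nginx site config
env                                   # list environment variables
exit                                  # leave the container shell
```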

Dockerfile: Building Your Own Images

Mounting volumes every time is tedious. A Dockerfile defines how to build your image — step by step, repeatable, version-controlled.

Nginx Example

Create a Dockerfile in the root of your project:

FROM nginx:latest
ADD . /usr/share/nginx/html

That's it. Two lines. (COPY would also work here, and is generally preferred for plain file copies; ADD additionally supports URLs and auto-extracts tarballs.) Build and run:

# Build the image (-t = tag)
docker build -t website:latest .

# Run it
docker run --name web2 -d -p 8081:80 website

Your image now contains your files — no volumes needed.

Node.js + Express Example

FROM node:latest
WORKDIR /app
ADD . .
RUN npm install
CMD node index.js

What each instruction does:

| Instruction | Purpose |
|---|---|
| FROM | Base image to build on |
| WORKDIR | Sets the working directory inside the container (creates it if needed) |
| ADD | Copies files from host into the container |
| RUN | Executes a command during the build (e.g., installing deps) |
| CMD | The default command when the container starts |

docker build -t service-api:latest .
docker run --name node-api -d -p 8082:3000 service-api

Port 3000 is whatever port your Express app listens on inside the container (3000 by convention) — we map it to 8082 on the host.
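For completeness, here's a minimal index.js the Dockerfile above could run. This is a sketch rather than code from the original tutorial; it assumes express is declared as a dependency in your package.json so that npm install picks it up during the build:

```javascript
// index.js: a minimal Express app listening on port 3000
const express = require("express");
const app = express();

// a single route so there's something to hit
app.get("/", (req, res) => {
  res.json({ status: "ok" });
});

app.listen(3000, () => {
  console.log("Listening on port 3000");
});
```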

.dockerignore

Just like .gitignore, this keeps junk out of your image:

node_modules
Dockerfile
.git

Always add node_modules — you're running npm install inside the container anyway. Copying them in would just bloat the image and cause platform issues.

Caching and Layers: The Build Optimization Trick

Every instruction in a Dockerfile creates a layer. Docker caches layers and only rebuilds from the point where something changed.

Here's the problem with our Node.js Dockerfile:

FROM node:latest
WORKDIR /app
ADD . .              # ← Every source code change invalidates this
RUN npm install      # ← So this re-runs every time too
CMD node index.js

Change one line of code? Full npm install. That's minutes wasted on every build.

The fix: Copy dependency files first, install, then copy source code.

FROM node:latest
WORKDIR /app

ADD package*.json ./
RUN npm install       # Only re-runs when package.json changes

ADD . .               # Source code changes only invalidate from here
CMD node index.js

Rule of thumb: anything that takes a long time but rarely changes should go near the top of your Dockerfile. Anything that changes frequently goes at the bottom.

Alpine Images: Shrink Your Image Size

Default images are huge. The standard Node.js image is ~900MB. The Alpine variant? ~100MB.

Alpine is a minimal Linux distribution. Most images on Docker Hub have Alpine tags:

FROM node:alpine
WORKDIR /app
ADD package*.json ./
RUN npm install
ADD . .
CMD node index.js

Check Docker Hub for available tags — look for alpine, lts-alpine, etc. Use Alpine unless you have a reason not to.

Tags and Versioning

Tags let you version your images so you can roll back if something breaks:

# Build with latest
docker build -t myapp:latest .

# Tag it as version 1
docker tag myapp:latest myapp:1

# Later, after changes...
docker build -t myapp:latest .
docker tag myapp:latest myapp:2

Now if version 2 has a bug, you can spin up a container from myapp:1 instantly. Always tag your releases.
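Rolling back is then just a matter of pointing a container at the old tag (the container name here is made up for illustration):

```shell
# replace the buggy v2 container with one built from v1
docker rm -f myapp-live
docker run --name myapp-live -d -p 8080:80 myapp:1
```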

Debugging Containers

Logs — Your Best Friend

# View logs
docker logs <container_id>

# Follow logs in real-time
docker logs -f <container_id>

This shows everything your app writes to stdout — console.log, network requests, errors. The -f flag tails the output so you can watch it live.

Inspect

docker inspect <container_id>

Dumps a massive JSON blob with networking, mounts, env vars — everything about the container. Useful when something is misconfigured.
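The JSON is queryable with Go templates via the --format flag, which saves you from scrolling. A couple of examples:

```shell
# print just the container's IP address on the default bridge network
docker inspect --format '{{.NetworkSettings.IPAddress}}' <container_id>

# print just the container's environment variables
docker inspect --format '{{.Config.Env}}' <container_id>
```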

Interactive Shell

docker exec -it <container_id> /bin/sh

Drop into the container and poke around. Check if files are where you expect, verify environment variables, test network connectivity.

Docker Compose: Multi-Container Apps

Running individual docker run commands gets old fast when your app has an API, a database, and a frontend. Docker Compose lets you define everything in one docker-compose.yml file:

version: "3"
services:
  web:
    build: .
    ports:
      - "8080:80"
  api:
    build: ./api
    ports:
      - "8082:3000"
    depends_on:
      - db
  db:
    image: postgres:alpine
    environment:
      POSTGRES_PASSWORD: example

Then one command to rule them all:

# Start everything
docker-compose up -d

# Stop everything
docker-compose down

Docker Compose is ideal for local development. For production orchestration at scale, that's where Kubernetes comes in — but that's a whole different rabbit hole.
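A few other Compose subcommands that come in handy day to day:

```shell
# rebuild images before starting (after a Dockerfile change)
docker-compose up -d --build

# follow logs from all services at once
docker-compose logs -f

# list the status of all Compose services
docker-compose ps
```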

Cheat Sheet

| Command | What it does |
|---|---|
| docker pull <image> | Download an image |
| docker run -d -p 8080:80 <image> | Run a container (detached, with port mapping) |
| docker ps | List running containers |
| docker stop <id> | Stop a container |
| docker rm <id> | Remove a container |
| docker build -t <name>:<tag> . | Build an image from a Dockerfile |
| docker logs -f <id> | Follow container logs |
| docker exec -it <id> /bin/sh | Shell into a container |
| docker-compose up -d | Start all Compose services |
| docker-compose down | Stop all Compose services |

Key Takeaways

Docker boils down to a few core ideas:

  1. Images are blueprints, containers are running instances
  2. Use volumes during development, Dockerfiles for everything else
  3. Order your Dockerfile to maximize layer caching — slow, stable stuff at the top
  4. Use Alpine images to keep things small
  5. Tag your versions so you can roll back
  6. Docker Compose for local multi-container setups, Kubernetes for production

Once you internalize these concepts, Docker stops being scary and starts being the tool you reach for on every project.


This post was based on the Docker tutorial by TechWorld with Nana on freeCodeCamp.