Docker for Beginners: Containers Explained with Simple Projects

Docker Concepts and Workflow Foundations

Chapter 1

Estimated reading time: 14 minutes

Why Docker Feels Different: The Workflow Mindset

Docker is easiest to learn when you stop thinking in terms of “installing software on a machine” and start thinking in terms of “packaging an application with everything it needs, then running that package anywhere.” This chapter focuses on the concepts and workflow foundations that make Docker predictable: you build an image, you run a container from that image, you connect it to other containers and the outside world, and you manage changes through rebuilds rather than manual edits inside a running container.

A practical Docker workflow usually answers four questions:

  • How do I package my app and its dependencies? (Images and Dockerfiles)
  • How do I run it reliably? (Containers, runtime configuration, restart policies)
  • How do I persist data and share files? (Volumes and bind mounts)
  • How do I connect components? (Networking, ports, service discovery)

Once these are clear, Docker becomes a repeatable routine rather than a collection of commands.

Images vs Containers: The Build/Run Split

Images: immutable templates

An image is a read-only template that contains a filesystem snapshot plus metadata about how to run it (default command, environment variables, exposed ports, etc.). You can think of an image as a “recipe output”: once built, it should not change. If you want a different outcome, you rebuild a new image.

Key properties of images:

  • Layered: images are built from layers. Each Dockerfile instruction typically creates a new layer.
  • Content-addressed: layers are identified by hashes; Docker can reuse layers across images.
  • Portable: images can be stored in registries and pulled onto other machines.
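
You can see the layering directly with docker history, which lists an image's layers along with the instruction that created each one. A minimal sketch, assuming nginx:alpine is available locally or can be pulled:

# One row per layer, with its size and the creating instruction
docker history nginx:alpine

# When pulling, layers already present locally are not downloaded again
docker pull nginx:alpine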

Containers: runtime instances

A container is a running (or stopped) instance created from an image. Containers add a writable layer on top of the image, plus runtime configuration (ports, environment variables, mounts, network attachments, resource limits). Containers are meant to be disposable: you can stop and remove them and recreate them from the image at any time.

A common beginner mistake is to “fix” something by shelling into a container and installing packages manually. That change lives only in that container’s writable layer and is not part of the image. The reliable approach is: update the Dockerfile, rebuild the image, recreate the container.
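
You can watch the manual fix disappear. A minimal sketch using alpine (the container name and the package are illustrative):

# Start a container and "fix" it by hand
docker run -d --name patched alpine sleep 3600
docker exec patched apk add --no-cache curl

# Recreate a container from the same image: the fix is gone
docker rm -f patched
docker run --rm alpine sh -c "command -v curl || echo curl not installed"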

Dockerfile Foundations: Declarative Builds

A Dockerfile is a set of instructions that describes how to build an image. It is declarative in the sense that it describes the desired result, and Docker builds it step-by-step, caching layers when possible.

Core instructions you’ll use constantly

  • FROM: selects a base image (e.g., a language runtime or minimal OS).
  • WORKDIR: sets the working directory for subsequent instructions.
  • COPY / ADD: copies files into the image (COPY is preferred for clarity).
  • RUN: executes commands at build time (install packages, compile code).
  • ENV: sets environment variables in the image.
  • EXPOSE: documents the port the app listens on (does not publish it by itself).
  • CMD / ENTRYPOINT: defines what runs when the container starts.

Build-time vs run-time: a crucial distinction

Dockerfile instructions happen at build time and produce an image. Container configuration such as published ports, environment overrides, and mounts is typically provided at run time.

Example mental model:

  • Build time: “Bake the cake.” (install dependencies, copy code)
  • Run time: “Serve the cake.” (choose the port, connect to a database, set secrets)

Example Dockerfile pattern (generic web app)

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "server.js"]

Even if you are not using Node.js, notice the workflow pattern: copy dependency manifests first (better caching), install dependencies, then copy the rest of the source code, then define the startup command.
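
To make the bake/serve split concrete: one build, then different runtime choices per serving. A minimal sketch, assuming the Dockerfile above sits in the current directory (names and ports are illustrative):

# Build time: bake the image once
docker build -t myapp:dev .

# Run time: serve the same image with different runtime choices
docker run -d --name myapp-a -p 8080:3000 myapp:dev
docker run -d --name myapp-b -p 8081:3000 -e NODE_ENV=development myapp:dev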

Layer Caching: Faster Builds Through Smart Ordering

Docker caches layers. If a layer’s inputs haven’t changed, Docker can reuse it instead of rebuilding. This is why Dockerfile ordering matters.

Practical rule: put the least frequently changing steps first, and the most frequently changing steps last.

  • Good: copy dependency files, install dependencies, then copy application source.
  • Less good: copy all source first, then install dependencies (any code change invalidates the dependency install layer).

When builds feel slow, the first thing to check is whether you are accidentally invalidating cache layers by copying too much too early.
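
As a contrast to the earlier Dockerfile, here is the "less good" ordering described above. Because the entire source tree is copied before the install step, any code change invalidates the cached dependency layer and forces a full reinstall:

# Less good: every source change invalidates the npm ci layer below
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci --omit=dev
CMD ["node", "server.js"]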

Registries and Image Names: Where Images Live

Images are identified by a name and optionally a tag, like myapp:1.0. If you don’t specify a registry, Docker assumes Docker Hub by default. In team workflows, you often push images to a registry (Docker Hub, GitHub Container Registry, a private registry) so other machines can pull the same artifact.

Important naming concepts:

  • Repository: the image name (e.g., myorg/myapp).
  • Tag: a label for a version (e.g., 1.0, 2026-01-14, latest).
  • Digest: an immutable content hash reference (strongest reproducibility).

For reproducible deployments, prefer explicit version tags or digests rather than relying on latest.
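
A minimal sketch of the naming workflow (the repository myorg/myapp is illustrative, and <digest> stands in for a real sha256 hash):

# Tag a local image for a registry repository
docker tag myapp:1.0 myorg/myapp:1.0

# Push so other machines can pull the same artifact
docker push myorg/myapp:1.0

# Pull by explicit tag, or by immutable digest for the strongest guarantee
docker pull myorg/myapp:1.0
docker pull myorg/myapp@sha256:<digest>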

Runtime Configuration: Environment Variables, Commands, and Secrets

At runtime, you typically customize containers without modifying the image. The most common mechanism is environment variables.

Environment variables

Environment variables are a simple way to pass configuration like “which port to listen on,” “what log level to use,” or “how to reach a database.” You can set defaults in the Dockerfile with ENV and override them when running the container.

Practical guideline: keep environment-specific settings out of the image. The same image should run in development, staging, and production with different environment variables.
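
A minimal sketch of "same image, different environments", assuming hypothetical files dev.env and prod.env that hold the per-environment variables:

# The image never changes; only the runtime inputs do
docker run -d --name myapp-dev --env-file dev.env myapp:1.0
docker run -d --name myapp-prod --env-file prod.env myapp:1.0

# Individual overrides work too
docker run -d --name myapp-debug -e LOG_LEVEL=debug myapp:1.0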

Command vs entrypoint (conceptual)

Docker uses a startup command to launch the main process. In practice, you’ll most often rely on the image’s default CMD and override it only for debugging or one-off tasks (like running migrations). The key idea is that a container should have one main process that stays in the foreground.
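
A minimal sketch of relying on the default command versus overriding it for a one-off task (the migration script path is purely illustrative):

# Normal start: the image's default CMD runs
docker run -d --name myapp myapp:1.0

# One-off task: override the command for this container only
docker run --rm myapp:1.0 node scripts/migrate.js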

Secrets (workflow perspective)

Passwords and API keys should not be baked into images. In simple local workflows, you might pass them as environment variables. In more advanced setups, you use secret management (for example, Docker Compose secrets or orchestrator secrets). The foundational concept is: secrets are runtime inputs, not build-time contents.
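
A minimal sketch of the simple local approach (API_KEY is an illustrative name). Keep in mind that environment variables are visible through docker inspect, which is one reason dedicated secret mechanisms exist:

# Supply the secret at run time from the host environment, never in the Dockerfile
docker run -d --name myapp -e API_KEY="$API_KEY" myapp:1.0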

Storage Foundations: Writable Layers, Volumes, and Bind Mounts

Containers have a writable layer, but it is not designed for durable storage. If you remove the container, that writable layer is gone. Docker provides two main ways to persist or share data: volumes and bind mounts.

Writable layer: good for temporary changes

The container’s writable layer is fine for ephemeral files like caches or temporary uploads, but you should not rely on it for anything you need to keep.

Volumes: Docker-managed persistence

A volume is managed by Docker and stored in a location Docker controls. Volumes are the default choice for persistent data (databases, uploads, application state) because they are portable across container recreations and don’t depend on a specific host path.

Typical use cases:

  • Database data directories
  • Shared state between containers (when appropriate)
  • Persisting application-generated files

Bind mounts: host-to-container file sharing

A bind mount maps a specific host directory into the container. This is common in development to live-edit code on your machine while the container runs it.

Trade-offs:

  • Bind mounts are convenient for development.
  • They can introduce host-specific behavior (file permissions, path differences).
  • They are less portable across machines compared to volumes.
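
A minimal development sketch using a bind mount (paths are illustrative; /app must match where the app expects its code):

# Mount the current directory into the container and work inside it
docker run --rm -it -v "$(pwd)":/app -w /app node:20-alpine sh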

Step-by-step: create and use a volume

This example shows the workflow without assuming any specific application.

  • Create a named volume.
  • Run a container that writes data into a mounted path.
  • Remove the container and start a new one using the same volume to confirm persistence.

# 1) Create a volume
docker volume create app-data

# 2) Run a container and mount the volume at /data
docker run --name writer -v app-data:/data alpine sh -c "echo hello > /data/message.txt && cat /data/message.txt"

# 3) Remove the container
docker rm writer

# 4) Start a new container with the same volume and read the file
docker run --rm -v app-data:/data alpine cat /data/message.txt

The key concept is that the data lives in the volume, not in the container.

Networking Foundations: Ports, Bridge Networks, and Service Discovery

Networking is where Docker stops feeling like “a process on my machine” and starts feeling like “a small, isolated environment.” Containers have their own network namespace. By default, Docker attaches containers to a bridge network and gives them private IP addresses.

Publishing ports: container vs host

Applications inside containers listen on container ports. To access them from your host (browser, curl, other tools), you publish a container port to a host port.

Conceptual mapping:

  • Inside container: app listens on containerPort
  • On host: you connect to hostPort

Example mapping: host port 8080 to container port 80. You would browse http://localhost:8080 and Docker forwards traffic to port 80 inside the container.
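
You can try this exact mapping with nginx, which listens on port 80 inside the container:

# Publish container port 80 on host port 8080
docker run -d --name web -p 8080:80 nginx:alpine

# From the host
curl -I http://localhost:8080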

Bridge networks: container-to-container communication

When multiple containers are on the same user-defined bridge network, they can reach each other by container name (DNS-based service discovery). This is a foundation for multi-container apps: a web app container can connect to a database container using the database container’s name as the hostname.

Step-by-step: create a network and test name-based connectivity

# 1) Create a user-defined bridge network
docker network create app-net

# 2) Start a container named "server" on that network
docker run -d --name server --network app-net nginx:alpine

# 3) Start a second container on the same network and curl the first by name
docker run --rm --network app-net alpine sh -c "apk add --no-cache curl > /dev/null && curl -I http://server"

This demonstrates a core Docker workflow concept: containers on the same network can discover each other by name without you manually managing IP addresses.

Container Lifecycle and Idempotent Operations

Docker workflows become stable when you treat operations as repeatable and safe to run multiple times. This is often described as “idempotent” behavior: running the same steps again should produce the same result.

Lifecycle states you’ll manage

  • Create: a container is created from an image with configuration.
  • Start: the container process begins.
  • Stop: the process is stopped gracefully.
  • Remove: the container metadata and writable layer are deleted.

In practice, a common development loop is: rebuild image, remove old container, run new container. Data that must persist should be in volumes, not in the container.
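
The lifecycle states map directly onto commands. A minimal sketch of that loop (names are illustrative):

# Create and start in one step
docker run -d --name myapp myapp:dev

# Stop gracefully (SIGTERM first, SIGKILL after a timeout)
docker stop myapp

# Remove the container; its writable layer is deleted, named volumes survive
docker rm myapp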

Foreground vs detached mode

Running in the foreground is useful when you want to see logs directly and stop with Ctrl+C. Detached mode is useful for services that should run in the background. Regardless of mode, logs are accessible through Docker’s logging commands.
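
A minimal sketch of both modes:

# Foreground: logs stream to the terminal; Ctrl+C stops the container
docker run --rm -p 8080:80 nginx:alpine

# Detached: runs in the background; read the logs on demand
docker run -d --name web -p 8080:80 nginx:alpine
docker logs -f web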

Observability Basics: Logs, Exec, and Inspect

When something goes wrong, you need a consistent debugging routine. Docker provides a few foundational tools that work across most containers.

Logs: first stop for debugging

Container logs usually capture stdout/stderr from the main process. Good containerized apps log to stdout rather than writing log files inside the container.

# View logs
docker logs my-container

# Follow logs (stream)
docker logs -f my-container

Exec: run a command inside a running container

docker exec is useful for checking files, environment variables, or connectivity from within the container’s network context.

# Open a shell (if available)
docker exec -it my-container sh

# Run a one-off command
docker exec my-container env

Use exec for diagnosis, but avoid making “permanent fixes” interactively. If a fix is needed, encode it in the Dockerfile or runtime configuration.

Inspect: see the truth of configuration

docker inspect reveals the container’s effective configuration: mounts, networks, IP addresses, environment variables, and the image it was created from. When behavior doesn’t match your expectations, inspect is often the fastest way to find out why.

# Inspect and filter key fields (example using a Go template)
docker inspect -f '{{.Name}} {{.Config.Image}} {{range .Mounts}}{{.Source}} -> {{.Destination}} {{end}}' my-container

Resource and Safety Foundations: Limits, Restarts, and Least Privilege

Even in beginner projects, it helps to understand that containers share the host kernel and compete for host resources. Docker lets you set boundaries and safer defaults.

Restart policies: keep services running

A restart policy tells Docker what to do if the container exits. For long-running services, you often want Docker to restart them automatically after failures or host reboots.

Common policies:

  • no: do not restart automatically.
  • on-failure: restart only when the process exits with a non-zero code.
  • always: always restart if it stops.
  • unless-stopped: restart unless you explicitly stopped it.
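
A minimal sketch of setting a restart policy at run time:

# Keep a long-running service up across failures and daemon restarts
docker run -d --name web --restart unless-stopped nginx:alpine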

Resource limits: prevent one container from taking over

You can limit memory and CPU usage. This is especially useful when running multiple services locally.

# Example: limit memory and CPU
docker run --rm -m 256m --cpus 1.0 alpine sh -c "echo running"

Least privilege: don’t run as root when you don’t need to

Many base images run as root by default. For better safety, you can run as a non-root user when the application supports it. The foundational idea is: reduce privileges inside the container to reduce the impact of a compromise.

In Dockerfiles, this is often done with a USER instruction after creating a user and setting permissions appropriately.
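
A minimal Dockerfile sketch of the pattern on an Alpine base (the app user and group names are illustrative):

FROM node:20-alpine
WORKDIR /app
COPY . .
# Create an unprivileged user and hand over ownership of the app directory
RUN addgroup -S app && adduser -S app -G app && chown -R app:app /app
# Everything from here on runs as the non-root user
USER app
CMD ["node", "server.js"]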

Putting It Together: A Repeatable Local Workflow Pattern

Here is a practical, repeatable workflow you can apply to most beginner projects, regardless of language or framework. The goal is to separate build concerns from runtime concerns and keep everything reproducible.

Step-by-step workflow

  • Step 1: Define the build in a Dockerfile: base image, dependencies, copy source, set default command.
  • Step 2: Build the image with a clear name and tag.
  • Step 3: Run the container with runtime configuration: ports, environment variables, volumes, network.
  • Step 4: Observe using logs and inspect.
  • Step 5: Iterate by changing source/Dockerfile, rebuilding, and recreating the container.

# Step 2: Build
docker build -t myapp:dev .

# Step 3: Run (example with port + env + volume)
docker volume create myapp-state
docker run -d --name myapp \
  -p 8080:3000 \
  -e LOG_LEVEL=debug \
  -v myapp-state:/app/state \
  myapp:dev

# Step 4: Observe
docker logs -f myapp

# Step 5: Iterate (typical loop)
docker stop myapp
docker rm myapp
docker build -t myapp:dev .
docker run -d --name myapp -p 8080:3000 -e LOG_LEVEL=debug -v myapp-state:/app/state myapp:dev

Notice what stays stable: the image name, the volume name, and the runtime flags. This stability is what makes Docker workflows feel clean and predictable.

Common Conceptual Pitfalls (and the Correct Mental Model)

Pitfall: treating containers like virtual machines

Containers are not full VMs. They share the host kernel and are meant to be lightweight and disposable. Correct mental model: containers are isolated processes with their own filesystem view and networking, started from an immutable image.

Pitfall: storing important data in the container filesystem

If you store data only in the container’s writable layer, you will lose it when the container is removed. Correct mental model: persistent data belongs in volumes (or external services), not in the container.

Pitfall: “it works on my machine” Dockerfiles

Dockerfiles that rely on interactive steps, manual edits, or unclear versions lead to non-reproducible builds. Correct mental model: everything needed to build should be encoded in the Dockerfile with explicit versions when practical, and everything needed to configure should be provided at runtime.

Pitfall: confusing EXPOSE with publishing ports

EXPOSE documents intent inside the image; it does not make the service reachable from the host. Correct mental model: publishing ports is a runtime decision.

Compose as a Workflow Tool (Conceptual Overview)

As soon as you have more than one container (for example, an app plus a database), managing multiple docker run commands becomes tedious. Docker Compose is a tool that defines multi-container setups in a single file so you can start everything together with consistent configuration.

Even if you don’t use Compose yet, it helps to understand what it represents conceptually:

  • Multiple services (containers) defined together
  • Networks defined once and shared
  • Volumes defined once and reused
  • Environment variables centralized

The foundational idea is not the syntax; it is the workflow: describe the whole local environment as configuration so it can be started and recreated reliably.

Now answer the exercise about the content:

A containerized app needs a dependency update. What is the most reliable Docker workflow to ensure the change is reproducible?

Answer: Images are meant to be immutable templates. Manual changes made inside a running container live only in that container's writable layer. For repeatable results, encode changes in the Dockerfile, rebuild the image, and recreate the container.

Next chapter

Images, Containers, and Registries in Practical Use
