Getting Started with Docker & Containers

Package your applications into portable containers that run anywhere, from your laptop to the cloud, with consistency and speed


Remember the days when deploying an application meant crossing your fingers and hoping it would run the same way in production as it did on your machine? Those “works on my machine” moments are becoming relics of the past, thanks to containerization. Docker has transformed how we build, ship, and run applications by packaging everything your app needs into a single, portable unit that runs consistently across any environment.

The promise is simple but powerful: build once, run anywhere. Whether you’re running on your laptop, a colleague’s workstation, or a massive cloud infrastructure, containers ensure your application behaves exactly the same way. This consistency has made Docker the de facto standard for modern application development and deployment.

Why Containers Matter

Containers solve fundamental problems that developers and operations teams face every day:

Consistency Across Environments - No more environment-specific bugs. Your application runs in an identical environment whether it’s on your laptop, a staging server, or production infrastructure.

Resource Efficiency - Unlike virtual machines, containers share the host operating system’s kernel, making them incredibly lightweight. You can run dozens of containers on a single machine without the overhead of multiple OS instances.

Fast Startup Times - Containers start in seconds, not minutes. This speed enables rapid iteration during development and quick scaling in production.

Isolation - Each container runs in its own isolated environment with its own filesystem, networking, and process space. Dependencies and configurations never conflict between applications.

Version Control for Infrastructure - Your Dockerfile serves as code that defines your application’s environment. You can version it, review it, and track changes just like application code.

Prerequisites

Before diving in, you’ll need Docker installed on your system. The installation process varies by platform:

macOS - Download Docker Desktop from docker.com. It provides a complete Docker environment with a GUI for managing containers.

Windows - Docker Desktop for Windows runs on Windows 10/11 with the WSL 2 backend enabled (the Hyper-V backend additionally requires a Pro, Enterprise, or Education edition).

Linux - Install Docker Engine directly through your distribution’s package manager. Most distributions have Docker available in their official repositories.

After installation, verify Docker is working by opening a terminal and running:

docker --version

You should see output showing the installed Docker version. If you see this, you’re ready to start containerizing.
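If you need that check in a script — say, a CI step — the version number can be extracted with plain shell parameter expansion. The `Docker version 24.0.7, build afdd53b` line below is a sample of the usual output format, not a guaranteed one:

```shell
# Sample `docker --version` output (assumed format; in a real script,
# substitute the live command: out="$(docker --version)").
out="Docker version 24.0.7, build afdd53b"

# Strip the leading label, then everything from the first comma onward.
ver="${out#Docker version }"
ver="${ver%%,*}"
echo "$ver"
```

This prints just the version number, which you can then compare against a minimum required version.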

Your First Container

Let’s start with programming’s traditional first step, a hello world: running a simple container. This will demonstrate how effortlessly you can pull and run pre-built images.

docker run hello-world

When you execute this command, Docker performs several operations:

  1. Checks if the hello-world image exists locally
  2. Downloads the image from Docker Hub if it’s not found
  3. Creates a new container from that image
  4. Runs the container, which prints a welcome message
  5. Exits when the container’s process completes
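Steps 1 and 2 — reuse the image if it’s already local, otherwise download it — are also a pattern you can borrow in your own scripts. A sketch (not Docker’s real internals, just a small helper built on real `docker` subcommands):

```shell
# Pull an image only if it's not already present locally.
# `docker image inspect` exits non-zero when the image is absent.
ensure_image() {
  image="$1"
  if docker image inspect "$image" >/dev/null 2>&1; then
    echo "image $image already present"
  else
    echo "pulling $image"
    docker pull "$image"
  fi
}

# Example usage: ensure_image hello-world
```

This avoids redundant pulls in build scripts that run repeatedly.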

Let’s try something more interactive. Run an Ubuntu container:

docker run -it ubuntu bash

The -it flags make the container interactive (-i) and allocate a pseudo-TTY (-t), giving you a shell inside the container. You’re now inside a complete Ubuntu environment, isolated from your host system. Try running some commands:

cat /etc/os-release
ls /
apt update

Type exit to leave the container. When you exit, the container stops but still exists. You can see all containers (running and stopped) with:

docker ps -a

To remove stopped containers, use:

docker container prune

Building Your Own Image

The real power of Docker emerges when you create custom images for your applications. A Dockerfile is a text file containing instructions for building an image.

Let’s build a simple Node.js web application. Create a new directory and add a file named Dockerfile:

# Start from an official Node.js base image
FROM node:18-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy package files first for better caching
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy application code
COPY . .

# Expose the port your app runs on
EXPOSE 3000

# Define the command to run your app
CMD ["node", "server.js"]

Each instruction in a Dockerfile creates a layer in the image. Docker caches these layers, so rebuilding images is fast when only later layers change.
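This is why the Dockerfile copies `package*.json` before the rest of the source. For contrast, here is the ordering to avoid (a sketch):

```dockerfile
# Anti-pattern: COPY . . runs before npm install, so ANY source change
# invalidates the cache for every later layer — npm install re-runs on
# each build, even when dependencies haven't changed.
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "server.js"]
```

With the ordering used above, the `npm install` layer is reused until `package*.json` itself changes.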

Create a simple package.json:

{
  "name": "docker-demo",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.0"
  }
}

And a basic server.js:

const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello from Docker!');
});

app.listen(3000, () => {
  console.log('Server running on port 3000');
});

Build your image:

docker build -t my-node-app .

The -t flag tags your image with a name. The . tells Docker to use the current directory as the build context.

Run your containerized application:

docker run -p 3000:3000 my-node-app

The -p flag maps port 3000 on your host to port 3000 in the container. Open your browser to http://localhost:3000 to see your app running.

To run the container in detached mode (background):

docker run -d -p 3000:3000 --name my-app my-node-app

View running containers:

docker ps

Check container logs:

docker logs my-app

Stop and remove the container:

docker stop my-app
docker rm my-app

Essential Commands Reference

Here are the Docker commands you’ll use most frequently:

Images

  • docker images - List local images
  • docker pull <image> - Download an image
  • docker rmi <image> - Remove an image
  • docker build -t <name> . - Build an image

Containers

  • docker ps - List running containers
  • docker ps -a - List all containers
  • docker run <image> - Create and start a container
  • docker stop <container> - Stop a running container
  • docker start <container> - Start a stopped container
  • docker rm <container> - Remove a container
  • docker exec -it <container> bash - Run a command in a running container

Cleanup

  • docker system prune - Remove unused data
  • docker container prune - Remove stopped containers
  • docker image prune - Remove unused images

What’s Next

You’ve learned the fundamentals of Docker and containers, but there’s much more to explore:

Docker Compose - Define and run multi-container applications using YAML configuration files. Perfect for applications that need databases, caches, and other services.
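As a taste, a minimal `compose.yaml` for the Node app above plus a Redis cache might look like this (a sketch — the service names and the `redis:7-alpine` image choice are illustrative, and the sample app doesn’t actually use the cache yet):

```yaml
services:
  web:
    build: .            # uses the Dockerfile in this directory
    ports:
      - "3000:3000"
    depends_on:
      - cache
  cache:
    image: redis:7-alpine
```

Running `docker compose up` starts both services together.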

Container Registries - Push your images to Docker Hub or private registries to share them across teams and deploy to production environments.

Orchestration - Tools like Kubernetes and Docker Swarm manage containers at scale, handling deployment, scaling, and management of containerized applications across clusters of machines.

Best Practices - Learn about multi-stage builds to reduce image sizes, security scanning, and optimizing layer caching for faster builds.
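For example, a multi-stage build keeps build-time artifacts out of the final image. A sketch for the Node app above (the `node:18-alpine` tag matches the earlier Dockerfile; `--omit=dev` assumes a reasonably recent npm):

```dockerfile
# Stage 1: install dependencies in a throwaway build image
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY . .

# Stage 2: copy only what's needed to run, producing a leaner final image
FROM node:18-alpine
WORKDIR /app
COPY --from=build /app ./
EXPOSE 3000
CMD ["node", "server.js"]
```

Only the final stage ships; everything the build stage installed but didn’t copy forward is discarded.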

The journey into containerization opens doors to modern DevOps practices, microservices architectures, and cloud-native development. Each container you build brings you closer to truly portable, scalable applications that run anywhere with confidence.

Learn, Contribute & Share

This guide has a companion repository with working examples and code samples.