Django excels at building web applications quickly, but deploying them consistently across different environments can be challenging.
Docker solves this by packaging your Django application and its dependencies into portable containers.
This tutorial shows you how to containerize a Django application using Docker. You'll learn how to:
- Package a Django application with its dependencies
- Create and build Docker images
- Use Docker Compose for multi-container deployments
- Implement proper production configurations
By Dockerizing your Django application, you'll get consistent deployments, isolated environments, and easier scaling capabilities.
To demonstrate, we'll use a simple todo application backed by PostgreSQL as our example, walking through each step of the containerization process.
Let's get started!
Prerequisites
To follow along smoothly with this tutorial, ensure that you have:
- Basic command-line skills.
- Basic familiarity with Python and Django concepts.
- Python 3 and pip installed on your computer.
Step 1 — Setting up the demo project
In this section, you'll set up a simple Django To-Do application on your machine and run it locally to ensure it works before proceeding to Dockerize it.
Start by forking the demo project to your GitHub account. Then, clone the repository to your computer:
git clone https://github.com/<username>/django-todo-app
Navigate into the project directory and check its structure:
cd django-todo-app
ls
The project structure should appear as shown below:
django_project manage.py requirements.txt todo_app venv
Here's a brief explanation of what each entry comprises:

- `django_project`: Main Django project directory containing core configurations.
- `todo_app`: Directory with the To-Do application's files.
- `venv`: Python virtual environment for dependency management.
- `manage.py`: Django's CLI tool for running commands.
- `requirements.txt`: A list of required Python packages to run the project.
Before running the project, you'll need a PostgreSQL database where the todo items will be stored. Use Docker to start a PostgreSQL container based on the official postgres image:
docker run \
--rm \
--name django-todo-db \
--env POSTGRES_PASSWORD=admin \
--env POSTGRES_DB=django_todo \
--volume django-pg-data:/var/lib/postgresql/data \
--publish 5432:5432 \
postgres:bookworm
This command:

- Starts a PostgreSQL container named `django-todo-db`.
- Sets the database name to `django_todo` and the password to `admin`.
- Persists the database files in a named volume called `django-pg-data`.
- Maps port 5432 to your host machine.
Once the container is running, you'll see the following output confirming that the database is ready to accept connections:
PostgreSQL Database directory appears to contain a database; Skipping initialization
2025-02-07 08:15:06.665 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2025-02-07 08:15:06.666 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2025-02-07 08:15:06.666 UTC [1] LOG: listening on IPv6 address "::", port 5432
2025-02-07 08:15:06.678 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-02-07 08:15:06.693 UTC [30] LOG: database system was shut down at 2024-09-05 13:15:08 UTC
2025-02-07 08:15:06.703 UTC [1] LOG: database system is ready to accept connections
The next step is to create a `.env` file at the root of your project:
code .env
Then add the following contents to it along with your PostgreSQL credentials:
DJANGO_DEBUG=True
DJANGO_SECRET_KEY=django-insecure-69k-#kmlre&rb4uhf2*d5foi+1ee)wsck_%9z*--wbit3_dk9e
DJANGO_ALLOWED_HOSTS=localhost
DJANGO_CSRF_TRUSTED_ORIGINS=http://localhost:8000
DATABASE_ENGINE=django.db.backends.postgresql
DATABASE_NAME=django_todo
DATABASE_USER=postgres
DATABASE_PASSWORD=admin
DATABASE_HOST=localhost
DATABASE_PORT=5432
This assumes that you're using the default `postgres` user, and that your password is `admin` as configured when running the PostgreSQL container.
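Note that Django doesn't read `.env` files on its own; projects typically use a package such as `python-dotenv` for this. If your copy of the project doesn't already load the file, the sketch below shows roughly what such a loader does (this is an illustrative stand-in, not the project's actual mechanism):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: reads KEY=VALUE lines, skipping blanks and comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault so variables already in the real environment keep precedence
            os.environ.setdefault(key.strip(), value.strip())
```

You would call `load_env()` before Django imports `django_project.settings`, so that the `os.getenv()` calls shown next can see the values.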
These variables are used in your `settings.py` file as follows:
[django_project/settings.py]
. . .
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.getenv("DJANGO_SECRET_KEY")
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = (os.environ.get('DJANGO_DEBUG') == "True")
ALLOWED_HOSTS = os.getenv("DJANGO_ALLOWED_HOSTS","127.0.0.1").split(",")
CSRF_TRUSTED_ORIGINS = os.getenv("DJANGO_CSRF_TRUSTED_ORIGINS","https://127.0.0.1").split(",")
. . .
DATABASES = {
'default': {
'ENGINE': os.environ.get('DATABASE_ENGINE'),
'NAME': os.environ.get('DATABASE_NAME'),
'USER': os.environ.get('DATABASE_USER'),
'PASSWORD': os.environ.get('DATABASE_PASSWORD'),
'HOST': os.environ.get('DATABASE_HOST'), # For local development, use 'localhost' or '127.0.0.1'
'PORT': os.environ.get('DATABASE_PORT'), # Default PostgreSQL port is usually '5432'
}
}
. . .
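The `.split(",")` calls in the settings above are what let a single environment variable carry multiple hosts or origins. A quick illustration (the variable names here are just examples):

```python
import os

# A comma-separated variable expands into a Python list of hosts:
os.environ["DJANGO_ALLOWED_HOSTS"] = "localhost,example.com"
hosts = os.getenv("DJANGO_ALLOWED_HOSTS", "127.0.0.1").split(",")
print(hosts)  # ['localhost', 'example.com']

# When the variable is unset, the default value kicks in instead:
fallback = os.getenv("SOME_UNSET_VARIABLE", "127.0.0.1").split(",")
print(fallback)  # ['127.0.0.1']
```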
Now, open a new terminal, navigate into the project root, and activate the Python virtual environment with the following command:
source venv/bin/activate # or source venv/bin/activate.fish if you use fish
You should see `(venv)` in your terminal prompt:
You may now install the dependencies for the project by executing the following command:
pip3 install -r requirements.txt
If you encounter issues with `psycopg2`, install the necessary system dependencies:
sudo apt install libpq-dev python3-dev # Ubuntu, Debian
sudo dnf install libpq-devel python3-devel # Fedora, Red Hat
After installing the project dependencies, run the database migrations to set up the schema:
python3 manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions, todo_app
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying admin.0003_logentry_add_action_flag_choices... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
Applying auth.0010_alter_group_name_max_length... OK
Applying auth.0011_update_proxy_permissions... OK
Applying auth.0012_alter_user_first_name_max_length... OK
Applying sessions.0001_initial... OK
Applying todo_app.0001_initial... OK
You're now set to launch the development server at this stage. Run the command below to start the application on port 8000:
python3 manage.py runserver
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
June 08, 2023 - 19:36:49
Django version 4.2.1, using settings 'django_project.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
Navigate to http://localhost:8000/ in your browser to access the To-Do app:
To confirm that everything works, click the Add Todo button at the top left corner of the homepage, then fill in the Title, Description, Due date, and Completion status.
Once you click the Save button, you should see the newly added item on the homepage:
At this point, you may quit the development server with `Ctrl-C`.
Step 2 — Setting up Gunicorn and Caddy
While Django's built-in server is great for development, production environments require more robust solutions. This section will guide you through setting up Gunicorn as your application server and Caddy as your web server.
Setting up Gunicorn
Gunicorn offers several advantages over Django's development server:
- Parallel request handling through multiple worker processes across CPU cores.
- Fine-grained configuration options.
- Enhanced logging capabilities.
- Improved performance and security optimizations.
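One of those configuration options is the worker count. The Gunicorn documentation suggests `(2 x CPU cores) + 1` as a reasonable starting point, which you can compute for your own machine:

```python
import multiprocessing

# Gunicorn's suggested starting point: (2 x cores) + 1 worker processes
workers = multiprocessing.cpu_count() * 2 + 1
print(f"gunicorn --workers {workers} --bind 127.0.0.1:8000 django_project.wsgi:application")
```

Treat this as a baseline to tune under real load, not a hard rule.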
Since Gunicorn is already included in `requirements.txt`, you can run it with:
gunicorn django_project.wsgi:application
This launches the application on port 8000 as before:
[2025-02-07 15:23:46 +0100] [704385] [INFO] Starting gunicorn 23.0.0
[2025-02-07 15:23:46 +0100] [704385] [INFO] Listening at: http://127.0.0.1:8000 (704385)
[2025-02-07 15:23:46 +0100] [704385] [INFO] Using worker: sync
[2025-02-07 15:23:46 +0100] [704405] [INFO] Booting worker with pid: 704405
However, when you visit http://localhost:8000 once again, you'll notice that the styles are not loading:
This is because while Django's `runserver` handles both dynamic content and static files in a single process, Gunicorn is designed to only process Python/WSGI requests.
It deliberately excludes static file handling because in production environments, these files are better served by specialized web servers like Nginx or Caddy, which are optimized for this purpose and can handle high volumes of static content requests more efficiently.
Handling static files with Caddy
Before you can serve static files with Caddy's file server, you need to set the `STATIC_ROOT` setting, which specifies where all of your application's static files are collected:
. . .
STATIC_URL = 'static/'
STATIC_ROOT = BASE_DIR / 'staticfiles'
. . .
Once you've set up `STATIC_ROOT`, the next step is running the `collectstatic` command, which collects static files from all your applications into a single directory, making them easy to serve in production:
python3 manage.py collectstatic
You should see the following output:
128 static files copied to '/home/user/django-todo-app/staticfiles'.
Once the static files have been copied over, you can proceed to set up Caddy through its Docker image by running the command below in a separate terminal (ensure port 80 isn't in use first):
docker run --rm --name django-todo-caddy -p 80:80 caddy:alpine
You'll see the following logs:
. . .
{"level":"info","ts":1738942780.2199092,"msg":"serving initial configuration"}
{"level":"info","ts":1738942780.2331204,"logger":"tls","msg":"cleaning storage unit","storage":
With the container running, visit http://localhost in your browser. You'll see:

Now return to your terminal, exit the Caddy container by pressing `Ctrl-C`, then create a `Caddyfile` at your project root:
code Caddyfile
Configure it as follows:
http://localhost {
# Serve static files from staticfiles directory
handle /static/* {
root * /srv/
file_server
}
# Proxy all other requests to Gunicorn
handle {
reverse_proxy http://localhost:8000
}
}
Then launch Caddy with Docker, mounting both your static files and Caddy configuration:
docker run \
--rm \
--name django-todo-caddy \
-v $(pwd)/staticfiles:/srv/static \
-v $(pwd)/Caddyfile:/etc/caddy/Caddyfile \
--network host \
caddy:alpine
This setup uses Caddy's file server for static content while proxying dynamic requests to Gunicorn. The `--network host` flag allows Caddy to communicate with Gunicorn running on your host machine.
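A detail worth noting: Caddy's `file_server` resolves files by appending the full request path to the configured site root, which is why mounting `staticfiles` at `/srv/static` lines up with `root * /srv/` and the `/static/*` matcher. Roughly:

```python
# Illustration of how Caddy's file_server maps a request URI to a file on disk:
# the full request path is appended to the configured root.
root = "/srv"
request_path = "/static/css/styles.css"  # hypothetical asset path
served_file = root + request_path
print(served_file)  # /srv/static/css/styles.css
```

So a request for `/static/css/styles.css` lands on the file Django collected into `staticfiles/css/styles.css`.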
Your Django application should now be fully accessible through Caddy with working styles and static files. Visit http://localhost in your browser to confirm this:
You may now quit the Caddy container, PostgreSQL container, and Gunicorn process by pressing `Ctrl-C` in their respective terminals.
In the next step, you'll create a Docker image for your Django application.
Step 3 — Writing a Dockerfile for your Django app
Now that you have your Django application running with Gunicorn and Caddy, the next step is to containerize your application. This involves creating a `Dockerfile` that will package your application and all its dependencies into a reproducible Docker image.
First, create an `entrypoint.prod.sh` script at your project root to handle database migrations, collect static files, and start Gunicorn:
#!/usr/bin/env bash
set -e  # exit immediately if any command fails

python manage.py migrate --noinput
python manage.py collectstatic --noinput
python -m gunicorn --bind 0.0.0.0:8000 --workers 3 django_project.wsgi:application
Next, create a `Dockerfile` in your project root with these instructions:
# Base image: Python 3.13 slim version for a minimal footprint
FROM python:3.13-slim
# Set working directory for all subsequent commands
WORKDIR /app
# Python environment variables:
# Prevents Python from writing .pyc files to disk
ENV PYTHONDONTWRITEBYTECODE=1
# Ensures Python output is sent straight to terminal without buffering
ENV PYTHONUNBUFFERED=1
# Upgrade pip to latest version
RUN pip install --upgrade pip
# Install system dependencies:
# libpq-dev: Required for psycopg2 (PostgreSQL adapter)
# gcc: Required for compiling some Python packages
RUN apt-get update \
&& apt-get -y install libpq-dev gcc
# Copy requirements file first to leverage Docker cache
COPY requirements.txt .
# Install Python dependencies without storing pip cache
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of application code to container
COPY . .
# Document that the container listens on port 8000
EXPOSE 8000
# Make the entrypoint script executable
RUN chmod +x /app/entrypoint.prod.sh
# Set the entrypoint script as the default command
# This will run migrations, collect static files, and start Gunicorn
CMD ["/app/entrypoint.prod.sh"]
Let's examine the key components of this `Dockerfile`:
- The `FROM` instruction selects a minimal Python base image that includes only the essential components needed to run Python applications.
- `WORKDIR /app` establishes the working directory inside the container where all subsequent commands will execute.
- The environment variables set by the `ENV` statements optimize Python's behavior in containers: `PYTHONDONTWRITEBYTECODE=1` prevents Python from creating `.pyc` files, while `PYTHONUNBUFFERED=1` ensures Python output is sent directly to the terminal without buffering.
- `RUN pip install --upgrade pip` ensures we have the latest version of pip for package installation.
- The system dependencies installation combines two commands to minimize layers. Here, `libpq-dev` is required for PostgreSQL support through `psycopg2`, while `gcc` is needed to compile certain Python packages.
- `COPY requirements.txt .` copies just the requirements file first, allowing Docker to cache the dependency installation layer.
- `RUN pip install --no-cache-dir -r requirements.txt` installs all Python dependencies without storing pip's cache.
- `COPY . .` copies all remaining application code into the container after dependencies are installed.
- `EXPOSE 8000` documents that the container listens on port 8000, though this is primarily for documentation purposes.
- Finally, `RUN chmod +x /app/entrypoint.prod.sh` makes the entrypoint script executable, and `CMD ["/app/entrypoint.prod.sh"]` sets it as the default command when the container starts.
With the `Dockerfile` complete, you're ready to build your Docker image in the next step.
Step 4 — Building the Docker image
With the `Dockerfile` created, you'll now build a Docker image for your Django application. First, you'll set up a `.dockerignore` file to exclude unnecessary or sensitive files, then build and verify the image.
Create the file:

code .dockerignore

Then populate it with the contents of this .dockerignore example for Python projects:
# See: https://gist.github.com/KernelA/04b4d7691f28e264f72e76cfd724d448
# Git
.git
.gitignore
.gitattributes
. . .
This file prevents sensitive data like environment variables from being included in your Docker image, similar to how `.gitignore` works for Git.
Now, go ahead and execute the command below to build the image:
docker build -t django-todo-app .
This command builds an image using the current directory as context and tags it as `django-todo-app`. The build process follows the instructions in your `Dockerfile`.
Once the image is built successfully, you should see the following output:
[+] Building 245.1s (11/11) FINISHED
. . .
You can verify that the image was created with:
docker image ls django-todo-app
You should see the newly built image in the output:
REPOSITORY TAG IMAGE ID CREATED SIZE
django-todo-app latest ec5473859174 24 minutes ago 502MB
Now that you have successfully built your Django application image, the next step will focus on launching your Django app alongside its Caddy and PostgreSQL dependencies through Docker Compose.
Step 5 — Deploying the Django application with Docker Compose
Docker Compose is a tool that simplifies the management of multi-container Docker applications. It allows you to orchestrate your Django application stack, including the web server, database, and other services.
Instead of manually creating and managing individual Docker containers using the `docker run` command, Compose lets you define and manage multi-container applications in a single YAML file. This saves time and provides a structured way to handle complex applications by specifying all the relevant services, configurations, and dependencies.
Now that you have your individual services configured, you'll use Docker Compose to orchestrate your entire application stack.
Create a `compose.yaml` file in your project root:
code compose.yaml
Then paste in the following contents:
services:
db:
image: postgres:bookworm
container_name: django-todo-db
environment:
- POSTGRES_DB=${DATABASE_NAME}
- POSTGRES_USER=${DATABASE_USER}
- POSTGRES_PASSWORD=${DATABASE_PASSWORD}
env_file:
- ./.env
ports:
- '5432:5432'
volumes:
- pg_data:/var/lib/postgresql/data
web:
build: .
container_name: django-todo-app
ports:
- '8000:8000'
environment:
- DJANGO_DEBUG=${DJANGO_DEBUG}
- DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
- DJANGO_ALLOWED_HOSTS=${DJANGO_ALLOWED_HOSTS}
- DJANGO_CSRF_TRUSTED_ORIGINS=${DJANGO_CSRF_TRUSTED_ORIGINS}
- DATABASE_ENGINE=${DATABASE_ENGINE}
- DATABASE_NAME=${DATABASE_NAME}
- DATABASE_USER=${DATABASE_USER}
- DATABASE_PASSWORD=${DATABASE_PASSWORD}
- DATABASE_HOST=${DATABASE_HOST}
- DATABASE_PORT=${DATABASE_PORT}
env_file:
- ./.env
depends_on:
- db
caddy:
image: caddy:alpine
container_name: django-todo-caddy
ports:
- 80:80
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
- ./staticfiles:/srv/static
depends_on:
- web
volumes:
pg_data:
This configuration:
- Sets up PostgreSQL with persistent storage
- Builds and runs your Django application with environment variables
- Configures Caddy to serve static files and proxy requests to Django
- Establishes proper service dependencies and networking
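One caveat: `depends_on` as written only controls start order; it doesn't wait for PostgreSQL to actually be ready to accept connections. If the migrations in the entrypoint ever race the database on first boot, you can add a health check. The sketch below uses the `pg_isready` utility that ships with the official postgres image:

```yaml
services:
  db:
    # ...existing db configuration...
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U ${DATABASE_USER} -d ${DATABASE_NAME}']
      interval: 5s
      timeout: 5s
      retries: 5

  web:
    # ...existing web configuration...
    depends_on:
      db:
        condition: service_healthy
```

With `condition: service_healthy`, Compose delays starting the `web` container until the health check passes.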
Before you launch the Docker Compose stack, modify your `Caddyfile` and `.env` as follows:
http://localhost {
# Serve static files from staticfiles directory
handle /static/* {
root * /srv/
file_server
}
# Proxy all other requests to Gunicorn
handle {
reverse_proxy http://django-todo-app:8000
}
}
DJANGO_DEBUG=True
DJANGO_SECRET_KEY=django-insecure-69k-#kmlre&rb4uhf2*d5foi+1ee)wsck_%9z*--wbit3_dk9e
DJANGO_ALLOWED_HOSTS=localhost
DJANGO_CSRF_TRUSTED_ORIGINS=http://localhost:8000
DATABASE_ENGINE=django.db.backends.postgresql
DATABASE_NAME=django_todo
DATABASE_USER=postgres
DATABASE_PASSWORD=admin
DATABASE_HOST=django-todo-db
DATABASE_PORT=5432
These changes ensure that the application components can communicate properly within the Docker network.
The next step is to launch the stack with:
docker compose up
This will bring the containers up, and you should see logs from each container in your terminal:
[+] Running 4/4
✔ Network django-todo-app_default Created 0.2s
✔ Container django-todo-db Created 0.1s
✔ Container django-todo-app Created 0.1s
✔ Container django-todo-caddy Created 0.1s
Attaching to django-todo-app, django-todo-caddy, django-todo-db
. . .
django-todo-db | 2025-02-10 11:12:20.747 UTC [29] LOG: redo done at 0/1912180 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
django-todo-db | 2025-02-10 11:12:20.759 UTC [27] LOG: checkpoint starting: end-of-recovery immediate wait
django-todo-db | 2025-02-10 11:12:20.843 UTC [27] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.026 s, total=0.096
s; sync files=2, longest=0.013 s, average=0.013 s; distance=0 kB, estimate=0 kB; lsn=0/19121B8, redo lsn=0/19121B8
django-todo-db | 2025-02-10 11:12:20.852 UTC [1] LOG: database system is ready to accept connections
django-todo-caddy | {"level":"info","ts":1739185941.3228838,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
. . .
django-todo-caddy | {"level":"info","ts":1739185941.344128,"msg":"serving initial configuration"}
django-todo-caddy | {"level":"info","ts":1739185941.3572319,"logger":"tls","msg":"cleaning storage unit","storage":"FileStorage:/data/caddy"}
django-todo-caddy | {"level":"info","ts":1739185941.3574784,"logger":"tls","msg":"finished cleaning storage units"}
django-todo-app | Operations to perform:
django-todo-app | Apply all migrations: admin, auth, contenttypes, sessions, todo_app
django-todo-app | Running migrations:
. . .
django-todo-app | 127 static files copied to '/app/staticfiles', 1 unmodified.
django-todo-app | [2025-02-10 11:12:23 +0000] [9] [INFO] Starting gunicorn 23.0.0
django-todo-app | [2025-02-10 11:12:23 +0000] [9] [INFO] Listening at: http://0.0.0.0:8000 (9)
django-todo-app | [2025-02-10 11:12:23 +0000] [9] [INFO] Using worker: sync
django-todo-app | [2025-02-10 11:12:23 +0000] [10] [INFO] Booting worker with pid: 10
django-todo-app | [2025-02-10 11:12:23 +0000] [11] [INFO] Booting worker with pid: 11
django-todo-app | [2025-02-10 11:12:23 +0000] [12] [INFO] Booting worker with pid: 12
django-todo-app | [2025-02-10 11:12:23 +0000] [13] [INFO] Booting worker with pid: 13
To confirm that everything works, visit http://localhost in your browser. You should be able to interact with the application in the same way as before:
Final thoughts
In this tutorial, you learned how to containerize a Django application using Docker and Docker Compose, enabling consistent deployments across environments and simplifying the development process.
To further enhance your Docker and Django skills, consider:
- Learning multi-stage builds and Docker networking
- Optimizing Docker images and build processes
- Implementing container logging and monitoring
- Following Docker security best practices
The complete code is available in our GitHub repository.
Happy coding!