Side note: Monitor your Django application in production
Head over to Better Stack and start monitoring your Django endpoints in minutes. Get instant alerts when your application goes down.
Django excels at building web applications quickly, but deploying them consistently across different environments can be challenging.
Docker solves this by packaging your Django application and its dependencies into portable containers.
This tutorial shows you how to containerize a Django application using Docker. You'll learn how to:

- Run a Django application locally against a PostgreSQL container.
- Serve the application in production mode with Gunicorn and Caddy.
- Write a Dockerfile and build an image for the application.
- Orchestrate the full stack with Docker Compose.
By Dockerizing your Django application, you'll get consistent deployments, isolated environments, and easier scaling capabilities.
To demonstrate the steps, we'll use a simple todo application with PostgreSQL as our example, walking through each step of the containerization process.
Let's get started!
To follow the tutorial smoothly, ensure that you have:

- Docker installed and running on your machine.
- Python 3 and Git installed.
- Basic familiarity with Django.
In this section, you'll set up a simple Django To-Do application on your machine and run it locally to ensure it works before proceeding to Dockerize it.
Start by forking the demo project to your GitHub account. Then, clone the repository to your computer:
Navigate into the project directory and check its structure:
The project structure should appear as shown below:
Here's a brief explanation of what each entry comprises:
- django_project: Main Django project directory containing core configurations.
- todo_app: Directory with the To-Do application's files.
- venv: Python virtual environment for dependency management.
- manage.py: Django's CLI tool for running commands.
- requirements.txt: A list of required Python packages to run the project.

Before running the project, you'll need a PostgreSQL database where the todo items will be stored. Use Docker to start a PostgreSQL container based on the official postgres image:
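A sketch of that command, assuming the official postgres image and the container details described below (publishing port 5432 so the app can reach it from the host):

```bash
docker run --name django-todo-db \
  -e POSTGRES_DB=django_todo \
  -e POSTGRES_PASSWORD=admin \
  -p 5432:5432 \
  postgres
```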
This command:

- Creates a new container named django-todo-db.
- Sets the database name to django_todo and the password to admin.

Once the container is running, you'll see the following output confirming that the database is ready to accept connections:
The next step is to create a .env file at the root of your project:
Then add the following contents to it along with your PostgreSQL credentials:
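A sketch of those contents; the variable names here are assumptions and must match whatever names your settings.py reads:

```
POSTGRES_DB=django_todo
POSTGRES_USER=postgres
POSTGRES_PASSWORD=admin
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
```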
This assumes that you're using the default postgres user, and that your
password is admin as configured when running the PostgreSQL container.
These variables are used in your settings.py file as follows:
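A sketch of how settings.py might read these values into the database configuration (the exact variable names are assumptions that should match your .env):

```python
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        # Fall back to the values used when starting the PostgreSQL container
        "NAME": os.environ.get("POSTGRES_DB", "django_todo"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", "admin"),
        "HOST": os.environ.get("POSTGRES_HOST", "localhost"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
```

Reading credentials from the environment keeps secrets out of version control and lets the same settings file work locally and inside a container.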
Now, open a new terminal, navigate into the project root, and activate the Python virtual environment with the following command:
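On Linux or macOS, the activation command is typically:

```bash
source venv/bin/activate
```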
You should see (venv) in your terminal prompt:
You may now install the dependencies for the project by executing the following command:
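With the virtual environment active, install everything listed in requirements.txt:

```bash
pip install -r requirements.txt
```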
If you encounter issues with psycopg2, install the necessary system
dependencies:
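On Debian or Ubuntu, the packages psycopg2 typically needs are the PostgreSQL client headers and a C compiler:

```bash
sudo apt-get update
sudo apt-get install -y libpq-dev python3-dev gcc
```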
After installing the project dependencies, run the database migrations to set up the schema:
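The migrations are applied with Django's standard management command:

```bash
python manage.py migrate
```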
You're now set to launch the development server at this stage. Run the command below to start the application on port 8000:
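The development server listens on port 8000 by default:

```bash
python manage.py runserver
```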
Navigate to http://localhost:8000/ in your browser to access the To-Do app:
To confirm that everything works, click the Add Todo button at the top left corner of the homepage. Fill in the Title, Description, Due date, and Completion status.
Once you click the Save button, you should see the newly added item on the homepage:
At this point, you may quit the development server with Ctrl-C.
While Django's built-in server is great for development, production environments require more robust solutions. This section will guide you through setting up Gunicorn as your application server and Caddy as your web server.
Gunicorn offers several advantages over Django's development server:

- It runs multiple worker processes, allowing it to handle concurrent requests.
- It is a production-grade WSGI server designed for stability under real traffic.
- It exposes tunable settings for workers, timeouts, and logging.
Since Gunicorn is already included in requirements.txt, you can run it with:
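Assuming the django_project package shown in the project structure, the command looks like:

```bash
gunicorn django_project.wsgi:application --bind 0.0.0.0:8000
```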
This launches the application on port 8000 as before:
However, when you visit http://localhost:8000 once again, you'll notice that
the styles are not loading:
This is because while Django's runserver handles both dynamic content and
static files in a single process, Gunicorn is designed to only process
Python/WSGI requests.
It deliberately excludes static file handling because in production environments, these files are better served by specialized web servers like Nginx or Caddy, which are optimized for this purpose and can handle high volumes of static content requests more efficiently.
Before you can serve static files with a file server, you need to set the STATIC_ROOT setting in settings.py, which specifies the directory where all the static files for your application will be collected:
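A sketch of that setting; the staticfiles directory name is an assumption, and BASE_DIR is the standard Django settings boilerplate:

```python
from pathlib import Path

# Standard Django settings boilerplate: BASE_DIR points at the project root.
BASE_DIR = Path(__file__).resolve().parent.parent

# Directory where collectstatic will gather all static files.
STATIC_ROOT = BASE_DIR / "staticfiles"
```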
Once you've set up STATIC_ROOT, the next step is running the collectstatic
command which collects static files from all your applications into a single
directory, making them easy to serve in production:
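The command itself is straightforward:

```bash
python manage.py collectstatic
```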
You should see the following output:
Once the static files have been copied over, you can proceed to set up Caddy through its Docker image by running the command below in a separate terminal (ensure port 80 isn't in use first):
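A minimal invocation that publishes port 80 and removes the container when it exits:

```bash
docker run --rm -p 80:80 caddy
```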
You'll see the following logs:
With the container running, visit http://localhost in your browser. You'll
see:
Now return to your terminal, and exit the Caddy container by pressing Ctrl-C,
then create a Caddyfile at your project root:
Configure it as follows:
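A sketch of a suitable configuration; the /static/ URL prefix and the /srv/staticfiles path are assumptions that must match your Django STATIC_URL setting and the mount path used when starting the container:

```
http://localhost {
	# Serve collected static files directly, stripping the /static prefix
	handle_path /static/* {
		root * /srv/staticfiles
		file_server
	}

	# Forward everything else to Gunicorn
	reverse_proxy localhost:8000
}
```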
Then launch Caddy with Docker, mounting both your static files and Caddy configuration:
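A sketch of that command, assuming the Caddyfile and a staticfiles directory at your project root (the /srv/staticfiles mount path must match the root directive in your Caddyfile):

```bash
docker run --rm --network host \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" \
  -v "$PWD/staticfiles:/srv/staticfiles" \
  caddy
```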
This setup uses Caddy's file server for static content while proxying dynamic
requests to Gunicorn. The --network host flag allows Caddy to communicate with
Gunicorn running on your host machine.
Your Django application should now be fully accessible through Caddy with
working styles and static files. Visit http://localhost in your browser to
confirm this:
You may now quit the Caddy container, PostgreSQL container, and Gunicorn process by pressing Ctrl-C in their respective terminals.
In the next step, you'll create a Docker image for your Django application.
Now that you have your Django application running with Gunicorn and Caddy, the
next step is to containerize your application. This involves creating a
Dockerfile that will package your application and all its dependencies into a
reproducible Docker image.
First, create an entrypoint.prod.sh script at your project root to handle
database migrations, static files, and start Gunicorn:
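A sketch of what that script might contain; the Gunicorn module path assumes the django_project layout from earlier:

```sh
#!/bin/sh
set -e

# Apply database migrations
python manage.py migrate --noinput

# Collect static files into STATIC_ROOT
python manage.py collectstatic --noinput

# Hand off to Gunicorn as the main container process
exec gunicorn django_project.wsgi:application --bind 0.0.0.0:8000
```

Using exec makes Gunicorn PID 1 inside the container so it receives stop signals directly.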
Next, create a Dockerfile in your project root with these instructions:
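Reconstructed from the walkthrough that follows, the Dockerfile might look like this (the exact base image tag is an assumption):

```dockerfile
FROM python:3.12-slim

WORKDIR /app

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

RUN pip install --upgrade pip

RUN apt-get update && apt-get install -y libpq-dev gcc

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

RUN chmod +x /app/entrypoint.prod.sh

CMD ["/app/entrypoint.prod.sh"]
```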
Let's examine the key components of this Dockerfile:
The FROM instruction selects a minimal Python base image that includes only
essential components needed to run Python applications.
WORKDIR /app establishes the working directory inside the container where
all subsequent commands will execute.
The environment variables set by ENV statements optimize Python's behavior
in containers:
- PYTHONDONTWRITEBYTECODE=1 prevents Python from creating .pyc files.
- PYTHONUNBUFFERED=1 ensures Python output is sent directly to the terminal without buffering.

RUN pip install --upgrade pip ensures we have the latest version of pip for package installation.
The system dependencies installation combines two commands to minimize layers.
Here, libpq-dev is required for PostgreSQL support through psycopg2, while
gcc is needed to compile certain Python packages.
COPY requirements.txt . copies just the requirements file first, allowing
Docker to cache the dependency installation layer.
RUN pip install --no-cache-dir -r requirements.txt installs all Python
dependencies without storing pip's cache.
COPY . . copies all remaining application code into the container after
dependencies are installed.
EXPOSE 8000 documents that the container listens on port 8000; it is informational only and doesn't publish the port by itself.
Finally, RUN chmod +x /app/entrypoint.prod.sh makes the entrypoint script
executable, and CMD ["/app/entrypoint.prod.sh"] sets it as the default
command when the container starts.
With the Dockerfile complete, you're ready to build your Docker image in the
next step.
With the Dockerfile created, you'll now build a Docker image for your Django
application. First, you'll set up a .dockerignore file to exclude unnecessary
or sensitive files, then build and verify the image.
You can use the contents from this .dockerignore example for Python projects:
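A minimal version, assuming the project layout above:

```
.env
venv/
__pycache__/
*.pyc
*.sqlite3
.git/
```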
This file prevents sensitive data like environment variables from being included
in your Docker image, similar to how .gitignore works for Git.
Now, go ahead and execute the command below to build the image:
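The build command tags the image as django-todo-app:

```bash
docker build -t django-todo-app .
```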
This command builds an image using the current directory as context and tags it
as django-todo-app. The build process follows the instructions in your
Dockerfile.
Once the image is built successfully, you should see the following output:
You can verify that the image was created with:
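Filtering by the tag keeps the listing short:

```bash
docker images django-todo-app
```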
You should see the newly built image in the output:
Now that you have successfully built your Django application image, the next step will focus on launching your Django app alongside its Caddy and PostgreSQL dependencies through Docker Compose.
Docker Compose is a tool that simplifies the management of multi-container Docker applications. It allows you to orchestrate your Django application stack, including the web server, database, and other services.
Instead of manually creating and managing individual Docker containers using the
docker run command, Compose lets you define and manage multi-container
applications in a single YAML file. This saves time and provides a structured
way to handle complex applications by specifying all the relevant services,
configurations, and dependencies.
Now that you have your individual services configured, you'll use Docker Compose to orchestrate your entire application stack.
Create a compose.yaml file in your project root:
Then paste in the following contents:
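A sketch of what that file might contain; the service names, volume names, and credentials here are assumptions that mirror the setup from earlier sections:

```yaml
services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: django_todo
      POSTGRES_PASSWORD: admin
    volumes:
      - postgres_data:/var/lib/postgresql/data

  web:
    build: .
    env_file: .env
    volumes:
      - static_volume:/app/staticfiles
    depends_on:
      - db

  caddy:
    image: caddy
    ports:
      - "80:80"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - static_volume:/srv/staticfiles
    depends_on:
      - web

volumes:
  postgres_data:
  static_volume:
```

The shared static_volume lets Caddy serve the files that the entrypoint script collects inside the web container.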
This configuration defines three services: the PostgreSQL database, your Django application built from the Dockerfile, and the Caddy web server, all connected on a shared Docker network.
Before you launch the Docker Compose stack, modify your Caddyfile and .env as follows:
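Assuming the Compose services are named web and db, the key changes are small. In the Caddyfile, proxy dynamic requests to the web service instead of localhost:

```
reverse_proxy web:8000
```

And in .env, point the database host at the db service:

```
POSTGRES_HOST=db
```

Inside a Compose network, service names resolve as hostnames, so containers address each other by name rather than by localhost.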
These changes ensure that the application components can communicate properly within the Docker network.
The next step is to launch the stack with:
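From the project root, a single command starts everything:

```bash
docker compose up --build
```

The --build flag is optional if the image is already up to date.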
This brings the containers up, and you should see logs from each container in your terminal:
To confirm that everything works, visit http://localhost in your browser. You
should be able to interact with the application in the same way as before:
In this tutorial, you learned how to containerize a Django application using Docker and Docker Compose, enabling consistent deployments across environments and simplifying the development process.
To further enhance your Docker and Django skills, consider:
The complete code is available in our GitHub repository.
Happy coding!