Deploying a Django application with PostgreSQL and Docker

You have one or more Django applications and you're getting lost in Nginx configurations? You plan to fully dockerise your application? Or you're new to this world and trying to understand what the hell I'm talking about? This article will try to answer all these questions.

What is Docker

Docker is a free, open-source containerisation platform. Put more simply: with Docker you can run each process of your server in a completely isolated environment, which avoids any conflict between your processes. But Docker is way more than that. With Docker you can deploy, redeploy, scale and distribute your applications very easily.

What is the plan

The plan of this article is to build a full Django website inside containers, to be able to deploy and manage it easily. Yes, I said containerS, plural: with Docker we will need one container per process. One for your Gunicorn server, one for your database, one for your proxy, one for your static server. So the plan is to mount a full architecture to access your Django application from the web. We will use Traefik as a Docker proxy, Nginx as a static server and web proxy, Gunicorn as an application server and PostgreSQL as a database.

What do we need

To realise this, you will need Docker and docker-compose installed, an existing Django project, and, for the second part of this article, a server with a domain name pointing to it.

Let's go

Make a Dockerfile

The first thing we will do is create a Dockerfile for our application. This file tells Docker how to build the image our container will be based on.

Why do I need a custom image? Can't I use a Django image? The thing is: there is no official Django Docker image. Every project is different and doesn't need the same libraries, so a Django image would just make it more complicated to install everything we need.

Here is a basic Dockerfile for my Django application:

FROM python:3.8.3-alpine

WORKDIR /app

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

RUN apk update && apk add postgresql-dev gcc python3-dev 
# RUN apk add musl-dev freetype libpng libjpeg-turbo freetype-dev libpng-dev libjpeg-turbo-dev
RUN pip install --upgrade pip
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt

EXPOSE 8000

CMD ["gunicorn", "--bind", ":8000", "--workers", "3", "mainapp.wsgi:application"]

Let's explain a bit what we have here.

FROM python:3.8.3-alpine

This line says that our image is based on the official Python 3.8.3 image, itself based on Alpine Linux, a very light Linux distribution; that will keep our container very small.

WORKDIR /app

This line sets /app as the working directory: all the following commands will be executed in the folder /app inside the container.

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

These lines set environment variables in the container. They are read by Python itself: PYTHONDONTWRITEBYTECODE stops Python from writing .pyc files, and PYTHONUNBUFFERED sends its output straight to the terminal without buffering.

RUN apk update && apk add postgresql-dev gcc python3-dev

This line installs the system libraries that Python and Django will need to work properly, notably to build the psycopg2 PostgreSQL driver.

# RUN apk add musl-dev freetype libpng libjpeg-turbo freetype-dev libpng-dev libjpeg-turbo-dev

This line is commented out. If you use Pillow in your project, uncomment it: you will need these libraries.

RUN pip install --upgrade pip
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt

These lines upgrade pip, copy your requirements.txt file into the container, then install all the requirements. Don't forget that you'll need gunicorn in your requirements.
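For reference, a minimal requirements.txt matching this setup could look like the lines below; the version pins are illustrative assumptions, use your own. Note that psycopg2 is built from source, which is exactly why the Dockerfile installs postgresql-dev and gcc:

Django==3.0.7
gunicorn==20.0.4
psycopg2==2.8.5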

EXPOSE 8000

CMD ["gunicorn", "--bind", ":8000", "--workers", "3", "mainapp.wsgi:application"]

Finally we open port 8000 on the container, and when the container is mounted, we run the command that starts Gunicorn. Note that you will have to change the last line and replace mainapp with the name of the main package of your app (the package containing the wsgi.py module).

Save this Dockerfile at the root of your app (next to manage.py).
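To make the paths concrete, here is the layout this article assumes (a standard django-admin startproject mainapp project; your names may differ):

Dockerfile
manage.py
requirements.txt
mainapp/
    __init__.py
    settings.py
    urls.py
    wsgi.py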

Build the image

Now if you open a terminal in your root folder and type

docker build .

You should see the build steps running and finishing successfully (your screen might be slightly different).
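You can also give the image a name with the -t flag, which makes it easier to refer to later (myapp_web is an arbitrary tag of my own):

docker build -t myapp_web .

In practice docker-compose will build and name the image for us in the next step, so this is mostly useful for quick manual tests.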

Use docker compose

Now that we know we can build the image for the application, we still have to create the other containers. Good news: you don't need to create images for them, official images for these services already exist. But what we want is to mount all of them at the same time and be sure they can connect to each other. For this, the best solution is to create a docker-compose file. docker-compose is here to orchestrate all our containers: it connects them together, gives them some configuration and exposes them on the host. Remember, we said we will need Nginx and Postgres for our application. docker-compose will create them for us.

Let's see what we will put in this file:

version: '3'

networks:
  intern:
    external: false

services:
  web:
    build: .
    container_name: myapp_web
    env_file:
      - ./.env
    volumes:
      - .:/app
    depends_on:
      - db
    networks:
      - intern

OK, that's a lot of information, but in fact it's not complicated at all:

  • We use version 3 of docker-compose
  • We create an internal network (shared only between the containers of this docker-compose file) named intern
  • We describe the services we have (for now only one)
  • The first service, named web, will use an image built from the Dockerfile (the Dockerfile has to be at the same level as the docker-compose file)
  • We give a name to this container
  • We ask docker-compose to load the file .env (creation of the environment variables)
  • We map a volume (meaning that the folder /app of the container is the same as the folder . of the host; it's useful to be able to change the code without entering the container)
  • We specify that the service web depends on the service db (we will create it later); this ensures that the service db is started before the service web
  • Finally we connect the service to the intern network (to be able to communicate with the database)
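Before going further, you can ask docker-compose to validate the syntax and print the resolved file; it's a cheap way to catch indentation mistakes:

docker-compose config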

But the impatient ones who already tried to launch it will have realized that... it's not working at all. Of course it's not: Django will refuse to run without the database.

Let's add this service:

[...]
  db:
    image: "postgres:latest"
    container_name: myapp_db
    environment:
      POSTGRES_PASSWORD: mypassword
    networks:
      - intern
    volumes:
      - pgdb:/var/lib/postgresql/data
[...]
volumes:
   pgdb:

Hey, there is something new here! Yes indeed, three things. image: we don't need to build a custom image here, we want to use the official postgres image. environment: this is quite clear; when the container is mounted, these environment variables are added to it. And volumes: this is a bit more complicated. There are three ways to save data with Docker:

  • Inside the container (but every time the container is updated, we lose the data)
  • With a bind-mounted folder, like we did for the web service. This maps a host folder to a container folder so that both can edit and read it, and even if we destroy the container, the data is safe. We use that when we need to be able to edit the files outside of the container
  • With a Docker volume. This creates a persistent folder that will not be erased if the container is destroyed (we call that persistence), but we cannot access it from outside Docker. That's the best way to persist data you don't need to access from outside the containers

In this case we created a volume named pgdb at the end of our file, and in the db service we map this volume onto the data folder of the database container. That way our data is safe.
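Once the stack has been started at least once, you can check the volume from the host; note that docker-compose prefixes the volume name with the project name, which by default is the folder name (myproject here is a placeholder):

docker volume ls
docker volume inspect myproject_pgdb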

Now it's time to add Nginx to all of that, to serve our static and media files and expose our app on port 80:

[...]
  nginx:
    image: nginx:1.19-alpine
    container_name: myapp_nginx
    volumes:
      - ./<collectstatic folder>:/home/app/web/staticfiles
      - ./<medias folder>:/home/app/web/mediafiles
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - web
    networks:
      - intern
    ports:
      - 80:80
[...]

Not a lot of new things here. We mapped some folders into the Nginx container so it can deliver the static and media files, and we mapped the Nginx configuration file so we can edit it from the host. We also added the ports section: we tell Docker to listen on port 80 of our host and redirect everything to port 80 of our container.

Wait what? Which Nginx configuration file?

The one we will write right now, of course. Just add a file named nginx.conf at the root of the project, containing:

server {
    listen 80;
    client_max_body_size 5G;
    location / {
        proxy_pass http://web:8000;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /home/app/web/staticfiles/;
    }

    location /media/ {
        alias /home/app/web/mediafiles/;
    }
}

If you don't understand this, refer to the Nginx documentation. But long story short: we redirect the requests to our web container on port 8000 (Gunicorn), and redirect all the static and media requests to the right folders, bypassing Gunicorn. Note that web in proxy_pass is simply the name of our compose service: Docker's internal DNS resolves it to the container's address on the intern network.
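One thing this config assumes on the Django side: collectstatic must gather the files into the folder you mount into the Nginx container. Here is a minimal sketch of the matching settings, assuming your collected-static and media folders are named staticfiles and mediafiles next to manage.py:

# settings.py
import os

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')  # the <collectstatic folder> mounted into Nginx

MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'mediafiles')    # the <medias folder> mounted into Nginx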

Now your docker compose file should look like this:

version: '3'

networks:
  intern:
    external: false

services:
  web:
    build: .
    container_name: myapp_web
    env_file:
      - ./.env
    volumes:
      - .:/app
    depends_on:
      - db
    networks:
      - intern

  db:
    image: "postgres:latest"
    container_name: myapp_db
    environment:
      POSTGRES_PASSWORD: mypassword
    networks:
      - intern
    volumes:
      - pgdb:/var/lib/postgresql/data

  nginx:
    image: nginx:1.19-alpine
    container_name: myapp_nginx
    volumes:
      - ./<collectstatic folder>:/home/app/web/staticfiles
      - ./<medias folder>:/home/app/web/mediafiles
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - web
    networks:
      - intern
    ports:
      - 80:80

volumes:
   pgdb:

Last thing: you have to change your settings to match this configuration, for example:

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'postgres',        # default database of the postgres image
            'USER': 'postgres',        # default user of the postgres image
            'PASSWORD': 'mypassword',  # must match POSTGRES_PASSWORD in the compose file
            'HOST': 'db',              # the name of our database service
            'PORT': 5432,
            'ATOMIC_REQUESTS': True,
        }
    }

Or, better, read these values from the environment variables defined in your .env file.
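For example, here is a minimal sketch of env-driven settings; the SQL_* variable names are my own convention, adapt them to whatever you put in your .env. Just make sure SQL_PASSWORD matches the POSTGRES_PASSWORD of the db service:

# .env
SQL_NAME=postgres
SQL_USER=postgres
SQL_PASSWORD=mypassword
SQL_HOST=db
SQL_PORT=5432

# settings.py
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('SQL_NAME', 'postgres'),
        'USER': os.environ.get('SQL_USER', 'postgres'),
        'PASSWORD': os.environ.get('SQL_PASSWORD', ''),
        'HOST': os.environ.get('SQL_HOST', 'db'),
        'PORT': os.environ.get('SQL_PORT', '5432'),
        'ATOMIC_REQUESTS': True,
    }
}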

Run it!

Time to run your architecture. In your terminal, in your root folder type:

docker-compose up -d

All the images will be built or downloaded, everything will mount, everything will run, and you just have to go to localhost:80 or your_ip_or_domain:80 to see that... it's not working.

What? Are you kidding?

Technically your architecture is working well; the problem is that you ran neither your migrations nor your collectstatic.

To do it, after running your docker-compose:

docker-compose exec -T web python manage.py migrate --noinput

and

docker-compose exec -T web python manage.py collectstatic --noinput

And now it should work.
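The same exec pattern works for any management command. For example, you will probably want an admin account; run this one without -T, since it prompts for input:

docker-compose exec web python manage.py createsuperuser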

I want to deploy a second app

Now that your application is up and running on your server, you want to deploy a second one. Problem: port 80 is already taken by the first app. At this point you have two possibilities:

  • Change the exposed port of Nginx for your second app (which will force you to access your app on my-app.com:8080 for example, which is not nice)
  • Use a Docker traffic proxy (the better solution)

Of course, we will describe the better solution here.

Use Traefik

Traefik is a reverse proxy designed for Docker. It's not a webserver: Traefik will not deliver the pages of your application, it will only route the requests to the right container on the right port. And that's what we want. Of course, Traefik itself can run in a container. To make it easier, we will use a docker-compose file to deploy it. First stop the containers of your first app by executing, in the application folder:

docker-compose down

Let's organise our server. We will create a user in charge of running our dockerised applications:

sudo useradd -m docker_runner
sudo usermod -a -G docker docker_runner

We add the user to the docker group so it is able to run the docker commands.
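A quick sanity check that the group change took effect: running a Docker command as this user should print a (possibly empty) container list instead of a permission error:

sudo -u docker_runner docker ps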

Then we will create a folder for each docker compose:

sudo mkdir /home/docker_runner/app1 /home/docker_runner/app2 /home/docker_runner/traefik
sudo chown -R docker_runner:docker_runner /home/docker_runner/*

And let's connect as this user:

sudo su - docker_runner

I will pull my first app into its folder, and do the same for my second app. Now it's time to run Traefik. Before creating the Traefik container we will create a few configuration files. First let's create a traefik.toml file:

[entryPoints]
  [entryPoints.web]
    address = ":80"
    [entryPoints.web.http.redirections.entryPoint]
      to = "websecure"
      scheme = "https"

  [entryPoints.websecure]
    address = ":443"

[api]
  dashboard = true

[certificatesResolvers.lets-encrypt.acme]
  email = "my@mail.com"
  storage = "acme.json"
  [certificatesResolvers.lets-encrypt.acme.tlsChallenge]

[providers.docker]
  watch = true
  network = "extern"

[providers.file]
  filename = "traefik_dynamic.toml"

This file is the general configuration of Traefik. I created two entry points: web and websecure. web listens on port 80 and websecure on port 443. I added a redirection to the web entrypoint, so that any request coming in on port 80 is redirected to port 443 in HTTPS.

[entryPoints]
  [entryPoints.web]
    address = ":80"
    [entryPoints.web.http.redirections.entryPoint]
      to = "websecure"
      scheme = "https"

  [entryPoints.websecure]
    address = ":443"

I also enabled the dashboard in the api section. Traefik provides a good dashboard to easily check what is happening.

[api]
  dashboard = true

In the certificates section I provide my email address, the resolver for the SSL certificates (I chose Let's Encrypt) and where to store the keys (a JSON file named acme.json).

[certificatesResolvers.lets-encrypt.acme]
  email = "my@mail.com"
  storage = "acme.json"
  [certificatesResolvers.lets-encrypt.acme.tlsChallenge]

Finally, in the providers section, I ask Traefik to watch the Docker network called "extern". That way, any container added to this network will be caught by Traefik and configured in the proxy. I also specify that the dynamic configuration is in a file named traefik_dynamic.toml.

[providers.docker]
  watch = true
  network = "extern"

[providers.file]
  filename = "traefik_dynamic.toml"

We will now create this dynamic configuration in the same place, in a file named traefik_dynamic.toml. In this file we need to provide a hashed password for access to the dashboard. Let's generate this password:

sudo apt-get install apache2-utils
htpasswd -nb your_username your_password

Copy the output line; we will need it in the traefik_dynamic.toml file. Let's write it:

[http.middlewares.simpleAuth.basicAuth]
  users = [
    "paste_your_encrypted_password_line_here"
  ]

[http.routers.api]
  rule = "Host(`your.subdomain.domain.com`)"
  entrypoints = ["websecure"]
  middlewares = ["simpleAuth"]
  service = "api@internal"
  [http.routers.api.tls]
    certResolver = "lets-encrypt"

This file is quite easy to understand. We provide a username and hashed password for access to the Traefik dashboard, then we create a router matching the URL we want for our dashboard, force the HTTPS entrypoint, protect the route with the simpleAuth middleware and specify the Let's Encrypt resolver.

All the config files are done; it's time to create our docker-compose file:

version: "3"

services:
  web:
    image: traefik:v2.2
    container_name: Traefik
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD/traefik.toml:/traefik.toml
      - $PWD/traefik_dynamic.toml:/traefik_dynamic.toml
      - $PWD/acme.json:/acme.json
      - $PWD/certs:/certs
networks:
  default:
    external:
      name: extern

Nothing really new in this compose file, except maybe $PWD, which is basically replaced by the current folder at execution time; it's the same as ./. We map a few files, including the Docker socket, so that Traefik can see new containers as they are mounted. We open ports 80 and 443 to receive the requests and... we have a new network called extern. We already talked about this network earlier.
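One detail this compose file assumes, and which is easy to miss: acme.json must already exist as a file with restrictive permissions before the container starts. Otherwise Docker creates it as a directory, and Traefik refuses to store certificates in a file that is not chmod 600. In the traefik folder:

touch acme.json
chmod 600 acme.json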

We can now run our docker-compose file, but since extern is an external network, we need to create it by hand first:

docker network create extern
docker-compose up -d

You will not have to recreate the network next time.

Now if you access the page your.subdomain.domain.com you should see the Traefik dashboard.

Excellent, Traefik is ready to work. Now we have to link our apps to Traefik. It's quite easy to do. Reopen the docker-compose file of our first app:

version: '3'

networks:
  intern:
    external: false

services:
  web:
    build: .
    container_name: myapp_web
    env_file:
      - ./.env
    volumes:
      - .:/app
    depends_on:
      - db
    networks:
      - intern

  db:
    image: "postgres:latest"
    container_name: myapp_db
    environment:
      POSTGRES_PASSWORD: mypassword
    networks:
      - intern
    volumes:
      - pgdb:/var/lib/postgresql/data

  nginx:
    image: nginx:1.19-alpine
    container_name: myapp_nginx
    volumes:
      - ./<collectstatic folder>:/home/app/web/staticfiles
      - ./<medias folder>:/home/app/web/mediafiles
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - web
    networks:
      - intern
    ports:
      - 80:80

volumes:
   pgdb:

In this docker-compose file, the only container we want exposed to Traefik is the Nginx one; db and web can stay internal, since it's our Nginx that will answer the requests.

So we will add the network extern (remember, the one Traefik is listening on) to the file:

[...]
networks:
  intern:
    external: false
  extern:
    external: true
[...]

We add external: true because this network is not created by this docker-compose file: it already exists outside of it and is shared with Traefik.

And we will connect the nginx service to this network:

[...]
nginx:
    image: nginx:1.19-alpine
    container_name: myapp_nginx
    volumes:
      - ./<collectstatic folder>:/home/app/web/staticfiles
      - ./<medias folder>:/home/app/web/mediafiles
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - web
    networks:
      - intern
      - extern
    ports:
      - 80:80
[...]

And finally we will give Traefik some more information so it understands what to do:

[...]
nginx:
    image: nginx:1.19-alpine
    container_name: myapp_nginx
    volumes:
      - ./<collectstatic folder>:/home/app/web/staticfiles
      - ./<medias folder>:/home/app/web/mediafiles
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - web
    networks:
      - intern
      - extern
    labels:
      - traefik.http.routers.nameyourrouter.rule=Host(`app1.mydomain.com`)
      - traefik.http.routers.nameyourrouter.tls=true
      - traefik.http.routers.nameyourrouter.tls.certresolver=lets-encrypt
      - traefik.http.services.nameyourrouter.loadbalancer.server.port=80
[...]

OK, this one needs more explanation:

  • We removed the ports section because Nginx will no longer be mapped to a host port; Traefik will take care of that
  • Traefik uses labels for its auto-configuration. We added four labels: three on a router and one on its service. Traefik uses routers to route the traffic (logical, right?); each app has a router, described in these labels. My router is named "nameyourrouter" and will be created when Traefik sees this container. You can of course name it as you want. So, as I said, we gave Traefik four labels:
    traefik.http.routers.nameyourrouter.rule=Host(`app1.mydomain.com`)
    
    This label tells Traefik which URL has to be routed to this application. It can be a domain, a subdomain or an IP, but it cannot be the same as any other container's.
    traefik.http.routers.nameyourrouter.tls=true
    
    This label tells Traefik that we want to use HTTPS and not plain HTTP.
    traefik.http.routers.nameyourrouter.tls.certresolver=lets-encrypt
    
    This label tells Traefik to use Let's Encrypt for our certificates.
    traefik.http.services.nameyourrouter.loadbalancer.server.port=80
    
    This one tells Traefik which port of the container has to receive the requests (our Nginx listens on port 80). Note that this is the Traefik v2 syntax; the traefik.port label you may see elsewhere is v1 syntax and is ignored by v2.

And that's it.

Run them all

Let's try our configuration. First start Traefik (as the docker_runner user):

sudo su - docker_runner
cd /home/docker_runner/traefik
docker-compose up -d

Then let's start our app1:

cd /home/docker_runner/app1
docker-compose up -d

Wait a few seconds for them to start, and you should see a new router on the Traefik dashboard; if you try to access app1.mydomain.com, it will redirect you in HTTPS to your application. The certificates have been generated automatically by Traefik.
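If you prefer the command line to the dashboard, a quick check (assuming the DNS record for app1.mydomain.com points to your server):

# should answer with a 3xx redirect to HTTPS
curl -I http://app1.mydomain.com
# should answer through Traefik with a valid Let's Encrypt certificate
curl -I https://app1.mydomain.com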