Testing your Django apps live inside the Docker container


Docker changed the way we think about environments. Suddenly, with containers, maintaining separate spaces for development, staging and production seems like a pain. I prefer to set up my containers once for the project and then keep making changes to the code. Updates are pushed to GitHub, and deployment thereafter is just a matter of running a couple of commands on the target machine.

Docker is the world’s leading software container platform. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster, more securely and with confidence for both Linux and Windows Server apps.

Schematic of the development setup using Docker container

The MongoDB instance running inside the Docker container actually accesses the physical data from the store on the cloud machine. Testing a web server (a Django application running on Gunicorn) in isolation here is not as straightforward as it would be if the application were running on a plain machine.
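A minimal sketch of such a setup as a docker-compose.yml — the service names, project module and paths here are illustrative assumptions, not taken from the original post:

```yaml
# Hypothetical docker-compose.yml for a Django + Mongo setup.
version: "3"
services:
  web:
    build: .
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - .:/code
    depends_on:
      - mongo
  mongo:
    image: mongo
    volumes:
      - /data/db:/data/db   # physical data store lives on the cloud machine
```

With a host-mounted volume like this, the Mongo container reads and writes data files that live on the cloud machine itself, which is exactly the situation described above.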

Let's have a look at what would happen if we did not have the Docker container at all.

$ ./manage.py runserver

Without any container running, you could have started the Django test server on the cloud machine with this simple command, run from the project directory. (If you have not used Django on Linux before, that deserves a blog post of its own.)

Here one needs to remember that though the data is on the same machine (the cloud machine), there is no Mongo instance running on it. Recall that it is the Docker container that has Mongo installed, so you have the data but no means to access it.
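A quick way to convince yourself of this is a stdlib port probe from the host. With no Mongo installed on the cloud machine and no container running, nothing answers on Mongo's default port 27017 (a small sketch, not part of the original post):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# MongoDB's default port; on a bare machine with no Mongo process
# (and no container running), this returns False.
print(port_open("127.0.0.1", 27017))
```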

A counter-argument could be that, for testing purposes, Mongo could be installed on the cloud machine as well. From my perspective, this defeats the entire purpose of containerization: one cannot assume the host machine (the cloud machine) to have any or all parts of the Docker images replicated.

Here’s how you can get things done:

Expose port 8000 (or any convenient port) from within the Docker container
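One detail worth noting: EXPOSE in the Dockerfile only documents the port; it is the -p flag at run time that actually publishes it to the host. A sketch, with a placeholder image name:

```shell
# In the image's Dockerfile (illustrative):
#   EXPOSE 8000

# Publish container port 8000 on host port 8000 when starting it:
docker run -d -p 8000:8000 my-django-image
```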

Run the Docker container

Get inside the Docker container using:

$ docker exec -it <docker container id> /bin/bash

Stop the Gunicorn process – your Django test server is better suited for debugging

$ pkill gunicorn

$ ./manage.py runserver 

This brings the Django test server out of its shrouding into the open; remember to close it back up when testing is done.

This exposes the Django test server to the outside on the cloud machine, and you can test-fire your API on port 8000. The API can access the database since the container is running, and you can now even use pdb.set_trace() to debug line by line.
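In a real view you would insert `import pdb; pdb.set_trace()` just before the line you want to inspect, and the runserver console drops you into the debugger. The function and numbers below are made up for illustration, and the debugger is driven non-interactively ("c" = continue) only so the sketch runs end to end:

```python
import io
import pdb

def price_with_tax(price, rate=0.18):
    # Stand-in for the view logic you want to step through.
    total = price * (1 + rate)
    return round(total, 2)

# In a view you would simply write:
#   import pdb; pdb.set_trace()
# before the interesting line. Here we script the same debugger
# with a canned "c" (continue) command so it needs no terminal.
debugger = pdb.Pdb(stdin=io.StringIO("c\n"), stdout=io.StringIO())
result = debugger.runcall(price_with_tax, 100)
print(result)  # 118.0
```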

Once you are done testing:

Stop the Django test server

Stop the container

Change the container's image and remove EXPOSE 8000

Bind Gunicorn to a socket file and let Nginx access that as an upstream proxy
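That last step might look like the fragment below; the socket path and project module are assumptions for illustration, not from the original setup:

```nginx
# Start Gunicorn bound to a Unix socket instead of a TCP port:
#   gunicorn --bind unix:/run/gunicorn.sock myproject.wsgi:application

upstream django {
    server unix:/run/gunicorn.sock;
}

server {
    listen 8080;            # the single port left exposed for Nginx

    location / {
        proxy_set_header Host $host;
        proxy_pass http://django;
    }
}
```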

For more on how to do this, see my other blog post.

So once testing is done, you are shrouding Gunicorn behind Nginx again, exposing only one port, 8080, for Nginx.


