Build the Twelve-Factor Application with Docker
In this article we will go through the concepts of the twelve-factor application, learn how to set up a full software development process, and dockerize the application. The website 12factor.net explains the concepts of the twelve-factor app.

This article will take you through the revision control system, managing the config and dependencies of your application, and the DevOps side of your application with Nginx and Docker.
Objectives
- Set up proper version control techniques.
- Manage configuration within the environment.
- Set up proper build/deploy/run processes.
- Follow Docker best practices.
- Scale the dockerized application.

Let's start with Git flow as a reliable version control model, which is covered in a separate article.
Manage Configuration Values with Environment Variables

In this section, let's create a simple Node.js script that connects to a locally running MongoDB instance. We'll install the MongoDB module with Yarn, and then start our script with Node. We will see that we can successfully connect to our MongoDB instance.
A hard-coded configuration doesn't allow the connection string to be changed on the fly. Moving configuration values into environment variables has many benefits. Environment variables are referenced by process.env, followed by the name of our environment variable in caps. In this case, we'll name it MONGO_URI.
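A minimal sketch of the idea, assuming the MONGO_URI variable name from the text and a local fallback value (the actual driver call is shown commented, since it requires the mongodb package):

```javascript
// Read the MongoDB connection string from the environment instead of
// hard-coding it; fall back to a local default for development.
const uri = process.env.MONGO_URI || 'mongodb://localhost:27017/app';

console.log('Connecting to', uri);

// With the mongodb driver installed via Yarn, the connection would be:
// const { MongoClient } = require('mongodb');
// MongoClient.connect(uri).then((client) => { /* ... */ });
```

Now the same script connects to a different database in staging or production simply by exporting a different MONGO_URI before starting the process.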
Proxy Requests for Local and Remote Service Parity

In this section we will set up a reverse proxy that intercepts image path requests and routes them through a defined server URL. By doing so, we decouple server requests for images, which allows easy switching from locally served image assets to a CDN by simply updating an environment variable.
We have an Express app that serves a simple page with an image of Docker + Node. Let's set up a proxy for our /images path that routes requests through a content delivery network, depending on a specified URL.
BASE_IMAGE_URL=https://d262ilb51hltx0.cloudfront.net/max/1600/1*_MtS4HqN2srTcrSyet61DQ.jpeg
Build, Release and Run Containers with Docker Compose

In this section we will cover proper version tagging when building Docker images, creating a docker-compose.yml file for proper release management, and running the tagged release by using Docker Compose to launch app resources as Docker containers.
Once you have written the Dockerfile, you can create an executable bundle of your app with the docker build command. This will pull in your app dependencies and compile binaries and assets into an image that can later be run as a tagged release.
Type docker build, then -t, followed by the repo name for the build, which is typically the vendor name, followed by a slash, then the app name. That said, it can also be an arbitrary value. After the repo name comes a colon, followed by the tag revision number (e.g. repo/application:1.0).
docker build -t repo/application:1.0 .
It may take a few moments to run the build process, depending on the size of your base image, the install time of dependencies, and the size and number of files being copied into the bundle. After the bundle has been built, we will use Docker Compose to run our build.
docker-compose up -d app
Create a file named docker-compose.yaml. This will contain your release configuration. Here’s a simple config for our Node.js app. The most important takeaway here is the image property, which should reference the specific tag of the image we just built.
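A sketch of what that release config might look like, assuming the service name app and the tag built above (the environment values are illustrative):

```yaml
# docker-compose.yaml — hypothetical release config.
# The image property must reference the exact tag we just built.
version: "3"
services:
  app:
    image: repo/application:1.0
    ports:
      - "8080:8080"
    environment:
      - MONGO_URI=mongodb://mongo:27017/app
```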
This config file, when combined with our tag build, creates our release. Note that we can also define other aspects relating to this release in our config file, such as setting environment variables. After saving this file, we should use version control to commit this release.
Before committing, ensure you have both a .dockerignore and a .gitignore that are set to ignore the .env file and the node_modules directory. Let's add all the files to Git, then commit the release.
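The two ignore files can share the same minimal contents (assuming the standard filenames):

```
# .gitignore and .dockerignore — keep secrets and installed
# dependencies out of version control and out of the image.
.env
node_modules/
```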
We can confirm the containers are running by typing docker ps, and follow the log output by running docker logs -f on the container name. Then open a web browser to localhost:8080 to see the running app.
docker ps
docker logs -f buildreleaseandruncontainerswithdockercompose_app_1

Run Stateless Docker Containers
Before starting with new images, we will stop and remove the previous containers.

1. Check the running containers:
docker ps
2. Kill the running containers:
docker kill <container-id>
3. Remove the previous containers:
docker-compose rm -f
Docker containers should be designed to be stateless, meaning that they can survive system reboots and container terminations gracefully, and without the loss of data. Designing with stateless containers in mind will also help your app grow and make future horizontal scaling trivial.
We will review an app that saves uploaded files to the filesystem. Then we will learn how to set up a persistent volume for the uploaded files so they can survive Docker container halts, restarts, stops, and respawns.
The Dockerfile for this app is simple. We are setting up our current working directory, copying over assets, making an uploads directory, running Yarn to install prerequisites, exposing port 8080, and then starting our web server.
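Assembled from the steps just described, the Dockerfile might look like this (the base image, working directory, and entry file are assumptions):

```dockerfile
# Hypothetical Dockerfile matching the steps above.
FROM node:alpine
WORKDIR /serv
COPY . .
RUN mkdir -p uploads && yarn install
EXPOSE 8080
CMD ["node", "server.js"]
```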
We are using Docker Compose to run our containers. For our main app service, we will simply build our app from the local directory's Dockerfile, and also bind port 8080 from the app to the host. We will start our app with docker-compose up. Let's test the file upload functionality by uploading an image.
We can see that the image is successfully uploaded to the uploads folder, but we have a small problem here. Let's stop our app, remove our containers, and start our app back up. Then let's refresh our browser window.
The uploaded images did in fact die when our container died. This shows that Docker's filesystem is ephemeral, and that we need to design systems that can survive container terminations.
Let's define a named volume in our Compose file and call it app-data, making sure to suffix the name with a colon. Next, let's go into our app service and add a volumes property. Since there can be many volumes set up, we prefix each volumes entry with a dash.
Then we specify the name of the volume we want to use, in this case app-data, followed by a colon, and then the folder to persist, in this case /serv/uploads. When the container starts, this volume will be mounted into the container at that directory.
This is enough to persist data between container deletions and respawns. Let's remove our current app containers to ensure we are running from a clean state, then start our app again with Compose.
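Putting the named volume steps together, the Compose file might look like this (the app-data name and /serv/uploads path come from the text; the rest is assumed):

```yaml
# Sketch of a Compose file with a named volume for uploads.
version: "3"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    volumes:
      - app-data:/serv/uploads
volumes:
  app-data:
```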
Scale Docker Horizontally with Nginx Load Balancing

Node.js apps built with Docker cannot scale horizontally by themselves. In this lesson we will spawn multiple Node.js processes/containers, and move the incoming request handling to an Nginx proxy to load balance and properly scale our Node.js app.
Let's make a directory called nodejs that contains our app files. Within that, we will create a simple Node.js app that responds with "Hello world from server," followed by the name of our server, which we will define from an environment variable.
Let's then create a Dockerfile that simply kicks off our Node.js process. Then we will build our app image with docker build -t app-nodejs . Let's start two Node.js processes. We'll start the first server with a server name of foo, and a name we can reference later, foo.
docker run -d -e "SERVER_NAME=foo" --name=foo app-nodejs
We’ll do something similar with our second server, but with the name bar for both. Note that Nginx will be handling our external requests, so we do not need to bind any ports of the app containers back to the host.
docker run -d -e "SERVER_NAME=bar" --name=bar app-nodejs

Our containers will only be accessible from other Docker containers. Since these containers will not be directly accessible by a public network, this will also add an additional level of security to our app. Let’s create a new Nginx directory in the root of our project and enter it.
In this directory, we will create a new file to contain our Nginx configuration, named nginx.conf. The purpose of our Nginx process is to load balance requests to our Node.js processes. Let's create a server block with a location block inside it, whose path is a slash (/).
Within this block, define a proxy_pass directive, followed by http://, followed by any arbitrary upstream name. We'll use app here, followed by a semicolon. What we're telling Nginx to do here is listen at the root path of our web server and pass all requests through a proxy named app.
Let's go ahead and define that proxy. Create a new upstream block, followed by the name of our proxy, app. Next, add a line starting with server, followed by the name of our first server, foo, and the port, 8000.
We will repeat the line again, but this time with the bar server. The upstream block we define here tells Nginx which server to proxy requests to. We can define as many lines here as we want. Nginx will treat requests defined within this group with a round robin balancing method. You can even define weights on the servers with the weight option.
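Assembled from the steps above, the nginx.conf might look like this (a standalone file also needs the surrounding events and http blocks, which the text does not mention; the weight values are omitted since we are using plain round robin):

```nginx
# Sketch of nginx.conf: round-robin load balancing over foo and bar.
events {}

http {
  upstream app {
    server foo:8000;
    server bar:8000;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://app;
    }
  }
}
```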
Next, let's create an Nginx Docker image that simply copies our nginx.conf file to the default configuration file location. Let's build this image and name it app-nginx. The final step is to start our Nginx container and map port 8080 on the host to port 80 on the container.
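The Nginx Dockerfile can be as small as this (the base image tag is an assumption; the destination path is the default main config location for the official image):

```dockerfile
# Hypothetical Dockerfile for the app-nginx image.
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
```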
docker build -t app-nginx .
docker run -d -p 8080:80 --link foo --link bar app-nginx

We will then use the --link flag to link our foo and bar servers, making them accessible within the container. If we use curl to hit our Nginx server on port 8080, we will see that Nginx is properly routing requests to our foo and bar Node.js servers in round robin fashion.

Ensure Containers Run with High Availability

A properly scaled Docker architecture should be able to kill off random containers at any time and continue to run by implementing a crash-only design methodology. We will learn how to set up our architecture to auto-spawn new Docker containers when other containers are deemed unhealthy or terminated. We will also learn how to scale containers easily with Compose in the event we need to quickly scale horizontally.
The best way to ensure high availability with your containers is to use docker-compose.
Create a docker-compose YAML file, and define a simple configuration for our Hello, World! app. Make sure to add a restart policy with the value always to your configuration. Next, we can easily scale this app with docker-compose up followed by the --scale flag. Let's tell Compose to start Hello, World! with three instances. We will see that there are three containers available and always restarting.
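A sketch of that configuration, assuming the service name helloworld used in the scale command:

```yaml
# Sketch: restart the container whenever it exits or is killed.
version: "3"
services:
  helloworld:
    build: .
    restart: always
```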
docker-compose up --scale helloworld=3

Pipe Log Output to STDOUT with Docker
Since Docker filesystems are ephemeral, it's important to take into account anything that writes or appends to disk, such as log files. It is standard practice when running Docker containers to pipe log file output to the console, to allow for simple diagnostics and to prevent append-only files from consuming disk space.
In this section, we'll show how to pipe output to STDOUT and how to view log output from your running containers. In this case, we write all calls to console.log to a file named debug.log. If we run this script for a few moments and then check the output of debug.log, we will see that the script is running and correctly writing to debug.log.
What we will do here is use ln -sf to create a symlink from our debug.log file to /dev/stdout. This simple line will take whatever would normally be written to our debug.log file and pipe it right to standard out.
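In the image, that symlink step might look like this (the base image, working directory, and entry file are assumptions; /debug.log matches the path read back later with docker exec):

```dockerfile
# Sketch: redirect the app's debug.log to the container's stdout.
FROM node:alpine
WORKDIR /serv
COPY . .
RUN ln -sf /dev/stdout /debug.log
CMD ["node", "logger.js"]
```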
Build the image:
docker build -t logapp .
Run the image in daemon mode:
docker run -d --name=logapp logapp

docker exec -it logapp cat /debug.log

Conclusion
And that's a wrap on the twelve-factor application. I hope you found this useful. I'll hopefully post a series of Docker articles that provide deep dives into some of these components.
Thanks for reading. Did this article help you in any way? If it did, I hope you consider sharing it. You might help someone out. Thank you!

Build the Twelve-Factor Application with Docker was originally published in DXSYS on Medium.


