Category Archives: DevOps

Into Docker: Layer caching for development

When you write your Dockerfile, you don't want to just copy your whole project directory in a single step. If you do, then every time you rebuild the image because you changed your source code, it will reinstall all your dependency packages from scratch, and builds become slow.

Remember that every instruction in a Dockerfile creates a layer on top of the image, and each layer gets cached. So every time you rebuild your image, if nothing has changed in your package requirements, Docker reuses the cached layer instead of reinstalling, making rebuilds much faster.

This technique works for any project, whether Python (Flask's requirements.txt) or Node.js (package.json). One footnote and thing to watch out for: if your package requirements always point to the latest version, you might want to pin fixed version numbers instead. Otherwise, if you do want the latest versions of the packages, you may need to clear the cache (build with --no-cache) to get fresh copies into your container.
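As a sketch for a Python/Flask project (file names follow the usual conventions, adjust to your own project):

```dockerfile
FROM python:3.9

WORKDIR /app

# Copy ONLY the requirements file first, then install.
# This layer stays cached as long as requirements.txt is unchanged.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Source-code changes only invalidate the layers from here on,
# so editing app code never re-triggers the package install above.
COPY . .

CMD ["python", "app.py"]
```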

Project/CMS Development Workflow

Lately I've been pondering CMS development, since I work alone on projects. It didn't matter before, but what if I don't work alone? What if we use WordPress or Drupal on our website? I've read a lot of articles on how it can be done.

I keep getting swept away by these articles and how other teams do it. Then I realised, well, I am already learning Docker/Vagrant, but that only fixes one part of the problem anyway. So here are most of the issues I encounter when developing a project, be it a CMS or a standalone one.

  • Setting up the development-server environment. This includes the server itself, which is where Docker comes in.
  • Setting up the development stack you use for your project: an SPA, a plugin for the CMS, a theme, etc. (try to include unit testing).
  • Database migration setup. You can't just develop a database-driven project without thinking about how you will migrate your database schema, required data, etc. It is as important as the main application itself.
  • Using version control. You also have to think about which files need to be included in your repository. If it's a public repo and your project is a public-facing site, you might want a .gitignore that excludes config files and folders like node_modules.
  • Although the points above are bulleted, they are not straightforward. There is a lot of separation that needs to be done to make your workflow manageable and understandable for you (or your team).
  • Deployment, and a high-level overview of how it works once the development side is set up: DEVELOP -> PUSH TO GIT -> AUTOMATION SERVER (e.g. Jenkins) -> FEEDBACK TO DEVELOPER (testing etc.) -> CREATE IMAGE WITH DOCKER -> DEPLOY TO PRODUCTION (CLOUD).
  • There are a lot of moving parts in deployment, I mean a LOT. But in the end you do it often, and hopefully, like driving, it becomes second nature until another NEW KID in town comes along. Such is life for us technologists.
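For the version-control point above, a minimal .gitignore for a public repo might look like this (the entries are illustrative, adjust to your stack):

```
# local configuration and secrets stay out of the repo
config.local.php
.env

# dependencies are restored from package.json, not committed
node_modules/

# build output and logs
dist/
*.log
```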

Learning Python: Gunicorn

Just putting this down here real quick.

What is Gunicorn? Apparently it's sort of a middleman between your users and your application. Python is fine at handling requests, but its built-in development server isn't made to be a web server handling thousands or more of requests. For example, a Flask application running locally can only handle a few requests at once. Dedicated web/application servers are good at that.

So the workflow is pretty much: user -> web server (e.g. Nginx) -> Gunicorn -> your Python application.

Very intuitive indeed!
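To make the "middleman" idea concrete, here is a minimal sketch of what Gunicorn actually serves: a WSGI callable. A Flask `app` object is exactly this kind of callable, so the same command works for Flask projects (the app name and port here are illustrative):

```python
# A minimal WSGI application -- the shape Gunicorn expects from "gunicorn app:app".
def app(environ, start_response):
    body = b"Hello from the app server!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Save as app.py and serve with several worker processes, e.g.:
#   gunicorn --workers 4 --bind 127.0.0.1:8000 app:app
```

Each worker is a separate process, which is how Gunicorn handles many concurrent requests that a single local Flask process would choke on.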

Blue and Green deployment technique

I just read about this somewhere and thought, what the heck is it? This will be a short one, as I just need to put it here for reference so that hopefully I don't forget it 🙂

Basically you run two instances of your production server, which may differ from each other, similar to a staging server -> production server setup. Except here both servers are live: one is serving production content and one is idle, not doing anything.

You push your latest update to the idle server and test it there. Once everything is a-ok, you point your live application at the idle one, and the previously live server becomes the idle one.

It reduces risk and downtime. What happens if something goes wrong with your latest update? You point traffic back to the previous server until you can fix what's going on with the other one. Very handy to know indeed.
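The flip itself can be sketched in a few lines of shell. This is only an illustration of the bookkeeping, not a real router: the state file path is arbitrary, and the deploy/smoke-test step is a placeholder comment.

```shell
# Track which color is live in a state file (path is illustrative).
ACTIVE_FILE=/tmp/active_color
[ -f "$ACTIVE_FILE" ] || echo blue > "$ACTIVE_FILE"   # bootstrap: blue starts live

ACTIVE=$(cat "$ACTIVE_FILE")
if [ "$ACTIVE" = "blue" ]; then IDLE=green; else IDLE=blue; fi

echo "pushing new release to idle server: $IDLE"
# ... deploy to $IDLE and run smoke tests against it here ...

echo "$IDLE" > "$ACTIVE_FILE"   # flip: point production traffic at $IDLE
echo "now live: $(cat "$ACTIVE_FILE"), standby: $ACTIVE"
```

If the new release misbehaves, rolling back is just writing the old color back into the state file and pointing traffic at it again.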

Docker, Docker-machine & Docker-compose

So here we are in the DevOps world. Well, not quite. So what is this Docker thing that will help us be productive as developers?

Docker is like a virtual machine, but isn't one. An image is a template of a mini operating system that acts as a host to your application and whatnot. From this image, Docker creates a container: an isolated environment built from that template.

You can use either a Dockerfile or Docker Compose (a YAML file). A Dockerfile lets you create your own custom image. Docker Compose manages multiple Docker containers and simplifies the commands.

Here is a link to a cheat sheet for Dockerfile:

Docker command line reference.

Basic Dockerfile configuration:

ARG MY_VARIABLE=value (can be referenced inside the Dockerfile, e.g. FROM python:${MY_VARIABLE})
FROM <imagename>[:<tag or version>] (you can have multiple FROM instructions, which means multiple build stages; if you omit the tag, you get the latest version)
MAINTAINER <author name> (deprecated; prefer LABEL maintainer="...")
ADD <source relative to the build context> <destination in the container> (similar to COPY, but ADD can also extract tar archives and fetch URLs)
COPY <source relative to the build context> <destination> (always use COPY for plain copying of files from the local machine into the container)

ENTRYPOINT <command to run when the container starts, e.g. ENTRYPOINT ["ping"]>
CMD <arguments to feed to the entrypoint> (with the ENTRYPOINT ["ping"] above, CMD ["localhost"] makes the executing command "ping localhost")

This is a good description of the difference between ENTRYPOINT and CMD: the ENTRYPOINT specifies a command that will always be executed when the container starts, while CMD specifies arguments that will be fed to the ENTRYPOINT.

(Difference between CMD and ENTRYPOINT)
You can also check the difference between RUN, ENTRYPOINT and CMD here:

RUN <command> (normally used for installing packages; each RUN creates a new layer in the image)
WORKDIR <working directory for RUN, ENTRYPOINT and CMD>
ENV <key> <value> (sets an environment variable in the container)
VOLUME ["/var/www"] or "docker create/run -v source:destination" (lets you mount a directory on the host machine that persists even after the container is removed. The container can read from and write to this volume; when the container is stopped or removed, the data in the volume is still there. The HOST(path):CONTAINER(path) form only applies on the command line.)
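Putting several of these instructions together, a small example Dockerfile might look like this (the image version, paths, and app script are illustrative):

```dockerfile
ARG PY_VERSION=3.9
FROM python:${PY_VERSION}

ENV APP_HOME=/app
WORKDIR ${APP_HOME}

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

ENTRYPOINT ["python"]
CMD ["app.py"]    # container runs "python app.py" unless arguments are overridden
```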

Volumes are a bit confusing to wrap your head around, at least for me; please see this link:

To build the image, go to the folder where your Dockerfile is located and type the command: "docker build ." Take note of the dot after the word build. It specifies where your Dockerfile (the build context) is located; in our case, the current folder.

The basic commands to start and run a Docker container:

docker start [container] This starts an already existing container. To create a new container, you must build the image and use "docker run".
docker exec [options] [container] [cmd] This is for when you already have a container running and you need to execute a command within it. Usually used for logging into a shell, e.g.: docker exec -it myalpinecontainer sh (sh for Alpine containers, otherwise bash)

Basic docker-compose yaml config
Full Reference:

Start with:
version: "3"
services:
  service-name-1:
    image: imagename   # if build is not specified, this image is pulled from a registry
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1     # accessible in your Dockerfile as $buildno (declare it there with ARG)
    command: ["bundle", "exec", "thin", "-p", "3000"]   # overrides the CMD in your Dockerfile
    ports:             # "host:container"
      - "3000:8000"
    links:
      - some-service   # any service name that this service links to
    depends_on:
      - some-service   # any service this service depends on
    expose:
      - "3000"         # opens a port on the container, usable by other services on the same network
    networks:          # networks (groups) this service belongs to
      - network1

networks:              # list of networks/groups the services above belong to
  network1:
These are the basic docker-compose configuration options; see the full reference above for more information. Take note there is also a DEPLOY option, available only in compose file format version 3. The deploy option is for deploying to Docker Swarm, which is a little bit advanced for me at the moment.

Common mistakes setting up docker:


Docker Machine was created so that older Macs and Windows machines can use Docker. The machine it creates (a VM) holds the Docker engine that powers up our containers. I normally use docker-machine as I have a laptop that only runs Windows Home.

Tip #1: If you'd like to specify where your docker-machine files live, create an environment variable called "MACHINE_STORAGE_PATH" and set it to the location you like.

Then try creating a new machine if you don't already have one. The machine named "default" is normally the one used by default; create it by typing "docker-machine create default".

Tip #2: If you'd like to mount a (local) folder to your docker-machine, check the link below:

Note for Tip #2: to persist the volume mount of your OS folder into the host VM, create a file named "" in the boot2docker folder and add the commands:

mkdir /home/docker/projects
sudo mount -t vboxsf -o uid=1000,gid=50 share_name_from_virtualbox /home/docker/projects

Tip #3: When you run "docker-machine start <name>" and the certificates are messed up, just regenerate them by typing "docker-machine regenerate-certs <name>". Then restart your docker-machine.