Category Archives: The Cloud and Networking

Into Docker: Layer caching for development

When you write your Dockerfile, you don't want to just copy your whole project directory in one step. Every time you rebuild the image because you changed your source code, it will reinstall all of your dependency packages from scratch, and builds become slow.

Remember, every instruction in a Dockerfile (RUN, COPY, etc.) creates a layer on top of the image, and that layer gets cached. So when you rebuild your image, if nothing has changed in your package requirements, Docker reuses the cached layer instead of reinstalling, making rebuilds much faster.

This technique works for any project, for example Python (Flask with requirements.txt) or Node.js (package.json). One footnote to watch out for: if your package requirements always point to the latest version, you might want to pin fixed version numbers. Otherwise, if you do want the latest packages, you may need to bust the cache (e.g. build with --no-cache) to pull a fresh, updated copy into your container.
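For example, here is a minimal sketch for a hypothetical Flask project (the base image, file names and start command are assumptions on my part):

# Dependencies first: this layer is cached and only rebuilt when requirements.txt changes
FROM python:3.9-slim
WORKDIR /usr/src/app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Source code last: editing code only invalidates the cache from this point onwards
COPY . .
CMD ["python", "app.py"]

With this ordering, changing app.py only rebuilds the final COPY and CMD layers; the pip install layer comes straight from the cache.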

Project/CMS Development Workflow

Lately I have been pondering CMS development workflows. Since I work alone on projects it hasn't mattered much, but what if I don't work alone? What if we use WordPress or Drupal on our website? I have read a lot of articles on how it can be done.

I keep getting swept up in these articles and how other teams do it. Then I realised I am already learning Docker/Vagrant, but that only fixes one part of the problem anyway. Here are most of the issues you run into when developing a project, be it a CMS or a stand-alone application.

  • Setting up the development server environment. This includes the server itself, which is where Docker comes in.
  • Setting up the development stack you use for the project: an SPA, a plugin for the CMS, a theme, etc. (try to include unit testing).
  • Database migration setup. You can't just build a database-driven project without thinking about how you will migrate your database schema, required seed data, etc. It is as important as the application itself.
  • Using version control. You also have to think about which files need to be included in your repository. If it is a public repo and your project is a public-facing site, you will want a .gitignore that excludes config files and folders like node_modules (see the sketch after this list).
  • The points above, although bulleted, are not straightforward. There is a lot of separation that needs to be done to make the workflow manageable and understandable for you (or your team).
  • Deployment, and a high-level overview of how it works once the development side is set up: DEVELOP -> PUSH TO GIT -> AUTOMATION SERVER (e.g. Jenkins) -> FEEDBACK TO DEVELOPER (testing etc.) -> BUILD DOCKER IMAGE -> DEPLOY TO PRODUCTION (CLOUD).
  • There are a lot of moving parts in deployment, I mean a LOT. But in the end you do it often and hopefully, like driving, it becomes second nature until another new kid in town comes along. Such is life for us technologists.
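For the version control point above, here is a rough sketch of a .gitignore for a hypothetical CMS-plus-SPA project (the file and folder names are only examples):

# local configuration and secrets
.env
wp-config.php

# dependencies rebuilt from package manifests
node_modules/
vendor/

# build output and logs
dist/
*.log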

Learning Python: Gunicorn

Just putting this here real quick.

What is Gunicorn? It is essentially a middleman between your users and your application. Python is fine at handling a request, but Flask's built-in development server is not made to be a web server handling thousands or more of requests. Locally, a Flask application can only serve a handful of requests at once; a proper WSGI server like Gunicorn is built for exactly that.

So the flow looks pretty much like this:

[USER] -> [REVERSE PROXY / LOAD BALANCER (e.g. nginx)] -> [WSGI SERVER (Gunicorn)] -> [FLASK APP]

Very intuitive indeed!
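For reference, a minimal way to run it might look like this (assuming the Flask instance is called app inside app.py; the worker count and port are just placeholders):

pip install gunicorn
# 4 worker processes, listening on localhost only; nginx proxies requests to this port
gunicorn --workers 4 --bind 127.0.0.1:8000 app:app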

Blue and Green deployment technique

I just read about this somewhere and thought, what the heck is it? This will be a short one as I just need to put it here for reference and hopefully I don't forget it 🙂

Basically you run two production server instances, which may differ from each other, similar to a staging server -> production server setup. The difference is that both servers are production-ready: one is live serving production traffic and the other sits idle, doing nothing.

You push your latest update to the idle server and test it. Once everything is A-OK, you point your live traffic at the idle one, and the previously live server becomes the idle one this time.

It reduces risk and downtime. What happens if something goes wrong with your latest update? You point traffic back to the previous server until you can fix what is going on with the other one. Very handy to know indeed.
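As a rough sketch of how the switch could work, assuming nginx is the thing in front and that blue listens on port 8001 and green on 8002 (all of this is hypothetical):

upstream production {
    server 127.0.0.1:8001;    # blue: currently live
    # server 127.0.0.1:8002;  # green: idle; swap the comments to switch
}

server {
    listen 80;
    location / {
        proxy_pass http://production;
    }
}

Switching then comes down to editing one line and running nginx -s reload, and rolling back is the same edit in reverse.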

TTY – TeleTYpewriter on Linux

I came across a tutorial talking about the tty command in Linux, in the context of terminals inside a container. Basically it all comes down to how terminals work. Let's drill down the terminology first.

– Terminal: originally a term for a dumb machine connected to the main computer, consisting of a display and a keyboard.

– Console: a terminal physically connected to the computer, say a personal computer with a keyboard and monitor attached, or in the same sense as an Xbox or PS4 console.

– TTY and PTY: a TTY is the mechanism that handles input and output between a display/keyboard and the program it runs; it is the virtual console (see above) used to communicate with the host. Most terminals on Linux are PTYs (pseudo-TTYs), meaning software that acts like a TTY. An SSH terminal is a type of PTY.

Running the tty command shows the device name of the terminal you are currently on (for example /dev/pts/0), while /dev/tty0 usually refers to the currently active virtual console.

What are virtual terminals and when do you use them?
“A Virtual Terminal is a full-screen terminal which doesn't run inside an X window (unlike the terminal window on your graphical desktop). Virtual terminals are found on all GNU/Linux systems, even on systems which don't have a desktop environment or graphical system installed.

Virtual terminals can be accessed on an Ubuntu system by pressing Ctrl+Alt+F1 till F6. To come back to the graphical session, press Ctrl+Alt+F7.”

So all in all, my understanding is: when you SSH to a server, you are using a pseudo-TTY on the server, which gives you a virtual terminal to manage the machine, as well as an interactive shell. When you open a connection to sshd, it dynamically allocates a /dev/pts/* device, making it look as if a real physical terminal were connected. You can see which device your connection is using by running the tty command in your terminal. It is "pseudo" because it emulates TTY functionality instead of being a terminal physically connected to the server.

In the olden days, terminals were real physical devices connected to the computer, and Linux didn't have a GUI. To manage a machine remotely, you connect to it and it creates a "virtual terminal" or "virtual console" that behaves like a real physical terminal.

The TTYs 1-6 (Ctrl+Alt+F1 to F6) are basically virtual consoles or terminals. The F7 shortcut applies when the server has a GUI session. By "GUI terminal" I mean, say, your Ubuntu machine is your server: it can open a terminal in a window as you normally do. Pressing the key combinations above (F1-F6) switches to a TTY without the GUI, as if you were back on a GUI-less server using its keyboard and monitor to manage it. Take note that these key combinations only work when you are physically logged in at the console, but your terminal is still virtual, see below.

Additional info on the -t and -T flags when you connect to your server remotely over SSH:

-t forces allocation of a pseudo-terminal (TTY), giving you an interactive terminal in which to execute commands.

-T disables pseudo-terminal allocation, i.e. no interactivity.

Having said that: when you open a terminal connection via SSH, you most likely want a TTY and interactivity so you can execute commands and so on. The TTY is just the mechanism that connects your remote keyboard and screen to the server. A Linux box by itself is a console; connect a keyboard and a monitor to it and it will give you a virtual console or terminal to administer it.
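A quick illustration of all this (the device numbers and host name are made up):

$ tty
/dev/pts/0                 # my local terminal emulator is already a pseudo-TTY

$ ssh -T user@myserver tty
not a tty                  # -T: no pseudo-terminal allocated on the server

$ ssh -t user@myserver tty
/dev/pts/1                 # -t: sshd allocated a pseudo-TTY (a /dev/pts/* device)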

Taken from the Red Hat website:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/sn-guimode-virtual-consoles-ppc

“A virtual console is a shell prompt in a non-graphical environment, accessed from the physical machine, not remotely. Multiple virtual consoles can be accessed simultaneously.”

Docker & Docker-compose

So here we are in the DevOps world. Well, not quite. So what is this Docker thing that will help us be productive as developers?

Docker is like a virtual machine, but it isn't one. You build an image, essentially a template of a minimal operating system plus your application and whatnot, and from that template Docker creates containers: running instances that use the image as their environment.

You can use a Dockerfile, Docker Compose (a YAML file), or both. A Dockerfile lets you create your own custom image, while Docker Compose manages multiple containers and simplifies the commands to run them.

Here is a link to a cheat sheet for Dockerfile:

https://github.com/wsargent/docker-cheat-sheet

Basic Dockerfile configuration:
http://blog.flux7.com/blogs/docker/docker-tutorial-series-part-3-automation-is-the-word-using-dockerfile

ARG MY_VARIABLE=hello (build-time variable; can be referenced later in the Dockerfile, e.g. FROM python:${MY_VARIABLE})
FROM <imagename>[:<tag or version>] (you can have more than one FROM, which means more than one image/stage; if you omit the tag you get the latest version)
MAINTAINER <author name>
RUN <command to run for extra provisioning>
ADD <source, relative to the build context> <destination in the container> (similar to COPY, but can also extract archives and fetch URLs)
COPY <source, relative to the build context> <destination> (always prefer COPY for plain copying from your local machine into the container)
ENTRYPOINT <command to run when the container starts, e.g. ping>
CMD <default arguments/command fed to the ENTRYPOINT>
(Difference between CMD and ENTRYPOINT: https://stackoverflow.com/questions/21553353/what-is-the-difference-between-cmd-and-entrypoint-in-a-dockerfile)
WORKDIR <working directory for RUN, ENTRYPOINT and CMD>
ENV <key> <value> (sets an environment variable in the container)
VOLUME ["/var/www"] or "docker create/run -v source:destination" (mounts a directory that the container can read from and write to; the data persists on the host even after the container is stopped or removed. The HOST(path):CONTAINER(path) form only applies on the command line.)

Volumes are a bit confusing to wrap your head around, at least for me; please see this link: https://www.digitalocean.com/community/tutorials/how-to-work-with-docker-data-volumes-on-ubuntu-14-04
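To tie these instructions together, here is a minimal hypothetical Dockerfile (the base image, file names and mount path are placeholders):

FROM python:3.9-slim
WORKDIR /usr/src/app             # working directory for RUN, ENTRYPOINT and CMD
ENV APP_ENV production           # environment variable available inside the container
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
VOLUME ["/usr/src/app/data"]     # data written here lives on as a volume after the container is removed
ENTRYPOINT ["python"]            # the program that runs when the container starts
CMD ["app.py"]                   # default argument fed to the ENTRYPOINT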

Basic docker-compose yaml config
Full Reference:

https://docs.docker.com/v17.09/compose/compose-file/#compose-file-structure-and-examples

Start with:
version: "3"
services:
  [service-name-1]:
    image: [image name]        # if build is not specified, this image is just pulled from a registry
    build:                     # either a plain context path, or the keys below
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1             # available in your Dockerfile as an ARG (e.g. ARG buildno)
    command: ["bundle", "exec", "thin", "-p", "3000"]   # overrides the CMD in your Dockerfile
    ports:                     # "host:container"
      - "3000:8000"
    links:
      - [service-name]         # any service name this service links to
    depends_on:
      - [service-name]         # any service this service depends on
    expose:
      - "3000"                 # opens a port on the container for other services to reach
    networks:                  # the network(s)/group(s) this service belongs to
      - network1
networks:                      # all networks/groups used by the services above
  network1:
These are the basic configuration options for Docker Compose; see the full reference above for more information. Take note there is also a deploy option, only available in Compose file version 3, which is used when deploying to Docker Swarm; that is a little bit advanced for me at the moment.
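As a concrete (made-up) example, a minimal docker-compose.yml for a Flask service plus Redis could look like this; the service names, port and image tag are just placeholders:

version: "3"
services:
  web:
    build: .                # built from the Dockerfile in the current directory
    ports:
      - "5000:5000"         # host:container
    depends_on:
      - redis
  redis:
    image: redis:5-alpine   # no build step, pulled straight from Docker Hub

Running docker-compose up --build then builds the web image and starts both containers together.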

Common mistakes when setting up Docker:


https://runnable.com/blog/9-common-dockerfile-mistakes

 

CentOS Administration

A list of administration tasks on a CentOS machine with Virtualmin/Webmin installed.

SQL Server remote access

I just found a gem of a post on how to make SQL Server talk to anything on your network!

Source: https://stackoverflow.com/questions/11278114/enable-remote-connections-for-sql-server-express-2012

I will definitely come back to this every time I need access to a database on our network.

Basically, it shows how to set up SQL Server so that any program on the network is allowed access to its content, given it has the right user permissions. It comes down to enabling the SQL Server Browser service and enabling TCP/IP connections.
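On top of that, the Windows firewall usually has to allow the ports involved. A rough sketch, assuming the default instance port 1433 and the SQL Server Browser UDP port 1434 (adjust to your own setup):

netsh advfirewall firewall add rule name="SQL Server" dir=in action=allow protocol=TCP localport=1433
netsh advfirewall firewall add rule name="SQL Server Browser" dir=in action=allow protocol=UDP localport=1434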

Understanding DNS, Nameservers and Record types

Are you a web developer who mainly focuses on building your application? Do you just update your nameservers at the registrar to point to your hosting company, assume everything will work with no hiccups, and go on with your coding? If yes, then don't be that developer, like I was.

Nameservers come first!

You have to understand this bit to understand everything else in the DNS realm (https://webmasters.stackexchange.com/questions/16297/which-comes-first-dns-or-name-servers). I spent a whole night trying to untangle this puzzle. Basically there are two entities you will likely deal with when it comes to DNS: the domain registrar (where you bought your domain) and your web host's DNS manager.

When you change your nameservers to your new hosting, you are delegating all DNS records to that host, allowing your new web host to handle your DNS, i.e. your RRs (Resource Records). Your registrar is then free of responsibility for the other records you had set up there, the A, AAAA, MX records and so on. Those records are now handled by your web host's DNS manager.

Very important: if you have existing RRs at your registrar, leave them be until you actually know what is happening, and make sure you read this post before committing. One option is to copy all your resource records from your original DNS to your hosting company, which to be honest I don't like doing, because a lot of things can go awry if you get it wrong and it takes time to get right.

OR

Create an A record for your domain.com pointing to the IP address of your hosting company. Then create an MX record for domain.com pointing to your current mail.domain.com, and do the same for every other RR you have, including FTP and SFTP sub-domains.

Source: https://serverfault.com/questions/149509/changing-domain-name-dns-to-redirect-web-traffic-to-one-server-and-leave-mail-t

So there you have it! If you are wondering what those records do, here is a quick rundown.

A record – translates a domain name to an IPv4 address, e.g. domain.com -> 118.123.9.12

MX record – for email; it points to a hostname rather than an IP, e.g. mail for domain.com is handled by mail.domain.com, which in turn resolves via an A record.

CNAME record – an alias of one name to another domain name.

AAAA record – like an A record, but points to an IPv6 address instead of IPv4.
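Putting those together, the records for domain.com could look roughly like this in zone-file form (the hostnames and addresses are made up, and TTLs are left out):

domain.com.         IN  A      118.123.9.12
domain.com.         IN  AAAA   2001:db8::1
domain.com.         IN  MX 10  mail.domain.com.
mail.domain.com.    IN  A      118.123.9.12
www.domain.com.     IN  CNAME  domain.com.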

More info:
CNAME https://www.web24.com.au/tutorials/cname-records-used

What the ‘@’ record is used for:
http://forums.devshed.com/dns-36/mean-setting-dns-settings-636502.html