Blue-Green deployment technique

I just read about this somewhere and thought, what the heck is it? This will be a short one as I just need to put this on here for reference and hopefully I don't forget it 🙂

Basically you run 2 live instances of your production server, which may differ from each other. It is similar to a staging server -> production server setup, except both servers are live: one is serving production traffic and one is idle, not doing anything.

You push your latest update to the idle server and test it. Once everything is a-ok, you point your live application to the idle one, and the previous one becomes the idle one this time.

It reduces risk and downtime. What happens if something goes wrong with your latest update? Then you point it back to the previous one until you can fix what's going on with the other server. Very handy to know indeed.
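The switch-over can be sketched as a tiny router that flips a pointer between two environments. This is just an illustration of the idea; the blue/green names and the BlueGreenRouter class are made up for this sketch, not from any particular tool.

```python
# Minimal blue-green switch sketch: one pointer decides which
# environment serves production traffic; a rollback is just a re-flip.
class BlueGreenRouter:
    def __init__(self):
        self.envs = {"blue": "v1.0", "green": "v1.0"}
        self.live = "blue"          # currently serving production

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version):
        # push the new release to the idle environment and test it there
        self.envs[self.idle] = version

    def switch(self):
        # point production at the freshly updated environment
        self.live = self.idle

router = BlueGreenRouter()
router.deploy("v2.0")   # green now runs v2.0, blue still serves v1.0
router.switch()         # green is live; blue is the new idle
print(router.live, router.envs[router.live])   # green v2.0
router.switch()         # rollback: point back to blue (v1.0)
print(router.live, router.envs[router.live])   # blue v1.0
```

In real setups the "pointer" is usually a load balancer or DNS entry rather than application code, but the flow is the same.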

All about React and boilerplate

Here I am going to delve a bit deeper into the React world, but in very basic terms. As a full-stack developer (please don't take this as "knows everything, does everything"; it is just what I see myself doing nowadays) I keep going in and out of the front-end and back-end side of things, and I tend to forget these nitty-gritty details. Anyways.

React on its own resembles this:

// without Babel JSX
var hello = React.createElement("div", {className: "greeting"}, "Hello");
// with Babel JSX
let Hello = (props) => <div>Hello</div>;

Most of the time we want to develop with JSX, and Babel is pretty much the de facto transpiler for it. So if you plan to use React with JSX directly on your page, make sure you include react, react-dom and the standalone Babel build (babel-standalone) in your script tags.

Boilerplate React Development

Now on to boilerplate. React is very good for an SPA (Single Page App) and for hybrid use (React inside your existing HTML page). We will focus on an SPA and what we are going to need to start developing, using NPM as our package manager.

  • React – this is the core react library
    • npm install --save react react-dom
    • [optional] – if you are looking for more React-exclusive functionality, for example routing capability, install these in your package.
      npm install --save react-router-dom 
      react-router-dom: this is v4 of React Router. No need to use "react-router", as that is v3.
  • Babel – the core transpiler for JSX
    • npm install --save-dev babel-core babel-loader
  • Babel presets – Babel used to have all features in one package before version 6. In Babel 6 they separated these features and plugins, hence we now need to install each preset separately (babel-preset-env and babel-preset-react cover ES6+ and React).
    • npm install --save-dev babel-preset-env babel-preset-react
      babel-preset-env: transpiles modern ECMAScript (ES2015+) down to ES5.
      babel-preset-react: to allow react syntax to be transpiled.
  • Webpack – now our module bundler that will tie everything together. Webpack-dev-server is for hot reloading while developing. Very handy indeed
    • npm install --save-dev webpack-cli webpack
      npm install -g webpack-dev-server 
      webpack-dev-server: must be installed globally so the CLI is on your path.
  • Loaders – loaders are how webpack pre-processes your different kinds of static assets so they can be bundled.
    • npm install --save-dev url-loader style-loader file-loader css-loader resolve-url-loader
      url-loader: embeds assets like images inside your JS using base64 encoding. Useful for smaller files; set a size limit and let file-loader handle the bigger files.
      file-loader: handles assets and emits them (creates a separate file) to another location. Good for bigger files.
      css-loader: takes the CSS and converts it to a string, eg. var css = require('css-loader!./css/mycss.css'); it just loads the CSS as a string because JavaScript/Node doesn't know how to parse a .css file.
      style-loader: usually used in conjunction with css-loader; it inserts the CSS into the page by adding a <style>...</style> tag from the JavaScript.
      resolve-url-loader: resolves url() relative paths in your CSS, especially for SCSS. When you import another .scss file into your base SCSS, css-loader's context path is the importing file, which breaks any url() relative to the imported file. Use this to resolve each url() as if the imported SCSS file's own folder were the context.
    • [optional] npm install --save-dev sass-loader  node-sass postcss-loader autoprefixer
      postcss-loader: by itself it doesn't do anything, but it is pretty much the foundation for all its plugins. So if you want to use autoprefixer, you need to install postcss-loader first.
      sass-loader: adds support for Sass files, compiling them to CSS before they are passed on to your normal css-loader.
      node-sass: required as a peer dependency of sass-loader (a plugin that needs another plugin).
    • Production Package
      • npm install --save-dev mini-css-extract-plugin
        mini-css-extract-plugin: works with webpack 4 and allows you to separate CSS into its own file. [Important note] This plugin doesn't work with webpack-dev-server. Use webpack itself to build for production.
    • Plugins [optional]
      • npm install --save-dev html-webpack-plugin 
        html-webpack-plugin: this lets you process the main index.html and inject assets into it automatically. It is required if you would like to hash your asset filenames for busting browser caches.
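Tying the packages above together, a minimal webpack config could look something like this. This is only a sketch; the entry/output paths and the test regexes are assumptions you would adjust for your own project.

```javascript
// webpack.config.js - minimal sketch wiring up babel-loader and the loaders above
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: './src/index.js',
  output: { path: __dirname + '/dist', filename: 'bundle.[hash].js' },
  module: {
    rules: [
      // transpile JS/JSX through babel-loader (presets go in .babelrc)
      { test: /\.jsx?$/, exclude: /node_modules/, use: 'babel-loader' },
      // css-loader stringifies the CSS, style-loader injects <style> tags
      { test: /\.css$/, use: ['style-loader', 'css-loader'] },
      // small images inlined as base64, bigger ones emitted as files
      { test: /\.(png|jpg|gif)$/, use: { loader: 'url-loader', options: { limit: 8192 } } },
    ],
  },
  plugins: [new HtmlWebpackPlugin({ template: './src/index.html' })],
};
```

Pair it with a .babelrc along the lines of { "presets": ["env", "react"] } so babel-loader knows which presets to apply.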

Learning Python: Arguments, *args and **kwargs and Decorators

While reading through Python, I came across some Python-only terms, a very odd syntax using * and **, and then this @ symbol called a decorator.

Positional and Keyword Arguments 

Let's start with positional arguments. This pertains to how a normal function with arguments is defined.

# POSITIONAL ARGUMENTS (conventional)
def myfunc(a, b, c):
    print(a, b, c)

myfunc(1, 2, 3)
# Output: 1 2 3
In this example a normal function is called with its arguments in the order a, b, c. This is positional and very common in any programming language.
myfunc(c=3, a=1, b=2)
# Output: 1 2 3
In this example it is the same function, but the arguments are passed using the argument names from the function signature, thus the same output regardless of order.

*args and **kwargs (unpacking)

Now let's look at *args. In other programming languages we can have variable argument lists, like C#'s main(string[] args), or C++'s main(int argc, char *argv[]) or main(int argc, char **argv).

It's basically similar in Python, see below:

def myfunc(*argv):
    argc = len(argv)
    for x in argv:
        print(x)

myfunc("one", "two", "three")
# Output:
# one
# two
# three

**kwargs is a bit similar to the above, but it lets the caller specify the name of each argument, which the function receives as a dictionary. Confused? See below.

def myfunc(**kwargs):
    print(kwargs["a"] + kwargs["b"] + kwargs["c"])

myfunc(a=1, b=2, c=3)
# Output: 6
# See what happens there? Another way to access them is
def myfunc(**kwargs):
    for key in kwargs:
        print(kwargs[key])

myfunc(a=1, b=2, c=3)
# Output:
# 1
# 2
# 3
# You can also use * and ** to unpack collections when calling a function
# using * for a list or tuple
def myfunc(arg1, arg2, arg3):
    print(arg1 + arg2 + arg3)

mytup = (1, 2, 3)
myfunc(*mytup)
# Output: 6
# using ** for a dictionary (keys must match the parameter names)
mydict = {"arg1": 1, "arg2": 2, "arg3": 3}
myfunc(**mydict)
# Output: 6

@ Decorators

We will start with an example, as this is the best way to explain it.

def mydecor(another_func):
    def wrapper():
        print("I'm in first")
        another_func()
        print("I'm in last")
    return wrapper

@mydecor
def thisfunc():
    print("I am in the middle")

thisfunc()
# Note: inside the decorator, another_func is passed without the "()".
# The decorator receives the function object itself; writing another_func()
# inside wrapper is what actually calls "thisfunc".
# Output:
# I'm in first
# I am in the middle
# I'm in last
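Putting the two topics together: a decorator usually forwards *args and **kwargs so it can wrap any function signature. A small sketch (the log_calls name and its message are just for illustration):

```python
import functools

def log_calls(func):
    # functools.wraps keeps the wrapped function's name and docstring
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("calling", func.__name__, "with", args, kwargs)
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(a, b, c):
    return a + b + c

result = add(1, 2, c=3)
print(result)
# Output:
# calling add with (1, 2) {'c': 3}
# 6
```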

Learning Python: List, Tuple, Dictionary and Set

These are the collections in Python:

  • List is a collection which is ordered and changeable. Allows duplicate members.
  • Tuple is a collection which is ordered and unchangeable. Allows duplicate members.
  • Set is a collection which is unordered and unindexed. No duplicate members.
  • Dictionary is a collection which is unordered, changeable and indexed. No duplicate keys. (As of Python 3.7, dicts actually preserve insertion order.)
a_list = ["I", "am", "list"]  # a_list[0]
for x in a_list:
    print(x)
a_tuple = ("I", "am", "tuple")  # a_tuple[0]
# loop is the same as above
a_set = {"I", "am", "set"}  # to access an element you iterate through it, eg. get_i = next(iter(a_set))
for x in a_set:
    print(x)
a_dict = {"say": "I am", "respond": "dict"}  # a_dict["say"]
for name, value in a_dict.items():
    print(name + ' says ' + str(value))

This is a quick overview of how to use these array-like objects. I know this is very basic, but in real-world development we will most likely use one of these collections.
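A few everyday operations that follow from the properties listed above (a quick sketch; the values are arbitrary):

```python
a_list = [1, 2, 2, 3]
a_list.append(4)              # lists are changeable and allow duplicates
print(a_list)                 # [1, 2, 2, 3, 4]

a_tuple = (1, 2, 3)
try:
    a_tuple[0] = 9            # tuples are unchangeable
except TypeError:
    print("tuples can't be modified")

a_set = set(a_list)           # sets drop duplicates
print(sorted(a_set))          # [1, 2, 3, 4]

a_dict = {"a": 1, "a": 2}     # duplicate keys collapse: the last one wins
print(a_dict)                 # {'a': 2}
```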

Learning with Python: Class Object, Inheritance and Comments

I'm going to start learning Python as a side hobby, so all upcoming Python-related topics will have the title "Learning". Anyway, the syntax of Python is fairly straightforward, but there are some bits that are not so straightforward, like declaring a class.

In Python, depending on the version, there are old-style classes and new-style classes (a Python 2 distinction). Old-style classes don't inherit from object, but new-style classes do, which gives them the newer object-model features.

# old style (Python 2 only)
class myClass:
    pass
# new style - inherits from object (the default in Python 3)
class myClass2(object):
    pass
# multiple inheritance
class myClass3(myClass, myClass2):
    pass

The new-style class here has object as its base class; in Python 3, every class inherits from object.
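To make inheritance a bit more concrete, here is a small sketch with an __init__ and super() (the Animal/Dog class names are made up for illustration):

```python
class Animal(object):
    def __init__(self, name):
        self.name = name

    def speak(self):
        return self.name + " makes a sound"

class Dog(Animal):
    def __init__(self, name):
        # super() runs the parent class's __init__
        super(Dog, self).__init__(name)

    def speak(self):  # overriding the parent method
        return self.name + " says woof"

print(Animal("Cat").speak())    # Cat makes a sound
print(Dog("Rex").speak())       # Rex says woof
print(issubclass(Dog, Animal))  # True
```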

That's it for class.

By the way, comments in Python start with #. Multi-line comments are a little weird: Python doesn't have a true multi-line comment syntax, so a bare triple-quoted string is commonly used instead.

# One line comment
"""
I'm a multi-line comment (really just a string literal), and please
don't indent this block oddly as it becomes an IndentationError.
"""

TTY – TeleTYpewriter on Linux

I came across a tutorial talking about the tty command in Linux. Basically it pertains to terminals in the container. Let's drill down into the terminology first.

– Terminal is a term that used to refer to a dumb machine connected to the main computer, consisting of a display and a keyboard.

– Console describes a TERMINAL physically connected to the computer, say a personal computer with its keyboard and monitor attached, or like an Xbox or PS4 console.

– TTY and PTY. TTY (teletypewriter) is the technology that handles input and output between a display and the program it executes; it is the virtual console (see above) used to communicate with the host. Most terminals in Linux are PTYs (pseudo-TTYs), meaning software that acts like a TTY. An SSH terminal is a type of PTY.

Running the tty command shows the device name of the terminal you are currently on; /dev/tty0 refers to the currently active virtual console.

What are virtual terminals and when to use it?
“A Virtual Terminal is a full-screen terminal which doesn't run inside an X window (unlike the terminal window on your graphical desktop). Virtual terminals are found on all GNU/Linux systems, even on systems which don't have a desktop environment or graphical system installed.

Virtual terminals can be accessed on an Ubuntu system by pressing Ctrl+Alt+F1 till F6. To come back to the graphical session, press Ctrl+Alt+F7.“

So, all in all, my understanding is that when you SSH to a server, you are using a pseudo-TTY to the server, giving you a virtual terminal to manage the machine. This also gives you an interactive shell. When you open a connection to sshd, it dynamically allocates a /dev/pts/* device, making it look as if a real physical terminal were connected. You can see which one your connection uses by running the tty command in your terminal. It is "pseudo" because you are emulating TTY functionality instead of being physically connected to the server.

In the olden days terminals were real physical devices connected to the computer, and Linux didn't always have a GUI. To manage a machine remotely, you connect and it creates a "virtual terminal" or "virtual console", just like a real physical terminal.

The tty1-6 consoles on Ctrl+Alt+F1-6 are basically virtual consoles or terminals. The F7+ shortcut applies when you have a GUI terminal on the server: say your Ubuntu machine is your server, it can open a terminal in a window as you normally do. Pressing Ctrl+Alt+F1-6 then switches to a TTY terminal without the GUI, as if you were back on a GUI-less server using its keyboard and monitor to manage it. Take note: these keys only work when you are physically logged in at the console, but your terminal is still virtual, see below.

Some additional info on the -t and -T flags when you are executing a command on your server remotely using ssh:

-t forces allocation of an interactive terminal (a pseudo-TTY) in which to execute commands.

-T disables pseudo-TTY allocation, i.e. any interactivity.

That said, when you open a terminal connection via SSH, you most likely want a TTY/interactivity that can execute commands, etc. TTY is just the technology that creates this connection between your remote keyboard and a server. A Linux box by itself is a console; connect a keyboard and monitor to it and it will give you a virtual console or terminal to administer it.

Taken from the Red Hat website:

“A virtual console is a shell prompt in a non-graphical environment, accessed from the physical machine, not remotely. Multiple virtual consoles can be accessed simultaneously.”

BLOG: NodeJS – Server Side Rendering

It's a technique in the web development world where most of your front-end rendering (JavaScript) is done by the server, hence the title.

Basically, front-end JavaScript processing normally happens in the browser, but with server-side rendering your markup is pre-rendered and delivered with values pre-populated by the server stack, in this case NodeJS.

I'm not sure how it works on an Apache server yet, or any other platform stack that doesn't use JavaScript. Here is a video about how React server-side rendering works using an express/webpack combination.


Meet Uncle Yammy, or Yamel or Yamil?

I've been using JSON and XML as my standard go-to formats for as long as I can remember. They are both very capable, but I think I need to give YAML a try too, as it is good for storing anything from very simple to very complex representations of your data.

The syntax? [ my_yaml.yml ]
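A small example of the sort of thing you can express (illustrative only; the field names are made up):

```yaml
# my_yaml.yml - scalars, a list and a nested mapping
name: John
age: 30
hobbies:
  - coding
  - gaming
address:
  city: Sydney
  country: Australia
```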


That's it. YAML is structured with indentation. It's actually pretty straightforward, and there are also parsers readily available for .yml files.

Please see the video below by Giraffe Academy for the full syntax specification.

Docker & Docker-compose

So here we are in the DevOps world. Well, not quite. So what is this Docker thing that will help us be productive as developers?

Docker is like a virtual machine, but isn't one. An image is a template of a mini operating system that acts as a host to your application and whatnot. A container is a running instance created from this image, which gives your application its environment.

You can use either a Dockerfile or Docker Compose (a YAML file). A Dockerfile lets you create your own custom image; Docker Compose manages multiple Docker images/containers and simplifies the commands.

Here is a link to a cheat sheet for Dockerfile:

Basic Dockerfile configuration:

ARG MY_VARIABLE=<value> (can be referenced inside the Dockerfile, eg. FROM python:${MY_VARIABLE})
FROM <imagename>[:<tag or version>] (you can have multiple FROMs, which means multiple stages/containers; if you omit the tag you get the latest version)
MAINTAINER <author name>
RUN <command to run for extra provisioning>
ADD <source from build context local path> <destination in the container> (ADD can also extract archives from a path; otherwise similar to COPY)
COPY <source from local machine> <destination> (always use COPY for plain copying from the local machine, relative to the build context, into the container)
ENTRYPOINT <command to run after the container starts, eg. ping>
CMD <default arguments to feed to the entrypoint>
(There is a difference between CMD and ENTRYPOINT)
WORKDIR <working directory for RUN, ENTRYPOINT and CMD>
ENV <key> <value> (sets an environment variable in the container)
VOLUME ["/var/www"] or "docker create/run -v source:destination" (lets you mount a directory from the host machine; this persists even after the container is removed. The container can read and write to this volume, and when the container is stopped or removed the data in the host volume is still there. The HOST(path):CONTAINER(path) form only applies on the command line)

Volume is a bit confusing to wrap your head around, at least for me; please see this link:
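Putting a few of these instructions together, a small example Dockerfile might look like this. It is only a sketch; the image name, paths and command are illustrative assumptions, not from a real project.

```dockerfile
# Sketch of a simple Python app image
ARG PY_VERSION=3.6
FROM python:${PY_VERSION}
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
ENV APP_ENV production
EXPOSE 8000
CMD ["python", "app.py"]
```

Build it with "docker build -t myapp ." from the directory containing the Dockerfile (the build context).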

Basic docker-compose yaml config
Full Reference:

Start with:
services:
  [service-name-1]:
    image: (image name; if build is not specified, this will just download the image from a registry)
    build: (either a context path, or the keys below)
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1 (accessible in your Dockerfile as the build arg $buildno)
    command: ["bundle", "exec", "thin", "-p", "3000"] (overrides the CMD in your Dockerfile)
    ports: ("host:container")
      - "3000:8000"
    links:
      - [service-name] (any service name that this service links to)
    depends_on:
      - [service-name] (any service this service depends on)
    expose: (opens a port on the container, usable by other services that are served over this port)
      - "3000" (port numbers)
    networks: (groups this service belongs to)
      - network1
networks: (list of networks/groups the services belong to)
  network1:
These are the basic configuration options for docker-compose. See the full reference above for more information. Take note there is also a DEPLOY option, only for compose file format version 3. This deploy option is for deploying to a Docker swarm, which is a little bit advanced for me at the moment.

Common mistakes setting up docker:


2018 © Ideas, designs and algorithms