Maintainership of html-to-react

I’m happy to announce that I’ve taken over maintainership of the popular NPM package html-to-react, which currently sees about 3600 downloads per month. This package translates HTML markup into a React DOM structure.

An example should give you an idea of what html-to-react brings to the table:

var React = require('react');
var HtmlToReactParser = require('html-to-react').Parser;
 
var htmlInput = '<div><h1>Title</h1><p>A paragraph</p></div>';
var htmlToReactParser = new HtmlToReactParser();

// Turn HTML into a bona fide React element that we can integrate into our React tree
var reactComponent = htmlToReactParser.parse(htmlInput);
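
To verify the result, one can render the parsed element back to markup (a quick sketch; it assumes react-dom is installed alongside react):

var ReactDOMServer = require('react-dom/server');

// Rendering the parsed element should reproduce the original HTML input
var reactHtml = ReactDOMServer.renderToStaticMarkup(reactComponent);
console.log(reactHtml === htmlInput);  // true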

MuzHack has a blog

MuzHack now has its own blog!

Zero Downtime Docker Deployment with Amazon ECS

Earlier, I wrote about zero downtime Docker deployment with Tutum. I have recently started experimenting with Amazon ECS (EC2 Container Service), which is Amazon’s offering for the orchestration of Docker containers (and thus a competitor to Tutum). I don’t have a lot of experience with ECS yet, but it looks pretty solid so far.

I found there was a real lack of documentation on how to deploy updated Docker images to Amazon, however, and had to do some research to figure out how it should be done. Through some diligent experimentation of my own with the AWS ECS CLI, and not least some valuable help on the Amazon forums, I was able to come up with a script and requirements for the ECS configuration that together lead to streamlined deployment. As it turns out, you even get zero downtime deployment for free (provided that you use an elastic load balancer)! It’s more magical than in the Tutum case, as ECS by default upgrades one node after the other in the background (hidden by the load balancer), so long as it’s allowed to take down at least one node at a time.

Requirements

The requirements one must follow in setting up the ECS cluster are:

  • An elastic load balancer
  • An idle ECS instance, or a service deployment configuration with minimumHealthyPercent
    of less than 100, so that ECS always has an extra node to deploy the new task definition
    to (see the example below)
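
For illustration, the latter requirement might look as follows in a service definition (a sketch; the numbers are examples, where a minimumHealthyPercent of 50 allows ECS to stop half of the running tasks during a deployment):

"deploymentConfiguration": {
  "minimumHealthyPercent": 50,
  "maximumPercent": 200
}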

Additionally, if you are using a private Docker image registry, you must add authentication data to /etc/ecs/ecs.config in your ECS instances and reboot them. In the following example, I assume tutum.co as the private registry:

ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"tutum.co":{"auth":"<auth-string>","email":"<my-email>"}}

You are also required to log into your Docker image registry and configure the AWS CLI on the machine where you are to run the deploy script.
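
For example, assuming tutum.co as the private registry as before:

docker login tutum.co
aws configure

The latter command prompts for your AWS access key ID, secret access key and default region.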

Script

The procedure implemented by my script (deploy-ecs) looks as follows:

  1. Tag the Docker images corresponding to containers in the task definition with the current Git revision.
  2. Push the Docker image tags to the corresponding registries.
  3. Deregister the old task definitions in the task definition family.
  4. Register a new task definition, now referring to the Docker images tagged with the current Git revision.
  5. Update the service to use the new task definition.
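
As a rough illustration of steps 4 and 5, here’s a minimal sketch using the AWS SDK for Node (this is not the actual deploy-ecs script, and the region, cluster, service, family and image names are all placeholders):

let AWS = require('aws-sdk')
let childProcess = require('child_process')

let ecs = new AWS.ECS({region: 'eu-west-1',})

// The tag to deploy is the current Git revision (cf. steps 1 and 2)
let gitRevision = childProcess.execSync('git rev-parse --short HEAD')
  .toString().trim()

// Step 4: Register a new task definition revision, referring to the image
// tagged with the current Git revision
ecs.registerTaskDefinition({
  family: 'muzhack',
  containerDefinitions: [
    {
      name: 'web',
      image: `tutum.co/myuser/muzhack:${gitRevision}`,
      memory: 256,
      portMappings: [{containerPort: 80, hostPort: 80,},],
    },
  ],
}, (error, data) => {
  if (error) throw error

  // Step 5: Point the service to the new task definition revision - ECS then
  // performs a rolling upgrade behind the load balancer
  ecs.updateService({
    cluster: 'muzhack',
    service: 'muzhack',
    taskDefinition: data.taskDefinition.taskDefinitionArn,
  }, (error) => {
    if (error) throw error
  })
})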

MuzHack featured in Electronic Beats

Electronic Beats Magazine did a quick writeup on my MuzHack project. Nice that people are paying attention!

Uploading Files to Amazon S3 Directly from the Web Browser

Amazon S3 is at the time of writing the premier file storage service, and as such an excellent choice for storing files from your Web application. What if I were to tell you these files never need to touch your application servers though?

The thing to be aware of is that the S3 API allows for POST-ing files directly from the user’s Web browser, so with some JavaScript magic you are able to securely upload files from your client side JavaScript code directly to your application’s S3 buckets. The buckets don’t even have to be publicly writeable.

The way this is accomplished is through some cooperation between your application server and your client side JavaScript, although your server can remain oblivious to the files themselves.

Server Side Implementation

Uploading files to S3 from the browser to a non-publicly writeable bucket requires a so-called policy document in order to authenticate the client. It is your application server’s responsibility to generate said document, whose purpose is to serve as a temporary security token and to define what the token holder is allowed to do.

Generating the policy document can be a tricky exercise, especially regarding authenticating the POST request with the AWS Signature Version 4 authentication scheme that is currently required by S3. Therefore, you might want to use a library to handle this for you. I chose to use the aws-s3-form Node package for this purpose, which generates the HTML form fields, including policy and AWS signature, that the client must send to S3 with its POST request. In my MuzHack application, I have a REST API method that returns the necessary form fields for a successful POST request to S3 to the client:

let AwsS3Form = require('aws-s3-form')

[...]

// A hapi.js server route
server.route({
  method: ['GET',],
  path: '/api/s3Settings',
  config: {
    auth: 'session',
    handler: (request, reply) => {
      let {key,} = request.query

      let keyPrefix = `u/${request.auth.credentials.username}/`
      let region = process.env.S3_REGION
      // Assumption: the target bucket is likewise configured via the environment
      let bucket = process.env.S3_BUCKET
      let s3Form = new AwsS3Form({
        accessKeyId: process.env.AWS_ACCESS_KEY,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
        region,
        bucket,
        keyPrefix,
        successActionStatus: 200,
      })
      let url = `https://s3.${region}.amazonaws.com/${bucket}/${keyPrefix}${key}`
      let formData = s3Form.create(key)
      reply({
        bucket,
        region,
        url,
        fields: formData.fields,
      })
    },
  },
})

Client Side Implementation

Given the above REST API method for obtaining POST request form fields, implementing the client side upload itself is quite simple. You simply need to obtain S3 metadata from said REST method, then construct a corresponding FormData object and POST it to your S3 bucket URL:

let R = require('ramda')

let ajax = require('./ajax')

class S3Uploader {
  constructor({folder,}) {
    this.folder = folder
  }

  send(file) {
    let key = `${this.folder}/${file.name}`
    return ajax.getJson(`s3Settings`, {key,})
      .then((s3Settings) => {
        let formData = new FormData()
        R.forEach(([key, value,]) => {
          formData.append(key, value)
        }, R.toPairs(s3Settings.fields))
        formData.append('file', file)

        return new Promise((resolve, reject) => {
          let request = new XMLHttpRequest()
          request.onreadystatechange = () => {
            if (request.readyState === XMLHttpRequest.DONE) {
              if (request.status === 200) {
                resolve(s3Settings.url)
              } else {
                reject(request.responseText)
              }
            }
          }

          let url = `https://s3.${s3Settings.region}.amazonaws.com/${s3Settings.bucket}`
          request.open('POST', url, true)
          request.send(formData)
        })
      }, (error) => {
        throw new Error(`Failed to receive S3 settings from server`)
      })
  }
}
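
Usage is then a matter of instantiating S3Uploader with a folder (i.e. a key prefix within the server-enforced prefix) and passing a File object to its send method, for example from a file input:

let uploader = new S3Uploader({folder: 'pictures',})
let file = document.querySelector('input[type="file"]').files[0]
uploader.send(file)
  .then((url) => {
    console.log(`Successfully uploaded file to ${url}`)
  }, (error) => {
    console.error(`Failed to upload file: ${error}`)
  })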

Zero Downtime Docker Deployment with Tutum

In this post I will detail how to achieve (almost) zero downtime deployment of Docker containers with the Tutum Docker hosting service. Tutum is a service that really simplifies deploying with Docker, and it even has special facilities for enabling zero downtime deployment: the Tutum team maintains a version of HAProxy that can switch seamlessly between Docker containers in tandem with Tutum’s API. One caveat, though: at the time of writing there is slight downtime involved in HAProxy’s switching of containers, typically a few seconds. I have been assured that this will be improved upon before Tutum goes into General Availability.

Basically, the officially sanctioned approach to implementing zero downtime deployment on top of Tutum is so-called Blue-Green Deployment, as detailed in this blog post at Tutum. The concept underpinning blue-green deployment is pretty simple: you have a “blue” and a “green” version of your service, both behind a reverse proxy that exposes exactly one of them at a time. When upgrading to a new version of your service, you deploy it to the container that is currently hidden and make the reverse proxy switch to it. The new version then becomes publicly visible and the old version hidden.

Stack Definition

So how does one do this in practice? Basically you define a Tutum stack, with two app services, e.g. blue and green, and an HAProxy service to reside in front of the former two. The app services should have different deployment tags, so they deploy to different nodes. The HAProxy service needs to be linked to one of the two app services, i.e. the one that should initially be exposed. Later on, that link will be updated dynamically.

This is my current stack definition, wherein HAProxy is configured to enable HTTPS and redirect HTTP requests to HTTPS:

muzhack-green:
  image: quay.io/aknuds1/muzhack:omniscient
  tags:
    - muzhack-green
  environment:
    - FORCE_SSL=yes
  restart: always
  deployment_strategy: high_availability
muzhack-blue:
  image: quay.io/aknuds1/muzhack:omniscient
  tags:
    - muzhack-blue
  environment:
    - FORCE_SSL=yes
  restart: always
  deployment_strategy: high_availability
lb:
  image: tutum/haproxy
  tags:
    - muzhack-lb
  environment:
    - EXTRA_BIND_SETTINGS=redirect scheme https code 301 if !{ ssl_fc }
    - DEFAULT_SSL_CERT
    - HEALTH_CHECK=check inter 10s fall 1 rise 2
    - MODE=http
    - OPTION=redispatch, httplog, dontlognull, forwardfor
    - TCP_PORTS=80,443
    - TIMEOUT=connect 10s, client 1020s, server 1020s
    - VIRTUAL_HOST=https://*
  restart: always
  ports:
    - "443:443"
    - "80:80"
  links:
    - muzhack-green
  roles:
    - global

I’m not 100% certain yet about the HAProxy settings, which I’ve based on others’ advice; they appear to work, however. The DEFAULT_SSL_CERT environment variable should contain the contents of an SSL .pem file.

Making the Switch

Implementing the deployment process itself caused me some bother initially, since it involves, for example, knowing which service (blue or green) is currently active. Thankfully, I found I was able to script everything through Tutum’s comprehensive API. In the end I wrote a Python script to deploy a new version to the currently inactive service and make HAProxy switch to it:

#!/usr/bin/env python3
import argparse
import subprocess
import json
import sys


parser = argparse.ArgumentParser()
args = parser.parse_args()


def _info(msg):
    sys.stdout.write('* {}\n'.format(msg))
    sys.stdout.flush()


def _run_tutum(args):
    try:
        subprocess.check_call(['tutum',] + args, stdout=subprocess.PIPE)
    except subprocess.CalledProcessError as err:
        sys.stderr.write('{}\n'.format(err))
        sys.exit(1)


_info('Determining current production details...')
output = subprocess.check_output(['tutum', 'service', 'inspect', 'lb.muzhack-staging']).decode(
    'utf-8')
data = json.loads(output)
linked_service = data['linked_to_service'][0]['name']
_info('Currently linked service is \'{}\''.format(linked_service))

if linked_service == 'muzhack-green':
    link_to = 'muzhack-blue'
else:
    assert linked_service == 'muzhack-blue'
    link_to = 'muzhack-green'

_info('Redeploying service \'{}\'...'.format(link_to))
_run_tutum(['service', 'redeploy', '--sync', link_to,])

_info('Linking to service \'{}\'...'.format(link_to))
_run_tutum(['service', 'set', '--link-service', '{0}:{0}'.format(link_to),
    '--sync', 'lb.muzhack-staging',])
_info('Successfully switched production service to {}'.format(link_to))

This script does the following:

  1. Find out which service is currently active (green or blue)
  2. Redeploy inactive service so that it updates to the newest version
  3. Link HAProxy service to inactive service, thus making it the active one

Help Functionality Added to MuzHack’s Markdown Editors

The Markdown editors in MuzHack now sport a help button, which displays documentation on the supported Markdown syntax. Early user feedback suggested this was a much desired feature, as many users aren’t familiar with Markdown syntax.

[Screenshot: Markdown Help]