Route Handling Framework for Choo

I’ve made a complementary route handling framework for Yoshua Wuyts’ JavaScript SPA (Single Page Application) framework Choo: choo-routehandler. For those unfamiliar with Choo, it’s a view framework similar to React and Marko, although with a strong focus on simplicity and hackability; unlike React, it works directly with the DOM instead of through a virtual DOM.

Basically, the motivation for making the framework was that I found myself implementing the same pattern in my Choo apps: loading data before rendering the view corresponding to a route, and requiring authentication before certain routes can be accessed. I didn’t find any good way to handle this baked into Choo, even though it has a rather good routing system built in. It was also tricky to implement route change handling with Choo’s standard effects/reducers system, since these are triggered asynchronously and I could end up handling the same route change several times as a result.

After some trial and error and good old-fashioned experimentation, I found that I could use the standard MutationObserver API to detect that a new route has been rendered, and subsequently take the necessary action (i.e. require authentication and/or load data).

The flow of events after the framework detects a route change depends on whether the user is logged in and whether the route requires authentication. If the route requires authentication and the user is not logged in, the user is redirected to the login page. Otherwise, the data loading flow commences: the view’s function for loading data is invoked asynchronously and a loading view is rendered until the data is fully loaded, after which the view corresponding to the route is rendered with the loaded data. If the view has no function for loading data, it is rendered directly and no data loading takes place.
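In simplified form, the idea looks something like the following. This is only a sketch, not choo-routehandler’s actual implementation; routes, isLoggedIn, renderLoadingView and renderView stand in for application-specific logic:

// Simplified sketch: watch the app's root element and handle the route that has
// just been rendered. routes, isLoggedIn, renderLoadingView and renderView are
// hypothetical application-specific helpers.
let observer = new MutationObserver(() => {
  let view = routes[window.location.pathname]
  if (view == null) return

  if (view.requiresAuth && !isLoggedIn()) {
    // Protected route and no session: redirect to the login page
    window.location.href = '/login'
  } else if (view.loadData != null) {
    // Show a loading view until the route's data has been fetched
    renderLoadingView()
    view.loadData().then((data) => {
      renderView(view, data)
    })
  } else {
    // No data to load, render the view directly
    renderView(view)
  }
})

// Detect that a new route has been rendered into the app's container
observer.observe(document.querySelector('#app'), {childList: true,})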

Event Catalogue for Berlin Released

I’m finally ready to share what I’ve been working on for the last few months, an online catalogue of underground cultural events in the city of Berlin: Experimental Berlin. My best effort to date I think! For this project I’ve been using Yoshua Wuyts’ excellent JavaScript framework Choo.

Maintainership of html-to-react

I’m happy to announce that I’ve taken over maintainership of the popular NPM package html-to-react, which currently has about 3600 downloads per month. This package has the ability to translate HTML markup into a React DOM structure.

An example should give you an idea of what html-to-react brings to the table:

var React = require('react');
var HtmlToReactParser = require('html-to-react').Parser;
 
var htmlInput = '<div><h1>Title</h1><p>A paragraph</p></div>';
var htmlToReactParser = new HtmlToReactParser();

// Turn HTML into a bona fide React component that we can integrate into our React tree
var reactComponent = htmlToReactParser.parse(htmlInput);
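To see that the translation round-trips, you can render the parsed element back to markup with react-dom (a quick sanity check, assuming react-dom is installed):

var ReactDOMServer = require('react-dom/server');

// Rendering the parsed element back to static markup should reproduce the input HTML
var reactHtml = ReactDOMServer.renderToStaticMarkup(reactComponent);
console.log(reactHtml === htmlInput); // true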

MuzHack has a blog

MuzHack now has its own blog!

Zero Downtime Docker Deployment with Amazon ECS

Earlier, I wrote about zero downtime docker deployment with Tutum. I have recently started experimenting with Amazon ECS (EC2 Container Service), which is Amazon’s offering for the orchestration of Docker containers (and thus a competitor to Tutum). I don’t have a lot of experience with ECS yet, but it looks pretty solid so far.

I found there was a real lack of documentation on how to deploy updated Docker images to Amazon, however, and was forced to do some research to figure out how it should be done. Through some diligent experimentation with the AWS ECS CLI, and not least some valuable help on the Amazon forums, I was able to come up with a script and a set of requirements for the ECS configuration that together make for streamlined deployment. As it turns out, you even get zero downtime deployment for free (provided that you use an elastic load balancer, mind)! It’s more magical than in the Tutum case, as ECS by default upgrades one node after the other in the background (hidden by the load balancer), provided it is allowed to take at least one node down at a time.

Requirements

The requirements one must follow in setting up the ECS cluster are:

  • An elastic load balancer
  • An idle ECS instance, or a service deployment configuration with a minimumHealthyPercent
    of less than 100 (so that ECS always has an extra node to deploy the new task definition to)

Additionally, if you are using a private Docker image registry, you must add authentication data to /etc/ecs/ecs.config in your ECS instances and reboot them. In the following example, I assume tutum.co as the private registry:

ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"tutum.co":{"auth":"<auth-string>","email":"<my-email>"}}

You are also required to log into your Docker image registry and configure the AWS CLI on the machine where you are to run the deploy script.

Script

The procedure implemented by my script (deploy-ecs) looks as follows:

  1. Tag Docker images corresponding to containers in the task definition with the Git revision.
  2. Push the Docker image tags to the corresponding registries.
  3. Deregister old task definitions in the task definition family.
  4. Register new task definition, now referring to Docker images tagged with current Git revisions.
  5. Update service to use new task definition.
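To make the procedure a bit more concrete, here is a rough sketch of steps 3 to 5 using the AWS SDK for JavaScript. This is not the deploy-ecs script itself, and the cluster, service, family and image names are placeholders:

// Rough sketch of steps 3-5 with the AWS SDK for JavaScript (placeholder names throughout)
let AWS = require('aws-sdk')

let ecs = new AWS.ECS({region: 'eu-west-1',})
let family = 'my-task-family'
let gitRevision = process.env.GIT_REVISION

ecs.listTaskDefinitions({familyPrefix: family, status: 'ACTIVE',}).promise()
  .then(({taskDefinitionArns,}) => {
    // 3. Deregister old task definitions in the family
    return Promise.all(taskDefinitionArns.map((arn) => {
      return ecs.deregisterTaskDefinition({taskDefinition: arn,}).promise()
    }))
  })
  .then(() => {
    // 4. Register a new task definition referring to images tagged with the current Git revision
    return ecs.registerTaskDefinition({
      family,
      containerDefinitions: [
        {
          name: 'web',
          image: `tutum.co/myorg/myapp:${gitRevision}`,
          memory: 256,
          portMappings: [{containerPort: 80,},],
        },
      ],
    }).promise()
  })
  .then(({taskDefinition,}) => {
    // 5. Point the service at the new task definition; ECS rolls the nodes over
    // behind the load balancer, which is what gives the zero downtime deployment
    return ecs.updateService({
      cluster: 'my-cluster',
      service: 'my-service',
      taskDefinition: taskDefinition.taskDefinitionArn,
    }).promise()
  })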

MuzHack featured in Electronic Beats

Electronic Beats Magazine did a quick writeup on my MuzHack project. Nice that people are paying attention!

Uploading Files to Amazon S3 Directly from the Web Browser

Amazon S3 is at the time of writing the premier file storage service, and as such an excellent choice for storing files from your Web application. What if I were to tell you these files never need to touch your application servers though?

The thing to be aware of is that the S3 API allows for POST-ing files directly from the user’s Web browser, so with some JavaScript magic you are able to securely upload files from your client side JavaScript code directly to your application’s S3 buckets. The buckets don’t even have to be publicly writeable.

The way this is accomplished is through some cooperation between your application server and your client side JavaScript, although your server can remain oblivious to the files themselves.

Server Side Implementation

Uploading files to S3 from the browser to a non-publicly writeable bucket requires a so-called policy document in order to authenticate the client. It is your application server’s responsibility to generate said document, whose purpose is to serve as a temporary security token and to define what the token holder is allowed to do.
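To give an idea of what such a document contains, here is roughly what one looks like before it is base64 encoded and signed. This is a sketch based on the S3 POST policy format; the bucket name and key prefix are placeholders:

let policyDocument = {
  // The policy is only valid until this point in time
  expiration: '2016-01-01T12:00:00.000Z',
  // Conditions restrict what the holder of the signed policy may upload
  conditions: [
    {bucket: 'my-bucket',},
    ['starts-with', '$key', 'u/someuser/',],
    {'x-amz-algorithm': 'AWS4-HMAC-SHA256',},
    {success_action_status: '200',},
  ],
}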

Generating the policy document can be a tricky exercise, especially when it comes to authenticating the POST request with the AWS Signature Version 4 authentication scheme that S3 currently requires. Therefore, you might want to use a library to handle this for you. I chose to use the aws-s3-form Node package for this purpose, which generates the HTML form fields, including policy and AWS signature, that the client must send to S3 with its POST request. In my MuzHack application, I have a REST API method that returns to the client the form fields necessary for a successful POST request to S3:

let AwsS3Form = require('aws-s3-form')

[...]

// A hapi.js server route
server.route({
  method: ['GET',],
  path: '/api/s3Settings',
  config: {
    auth: 'session',
    handler: (request, reply) => {
      let {key,} = request.query

      // Each user's uploads go under their own key prefix
      let keyPrefix = `u/${request.auth.credentials.username}/`
      let region = process.env.S3_REGION
      // NOTE: the bucket name is assumed to come from the environment here, like
      // the region and AWS credentials; adjust to your own configuration
      let bucket = process.env.S3_BUCKET
      let s3Form = new AwsS3Form({
        accessKeyId: process.env.AWS_ACCESS_KEY,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
        region,
        bucket,
        keyPrefix,
        successActionStatus: 200,
      })
      let url = `https://s3.${region}.amazonaws.com/${bucket}/${keyPrefix}${key}`
      let formData = s3Form.create(key)
      reply({
        bucket,
        region,
        url,
        fields: formData.fields,
      })
    },
  },
})
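The response the client receives from this route then looks roughly like the following. The values are illustrative, and the exact set of form fields depends on what aws-s3-form generates for your configuration:

{
  "bucket": "my-bucket",
  "region": "eu-west-1",
  "url": "https://s3.eu-west-1.amazonaws.com/my-bucket/u/someuser/example.png",
  "fields": {
    "key": "u/someuser/example.png",
    "Policy": "<base64 encoded policy document>",
    "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
    "X-Amz-Signature": "<signature>",
    "success_action_status": "200"
  }
}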

Client Side Implementation

Given the above REST API method for obtaining POST request form fields, implementing the client side upload itself is quite simple. You simply need to obtain S3 metadata from said REST method, then construct a corresponding FormData object and POST it to your S3 bucket URL:

let R = require('ramda')

let ajax = require('./ajax')

class S3Uploader {
  constructor({folder,}) {
    this.folder = folder
  }

  send(file) {
    let key = `${this.folder}/${file.name}`
    // Ask the server for the S3 POST form fields (policy, signature etc.) for this key
    return ajax.getJson(`s3Settings`, {key,})
      .then((s3Settings) => {
        let formData = new FormData()
        // Append the received form fields first, since S3 requires the file to be
        // the last field of the POST request
        R.forEach(([key, value,]) => {
          formData.append(key, value)
        }, R.toPairs(s3Settings.fields))
        formData.append('file', file)

        return new Promise((resolve, reject) => {
          let request = new XMLHttpRequest()
          request.onreadystatechange = () => {
            if (request.readyState === XMLHttpRequest.DONE) {
              // S3 replies with status 200 on success, as requested via
              // successActionStatus on the server side
              if (request.status === 200) {
                resolve(s3Settings.url)
              } else {
                reject(request.responseText)
              }
            }
          }

          // POST the form directly to the S3 bucket's URL
          let url = `https://s3.${s3Settings.region}.amazonaws.com/${s3Settings.bucket}`
          request.open('POST', url, true)
          request.send(formData)
        })
      }, (error) => {
        throw new Error(`Failed to receive S3 settings from server`)
      })
  }
}
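
Using the uploader is then just a matter of passing it a File object, for instance from a file input (the element ID below is made up for the example):

// Hypothetical usage: upload the first file selected in <input type="file" id="file-input">
let uploader = new S3Uploader({folder: `u/someuser`,})
document.querySelector('#file-input').addEventListener('change', (event) => {
  let file = event.target.files[0]
  uploader.send(file)
    .then((url) => {
      console.log(`Uploaded file is available at ${url}`)
    }, (error) => {
      console.error(`Upload failed: ${error}`)
    })
})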