PromCon 2024
This year marks my debut as a conference speaker! During PromCon EU 2024, which took place in Berlin on September 11-12, I had two opportunities to co-speak!
During the first day of the conference, my colleague Jesús Vázquez and I gave the talk Practical OpenTelemetry with Prometheus 3.0, where we shared with the audience all the work we have put into making Prometheus 3.0 a first-class OpenTelemetry (OTel) metrics backend.
Some of the points from our talk:
Grafana Mimir Launched
I am very happy to be able to share with the world what I have been working on since July last year: Grafana Mimir! Grafana Mimir is an open source, horizontally scalable, highly available, multi-tenant, long-term storage for Prometheus. It’s a continuation of the Cortex project, but licensed under AGPL. Please see the launch post for the details!
Practical Networked Applications in Rust, Part 2: Networked Key-Value Store
Welcome to the second installment in my series on taking the Practical Networked Applications in Rust course, kindly provided by the PingCAP company, where you develop a networked and multithreaded/asynchronous key-value store in the amazing Rust language. You can find my previous post in this series here.
In the previous, initial post I implemented the first course module: the fundamental key-value store functionality, based on the Bitcask algorithm, which only allows for local usage on your own machine. In the second module of my course work, I add networking functionality, dividing the application into a client/server architecture so that clients can connect to servers across the network.
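As a rough sketch of the client/server idea (this is not the actual course code; the line-based SET/GET protocol and the in-memory map are my own simplifying assumptions), the server side boils down to accepting TCP connections and answering simple commands:

```rust
use std::collections::HashMap;
use std::io::{BufRead, BufReader, Write};
use std::net::TcpListener;

// A toy, in-memory stand-in for the Bitcask-backed store, serving a
// line-based protocol: "SET key value" and "GET key". It handles one
// command per connection for brevity.
fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:4000")?;
    let mut store: HashMap<String, String> = HashMap::new();

    for stream in listener.incoming() {
        let mut stream = stream?;
        let mut line = String::new();
        BufReader::new(&stream).read_line(&mut line)?;
        let parts: Vec<&str> = line.trim().splitn(3, ' ').collect();

        let reply = match parts.as_slice() {
            ["SET", key, value] => {
                store.insert(key.to_string(), value.to_string());
                "OK".to_string()
            }
            ["GET", key] => store
                .get(*key)
                .cloned()
                .unwrap_or_else(|| "NOT_FOUND".to_string()),
            _ => "ERR unknown command".to_string(),
        };
        writeln!(stream, "{}", reply)?;
    }
    Ok(())
}
```

In the actual project the store is persisted to disk Bitcask-style and the wire protocol is more involved, but the client/server split follows the same shape: a client binary connects, sends a command, and prints the reply.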
Practical Networked Applications in Rust, Part 1: Non-Networked Key-Value Store
The PingCAP company, makers of the TiDB NewSQL database and the TiKV key-value store, have kindly made publicly available, and open-sourced, a set of training courses that they call the "PingCAP Talent Plan". These courses train programmers in writing distributed systems in the Go and Rust languages. PingCAP originally intended them for training students, new employees and new contributors to TiDB and TiKV, so they focus on subjects relevant to those projects, but they are suitable for anyone with an interest in learning to build distributed systems in Go and/or Rust.
An Example Substrate Runtime Module
Substrate is a framework for building custom blockchains, made available by Berlin-based Parity Technologies, who are so far best known for making the second-most popular client for the Ethereum blockchain (after the official one, Go Ethereum), also called Parity. Parity Technologies is led by Gavin Wood, one of the inventors of Ethereum and as such one of the true authorities in the blockchain space.
Drawing on their significant experience in developing the Ethereum blockchain, Parity devised Substrate as a framework/toolkit for those who wish to design their own blockchains instead of building on top of e.g. Ethereum, without having to reinvent all the basic building blocks, such as consensus logic. Like the Parity client, Substrate is written in the Rust language.
JavaScript/WebAssembly Binding for Indy Crypto
At the behest of the excellent Quorum Control company, I have carried out my first foray into both Rust and WebAssembly (Wasm), making a JavaScript/Wasm binding of the Hyperledger Indy Crypto library. The reason behind this undertaking is that Quorum Control (and others) need BLS cryptographic signature verification in JavaScript applications (web and Node.js), and Indy Crypto provides a solid implementation of this, albeit in the Rust language.
Luckily, there are facilities for converting Rust code into WebAssembly, notably the wasm-bindgen tool. While still a fledgling project, it allows us to expose a JavaScript API for the BLS module of Indy Crypto. In practice, this is done by writing a set of Rust functions tagged so that they get exported by the binding generated by wasm-bindgen. Additionally, we produce two variants of the binding: one for Node.js and one for ECMAScript module aware systems (such as the Webpack bundler), so that it works both server side and in the browser.
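To give a rough idea of the approach (this is not the actual binding code; the function name, signature and placeholder logic are hypothetical), a Rust function exported to JavaScript with wasm-bindgen looks something like this:

```rust
use wasm_bindgen::prelude::*;

// Tagging a public function with #[wasm_bindgen] makes wasm-bindgen
// generate JavaScript glue that exposes it, with byte slices surfacing
// as Uint8Array on the JavaScript side.
#[wasm_bindgen]
pub fn verify_signature(signature: &[u8], message: &[u8], public_key: &[u8]) -> bool {
    // Hypothetical placeholder: the real binding delegates to the BLS
    // verification routine in Indy Crypto's Rust crate here.
    !signature.is_empty() && !message.is_empty() && !public_key.is_empty()
}
```

wasm-bindgen then produces the compiled .wasm module plus a JavaScript wrapper for it; the Node.js and ECMAScript module variants mainly differ in how that wrapper loads the module.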
Experimental Berlin Mentioned in The Guardian
My project Experimental Berlin received a nice mention in the British newspaper The Guardian!
Official Elasticsearch/Fluentd/Kibana Add-On for Kubernetes Updated to 5.5
My pull request for updating the official Elasticsearch/Fluentd/Kibana logging add-on for Kubernetes to version 5.5.1 of Elasticsearch and Kibana was recently approved and merged into the master branch! Users of the popular EFK/ELK stack can now enjoy the latest version with their Kubernetes clusters!
Installing Elasticsearch/Kibana 5.5 within a Kubernetes Cluster on AWS
In my previous blog post I showed how to use the Kops tool to create a production-ready Kubernetes cluster on Amazon Web Services (AWS). In this follow-up post I will show how to install Elasticsearch and its graphical counterpart Kibana in the cluster, so that you can collect, store and search the logs from your cluster. We will also install Fluentd, as this component is responsible for shipping the standard Kubernetes logs to Elasticsearch. This setup is generally known as the ELK stack, after Elasticsearch, Logstash and Kibana, although here Fluentd takes the place of Logstash, which is why it is also called the EFK stack.
Creating a Highly Available Secured Kubernetes Cluster on AWS with Kops
Today I will be talking about Kops, an official tool for creating Kubernetes clusters on AWS, with support for GCE and VMware vSphere in alpha. It takes a whole lot of the pain out of setting up a Kubernetes cluster yourself, but still leaves you with many challenges to overcome and a great degree of freedom in how you configure the cluster.
I recently created a Kubernetes cluster on AWS for a client, using the Kops tool for the very first time, and here I will present what I learnt about implementing best practices with this technology stack. Given the rapid development of Kubernetes itself, and how relatively young Kops is, creating a production-grade cluster proved to be far from a walk in the park. Documentation is often relatively poor, or just plain missing. The intention of this blog post is to make it easier for others going down the same route.