Cloud cost models are sometimes weird, and billing is not always transparent. The provider can also change the cost model at will.
The Medium story by Home Automation is an extreme example, and contains a non-trivial amount of naiveté on their side, but it underlines the importance of spreading across more than one cloud provider and having an exit strategy. Which is kind of a dud if you are using more than simple IaaS: if you tie yourself to a database-as-a-service offering, you can’t really have an exit strategy at all.
TL;DR: Firebase accidentally wasn’t billing some traffic, and then fixed that (the billing, not the traffic). They did not communicate the change, they did not update their status panels to report the now-billable traffic, and they did not measure the billing impact of their change beforehand to find extreme cases and contact those customers.
The customer, Home Automation, had close to zero clue about using TLS correctly, was using connections inefficiently in a way that pretty much maximised overhead, ran into the worst-case scenario for the change, and got fucked. They would have wanted out, but also had zero strategy for that, because of the DBaaS fuckup.
In the cloud you don’t need operations. Until you do.
Dockercon 2017 Talk by Brendan Gregg
Tracing, flamegraphs, Netflix Titus. Also, being Netflix, they do not have a high degree of transactional work going on in the first place: stateless, non-transactional scaleout is trivial.
Rob Landley, of Busybox/Toybox fame, spoke four years ago about the Toybox project in the context of Android and whatever else was recent back then. The talk contains a brilliant deconstruction of the problems that GPL v3 has, and why it is in decline.
It also shows a lot of vision regarding containers and what is needed in this context. If you are deploying Alpine today, with musl and toybox in it, this is why and how it came to be.
When dealing with Kubernetes, you will inevitably have to deal with config and data that is in JSON format.
jq is a cool tool to handle this, but while the man page is complete, it is also very dry. A nice tutorial can be found at The Programming Historian, which uses some real world use cases. My personal use case is Converting JSON to CSV, and the inverse of that. There also is a mildly interesting FAQ.
Learning jq takes about one quiet afternoon of time.
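To give a flavour of the JSON-to-CSV use case: here is a minimal sketch of both directions, assuming jq is installed. The input documents and field names (`name`, `cpu`) are made up for illustration.

```shell
# JSON array of flat objects -> CSV. jq's @csv quotes strings and leaves
# numbers bare; -r emits raw lines instead of JSON strings.
echo '[{"name":"web-1","cpu":2},{"name":"web-2","cpu":4}]' |
  jq -r '.[] | [.name, .cpu] | @csv'
# → "web-1",2
# → "web-2",4

# The inverse: CSV lines -> JSON objects. -R reads each input line as a raw
# string, -c prints compact one-line JSON; tonumber restores the number type.
printf 'web-1,2\nweb-2,4\n' |
  jq -Rc 'split(",") | {name: .[0], cpu: (.[1] | tonumber)}'
# → {"name":"web-1","cpu":2}
# → {"name":"web-2","cpu":4}
```

For real-world CSV (quoted fields, embedded commas) the naive `split(",")` breaks down, but for machine-generated data like the above it is usually enough.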
It does not utilize the ptrace(2) kernel facility, though, but brings its own interface. This interface collects data in the kernel and writes it into a ring buffer.
A userspace component extracts this data, interprets, filters and formats it, and then displays it. If the data source outpaces the userspace consumer, the ring buffer overflows and events are lost, but the actual production workload is never slowed down.
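The lossy-but-non-blocking semantics can be illustrated with an analogy (my sketch, not the tracer’s actual interface): a fixed-size ring buffer keeps only the newest events when the producer outpaces the consumer.

```shell
# Analogy: 'tail -n 8' plays the part of an 8-slot ring buffer.
# The producer (seq) emits 100 events at full speed and is never blocked;
# the consumer only ever sees the newest 8, the rest are silently dropped.
seq 1 100 | tail -n 8
# → prints 93 through 100; events 1-92 were "lost"
```

This is exactly the trade-off described above: the production workload (the producer) pays nothing; the observer accepts event loss instead.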
I recently had two independent discussions that started with some traditional Unix person condemning installing software via curlbash (“curl https://… | bash”), or even “curl | sudo bash”.
I no longer think this is much more dangerous than installing random rpm or dpkg packages, especially if those packages are unsigned or the signing key gets installed just before the package.
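The marginally safer pattern in both worlds is the same: save, verify against a checksum obtained over a separate channel, review, then run. A sketch, using local stand-in data (in practice the script would come from `curl -o install.sh https://…` and the expected checksum from the vendor’s documentation):

```shell
# Stand-in for a downloaded installer script.
cat > install.sh <<'EOF'
echo "installed"
EOF

# Stand-in for the vendor-published checksum. If this comes from the same
# server as the script itself, it adds nothing -- the same weakness as a
# signing key shipped right next to the package.
expected_sha256="$(sha256sum install.sh | awk '{print $1}')"

actual_sha256="$(sha256sum install.sh | awk '{print $1}')"
if [ "$actual_sha256" = "$expected_sha256" ]; then
    bash install.sh
else
    echo "checksum mismatch, refusing to run" >&2
fi
# → prints "installed"
```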
The threat model really has changed in the last few years, and the security mechanisms have had to change as well. And they have, with UIDs becoming much less important.
Desktop containers and sandboxes have become much more important, and segregation now happens at a much finer granularity (the app level) instead of at the user level.
»The other day, my daughter sidled into my office, and asked me, “Dearest Father, whose knowledge is incomparable, what is Kubernetes?”
And I responded, “Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the users’ declared intentions. Using the concepts of “labels” and “pods”, it groups the containers which make up an application into logical units for easy management and discovery.”
And my daughter said to me, “Huh?”
And so I give you…«
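The “labels and pods” part of that definition can be sketched as a minimal manifest; all names and label values below are made up for illustration.

```yaml
# A pod groups one or more containers; labels make it discoverable as part
# of a larger application (a selector can target app=guestbook).
apiVersion: v1
kind: Pod
metadata:
  name: guestbook-frontend      # hypothetical application
  labels:
    app: guestbook
    tier: frontend
spec:
  containers:
  - name: web
    image: nginx:1.21           # example image
    ports:
    - containerPort: 80
```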
Right on the heels of the Openshift Commons and co-located with it, Kubecon 2017 happened at the BCC in Berlin. Supposedly 1500 people attended, which strained the BCC’s capacity to the limit, especially on the A level. Room A03, which hosted the Deep Dive track, was continuously overcrowded and could not accommodate everyone interested.
Also, this was the noisiest event I have attended in a long time, especially in the vendor booth area in B01/B02. On the other hand, the hallway track was exceptionally useful, especially if one escaped out the door, weather permitting, or upstairs.
Quite a bit of the content was duplicated from the Openshift Commons Gathering preceding Kubecon, but the inclusion of rkt and containerd as CNCF projects was news and is very welcome.
rkt in particular will be useful: Docker does not do very many useful things in the context of Kubernetes, and rkt restricts itself to doing only those useful things, without carrying any other code that is less useful in the K8s context.
At the CoreOS booth I learned that rkt is not yet a drop-in replacement for Docker, but may well be soon: work is being done, and quickly.
So I have been to Berlin this week, for the Openshift Commons Gathering and Kubecon, and of course to meet a few Berliners.
Openshift is Red Hat’s distribution of Kubernetes (originally developed at Google), plus their own enhancements. It is available for your own machines as Openshift Origin (the open source version) or as OCP (Openshift Container Platform). Red Hat also operates dedicated and public clouds based on this. The Openshift Commons Gathering is a meeting of the Openshift user community, Commons.
Commons was a nice and fine gathering on the basement level of the BCC: a single-track event with a nice mix of users reporting their experiences with Kubernetes and Openshift. In fact, Commons already featured quite a bit of the content later duplicated at Kubecon, but in a smaller and less noisy setting.