
Category: Containers and Kubernetes

Scaling, automatically and manually

There is an interesting article by Brendan Gregg about the actual data that goes into the Load Average metrics of Linux. The article contains a few amusingly contrasting lines. Brendan Gregg states

Load averages are an industry-critical metric – my company spends millions auto-scaling cloud instances based on them and other metrics […]

but in the article we find Matthias Urlichs saying

The point of “load average” is to arrive at a number relating how busy the system is from a human point of view.

and the article closes with Gregg quoting a comment by Peter Zijlstra in the kernel source:

This file contains the magic bits required to compute the global loadavg figure. Its a silly number but people think its important. We go through great pains to make it work on big machines and tickless kernels.
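For context, the figure that comment refers to is an exponentially damped moving average of the number of active tasks, sampled roughly every five seconds. Here is a minimal floating-point sketch of that calculation (the kernel itself uses fixed-point arithmetic and also counts tasks in uninterruptible sleep, which is exactly the quirk Gregg's article digs into):

import math

# Sampling interval and averaging windows in seconds, as in the kernel's loadavg code.
SAMPLE_INTERVAL = 5
WINDOWS = (60, 300, 900)          # 1-, 5- and 15-minute averages

# Decay factors: how much of the old average survives one sample period.
DECAY = [math.exp(-SAMPLE_INTERVAL / w) for w in WINDOWS]

def update_loadavg(averages, active_tasks):
    # One tick: blend the current count of active tasks into each damped average.
    return [avg * d + active_tasks * (1.0 - d) for avg, d in zip(averages, DECAY)]

# Example: an idle machine starts running 8 busy tasks for two minutes.
avgs = [0.0, 0.0, 0.0]
for _ in range(24):               # 24 ticks x 5 s = 120 s at load 8
    avgs = update_loadavg(avgs, 8)
print(["%.2f" % a for a in avgs]) # the 1-minute average climbs toward 8 much faster than the 15-minute one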

Let’s go back to the start. What’s the problem to solve here?


What is a Service Mesh? What is the purpose of Istio?

Service Mesh

An article by Phil Calçado gives a really nice explanation of the “Service Mesh” container pattern and why one would want it.

Phil uses early networking as an example and explains how common functionality needed in every application was abstracted out of the application code and moved into the network stack, forming the TCP flow-control layer we have in today's networking.

A similar thing is happening with other functionality that every service doing some form of remote procedure call needs, and we are moving it into a separate layer. He then gives examples of the ongoing evolution of that layer, from Finagle and Proxygen through Synapse and Nerve, Prana, Eureka and Linkerd. Envoy and the resulting Istio project at the CNCF are the current result of that development, but the topic is still very much in flux.
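To make the idea concrete, here is a small, purely illustrative Python sketch of the kind of retry, timeout and telemetry plumbing that used to be reimplemented inside every service and that a mesh sidecar such as Envoy now provides outside the application; the endpoint name is made up for illustration:

import time
import urllib.request

def call_with_retries(url, timeout=0.5, retries=3, backoff=0.1):
    # Bounded timeout, retries with backoff, and a telemetry hook --
    # exactly the plumbing a sidecar proxy takes out of the application code.
    for attempt in range(1, retries + 1):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                body = resp.read()
                # Telemetry hook: latency, status and attempt number would go to metrics/tracing.
                print(f"attempt={attempt} status={resp.status} latency={time.monotonic() - start:.3f}s")
                return body
        except OSError:                       # covers URLError, connection errors and socket timeouts
            if attempt == retries:
                raise
            time.sleep(backoff * attempt)     # simple linear backoff between retries

# Hypothetical usage, endpoint name invented for the example:
# call_with_retries("http://payments.internal/api/charge")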


An abundance of IOPS and Zero Jitter

Two weeks ago, I wrote about The Data Center in the Age of Abundance and claimed that IOPS are – among other things – a solved problem.

What does a solved problem look like? Here is a benchmark running 100,000 random 4K writes per second, with zero jitter, at 350 µs end-to-end write latency across six switches.

Databases really like reliably timed writes like these.

The maximum queue depth would be 48; the system is not even touching that.
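As a quick plausibility check on those numbers, Little's law relates throughput and latency to the average number of requests in flight:

# Little's law: requests in flight = arrival rate x time in system.
iops = 100_000                           # 4K random writes per second, from the benchmark above
latency_s = 350e-6                       # 350 microseconds end-to-end write latency

outstanding = iops * latency_s           # average I/Os in flight
throughput_mb_s = iops * 4 * 1024 / 1e6  # 4 KiB per write

print(f"outstanding I/Os ~ {outstanding:.0f}")      # ~35, comfortably below the maximum queue depth of 48
print(f"throughput ~ {throughput_mb_s:.0f} MB/s")   # ~410 MB/s of random writes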


Threads vs. Watts

So I have been testing, again.

My hapless test subject this time is a Dell box, an R630.

It has a comfortable 384 GB of memory, one of its two 25 Gbit/s ports active, and two E5-2690v4 CPUs. That gives it 14 cores per die, 28 cores in total, or, with hyperthreading, 56 threads.

$ cat /proc/cpuinfo | grep 'model name' | uniq -c
56 model name : Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
$ ./mprime -v
Mersenne Prime Test Program: Linux64,Prime95,v28.10,build 1

The Data Center in the Age of Abundance

We are currently experiencing a fundamental transition in the data center. In recent discussions, it occurred to me how little this is understood by people in the upper layers of the stack, and how unclear the implications are to them.

In the past, three fundamentally scarce resources limited the size of the systems we could build: IOPS, bandwidth and latency. All three of these constraints are now largely gone, and the systems we are discussing today are fundamentally different from what we had in “The Past™”, with “The Past” being a thing five to ten years ago.


Google Next 2017, Amsterdam Edition

On June 21, the 2017 edition of the “Google NEXT” conference took place in the Kromhouthal in Amsterdam. Google had a dedicated ferry running to ship people over to the north side of the IJ, dropping them off directly at the Kromhouthal.
 
The event was well booked, with about 1,400 people showing up (3,500 invites had been sent). That is actually somewhat over the capacity of the Kromhouthal, and it showed in the execution in several places (toilets, catering, and room capacity during the keynotes).
 
The keynotes were the expected self-celebration, but if you subtract that, they contained mostly useful content about the future of K8s, about Google's Big Data offerings, and about ML applications and how they work together with Big Data.
 
For the two talk slots before lunch, I attended K8s talks. After lunch, I switched to the Big Data track. I did not attend any of the ML content, and I missed the last talk, about Spanner, because I got sucked into a longer private conversation.

Scaleway now with 2FA

Cloud provider Scaleway now has ARM64-based bare metal in Amsterdam. They are now also offering 2FA based on Google Authenticator (or other compatible 2FA apps).

No U2F token support yet, though (but still a better solution than Steam's).

This blog is hosted on a Scaleway instance.


“Usage Patterns and the Economics of the Public Cloud”

The paper (PDF) is, to put it in the words of Sascha Konietzko, “eine ausgesprochene Verbindung von Schlau und Dumm” (“a very special combination of smart and stupid”).

The site mcafee.cc is not related to the corporation of the same name; it is the site of one of the authors, R. Preston McAfee.

The paper looks at utilization data from a number of public clouds and tries to apply some dynamic price-finding logic to it. The authors are surprised by the level of stability in cloud purchases and actual usage, and try to hypothesize why this is the case. They claim that a more dynamic price-finding model might improve yield and utilization at the same time (but in the conclusion they discover why, in reality, that has not happened).


A case for IP v6

So when companies talk about IP V6, it is very often limited in scope to “terminating V6 at the border firewall/load balancer and then leading it as V4 into the internal network”. The problems that arise there are most often tracking problems (»Our internal statistics can’t handle V6 addresses in Via: headers from the proxy«).

But when you do containers, the need for V6 is much more urgent and internal. Turns out that Docker Port Twiddling is exactly the nuisance that it looks like and networkers strongly urge you to surgically remove all traces of native Docker networking bullshit and go all in on IP-per-Container. Mostly, because that’s what IPs are for: Routing packets, determining their destination and stuff. Networkers have ASICs and protocols that are purpose-built for this stuff.

Now, let us assume you have a modern 40- or 56-core machine that you are running stuff on in your Kubernetes cluster. That means you will easily run at least 30 and up to 100 pods per machine. In a moderately sized cluster with some 100 nodes, you get to 100×100 = 10,000 IPs to handle that. And because IP space is not handed out in sets of one, but in the form of subnets per node, you will need even more than those 10,000 addresses. Expect to consume a /17 or /16 to handle this.
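To make the arithmetic concrete, here is a back-of-the-envelope sketch assuming the common Kubernetes default of a /24 pod subnet per node; the figures are illustrative, not taken from a specific cluster:

import math

nodes = 100
per_node_prefix = 24                                 # one /24 pod subnet per node (a common default)
addresses_per_node = 2 ** (32 - per_node_prefix)     # 256 addresses per node

total_addresses = nodes * addresses_per_node         # 25,600 addresses reserved
cluster_prefix = 32 - math.ceil(math.log2(total_addresses))

print(f"addresses reserved: {total_addresses}")                # 25600
print(f"cluster pod CIDR needs at least a /{cluster_prefix}")  # /17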

Even if you are digging into 10/8 for internal addressing here, this is going to be a problem – it’s unlikely that you will be able to use all of 10/8, because non-cluster things exist, too, in your environment, and you will likely have more than one cluster.

With V6, all of this becomes a complete non-issue, leaving only the minor matter of getting V6 running on the inside of your organisation.
