The Isoblog. Posts

Scaling, automatically and manually

There is an interesting article by Brendan Gregg out there about the actual data that goes into the Load Average metric on Linux. The article contains a few amusingly contrasting lines. Gregg states

Load averages are an industry-critical metric – my company spends millions auto-scaling cloud instances based on them and other metrics […]

but in the article we find Matthias Urlichs saying

The point of “load average” is to arrive at a number relating how busy the system is from a human point of view.

and the article closes with Gregg quoting a comment by Peter Zijlstra in the kernel source:

This file contains the magic bits required to compute the global loadavg figure. Its a silly number but people think its important. We go through great pains to make it work on big machines and tickless kernels.

Let’s go back to the start. What’s the problem to solve here?
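Gregg's article describes how Linux samples the number of active tasks (runnable plus uninterruptible, which is the twist the article is about) roughly every five seconds and feeds the count into three exponentially damped moving averages. A minimal sketch of that computation, using floats for readability where the kernel (kernel/sched/loadavg.c) uses 11-bit fixed-point arithmetic:

```python
import math

# Sampling interval and decay constants as in the Linux kernel;
# the kernel uses fixed-point constants, this sketch uses floats.
LOAD_FREQ = 5.0  # seconds between samples

def decay(window_seconds):
    """Exponential decay factor for one sample interval."""
    return math.exp(-LOAD_FREQ / window_seconds)

EXP_1, EXP_5, EXP_15 = decay(60), decay(300), decay(900)

def calc_load(old_avg, exp_factor, active_tasks):
    """One update step: exponentially damped moving average."""
    return old_avg * exp_factor + active_tasks * (1 - exp_factor)

# Simulate a machine that runs 4 active tasks for ten minutes,
# starting from an idle system.
avg1 = avg5 = avg15 = 0.0
for _ in range(int(10 * 60 / LOAD_FREQ)):
    avg1 = calc_load(avg1, EXP_1, 4)
    avg5 = calc_load(avg5, EXP_5, 4)
    avg15 = calc_load(avg15, EXP_15, 4)

print(round(avg1, 2), round(avg5, 2), round(avg15, 2))
```

After ten minutes of constant load the 1-minute average has converged to the true value while the 15-minute average is still catching up, which is exactly the "human point of view" smoothing Urlichs describes.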

And the cost of energy storage?

Kittner, Lill, and Kammen use a two-factor model, similar to the one used for PV pricing, in their paper Energy storage deployment and innovation for the clean energy transition (PDF) to model and predict battery prices.

A deeply decarbonized energy system research platform needs materials science advances in battery technology to overcome the intermittency challenges of wind and solar electricity. […] Here we analyse deployment and innovation using a two-factor model that integrates the value of investment in materials innovation and technology deployment over time from an empirical dataset covering battery storage technology. […] We find and chart a viable path to dispatchable US$1 W−1 solar with US$100 kWh−1 battery storage that enables combinations of solar, wind, and storage to compete directly with fossil-based electricity options.
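A two-factor model of this kind makes cost fall with both cumulative deployment (learning-by-doing) and accumulated R&D investment (learning-by-searching). A sketch with made-up numbers, not the paper's fitted values:

```python
def two_factor_cost(c0, deployment_ratio, rd_ratio, lbd_exp, rd_exp):
    """Two-factor experience curve: cost declines as a power law in
    cumulative deployment and in the cumulative R&D knowledge stock."""
    return c0 * deployment_ratio ** (-lbd_exp) * rd_ratio ** (-rd_exp)

# Hypothetical illustration: start at $300/kWh, cumulative deployment
# grows 16x, the R&D stock doubles, with learning exponents of 0.2
# (deployment) and 0.1 (R&D).
cost = two_factor_cost(300.0, 16.0, 2.0, 0.2, 0.1)
print(round(cost, 2))
```

The point of fitting both factors separately is that it lets the authors ask which lever (deployment subsidies vs. R&D funding) buys the most cost decline per dollar.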

Evaluating the Changing Causes of Photovoltaics Cost Reduction

Why is PV Solar Energy getting cheaper and cheaper?

We find that increased module efficiency was the leading low-level cause of cost reduction in 1980-2001, contributing almost 30% of the cost decline. The most important high-level mechanism was R&D in these earlier stages of the technology. After 2001, scale economies became a more significant cause of cost reduction, approaching R&D in importance. Policies that stimulate market growth have played a key role in enabling the cost reduction in PV, through privately-funded R&D and economies of scale, and to a lesser extent learning-by-doing
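The mechanisms in the quote (R&D, scale economies, learning-by-doing) are usually summarized in an experience curve: costs fall by a fixed fraction, the learning rate, with every doubling of cumulative production. A sketch with an illustrative 20% learning rate, not the paper's fitted numbers:

```python
import math

def learning_rate_to_exponent(lr):
    """Convert a per-doubling learning rate (e.g. 0.20 = 20% cheaper
    per doubling of cumulative production) into the experience-curve
    exponent b in cost = c0 * (X / X0) ** (-b)."""
    return -math.log2(1 - lr)

# Hypothetical illustration: a 20% learning rate over ten doublings
# of cumulative PV production.
b = learning_rate_to_exponent(0.20)
cost_ratio = (2 ** 10) ** (-b)  # cost after ten doublings, vs. start
print(round(cost_ratio, 3))
```

Ten doublings at a 20% learning rate leave about a tenth of the original cost, which is why market-growth policies, by driving those doublings, matter as much as the paper says.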

So Merkel was right?

Angela Merkel, November 2014:

“Wer wird einen fahrerlosen Autoverkehr organisieren, wenn er nicht weiß, ob ihm mitten zwischen Frankfurt und Mannheim irgendwann das Datennetz zusammenbricht, weil gerade Millionen von E-Mails Vorrang haben und die Sicherheit der Datenübermittlung nicht gewährleistet ist?”

Which is roughly »Who will organize autonomous traffic, if they can’t be sure that the net won’t crash between Frankfurt and Mannheim because millions of mails have precedence and traffic isn’t guaranteed to make it?« That’s nonsense, of course – an autonomous car is controlled by the onboard computer and not remotely, and especially not remotely in real time.

But then there is Tesla. With Autopilot 2.0, since June this year, they have gathered owner permission to collect video data from the onboard cameras, and they occasionally upload clips. The observed traffic is in the gigabytes, now and then. And of course it is not real-time remote control, but a batched, non-time-critical upload of learning data for Tesla's machine-learning training engine.

No need to panic over net neutrality: this is bulk traffic in the lowest priority class, and it flows in the opposite direction.


The company as a social engine

So why is everything so complicated? At work, I mean.

Think of a small company. A single person, a founder, is building her business. She knows her way around, it’s all in her head: The plan, the things that are important and why, and how they are to be executed. Also, tradeoffs to be corrected later, potential opportunities for later and a lot of other meta: Stuff that does not get executed right now, but that informs decisions, priorities and preferences. Things work with some precision, though, like a well-programmed wetware CPU.

The moment that stuff becomes too large for a single person to handle, more people are involved and things need to be verbalized, written down, given form.

At that point, things change quite a bit:


What is a Service Mesh? What is the purpose of Istio?

Service Mesh

An article by Phil Calçado gives a really nice explanation of the container pattern “Service Mesh” and why one would want it.

Phil uses early networking as an example, and explains how common functionality needed in all applications was abstracted out of the application code and moved into the network stack, forming the TCP flow control layer we have in today’s networking.

A similar thing is happening with functionality that every service making some form of remote procedure call needs, and this is being moved into a separate layer. He then gives examples of the ongoing evolution of that layer, from Finagle and Proxygen through Synapse and Nerve, Prana, Eureka and Linkerd. Envoy and the Istio project built on top of it are the current result of that development, but the field is still evolving.
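A minimal sketch of the kind of cross-cutting concern that moves into such a layer: retrying a failing remote call with exponential backoff. A sidecar proxy applies this policy transparently, so the application never has to implement it. All names here are made up for illustration:

```python
import time

def with_retries(call, attempts=3, backoff=0.1):
    """Retry a failing remote call with exponential backoff -- the
    kind of policy a service-mesh sidecar applies on behalf of the
    application."""
    for i in range(attempts):
        try:
            return call()
        except ConnectionError:
            if i == attempts - 1:
                raise
            time.sleep(backoff * 2 ** i)

# Simulated flaky upstream service: fails twice, then succeeds.
state = {"calls": 0}
def flaky_upstream():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("upstream unavailable")
    return "200 OK"

result = with_retries(flaky_upstream)
print(result)
```

In a mesh, this logic lives in the proxy next to every service instance rather than in a library linked into each application, which is what makes it language-agnostic and centrally configurable.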
