
The Data Center in the Age of Abundance

We are currently experiencing a fundamental transition in the data center. In recent discussions it occurred to me how little of this is understood by people in the upper layers of the stack, and how unclear the implications are to them.

In the past, three fundamentally scarce resources limited the size of the systems we could build: IOPS, bandwidth and latency. All three of these constraints have largely disappeared, and the systems we are discussing now are fundamentally different from what we had in “The Past™” – with “The Past” being as recent as five to ten years ago.

In The Past we had disk.

A disk of rotating rust is a data store that can, at best, perform about 200 seeks per second (I/O operations per second, IOPS), so databases tended to be special computers with very many small disks, the data spread out across as many spindles as possible. The entire system was fake-accelerated with caches, accepting writes into battery-buffered memory of limited size – and woe if the working set of your data ever exceeded the cache size.

Five years ago we converted all customer-facing databases to systems on SSD with about 20,000 to 50,000 IOPS, and with current NVMe-based storage we are about to see millions of IOPS. Fundamentally, IOPS are no longer a limited resource: with the techniques we learned in the rotating-rust age, we can build machinery with an arbitrary amount of random write capability.
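
As a back-of-the-envelope illustration, here is what that shift means for how many devices you need to reach a given random-I/O budget. The per-device numbers below are rough assumptions in the ballpark of the figures above, not benchmarks:

```python
# Rough per-device random-I/O figures; illustrative assumptions only.
DEVICE_IOPS = {
    "7.2k rpm spinning disk": 200,       # seek-limited
    "SATA/SAS SSD": 30_000,              # mid-range of the 20,000-50,000 above
    "NVMe SSD": 1_000_000,               # order of magnitude for current drives
}

TARGET_IOPS = 500_000  # hypothetical random-I/O budget for one database host

for device, iops in DEVICE_IOPS.items():
    devices_needed = -(-TARGET_IOPS // iops)  # ceiling division
    print(f"{device:>24}: {iops:>9,} IOPS each -> {devices_needed:>5,} devices needed")
```

The same budget that once required a cabinet full of spindles now fits on a single drive.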

In the past, databases and frontends were connected to the world using network adapters with one Gigabit per second of capacity.

About five years ago we converted the first systems to 10 GBit/s at scale, and today we routinely build systems with about 400 MBit/s to 1 GBit/s per thread (so a 50-core system gets a dual 25 GBit/s network card).
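
The NIC sizing in the parenthesis is just this arithmetic; a minimal sketch using the per-thread figures from the text:

```python
# Sizing host networking from per-thread bandwidth; figures as in the text.
cores = 50
for gbit_per_thread in (0.4, 1.0):        # 400 MBit/s to 1 GBit/s per thread
    host_gbit = cores * gbit_per_thread
    print(f"{cores} cores x {gbit_per_thread:.1f} GBit/s = {host_gbit:.0f} GBit/s per host")
print(f"a dual 25 GBit/s card offers {2 * 25} GBit/s, matching the upper end")
```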

Companies like Mellanox have switches with a large two-digit number of 100 GBit/s interfaces. We have leaf-and-spine architectures available that allow us to build data paths between tens of thousands of computers inside a single data center with no chokepoints – we actually get the 1 GBit/s per thread on the entire path between any thread and any disk in our data center, concurrently.
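
A rough sizing sketch for such a non-blocking leaf-and-spine fabric; the 32-port 100 GBit/s switch radix is an assumption for illustration, not a description of a specific product:

```python
# Two-tier, non-blocking (1:1) leaf-and-spine sizing; port counts are assumed.
leaf_ports = 32    # 100 GBit/s ports per leaf switch
spine_ports = 32   # 100 GBit/s ports per spine switch

downlinks_per_leaf = leaf_ports // 2                  # half the leaf ports face servers
uplinks_per_leaf = leaf_ports - downlinks_per_leaf    # the other half face spines

spines = uplinks_per_leaf   # one uplink from every leaf to every spine
leaves = spine_ports        # every spine port connects one leaf

servers = leaves * downlinks_per_leaf
print(f"{leaves} leaves x {spines} spines -> {servers} x 100 GBit/s server ports at 1:1")
# Bigger switch radices or a third tier scale the same pattern to tens of
# thousands of ports without introducing chokepoints.
```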

In the past, latency along these paths used to be upwards of 500 µs (1/2000 of a second), and more likely in the low milliseconds.

But with current cards, offloading and other trickery we can get to or below 200 µs. Add scary stuff such as RDMA/RoCE to the mix, and we may be able to routinely crack the 100 µs barrier. That makes writes to the data-center-sized fabric as fast as, or faster than, writes to a slow local SSD.
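
To put those numbers side by side, and to show what latency means for a single synchronous writer. The fabric figures are the ones above; the local-storage figures are rough assumptions for illustration:

```python
# Order-of-magnitude latency comparison; local-storage values are assumptions.
latencies_us = {
    "spinning-disk seek": 5_000,          # ~200 IOPS -> ~5 ms per seek
    "slow local SATA SSD write": 200,     # assumed
    "fabric round trip, offloaded": 200,  # per the text
    "fabric round trip, RDMA/RoCE": 100,  # per the text
    "fast local NVMe write": 20,          # assumed
}

for what, us in sorted(latencies_us.items(), key=lambda kv: kv[1]):
    # one fully synchronous writer can issue at most 1/latency operations per second
    print(f"{what:>30}: {us:>5} µs  (~{1_000_000 // us:>6,} serial ops/s)")
```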

Today we are at an inflection point, because what we have now is true abundance: each of the three limiters – IOPS, bandwidth and latency – has been thoroughly vanquished.

We can now build a system in which the data-center-sized fabric provides bandwidth and latency comparable to the system bus of a slow home computer (and gets correspondingly faster the smaller the domain gets). We can build machines the size of a data center, up to and past one million cores, that are coupled tightly enough to act as a single machine.
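
Putting the home-computer comparison into numbers; the bus figures below are rough historical values, assumed for illustration:

```python
# One 25 GBit/s fabric port versus buses of an older home computer (assumed values).
fabric_gbyte_s = 25 / 8                   # 25 GBit/s is roughly 3.1 GB/s

old_buses_gbyte_s = {
    "32-bit / 33 MHz PCI": 0.133,
    "AGP 4x": 1.0,
    "single-channel DDR-400": 3.2,
}

print(f"one 25 GBit/s port: ~{fabric_gbyte_s:.1f} GB/s")
for bus, gb in old_buses_gbyte_s.items():
    print(f"  {bus:<24}: {gb:.3f} GB/s")
```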

The building blocks are Open Compute racks at 12 kW apiece.
The operating system of the machine is Kubernetes.
The units of work are container images.
The local API is the Linux Kernel API.
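
A minimal sketch of what the last two points look like in practice: submitting one unit of work, a container image, to the cluster via the official Kubernetes Python client. The job name, namespace and image below are placeholders for illustration.

```python
# Submit one unit of work (a container image) to the "machine" as a Job.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="example-unit-of-work"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="worker",
                        image="example.registry.invalid/worker:latest",  # placeholder image
                        command=["/bin/sh", "-c", "echo hello from one core of the big machine"],
                    )
                ],
            )
        )
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```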

Do you still own Microsoft stock?

Published in Containers and Kubernetes

6 Comments

  1. Harald

    For the future of Ethernet beyond RoCE/RDMA, see what Infiniband does today. There is an ongoing convergence in features and performance limits, but if you want extra scary, Infiniband is the way to go.

    • Peter Buschman

      Ethernet and Infiniband have shared a lot of the same electrical DNA ever since 40Gb Ethernet adopted the same 4-lane structure as FDR 56Gb IB. Both use the same IEEE-standardized QSFP connectors, but Ethernet wisely added the ability to break out individual lanes, something that is not possible with IB. Mellanox VPI NICs can even support both protocols with a simple firmware switch. IB is still great if you are building a green-field HPC cluster, but if you have to deal with the messiness of older systems in a modern data center, 25/100Gb Ethernet gets you a lot of the good things from IB while still integrating smoothly with the older parts of the estate. Going forward, expect the convergence to continue, with IB and Ethernet making the same jumps in speed at the same time.

  2. Val

    You forgot about cross-domain cloud load-balancing. Existing load-balancing solutions are largely fragmented.

  3. Alex

    Please ignore my weeping while I crawl back to the museum that is considered a data center here.

    We are saving so much money by running outdated software on outdated hardware with outdated architecture – but top-notch controlling.

  4. knorke

    “Fundamentally, IOPS are not a limited resource any more”
    I strongly disagree.
    The technology may be there, but the costs are (from my perspective) insane. E.g. AWS has a cap of 20,000 IOPS for io2 EBS, and you will pay approx. $1600 per month for 4GB (size) at 20k. Self-hosting is not an alternative, although the hardware costs much less. But then you have to build the entire infrastructure on your own.

    Scaling IOPS is IMO more expensive than scaling CPU or RAM. YMMV.

    • kris kris

      And that is why, where I work, we build our own data centers. There is cost, but it’s for stuff you get to own, and in the end it delivers what we need.

      And yes, there is cost, but that’s just money. There is not really a fundamental technical barrier any more, so storage basically transforms from a tech problem into a procurement problem. That’s a phase change.
