Centralization of Servers: A Cyclical History!

Every few years, computing undergoes a kind of cyclical change. You’ve likely not noticed it, as this change mostly affects larger infrastructure trends, but it’s there nonetheless. The cycle of centralization.

In the beginning of computing, computers were MASSIVE. There weren’t really “computers” in a place so much as “a computer,” especially when they took up entire rooms, required constant maintenance, and had to be programmed with punch cards. These eventually got “smaller,” and would come to be known as “mainframes.” They were accessed from individual terminals that were basically only smart enough to print text and send keyboard input back to the mainframe. Over time those terminals got smarter, and the mainframe went from being the sole computing resource to a partner that stored data and helped the machines communicate with each other, allowing newer kinds of workflows. This became the internet we know today! But even within the world of servers, this cycle is active.

Servers were still exceptionally powerful computers, as they needed to be fast enough to keep up with many client machines at once. Over time, servers were grouped into clusters, appearing from the outside as a monolith but divvying up tasks amongst the machines so that none were overworked. Suddenly scaling up was easy!

The new hotness to surpass this clustering idea was “virtualization.” You’d get one very powerful computer and use it to make a bunch of less powerful fake computers inside it, each with its own allocation of the larger machine’s resources. This let you separate applications, such that if one failed it didn’t necessarily break everything. One upfront purchase of hardware would serve for years and leave far fewer mechanical components to keep in working order.

But the constantly decreasing cost of components would modify this again: those hosts of virtual servers were soon clustered themselves. You would have many cheaper pieces of hardware working together to share the load and take over if one failed, so the cluster was not dependent on any one physical piece of hardware. That let you spread costs around while still keeping applications separated.

This again was supplanted by the idea of “containers,” which are effectively a way to run multiple applications on one host such that they don’t touch each other. This lessens the overhead of virtual servers by sharing the common parts of the operating system. It also allows you to copy those applications easily and automatically, so that if you’re bombarded by requests, you can quickly throw more copies at the problem and then get rid of them when the requests die down. Already, though, this was both a centralization, as it brought higher-powered hardware back into favor, and an immediate push toward decentralization, since it could be just as simple to use many weaker machines to run these containers.
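To make that scale-up-and-back-down idea concrete, here’s a minimal sketch using the Docker SDK for Python, assuming Docker is installed and “my-web-app” stands in for whatever image you actually run (the image name, label, and counts are placeholders, not a recommendation):

    # Sketch: add and remove copies of one container image as demand changes.
    # Assumes Docker is running and the "docker" Python package is installed.
    # "my-web-app" is a placeholder image name and label, not a real app.
    import docker

    client = docker.from_env()
    LABELS = {"app": "my-web-app"}

    def scale_to(count):
        """Start or stop identical containers until exactly `count` are running."""
        running = client.containers.list(filters={"label": "app=my-web-app"})
        while len(running) < count:  # traffic spike: start more copies
            running.append(
                client.containers.run("my-web-app", detach=True, labels=LABELS)
            )
        for extra in running[count:]:  # requests died down: remove the extras
            extra.stop()
            extra.remove()

    scale_to(5)  # bombarded by requests
    scale_to(1)  # back to quiet

In practice an orchestrator would make these decisions for you automatically based on load, but the underlying operation is about this simple.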

One of the more interesting trends today, at larger companies like Google and Facebook, is treating hardware AS a container. They’ve created custom servers from standard components, with specialized cases designed so that maintenance is easy. They have entire racks that are powered off until they are needed, and each physical “node” is easily swappable. People in their massive datacenters have the job of going around and quickly swapping out any server reporting an error with a fresh one, so someone else can troubleshoot it later at a depot.

If you’re not a massive company, your servers are likely acting as a hub for company activities, but the odds are good that the way they look reflects the time they were purchased, or maybe a trailing trend from a few years before that. I wouldn’t be surprised if more small and medium businesses started adopting the “black box” module approach in a few years, where you basically have a small rack of components and swap out what would previously have been entire servers every few years to upgrade them, while the “server” itself remains the same, like a digital Ship of Theseus.