Nowadays, ARM CPUs are the predominant choice for everything outside the data center; even some PCs are now based on this type of CPU. Several reasons exist for this: low power consumption, good-enough computing power, System-on-Chip (SoC) designs, the ecosystem, the ARM licensing model, and more. Intel, on the other hand, lost this battle a long time ago, but x86 CPUs won hands down in the data center. Things are changing though, and ARM is becoming more attractive for data center workloads as well.
Workloads and Applications Are Changing
Intel CPUs are simply unbeatable for some workloads: single-threaded tasks run faster on x86 cores, and these CPUs have huge memory caches, handle a broad range of workloads well, and give very predictable results. The way we design and develop new applications is changing though. Microservices and small containers are highly parallel, very small, and require minimal resources to run. Running many small tasks on CPUs with few cores, albeit fast ones, is not very efficient: the large memory cache matters less, and context switching becomes a toll to pay, no matter how efficient the CPU is.
Smaller CPUs with many cores can run multiple tasks concurrently and efficiently. Maybe some of you remember the Sun SPARC T series. These CPUs were ahead of their time: particularly awful at running a single task, but a killer for highly multithreaded code. Unfortunately, at that time, multithreaded code and microservices were still a distant dream.
Hardware Is Changing
General-purpose CPUs are OK at everything but, let’s be honest here, they aren’t great at anything. In recent years we have seen many new specialized processors (often called accelerators) gaining traction in the data center:
- GPUs for graphics-intensive workloads, media transcoding, and AI/ML
- FPGAs for signal processing, voice recognition, cryptography, networking, and some AI tasks
- TPUs (tensor processing units): specialized accelerators for AI and ML
- And even ASICs are back for certain tasks
When servers are equipped with these accelerators and run the workloads for which they were designed, CPU power becomes less relevant to a task’s total execution time. At that point, having a good-enough, power-efficient CPU is better than having a power-hungry one.
Furthermore, hyperscalers started designing their own hardware a long time ago. They now have specialized hardware tailored to their specific needs: more efficient, more serviceable, more resilient, and less expensive than standard rack servers. With better control over the entire design, including the CPU, they can take these characteristics to another level. Even more so, smaller servers also mean smaller failure domains and better service availability.
And it’s not only hyperscalers that are looking at ARM CPUs; tier-2 service providers are active in this regard as well. Packet, for example, has always offered some ARM instances, and several PaaS and SaaS providers are investigating the potential of this option.
Are Enterprises Changing?
Short answer? Not so much. Enterprises are still tied to x86-based hardware and standard servers. They have neither the scale nor, honestly, the interest to move to ARM.
Traditional enterprise applications are not ready, and even though we talk about Kubernetes every day, it will take a long time to build the critical mass of container-based applications that would make switching to new hardware architectures worth considering. Honestly, by then it is highly likely that on-prem IT will be only one component of a larger hybrid environment, with applications floating between public cloud and on-premises infrastructures, and I don’t know whether anybody will still bother looking closely at something that won’t make a big difference in terms of TCO.
Edge computing could be another story. If Kubernetes becomes the prevalent system for deploying and managing applications in remote sites, the power efficiency and other benefits of ARM architectures could play an important role, leading to a major change in how edge infrastructures are designed. And it’s not only Kubernetes; VMware is on the same wavelength with its ESXi for ARM.
Key Takeaways
Cloud and service providers are designing their data centers for the best efficiency. They are big hardware spenders, and every improvement they can make easily results in big savings. Hyperscalers such as Microsoft and Amazon are already working hard in this direction, and many others, even smaller ones, are doing the same.
Startups like Bamboo Systems, for example, are working to make efficient ARM servers available to a wider audience, and the number of standard ARM-based designs is growing as well.
It looks like specialized CPUs and accelerators have more potential than ever to win the data center. Intel needs new ideas, new CPU designs, and new products to fight competitors like AMD with Epyc, NVIDIA with its GPUs, and ARM. They have done it before; will they be able to do it again?