Enterprise development has always been one of the most exciting fields of software engineering – however, the last decade has been a particularly fascinating period. The 2010s have seen highly distributed microservices gradually replace classic three-tier architectures, with the almost limitless resources of cloud-based infrastructure pushing heavyweight application servers towards obsolescence.
Microservices can do without heavyweight runtime environments and therefore use significantly fewer resources. One of the best ways to deploy microservices is via containers, which we can think of as small virtual machines. The most important difference between a container and a virtual machine is that a container does not ship its own operating system; instead, it runs in userspace on the host's operating system kernel. This makes a container a form of operating-system-level virtualisation, which means that many containers can operate on one host.
Alongside all this, a distributed architecture that runs its services in containers needs more than just one container per service. This means we often have a vast number of containers that must be started and operated in a coordinated manner. This is a considerable administrative effort – imagine having to start every container manually from the command line. Thankfully, this is where Kubernetes comes into play.
Configuring a Kubernetes-native environment
Kubernetes orchestrates the relationships between all of the containers in a system, such as communication and resource allocation. A good analogy for the role of Kubernetes is that of a harbourmaster at a seaport: both are responsible for ensuring that lots of independent ‘ships’ move concurrently in a limited space.
This is a complicated job. Thankfully, though, developers who want to make applications fit for Kubernetes don’t need to set up endless parameter lists. Instead, developers configure a Kubernetes cluster by writing text configuration files in YAML format, with each object in the cluster specified in detail in a config file. The main risk when writing YAML config files is human error, which is considerable given that the files are typed by hand. Despite this risk, YAML remains the most widely used config file format, although Kubernetes can also work with JSON-formatted files.
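To give a feel for what such a file looks like, here is a minimal sketch of a Deployment manifest. The service name and container image are hypothetical placeholders, not taken from any real project:

# deployment.yaml - a minimal, illustrative Deployment manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting-service            # placeholder service name
spec:
  replicas: 2                       # Kubernetes keeps two copies running
  selector:
    matchLabels:
      app: greeting-service
  template:
    metadata:
      labels:
        app: greeting-service
    spec:
      containers:
        - name: greeting-service
          image: example/greeting-service:1.0   # placeholder image
          ports:
            - containerPort: 8080   # port the service listens on

Applying this file with kubectl apply -f deployment.yaml hands the desired state to the cluster; Kubernetes then starts and supervises the containers itself.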
Compared to classic enterprise software development, this all feels like a new world. With a lot of abstraction, we can compare going Kubernetes-native – that is, making full use of the capabilities of Kubernetes – with the well-known Java EE application server, because both distribute the parts of an application across the underlying physical hardware.
However, this comparison between going Kubernetes-native and Java EE only holds at first glance, since containers and Kubernetes care comparatively little about the requirements of the application itself. Kubernetes and containers only provide support for the configuration and abstraction of hardware; they do not provide support for transactions or other application-level programming interfaces.
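This hardware abstraction is visible directly in the manifests. As an illustrative sketch (the Pod name and image are assumptions, not from the article), a container can declare the CPU and memory it needs, and Kubernetes finds a node that can satisfy the request; nothing in the file concerns transactions or other application logic:

# resource-demo.yaml - illustrative resource requests and limits
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo              # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:1.0       # placeholder image
      resources:
        requests:
          memory: "256Mi"          # scheduler reserves this much memory
          cpu: "250m"              # a quarter of a CPU core
        limits:
          memory: "512Mi"          # hard ceiling for the container
          cpu: "500m"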
Hardware for local Kubernetes configuration
So far, the hardware requirements for operating a Kubernetes cluster have been left out. In order to take advantage of the high scalability and reliability provided by a Kubernetes cluster, developers need to allocate sufficient system resources to it.
If one assumes a cluster with two master nodes, each with 2 gigabytes of RAM and 4 cores, and two worker nodes, each with 1 gigabyte of RAM and 2 cores, then the cluster has a minimum requirement of 6 gigabytes of RAM (2 × 2 GB + 2 × 1 GB) and 12 cores (2 × 4 + 2 × 2). That’s definitely not something that can be easily run on most desktops. Although running on a desktop isn’t the goal of Kubernetes, developers rightly want a way to develop with Kubernetes locally.
Fortunately, there are now a number of smaller learning environments that make this possible. These include MiniKube, MicroK8s, and the OpenShift CodeReady Containers, all of which bring Kubernetes to the desktop as single-node clusters. Depending on the chosen environment, such an installation can be completed in a few minutes, and a developer can start working locally.
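Once such a local cluster is running, a quick smoke test is to apply a tiny manifest and watch Kubernetes schedule it. A minimal sketch, assuming any small public image is available:

# hello-pod.yaml - minimal Pod for testing a local single-node cluster
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod                  # illustrative name
spec:
  containers:
    - name: hello
      image: nginx:1.25            # any small public image will do
      ports:
        - containerPort: 80

Running kubectl apply -f hello-pod.yaml followed by kubectl get pods should show the Pod reach the Running state within moments.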
To test a service in more complex environments, including its interaction with other services, there is often no way around a real Kubernetes cluster. However, tools like CodeReady Containers, an all-tools-included, single-node installation of a Kubernetes cluster, can make a developer’s life easier here too.
Going Kubernetes-native is a different world
The developer experience with Kubernetes is very different from the old fully integrated world of application servers. While going Kubernetes-native is the logical next step towards simplification for the developer, it does mean a leap into a more abstract and complex world.
The Kubernetes-native world is also a much more flexible one. With the aids and tools for productive Kubernetes-native development still in their infancy, it also presents a broad set of fascinating new challenges for developers.