The practice of building and shipping software has undergone steady, iterative change over the last decade. The overall arc of this journey carries us from monolithic applications running on our own metal, on our own premises, to applications built from modular, loosely coupled services running in the cloud. Cloud native technologies are no longer bleeding-edge buzzwords that we all expect to learn more about in the coming months and years; they are here now, and they are here to stay.
If you are working in a shop that is late to the party, your current battle plan likely revolves around dragging your monoliths to the cloud by any means necessary as your on-premises metal starts to rust. From this position, it’s easy to feel hopelessly behind. How do we break down our monolithic stacks into services, and how do we move forward without getting stuck in the details? Just as important to consider is how we migrate from a world that includes an IT budget and teams of sysadmins to small development teams with credit cards and cloud budgets.
Kubernetes to the rescue?
Kubernetes to the rescue, right? Of course, this can be the answer. Kubernetes is at the center of a robust cloud native ecosystem and is the de facto standard platform of the cloud. Thousands of users deploy tens of thousands of services on countless clusters every day. There is little doubt that if you plan to run software in the cloud, containers are the primitive and Kubernetes is the platform.
Let’s take a moment to reflect on the role Kubernetes plays in the journey to cloud native development. As we break down large-scale deployments and monolithic applications into smaller microservices, containers are a near-perfect match for the resulting need for an execution environment. However, factoring our services for container lifecycles is only half of what’s needed, and this is where Kubernetes comes in. It abstracts compute, networking, and storage infrastructure resources while providing a framework that allows us to group our containers into meaningful deployments and to scale these deployments up and down on demand.
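To make that grouping and scaling concrete, here is a minimal sketch of a Kubernetes Deployment manifest. The service name, image, and replica count are hypothetical placeholders, not anything prescribed by the platform:

```yaml
# A minimal Deployment: groups three replicas of one container image
# into a single, scalable unit. All names and values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service          # hypothetical microservice name
spec:
  replicas: 3                   # scale up or down by changing this value
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example.com/orders:1.0   # placeholder image reference
        ports:
        - containerPort: 8080
```

Scaling the deployment is then a one-line change to `replicas`, or an imperative `kubectl scale deployment orders-service --replicas=5`.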
Cloud native technologies
In what has become a traditional DevOps workflow, developers and operators have joined forces to manage and deploy software stacks in the cloud. Business logic is implemented and enhanced in units of microservices, deployments are handled automatically and scaled out on demand, and everything is managed by code in source control. That code might be the Go or JavaScript implementing our business logic, or the Terraform and Helm configurations defining the infrastructure it runs on.
This is how we roll in today’s New Stack-savvy workplace. Cloud native technologies are the lingua franca of technical solutions, and because of that, teams are faster and more efficient than ever before. But this comes at a cost: knowledge and experience. Even when using managed services, it takes experienced and knowledgeable practitioners to get that code into containers and to work those containers into orchestrated deployments. It takes experience and nuance to make those deployments run smoothly and update dependably, and above all, to know where to look when something goes wrong.
This is a lot to ask from a traditional development team tasked with wrestling the monolith into the cloud by any means necessary. No doubt there are plenty of smart people to learn from, training to be acquired, and, of course, staffing may be an option. However, Serverless brings another option to bear, and while it’s worth considering for any organization for numerous reasons, it can be especially interesting for teams just beginning their migration to the cloud.
SEE ALSO: Mo’ developers, mo’ problems: How serverless has trouble with teams
Using serverless to leapfrog initial problems
So, Serverless to the rescue! Or, at least, Serverless may be a means of leapfrogging some initial growing pains. It promises a number of things, one of which its very name claims: we can forget all about the servers. As we know, there’s no magic in the cloud, and there are obviously servers behind Serverless. The idea is that developers need not worry about them, ideally in every respect, from container primitives to orchestrating deployment clusters.
Put another way, Serverless is an architectural choice that can provide a development team much of what’s good in container-native microservices and orchestrated cloud deployments without needing to become fluent in the current best-in-class tooling of today’s cloud native infrastructure. In a typical cloud native workflow deploying atop Kubernetes, we write code, push code, build containers, push containers, define deployments, test deployments, monitor deployments and scale deployments. In Serverless, we write code and push code. Serverless handles the heavy lifting for you, allowing you to focus on the applications and services which drive your business.
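The “write code and push code” workflow can be illustrated with a minimal function-as-a-service handler. This is a sketch assuming an AWS Lambda-style signature (`event`, `context`); the event shape and field names are hypothetical:

```python
import json

def handler(event, context=None):
    """A minimal serverless function: receive an event, return a response.

    There are no containers to build and no deployments to define here;
    the platform invokes this function in response to an event and
    scales it automatically.
    """
    # "name" is a hypothetical field on the incoming event
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deploying something like this is typically a single push with the provider’s CLI or a framework tool, rather than a container build-and-deploy pipeline.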
As developers, we are accustomed to adapting to improved development practices and emerging architectural models, and we should expect to keep doing so. When implementing applications in a Serverless context, microservices design and development best practices are implicitly enforced. Further, with a Serverless solution, we depend on the platform to implement infrastructure best practices under the covers. In many ways, adopting Serverless allows a team to leapfrog over infrastructure hurdles and land straight in an established cloud native practice.
Serverless is far more than a nascent cloud native abstraction, though. There is great power in the event-driven nature of Serverless solutions. With abstract events providing a common glue between services, it’s very simple to focus on your code, pulling together a set of best-in-class services to fulfill backend requirements. This enables development teams to rapidly prototype solutions that often easily roll into production-ready applications. Teams avoid reinvention of the same wheels, and it’s easier than ever to pivot to emerging best-in-class solutions for particular requirements.
It should not go unmentioned that this event-driven architecture is also what makes Serverless a good cloud native choice if a team wants to avoid vendor lock-in. Hint: all teams should want to avoid vendor lock-in. By its very nature, the Serverless model promotes freedom of choice on a per-component basis. This should extend to managed services and underlying infrastructure components as well, and does so with many cloud providers.
SEE ALSO: Serverless computing – What is the future for Dev and Ops teams?
In conclusion
As you explore this space, you should spend some time considering the openness of a provider’s Serverless solutions. Many cloud providers offer managed services based on open platforms, such as Apache OpenWhisk and Oracle’s Fn Project. You can also follow the work of the CNCF Serverless Working Group, which consists of most major cloud providers and many other interested parties. The group is currently working on the CloudEvents specification, which seeks to describe event data in a common way, further enabling Serverless solutions to be vendor- and cloud-agnostic.
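For a flavor of what CloudEvents proposes, here is an illustrative event in its JSON format. The attribute names follow the specification; the event type, source, and data payload are hypothetical:

```json
{
  "specversion": "1.0",
  "type": "com.example.order.created",
  "source": "/orders/service",
  "id": "a7c1b2d3-0001",
  "time": "2018-09-01T12:00:00Z",
  "datacontenttype": "application/json",
  "data": { "orderId": 1234, "total": 99.95 }
}
```

Because producers and consumers agree on this envelope rather than on a provider-specific event shape, the same function can, in principle, be wired to events from different clouds or platforms.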
Serverless concepts and practices have been in play for a couple of years now, some may say even longer. However, we are still in the early days of broad adoption. A lot of work is yet to come, and Serverless is sure to play a big part in the future of cloud native development. If you’re part of a team looking to make the migration to the cloud, it may offer just the kickstart you are looking for.
This article is part of the latest JAX Magazine issue. You can download it now for free.
Have you adopted serverless and loved it, or do you prefer containers? Are you still unsure and want to know more before making a decision? This JAX Magazine issue will give you everything you need to know about containers and serverless computing but it won’t decide for you.
The post Straight to serverless: Leapfrogging turtles all the way down appeared first on JAXenter.