Does serverless technology really spell the death of containers?

November 27, 2018

To misquote Mark Twain, the rumours of the death of containers have been greatly exaggerated. Serverless technologies may well usurp other technological functions, but I would argue that this will only happen on a case-by-case basis.

So rather than the horse and cart being replaced by the automobile, a more accurate analogy would be the advent of air travel complementing the shipping industry.

It is also clear that the three big hosting platforms – AWS, Google Cloud and Microsoft Azure – are not attempting to replace containers with serverless technologies; rather, they are augmenting them with other services that containers can make use of.

I would encourage people to consider containers as another tool in the cloud-native tool belt – perhaps even as the final piece of the puzzle – rather than a technology destined for the figurative computing scrapheap. Below are some examples of the so-called serverless future, each incredibly useful in its own right, but none of them indicative of a future without containers.


The serverless one-liner

‘Serverless’ essentially boils down to a shared idea: services provided and managed by the host. These services are becoming ever more granular, connectable and abstract and, as such, can now be used to build a whole product architecture without the need for a server of your own – hence the term ‘serverless’.

There are several such services worth mentioning here – and there are plenty more out there – but the following three are particularly fundamental to building any web-based application.

Where do we keep all these cat gifs?

File hosting, a self-contained function that has largely been replaced by object storage, is a commonly cited example of the move to ‘serverless’. If you need to serve static assets – images, for example – then running your own server to deliver them cannot begin to compete with object storage on efficiency or cost. The cost of serving a single static asset from Amazon’s S3 is vanishingly small – a fraction of a cent per 1,000 requests – not to mention the sheer scale that Amazon are able to offer even the most pedestrian user.

Add to that the benefit of pricing based on the number of assets served (rather than the uptime of a static file host) and you can see why it would make little sense to try to replicate this on a standard server setup – object storage is cheap, reliable, requires zero maintenance and is easy to set up.
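
To give a flavour of how little code this involves, here is a minimal sketch using the AWS SDK for Go; the bucket name and asset path are placeholders rather than anything real.

```go
// A minimal sketch of pushing a static asset into object storage instead of
// serving it from your own file server. Bucket and file names are hypothetical.
package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Credentials and region come from the environment or shared AWS config.
	sess := session.Must(session.NewSession())
	client := s3.New(sess)

	f, err := os.Open("cat.gif") // hypothetical asset
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Upload once; S3 then serves the asset directly to users, so no file
	// server of your own needs to stay up (or be paid for) between requests.
	_, err = client.PutObject(&s3.PutObjectInput{
		Bucket:      aws.String("my-static-assets"), // hypothetical bucket
		Key:         aws.String("images/cat.gif"),
		Body:        f,
		ContentType: aws.String("image/gif"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("uploaded; serve it straight from S3 or put a CDN in front of the bucket")
}
```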

SEE ALSO: Data science applications: Containers or serverless?

And the ‘hilarious’ comments that go with them?

Another example is databases; although this is not yet a completely solved problem with every hosting supplier, it is getting there rapidly.

We have already seen the likes of Aurora from Amazon, and Cloud SQL and Datastore from Google. These are services that replace your self-managed database cluster (SQL or not) while still providing all of the functionality you would get from a native Postgres or Mongo database. The joy is that you don’t need to deal with scaling, replication, backups or patching – it’s all taken care of by your hosting provider; all you need to do is plug in your credit card and tap into it.

As with object storage, many of these services offer pricing tailored to usage and capacity, meaning costs scale alongside your business’s success.
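
To the application, ‘tapping into it’ really is that simple. Here is a minimal sketch, assuming a managed Postgres-compatible service and the standard lib/pq driver; the connection string and table name are placeholders.

```go
// A minimal sketch of using a managed SQL service (Aurora, Cloud SQL and the
// like) from application code: to the application it is just a connection
// string. The DSN and query below are hypothetical.
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres driver
)

func main() {
	// The host is the endpoint your provider gives you; scaling, replication,
	// backups and patching all happen behind it.
	dsn := "host=mydb.cluster-example.eu-west-1.rds.amazonaws.com user=app dbname=photos password=secret sslmode=require"
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var count int
	if err := db.QueryRow("SELECT count(*) FROM comments").Scan(&count); err != nil {
		log.Fatal(err)
	}
	log.Printf("%d hilarious comments so far", count)
}
```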

Now how do I turn them into a meme?

Finally, we have ‘cloud functions’, which some would point to as the serverless technology that will sound the death knell for containers. Services like Amazon’s Lambda or Azure Functions are still in their infancy, but they deliver the concept of ‘processing on demand’.

This can be a powerful addition to the toolkit of developers and system designers, effectively allowing custom code to be triggered by a given hook. Take the example of a photo processor: a photo comes in, and your processor scans it and highlights any faces it can find. In a non-serverless environment you would have a server that accepts an HTTP request – most likely a POST request with the photo attached to its body.

The server would then run your code to find the faces, save the results to a database, and store the file somewhere (probably object storage). This can easily be replicated as a cloud function, which can be triggered by specific events such as HTTP requests. Again, these functions are managed and scaled for you by the hosting provider, and have the benefit of usage-based costs (many platforms include a free tier of monthly invocations).
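
As a rough sketch, the photo processor above might look like this on AWS Lambda’s Go runtime, triggered by an HTTP request through API Gateway; the face-detection step, database write and storage details are stand-ins for whatever your real pipeline does.

```go
// A sketch of the photo processor as an HTTP-triggered cloud function on
// AWS Lambda (Go runtime, behind API Gateway). findFaces and the storage
// steps are hypothetical placeholders for your existing processing code.
package main

import (
	"context"
	"encoding/base64"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handlePhoto(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	// API Gateway delivers binary request bodies base64-encoded.
	photo, err := base64.StdEncoding.DecodeString(req.Body)
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: 400, Body: "bad request"}, nil
	}

	faces := findFaces(photo) // hypothetical: your existing face-detection code
	// ...save `faces` to a managed database and the original photo to object storage...
	_ = faces

	return events.APIGatewayProxyResponse{StatusCode: 200, Body: "processed"}, nil
}

// findFaces stands in for whatever face-detection library you already use.
func findFaces(photo []byte) []string { return nil }

func main() {
	// The provider invokes handlePhoto on demand; there is no server to keep running.
	lambda.Start(handlePhoto)
}
```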

For startups, this can provide bulletproof architecture very simply, with little or no extra development time. The cloud function APIs that I have used mimic things like the ‘Express’ framework in JavaScript, providing the same request and response API your developers are familiar with, or Go’s built-in HTTP package, whose Context objects developers will also recognise. In many cases, migrating is a job of copying and pasting code from your existing HTTP server into the file structure the cloud provider prefers, and writing a manifest that specifies how each function is triggered.
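
A sketch of what that migration looks like in Go, assuming a provider whose HTTP-triggered functions accept the standard net/http handler signature; the route and port here are purely illustrative.

```go
// The handler keeps the familiar net/http signature, so it can be mounted on
// your existing server today and registered with an HTTP-triggered function
// runtime later without changes to the handler itself.
package main

import (
	"fmt"
	"log"
	"net/http"
)

// ProcessPhoto is ordinary net/http code; nothing in it knows whether it is
// running on your own server or inside a provider-managed function.
func ProcessPhoto(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "POST a photo", http.StatusMethodNotAllowed)
		return
	}
	fmt.Fprintln(w, "photo accepted")
}

func main() {
	// Today: served from your own HTTP server.
	http.HandleFunc("/photos", ProcessPhoto)
	log.Fatal(http.ListenAndServe(":8080", nil))
	// Tomorrow: drop main(), point the provider's manifest at ProcessPhoto,
	// and let the platform do the listening.
}
```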

The less in serverless

This holy trinity of ‘storage’, ‘state’ (the database) and ‘processing’ (cloud functions) may be all that many web services require. It means that even relatively complex services can have a fully scalable, reliable and incredibly low-maintenance architecture whose cost depends on success rather than on projected capacity requirements.

SEE ALSO: Straight to serverless: Leapfrogging turtles all the way down

Contain yourself

So where does this leave containers – are they now surplus to requirements?

While it is true that many applications can use the trinity outlined above and never need to touch containers, this is not the case for all of them. Platforms like ours at Diffblue have processing requirements that can’t be fully expressed as ‘cloud functions’. Our main product takes a supplied code base and ‘automagically’ generates unit tests for it. For the majority of our platform’s requirements, however, there are like-for-like components that we can, and do, utilise from our cloud provider.

There are also processes we need to run that simply don’t fit: for example, we need to compile Java test cases and then run them. This isn’t possible within ‘cloud functions’, and it is where containers excel.

Containers give us the ability to completely specify and maintain the environment that our code runs in. It’s not enough simply to run the code; we need to know that the host OS gives us access to a specific version of Java, or Maven, or even Git – not to mention the ability to run custom-compiled C++ applications. Cloud functions just can’t offer the same power right now.
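
A rough sketch of the kind of work involved, assuming a container image that pins the exact Git, Maven and JDK versions we need; the repository URL and paths are placeholders, not our real pipeline.

```go
// A sketch of work that doesn't fit a cloud function: shelling out to tools
// that must exist at known versions on the host, with no hard cap on runtime.
// Inside a container we control exactly which versions are installed.
package main

import (
	"log"
	"os/exec"
)

// run shells out to a pinned tool and fails loudly with its combined output.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	// These tools exist at known versions because the container image pins them;
	// a function runtime gives no such guarantee (and limits execution time).
	run("git", "clone", "https://example.com/customer/repo.git", "/work/repo")
	run("mvn", "-q", "-f", "/work/repo/pom.xml", "test-compile")
	run("mvn", "-q", "-f", "/work/repo/pom.xml", "test")
	log.Println("generated tests compiled and executed")
}
```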

With the big three hosting providers all integrating Kubernetes – the new lingua franca for how containers should run and interact with one another – containers are best seen as the fourth aspect of an even holier quadrangle: one that augments the ‘serverless’ services rather than replacing them.
