When monitoring containers, machine data is essential

December 19, 2018

Software containers are growing in popularity as companies look to deploy containerized applications that can scale quickly. In the third annual State of Modern Applications and DevSecOps in the Cloud Report for 2018, our research shows that the share of companies running Docker in production on Amazon Web Services has grown to 28 percent in 2018.

Similarly, container management and orchestration technologies are also quickly growing in popularity. Native deployment of Kubernetes has nearly doubled in the past year from 8 percent to 14 percent – this rapid growth shows that Kubernetes is catching up with ECS.

What does all this growth show, and why should developers care?

First of all, it shows that the adoption of containers is not slowing down. If anything, it is gathering pace as developers begin to take their containerized applications from initial test and development into production. Secondly, it means developers are embracing the advantages of distributed container and serverless-based cloud architectures to deliver uninterrupted and seamless customer experiences.

One of the inherent advantages of containers is that developers can scale deployments faster than with more traditional software deployment models. This approach also complements the growth of container-based applications built on microservices and serverless technologies.

Another big advantage of containers is that they help support more modular application designs. Rather than the big, monolithic apps of the past, containers can host specific application elements that talk to each other. Additional container instances running those elements can be added to cope with peaks in demand, and equally, they can be removed once demand drops.
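As a rough illustration of that elasticity, the sketch below uses the Kubernetes Python client to adjust the replica count of a deployment in response to a demand signal. The deployment name, namespace, and scaling thresholds are hypothetical values for illustration, not anything from the report.

```python
# Minimal sketch: scale a containerized application element with demand.
# Assumes the Kubernetes Python client is installed and a kubeconfig is
# available; "web-frontend", "production", and the thresholds are
# illustrative assumptions only.
from kubernetes import client, config


def scale_for_demand(requests_per_second: float) -> None:
    config.load_kube_config()  # use the local kubeconfig for cluster access
    apps = client.AppsV1Api()

    # Crude demand rule: one replica per 100 req/s, between 2 and 20 replicas.
    replicas = max(2, min(20, int(requests_per_second // 100) + 1))

    apps.patch_namespaced_deployment_scale(
        name="web-frontend",
        namespace="production",
        body={"spec": {"replicas": replicas}},
    )


if __name__ == "__main__":
    scale_for_demand(requests_per_second=450.0)  # would request 5 replicas
```

In practice an autoscaler would make this decision continuously, but the principle is the same: capacity follows demand rather than being provisioned up front.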

Each set of containers can also be replaced with other elements that achieve the same objective and provide the same result by different means. Over time, developers gain more freedom of choice and can use APIs to switch between different storage or database platforms as their needs change, rather than being tied to specific services. What might have been a huge migration project in the past can be avoided by using those APIs to prevent lock-in.
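To make the lock-in point concrete, here is a minimal sketch of that kind of API abstraction in Python: a small storage interface with interchangeable backends. The class and method names are hypothetical and not taken from any particular platform.

```python
# Minimal sketch of abstracting storage behind an API so backends can be
# swapped without touching application code. All names are hypothetical.
from abc import ABC, abstractmethod


class BlobStore(ABC):
    """Narrow interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(BlobStore):
    """Simple backend, e.g. for tests or local development."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]


class ObjectStorageStore(BlobStore):
    """Placeholder for a cloud object-storage backend (S3, GCS, ...)."""

    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError("wire up the cloud SDK of your choice here")

    def get(self, key: str) -> bytes:
        raise NotImplementedError("wire up the cloud SDK of your choice here")


def save_report(store: BlobStore) -> None:
    # The caller never knows which backend it is talking to.
    store.put("reports/latest.txt", b"nightly report contents")


if __name__ == "__main__":
    save_report(InMemoryStore())
```

Swapping ObjectStorageStore in for InMemoryStore then becomes a one-line change at the call site rather than a migration project.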

SEE ALSO: More containers means we need better system visibility

Legacy on-premises technology and processes can hold organizations back from innovating and meeting customer needs. Legacy systems and processes are no longer adequate for providing the rapid scale, agility, and visibility across the full application stack required to deliver uninterrupted and seamless customer experiences.

However, this fast growth can lead to more sprawl and more complexity, and can hide a multitude of sins, from poor set-up, weak security, and operational performance issues through to undocumented dependencies. Far from being more resilient, poorly designed container-based applications can be more affected by change across a complex stack. Similarly, it can be more difficult to diagnose and fix problems when application code misbehaves or underperforms. This is particularly problematic when third-party application elements are used and when the sheer volume, velocity, veracity, and variety of data serving these applications has to be considered in real time.

To deal with this, it’s important to put some best practices in place from the start.

Data, analytics, and documentation

The first area that needs attention is the data that your application produces and the associated documentation around it. This matters for containerized applications because their elements can change rapidly over time, particularly as individual elements can be swapped out without affecting the others. Because each service is connected via APIs, replacing a database or an analytics service within an application can be simpler: the APIs abstract each element from the others, making it easier to migrate over time.

Without a concrete central history of the application – particularly around those changes – it can be harder to track those decisions over time. This can make it more difficult to meet business goals. Poor or missing documentation can jeopardize an application over time, as this knowledge can live in developers’ heads and go with them if they ever leave the team.

Alongside proper documentation on the application’s design and components, it is also worth looking at how your application creates data. Across a complete application, each element will create data on activities that you should be collecting. Using this data can help you troubleshoot any issues and look for more detail around problems.
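As one way to make that activity data usable downstream, the sketch below emits JSON-structured log lines that carry service and container identifiers, so a central platform can collect and correlate events later. The field names are assumptions for illustration, not a prescribed schema.

```python
# Minimal sketch: structured (JSON) logging with container context attached,
# so each application element produces data that can be collected centrally.
# Field names ("service", "container_id", ...) are illustrative assumptions.
import json
import logging
import os
import socket
from datetime import datetime, timezone


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "service": os.environ.get("SERVICE_NAME", "unknown"),
            "container_id": socket.gethostname(),  # defaults to the container ID in Docker
            "message": record.getMessage(),
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order accepted")  # one machine-readable event per line on stdout
```

Writing one machine-readable event per line to stdout keeps each container stateless about logging; the surrounding platform decides where the data goes.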

Monitoring and troubleshooting

With more complex applications running in containers, getting this data collected and used can be more difficult, especially when multiple cloud services are involved. Traditionally, log management was simpler – applications were more tightly coupled, lived in one central place, and had fewer moving parts. Today, the opposite is true – the rise of containers has made it easier to host parts of an application in multiple places, while the trend towards serverless, microservices, and cloud means that far more elements are involved in each application stack.

Without a good monitoring plan in place, it is all too easy to miss data from the elements involved in the application's execution. It can also be difficult to get a complete picture of the entire application and the underlying infrastructure in real time – with so many moving parts sprawled across internal data centers, hybrid cloud and/or multi-cloud deployments, and third-party services using public container image libraries, getting a true and complete view of stack performance in one place and in real time is harder. It can likewise be harder to predict and diagnose faults and issues when something goes wrong, for both operational and security purposes.

Implementing application monitoring for containers should be done at the start of any project. This will make it easier to secure, ingest, index, analyze and correlate data from all the containers. Looking at cloud-native infrastructure services should also make it simpler to manage data from a mixed IT environment over time.
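As a small illustration of collecting that data from the start, the sketch below uses the Docker SDK for Python to pull a basic stats snapshot from every running container on a host. In a real deployment you would forward these records to your monitoring or log analytics platform rather than print them, and the exact fields available depend on the runtime.

```python
# Minimal sketch: collect basic stats from every running container on a host.
# Assumes the Docker SDK for Python ("docker" package) and access to the local
# Docker daemon; a real setup would ship these records to a log/metrics
# backend instead of printing them.
import docker


def collect_container_stats() -> None:
    client = docker.from_env()
    for container in client.containers.list():
        stats = container.stats(stream=False)  # one-shot stats snapshot
        mem_usage = stats.get("memory_stats", {}).get("usage", 0)
        print({
            "container": container.name,
            "image": container.image.tags,
            "status": container.status,
            "memory_bytes": mem_usage,
        })


if __name__ == "__main__":
    collect_container_stats()
```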

Faster applications, DevOps and security

Alongside getting better insight into application performance and issues, monitoring your containers can help you improve your application security. Developers are taking on more responsibility for application deployments and operations over time in collaboration with IT operations teams, but they also have to collaborate more effectively with IT security teams. Security is being added to DevOps – as DevSecOps – to reflect this collaboration.

Using data from your application infrastructure can provide essential information for security monitoring. For example, looking at how your application stores and processes data can be important for compliance. Poorly implemented storage on services like Amazon S3 has led to large-scale data leaks in the past, so checking that your implementations follow best practices on access control and encryption should be done right at the start.
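By way of example, here is a hedged sketch of that kind of check using boto3: it flags buckets that report no default encryption or no public access block. The pass/fail rule is deliberately simplistic and your own policy would differ.

```python
# Minimal sketch: check S3 buckets for default encryption and a public access
# block, using boto3. Assumes AWS credentials are configured; the
# "compliant/non-compliant" rule here is deliberately simplistic.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
        encrypted = True
    except ClientError:
        encrypted = False  # no default encryption configuration found
    try:
        s3.get_public_access_block(Bucket=name)
        blocked = True
    except ClientError:
        blocked = False  # no public access block configured
    if not (encrypted and blocked):
        print(f"review bucket {name}: encryption={encrypted}, "
              f"public_access_block={blocked}")
```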

SEE ALSO: Does serverless technology really spell the death of containers?

The challenge here is that you still have to manage application sprawl. When container instances are located on different clouds, or when some elements are on one cloud and others on another, getting consistency over role-based access control is a necessary step. Similarly, managing access to those assets over time relies on good key management. Keys should be refreshed regularly for all images – but with so many containers being spawned and torn down automatically in response to demand and with a mix of stateless and stateful assets, this is not something that developers can do for themselves without having a significant impact on productivity.

Key management across all those assets is, therefore, something that should be automated. This automation of best practices – helping developers work more efficiently and productively while meeting requirements around security – is an area where data can help.
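A minimal sketch of what that automation could look like, using boto3 and IAM access keys: keys older than a chosen threshold are flagged so a pipeline can rotate them. The 90-day threshold is an arbitrary example, and real rotation also has to create the replacement key and distribute the new secret to whatever consumes it.

```python
# Minimal sketch: flag IAM access keys older than a threshold so rotation can
# be automated rather than left to individual developers. Assumes boto3 and
# configured AWS credentials; the 90-day limit is an arbitrary example.
from datetime import datetime, timedelta, timezone

import boto3

MAX_KEY_AGE = timedelta(days=90)

iam = boto3.client("iam")

for user in iam.list_users()["Users"]:
    keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
    for key in keys:
        age = datetime.now(timezone.utc) - key["CreateDate"]
        if age > MAX_KEY_AGE:
            # A real pipeline would create a new key, re-distribute the secret,
            # then deactivate and delete the old one.
            print(f"rotate key {key['AccessKeyId']} for {user['UserName']} "
                  f"(age: {age.days} days)")
```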

Alongside access control, data can also be used to improve application security. However, legacy security analytics tools, including Security Information and Event Management (SIEM) platforms, are failing to provide the insight needed to effectively manage security and compliance in the cloud, as was highlighted in a recent survey conducted by Dimensional Research. According to that survey, 93 percent of respondents think current SIEMs are ineffective for the cloud, and two-thirds identified the need to consolidate and rethink traditional tools.

Containers and best practices

Getting log data in one place and in an understandable format can help you see where your applications and your processes are performing well. Equally, it can point to areas where your thinking needs to change – where containers are being spawned faster than demand warrants, driving up cloud resource costs, for example. By bringing together information from all your application assets, you should be able to streamline and improve overall performance.
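For instance, here is a minimal sketch of the kind of sanity check that centralised log data makes possible: counting container start events per minute against request volume to spot over-eager scaling. The event shapes and the 50-requests-per-container rule are made-up illustrations, not any platform's schema.

```python
# Minimal sketch: compare container start events to request volume per minute
# to spot over-eager scaling. The dictionaries stand in for parsed log records
# from a central log platform; their shape is purely illustrative.
from collections import Counter

start_events = [  # parsed from container lifecycle logs
    {"minute": "12:01"}, {"minute": "12:01"}, {"minute": "12:01"},
    {"minute": "12:02"},
]
request_counts = {"12:01": 40, "12:02": 500}  # requests per minute from access logs

starts_per_minute = Counter(event["minute"] for event in start_events)

for minute, starts in sorted(starts_per_minute.items()):
    requests = request_counts.get(minute, 0)
    if starts and requests / max(starts, 1) < 50:
        # Fewer than ~50 requests per newly started container: scaling may be
        # running ahead of real demand and burning cloud budget.
        print(f"{minute}: {starts} containers started for only {requests} requests")
```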

Containers will continue to grow in importance for application developers. The performance advantages and automation support make it far easier to roll out apps that meet business needs quickly, while the ability to support faster changes makes it easier to run operations over time.

However, getting the right process, automation and security steps in place around these apps requires its own planning. And that’s why ensuring you have the right data around your container-based applications in the first place will go a long way to make those deployments more efficient and more valuable.

 

This article is part of the latest JAX Magazine issue. You can download it now for free.

Have you adopted serverless and loved it, or do you prefer containers? Are you still unsure and want to know more before making a decision? This JAX Magazine issue will give you everything you need to know about containers and serverless computing but it won’t decide for you.
