An introduction to containers and microservices for enterprise leaders

It is important for IT professionals and enterprise leaders to understand microservices and containers, as they can significantly enhance the speed and agility of an application when implemented effectively.

Containers currently account for about 19 percent of hybrid cloud production workloads, a figure expected to rise to around 33 percent within two years. Microservices are being adopted particularly quickly in the finance sector, with NAB CEO Andrew Thorburn recently describing them as “resilient, reusable, flexible and proven”.


What are containers and microservices?

Containers are isolated workload environments in a virtualised operating system. A container packages an application together with all the files and dependencies it needs to run. Because containers are self-contained environments, they aren’t tied to the software on any particular physical machine, which makes applications portable. Containers also speed up workload processes and application delivery because they can be spun up quickly.

Microservices are a style of software architecture in which an application is broken down into a set of functions, each performed by a small, self-contained unit; the units then work together to deliver the application. Because the units are independent of one another, they don’t have to share a programming language or data store, which maximises portability. The architecture is also faster and more agile: since each microservice is responsible for a smaller, lighter task, it completes that task more quickly, which in turn allows more frequent reassessment and adaptation.
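As a rough illustration of the idea, the sketch below (a hypothetical example, not drawn from any particular product) shows one self-contained “pricing” unit exposing its single function over plain HTTP, so a caller needs to know nothing about its internal language or data store:

```python
# Minimal sketch of one microservice: a self-contained unit whose only job
# is to price an order, reachable over HTTP. Ports and paths are invented.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PricingService(BaseHTTPRequestHandler):
    """One small, self-contained unit: it only knows how to price an order."""
    def do_GET(self):
        qty = int(self.path.rsplit("/", 1)[-1])          # e.g. /price/3
        body = json.dumps({"total": qty * 5}).encode()   # invented unit price
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence default request logging for the example

def serve(port):
    """Start the service in the background and return the server handle."""
    server = HTTPServer(("127.0.0.1", port), PricingService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve(8081)
    # Another service (or client) calls it over plain HTTP; the caller does
    # not care what language or datastore the pricing service uses.
    with urllib.request.urlopen("http://127.0.0.1:8081/price/3") as resp:
        print(json.load(resp)["total"])  # 15
    server.shutdown()
```

Because the interface is just HTTP and JSON, this unit could be rewritten in another language or backed by a different data store without the caller noticing — which is the portability point made above.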


Why monitoring matters

Without monitoring, businesses prolong the time it takes to identify and repair a fault, opening the door to an outage. Productivity, quality, security and visibility of assets can all take a massive hit. This is exacerbated in the world of containers, where agile development methodologies are used to continuously deliver changes through blue/green deployments and canary tests.

Flying blind is simply no longer an option, and at a high level we need to answer the following:

  • Am I more interested in lowering the cost of IT, or creating a strategic value add?
  • Is my data being provided in real-time in a constantly changing dynamic environment?
  • Does my solution have machine learning to be able to baseline ‘normal’ not just from a metrics or performance perspective, but down to the revenue impact?
  • Do I have the capability to proactively alert and remediate issues before they occur?
  • Do I have the data necessary to optimize the user journey and measure the business impact of my changes?

Three fundamentals of monitoring containers and microservices:

Container-based monitoring (looking from the outside-in)

This method is the closest to traditional infrastructure monitoring. It is typically used to optimise IT costs, as it focuses on resource utilisation and workload balancing. It helps in determining the architectural relationships between containers, but falls short of giving you a real-time view of an end-to-end transaction. Additionally, because containers are ephemeral, it makes little sense to monitor only what a container was doing at a single moment in time from an infrastructure perspective; you need to look at clusters of behaviour. This method tends to be preferred by operations and infrastructure teams.
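To make the “clusters of behaviour” point concrete, here is a minimal sketch (the sample readings are invented): per-container resource samples are rolled up to cluster level, since any individual short-lived container tells you little on its own:

```python
# Outside-in monitoring sketch: per-container CPU samples (invented data)
# are aggregated into cluster-level utilisation, which is what actually
# matters when individual containers come and go.
samples = [
    {"container": "web-1", "cluster": "web", "cpu_pct": 35},
    {"container": "web-2", "cluster": "web", "cpu_pct": 90},
    {"container": "db-1",  "cluster": "db",  "cpu_pct": 20},
]

def cluster_utilisation(samples):
    """Average CPU utilisation per cluster of containers."""
    totals = {}
    for s in samples:
        bucket = totals.setdefault(s["cluster"], [0, 0])
        bucket[0] += s["cpu_pct"]  # running total
        bucket[1] += 1             # sample count
    return {cluster: total / n for cluster, (total, n) in totals.items()}

if __name__ == "__main__":
    print(cluster_utilisation(samples))  # {'web': 62.5, 'db': 20.0}
```

In a real deployment the samples would come from the container runtime or orchestrator rather than a hard-coded list, but the aggregation step is the same.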

Microservice-specific monitoring (looking from the inside-out)

This is often referred to as application-centric monitoring. Since applications are typically at the core of the business, their ability to perform and the data they house have a direct impact on revenue. Application-centric monitoring works by observing the code executed for every single user interaction, tracing transactions end-to-end in real time and extracting business metrics that support better business decisions. You will need a solution that is tried and tested in production environments, that can trace interactions regardless of where they are deployed, and that is dynamic enough to support the constant changes being deployed.
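The trace-everything idea can be sketched in a few lines (the decorator and event format here are illustrative assumptions, not any vendor’s API): every user interaction carries a trace ID, each step is timed, and a business metric is recorded alongside the performance data:

```python
# Inside-out monitoring sketch: time every step of a transaction under a
# shared trace ID and record a business metric (revenue) next to the
# performance data. All names and formats are invented for illustration.
import time
import uuid

TRACES = []  # in a real system these events would ship to a monitoring backend

def traced(step_name):
    """Decorator that records the timing of one step of a transaction."""
    def wrap(fn):
        def inner(trace_id, *args, **kwargs):
            start = time.perf_counter()
            result = fn(trace_id, *args, **kwargs)
            TRACES.append({
                "trace_id": trace_id,
                "step": step_name,
                "ms": (time.perf_counter() - start) * 1000,
            })
            return result
        return inner
    return wrap

@traced("price")
def price(trace_id, qty):
    return qty * 5  # invented unit price

@traced("checkout")
def checkout(trace_id, qty):
    total = price(trace_id, qty)          # trace ID follows the transaction
    TRACES.append({"trace_id": trace_id,  # business metric, not just latency
                   "metric": "revenue", "value": total})
    return total

if __name__ == "__main__":
    tid = str(uuid.uuid4())
    checkout(tid, 3)
    # All events for one user interaction share a trace ID, so the
    # end-to-end transaction can be reassembled later.
    print(len([e for e in TRACES if e["trace_id"] == tid]))  # 3
```

The key design point is that the business metric and the timing data share a trace ID, which is what lets a monitoring tool connect a slow step to its revenue impact.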

Log-based monitoring (looking from the sideline)

This is often used in parallel with the two methods above. However, it is typically a remnant of how services were monitored in the past, and it becomes less effective for microservices because microservices are stateless, distributed and independent. Additional effort is required to change application code just to regain the same level of visibility, and it becomes more difficult to correlate events across several platforms.
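The correlation problem can be illustrated with a small sketch (the log format and request IDs are invented): each stateless service writes its own log stream, and the streams can only be stitched back into one transaction because the application code was changed to stamp a shared request ID on every line:

```python
# Log-based monitoring sketch: two independent services each write their own
# log (invented entries). Reassembling a transaction requires a shared
# request ID that the application code had to be modified to emit.
from collections import defaultdict

frontend_log = [
    {"request_id": "r1", "service": "frontend", "event": "received order"},
    {"request_id": "r2", "service": "frontend", "event": "received order"},
]
pricing_log = [
    {"request_id": "r1", "service": "pricing", "event": "priced order"},
]

def correlate(*logs):
    """Group log lines from separate services by their shared request ID."""
    by_request = defaultdict(list)
    for log in logs:
        for line in log:
            by_request[line["request_id"]].append(line)
    return dict(by_request)

if __name__ == "__main__":
    grouped = correlate(frontend_log, pricing_log)
    print(sorted(e["service"] for e in grouped["r1"]))  # ['frontend', 'pricing']
```

Without that request ID field there is nothing to join on, which is exactly the extra application-code effort the paragraph above describes.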

Your monitoring strategy

Whether you use one, two or all of the above methods, your monitoring strategy should include robust application monitoring capabilities that provide full visibility into the containers and microservices, as well as real-time insight into how they are being used and how they affect business outcomes. Because containers are immutable, the strategy must be consistent from development, test and load test through to production. This means the solution needs to be part of the culture as well as the development lifecycle.

Finally, as containers and microservices continue to evolve, businesses need to take full advantage of the benefits they offer. Monitoring is key to measuring those benefits, and acting on the insights gathered is critical to optimising performance. The CIOs and IT leaders who invest in monitoring will be the ones to gain optimal results from container and microservices environments and drive meaningful business impact.

Mykhaylo Shaforostov is APAC CTO and director of systems engineering at AppDynamics