‘Cloud native’, which means building code and services natively for the cloud platform in question, can be viewed as one point on a continuum that runs from VMs to serverless. All of the technologies below are valid, and many will be used together in a poly-deployment architecture. At one end are traditional virtual machines (VMs) operated as stateful entities, and at the other end are completely stateless, serverless apps that are effectively just bundles of app code without any accompanying packaged operating system (OS) dependencies.
Virtual Machines (VMs)
The vast majority of the world’s workloads today run ‘directly’ (non-containerized) in VMs. VMs are not a legacy platform to be eliminated, given that many applications cannot be containerized, and may never be. While a VM not hosting containers doesn’t meet all three attributes of a cloud-native system, it can nevertheless be operated dynamically and can run microservices. VMs also provide the greatest degree of security, isolation and OS control.
Examples of VM technologies include VMware’s vSphere, Microsoft’s Hyper-V and the instances provided by virtually every IaaS cloud provider, such as Amazon’s EC2. VMs are often operated in a stateful manner with little separation between OS, app and data.
‘Thin’ VMs use the same underlying technology as VMs but are deployed and run in a more stateless manner. Thin VMs are typically deployed through automation with no human involvement, are operated as fleets rather than as individual entities, and prioritize separation of OS, app and data. Whereas a traditional VM may store app data on the OS volume, a thin VM stores all data on a separate volume that can easily be reattached to another instance. Thin VMs also place a stronger emphasis on dynamic management: each one is deployed from a standard image using automation tools like Puppet, Chef or Ansible, with no human involvement.
Thin VMs are differentiated from VMs by their intentional focus on data separation, automation and the disposability of any given instance. They’re differentiated from VM-integrated containers by the lack of a container runtime: thin VMs have apps installed directly on the OS file system and executed directly by the host OS kernel, without any intermediary runtime.
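The data-separation property that distinguishes a thin VM can be sketched as a small model: the data volume is an independent object that outlives any one instance, so an instance can be destroyed and replaced without losing app data. The class and method names below are illustrative only, not any real cloud API:

```python
# Minimal model of thin-VM data separation: app data lives on a volume
# that is independent of the instance, so instances are disposable.
# All names here are illustrative, not a real cloud SDK.

class Volume:
    def __init__(self, data):
        self.data = data          # app data lives here, not on the OS disk
        self.attached_to = None

class ThinVM:
    def __init__(self, image):
        self.image = image        # deployed from a standard image
        self.volume = None

    def attach(self, volume):
        self.volume = volume
        volume.attached_to = self

# An instance is replaced; its data volume is simply reattached
# to a fresh instance built from the same standard image.
vol = Volume(data={"orders": 42})
vm1 = ThinVM(image="base-image-v1")
vm1.attach(vol)

vm2 = ThinVM(image="base-image-v1")   # replacement instance
vm2.attach(vol)

print(vm2.volume.data["orders"])      # the data survived the replacement
```

The point of the sketch is that nothing of value is stored on the instance itself; the OS volume is a commodity recreated from the standard image.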
For some organizations, especially large enterprises, containers provide an attractive app deployment and operational approach but lack sufficient isolation to mix workloads of varying sensitivity levels. VMs provide a much stronger degree of isolation, but at the cost of increased complexity and management overhead. VM-integrated containers, like Kata Containers and VMware’s vSphere Integrated Containers, seek to remedy this by providing a blend of a developer-friendly API and abstraction of app from OS, while hiding the underlying complexities of compatibility and security isolation within the hypervisor.
Users execute typical container commands like docker run and the underlying platform automatically and invisibly creates a new VM, starts a container runtime within it and executes the command. The end result is that the user has started a container in a separate operating system instance, isolated from all others by a hypervisor. In general, these VM-integrated containers typically run a single container (or set of closely related containers akin to a pod in Kubernetes) within a single VM.
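The flow just described can be sketched as a short simulation: a single container-centric call transparently creates a fresh VM, then runs the container inside it, yielding one container per hypervisor-isolated VM. This is a conceptual sketch only; no real hypervisor or Docker API is involved, and all names are invented for illustration:

```python
# Sketch of the VM-integrated container flow: each 'run' call invisibly
# creates a new VM and starts the container inside it, so every container
# gets its own hypervisor-isolated OS instance. Illustrative names only.

class MicroVM:
    def __init__(self, vm_id):
        self.vm_id = vm_id
        self.containers = []

class VMIntegratedRuntime:
    def __init__(self):
        self.vms = []

    def run(self, image):
        # The user asked for a container; the platform creates a VM first.
        vm = MicroVM(vm_id=len(self.vms))
        vm.containers.append(image)
        self.vms.append(vm)
        return vm

platform = VMIntegratedRuntime()
a = platform.run("nginx:latest")
b = platform.run("redis:7")

# Two 'run' calls -> two containers in two separate VMs.
print(len(platform.vms), a.vm_id != b.vm_id)
```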
VM-integrated containers possess all three cloud native system attributes (listed in the conclusion) and typically don’t even provide manual configuration as an optional deployment approach. They’re differentiated from pure containers by the mapping of a single container per OS instance and the integrated workflow used to instantiate both a new VM and the container it hosts via a singular, container-centric flow.
Containers
Popularized and best known through the Docker project, containers have existed in various forms for many years and have their roots in technologies like Solaris Zones and BSD Jails. While Docker is the best-known brand, other vendors are adopting its underlying technologies, runc and containerd, to create similar but separate solutions.
Containers balance separation (though not as strong as that of VMs), excellent compatibility with existing apps and a high degree of operational control with good density potential and easy integration into software development flows. Containers can be complex to operate, primarily because of their broad configurability and the wide variety of choices they present to operational teams. Containers can be completely stateless, dynamic and isolated; highly intermingled with the host operating system and stateful; or anywhere in between.
They’re differentiated from Container-as-a-Service platforms by requiring users to be responsible for deployment and operation of all the underlying infrastructure, including not just hardware and VMs but also the maintenance of the host operating systems within each VM.
Containers as a Service (CaaS)
As containers grew in popularity and diversity of use cases, orchestrators like Kubernetes (and its derivatives like OpenShift), Mesos and Docker Swarm became increasingly important to deploy and operate containers at scale. While abstracting much of the complexity required to deploy and operate large numbers of microservices, composed of many containers and running across many hosts, these orchestrators themselves can be complex to set up and maintain.
Additionally, these orchestrators are focused on the container runtime, and do little to assist with the deployment and management of underlying hosts. While sophisticated organizations often use technologies like thin VMs wrapped in automation tooling to address this, even these approaches do not fully unburden the organization from managing the underlying compute, storage and network hardware.
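The core job these orchestrators perform, keeping many containers running across many hosts, boils down to a reconciliation loop: compare the desired state to the observed state and compute the actions that close the gap. The sketch below illustrates that idea in miniature; it is not Kubernetes code, and the function and data shapes are invented for illustration:

```python
# Minimal reconciliation loop in the style of a container orchestrator:
# compare desired replica counts with what is actually running and emit
# start/stop actions to close the gap. Conceptual sketch only.

def reconcile(desired, running):
    """Return the actions needed to move 'running' toward 'desired'."""
    actions = []
    for app, want in desired.items():
        have = running.get(app, 0)
        if have < want:
            actions.append((app, "start", want - have))
        elif have > want:
            actions.append((app, "stop", have - want))
    return actions

desired = {"web": 3, "worker": 2}
running = {"web": 1, "worker": 4}

for app, verb, count in reconcile(desired, running):
    print(f"{verb} {count} x {app}")
```

Real orchestrators run this loop continuously, which is what makes the system self-healing: a crashed container simply shows up as a gap to be reconciled on the next pass.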
Since the major public cloud IaaS providers already have extensive investments in lower-level automation and deployment, many have chosen to leverage this advantage to build complete platforms for running containers that strive to eliminate the management of underlying hardware and VMs from users.
These Container as a Service platforms include Google Container Engine, Azure Kubernetes Service and Amazon’s EC2 Container Service. These solutions combine the container deployment and management capabilities of an orchestrator with their own platform-specific APIs to create and manage VMs. This integration allows users to more easily provision capacity without the need to manage the underlying hardware or virtualization layer. Some of these platforms, such as Google Container Engine, even use thin VMs running container-focused operating systems, like Container-Optimized OS or CoreOS, to further reduce the need to manage the host operating system.
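What a CaaS platform adds over a bare orchestrator can be sketched as follows: when the user schedules a container, the platform provisions the underlying node VMs itself as capacity is exhausted. The capacity number and class names below are invented for illustration and do not correspond to any real provider’s API:

```python
# Sketch of CaaS behavior: scheduling a container may transparently
# provision a new node VM when existing nodes are full, so the user
# never manages the virtualization layer. Illustrative only.

NODE_CAPACITY = 2   # assumed containers-per-node limit for this sketch

class CaaS:
    def __init__(self):
        self.nodes = []   # each node is a list of running containers

    def schedule(self, image):
        for node in self.nodes:
            if len(node) < NODE_CAPACITY:
                node.append(image)
                return
        # No capacity left: the platform creates a node VM itself.
        self.nodes.append([image])

caas = CaaS()
for i in range(5):
    caas.schedule(f"app-{i}")

print(len(caas.nodes))   # nodes were provisioned automatically
```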
They’re differentiated from on-demand container platforms by typically still enabling users to directly manage the underlying VMs and host OS. For example, in most CaaS deployments, users can SSH directly to a node and run arbitrary tools as a root user to aid in diagnostics or to customize the host OS.
Platform Managed Container Services
While CaaS platforms simplify the deployment and operation of containers at scale, they still give users the ability to manage the underlying host OS and VMs. For some organizations this flexibility is highly desirable, but in other use cases it can be an unneeded distraction. Especially for developers, the ability to simply run a container, without any knowledge or configuration of the underlying hosts or VMs, can increase development efficiency and agility.
On-demand container platforms include Amazon’s Fargate and Azure Container Instances. On these platforms, users may not have any ability to directly access the host OS and must use the platform’s interfaces exclusively to deploy and manage their container workloads. These platforms provide all three cloud-native attributes and arguably even require them: it’s typically impractical not to build apps for them as microservices, and the environment can only be managed dynamically, with workloads deployed as containers.
Serverless
While on-demand containers greatly reduce the ‘surface area’ exposed to end users and, thus, the complexity of managing them, some users prefer an even simpler way to deploy their apps. Serverless is a class of technologies designed to allow developers to provide only their app code to a service, which then instantiates the rest of the stack below it automatically.
In serverless apps, the developer uploads only the app package itself, without a full container image or any OS components. The platform dynamically packages it into an image, runs the image in a container and, if needed, instantiates the underlying host OS, VM and hardware required to run it. In the serverless model, users make the most dramatic tradeoff of compatibility and control for the simplest and most efficient deployment and management experience.
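The model just described can be illustrated with a toy platform: the developer’s entire deliverable is a single handler function, and the ‘platform’ object stands in for everything beneath it (packaging, container, OS, VM, hardware). This is purely illustrative and is not the Lambda or Azure Functions API:

```python
# Toy serverless platform: the developer supplies only a handler
# function; deploy/invoke stand in for the packaging and provisioning
# a real platform performs. Illustrative only, not a real FaaS API.

def handler(event):
    # The entire deliverable: app code, no OS or container image.
    return {"status": 200, "echo": event["name"].upper()}

class ServerlessPlatform:
    def __init__(self):
        self.functions = {}

    def deploy(self, name, fn):
        # A real platform would build an image and provision the stack
        # beneath it here; this sketch just registers the code.
        self.functions[name] = fn

    def invoke(self, name, event):
        return self.functions[name](event)

platform = ServerlessPlatform()
platform.deploy("greet", handler)
result = platform.invoke("greet", {"name": "ada"})
print(result)
```

The tradeoff the text describes is visible even here: the developer gives up all control over how the function is packaged and where it runs, in exchange for having nothing to manage but the handler itself.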
Examples of serverless environments include Amazon’s Lambda and Azure Functions. Containers are extensively used in the underlying implementations, even if those implementations are not exposed to end users directly.
The Cloud Native Computing Foundation’s charter defines cloud-native systems as having the following properties: