Containers and the “End” of Server Virtualization
Attention, managers of virtual machine infrastructure: containers are coming. For some they are already here. This is not the end of virtualization, but it is a new chapter. Be warned and be ready.
That’s the thing with infrastructure in a rapidly changing technology environment. Just when you think you’ve got a new thing nailed down and normalized in production, along comes a new, new thing and everybody is saying the formerly new thing is over.
Such is the case with processor virtualization, the sort of thing enabled by a hypervisor such as VMware vSphere or Microsoft Hyper-V. The way you got efficiency and resiliency for the growing sprawl of commodity servers in the datacenter was to virtualize those servers (as virtual machines or VMs) and consolidate them on fewer physical hosts.
We are coming to the end of that movement. Most organizations with large numbers of x86 (Windows and Linux) servers are today majority virtualized. Many tell Info-Tech they are 90% or more virtual.
But now comes this thing called the application container and the container host platform (the most well-known being Docker). Wrapping an application in a container is said to be more efficient and lightweight than wrapping it in a VM. Further, to host a bunch of applications in containers on a server you don’t even need a hypervisor.
Containers Are Virtualization
So is virtualization over? Far from it. Virtualization is just beginning. The mega trends we’ve seen in IT infrastructure over the past decade or more are continuing. These big trends include:
- Consolidation and Convergence. Distributed processing on Windows/Linux servers led to sprawl. Consolidation and convergence reverse physical device sprawl, bringing it all together in ever tighter clusters of high-capacity processing and storage.
- Standardization and Commodification. The foundational layer of the consolidated and converged infrastructure is standardized grids or clusters of commodity hardware. The more hands-off and wire-once this grid, the better.
- Abstraction (Software Defined). With unchanging hardware underneath, all the management and configuration action happens not in the hardware but in software. A hypervisor, for example, is an abstraction layer that lets you treat a single physical machine as if it were a bunch of separate machines (VMs), each with its own operating system and applications installed.
A container is just another form of abstraction. Where a hypervisor divides up, or partitions, a single physical machine into multiple virtual machines, containers partition a single operating system into multiple isolated instances. The abstraction is just at a different layer.
- With a hypervisor, each virtual machine has an operating system that thinks it exclusively owns (rather than shares) a computer.
- With a container, each containerized application thinks it has exclusive ownership of an operating system, although multiple containers can be hosted on a single OS.
OS abstraction has been around as long as machine partitioning into VMs. In the past, VMs had an advantage over containers in that they were more portable: because each VM had a complete OS installed, it could be copied from host to host. This changed with the advent of container platforms like Docker.
Docker extended the idea of a container to the concept of a “shipping container for code” that promised frictionless deployment and optimum portability. Now you can package up just the OS services that the application depends on and move the packaged container to another computer running the same operating system and the Docker platform.
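The "shipping container for code" idea is easiest to see in a Dockerfile. The sketch below is illustrative only, assuming a hypothetical Node.js web app (an `app.js` and `package.json` in the build context) and the public `node` base image; the point is that the image packages only the application and the OS-level dependencies it needs, not a full operating system.

```dockerfile
# Hypothetical example: package a small web app, not a whole OS.
# Assumes app.js and package.json exist in the build context.
FROM node:18-alpine      # minimal base layer: only the OS bits the app depends on
WORKDIR /app
COPY package.json ./
RUN npm install          # bake the app's dependencies into the image
COPY app.js ./
EXPOSE 8080
CMD ["node", "app.js"]   # the container runs just this one process
```

Built once with `docker build -t myapp .`, the resulting image can be moved to any other host running a compatible kernel and the Docker engine and started unchanged with `docker run -p 8080:8080 myapp`.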
Better Demarcation of Accountabilities
Proponents of containers over VMs will point out that containers are more lightweight than VMs as they do not contain the full operating system, only the bits necessary to make the application run on a given OS. This also means that infrastructure management and development will be able to function with less operational overlap.
In an ideal world, infrastructure operations would focus on the availability, capacity, and performance of a homogeneous platform. Developers would focus on building and configuring the application. It would be a frictionless process in that there would be no need to establish requirements and approvals for a server (even a virtual one). When the application is ready it is simply moved to the appropriate host.
When an application is “wrapped” in a virtual machine, that machine has all the maintenance requirements (such as configuration and patching) of a physical machine. Overlap in the accountability for the maintenance of that VM is a source of friction (and possibly contention) in operations.
Long Live the VM!
That highly efficient virtual server infrastructure you have been building and tending this past decade is far from obsolete. For all the hype, containers remain an emerging technology choice and it is not an either/or decision.
In that ideal world, the infrastructure would be a homogeneous grid of commodity servers. This is what cloud infrastructures look like. The real world of the corporate data center is more heterogeneous, and VMs have moved from the next big thing to legacy investment.
VMs are also better at heterogeneity, where multiple OS types and versions are hosted; containers are better suited to a single server type and OS. The current investment in virtualization also includes mature management and governance tool sets for the infrastructure. A 2015 survey by StackEngine (later acquired by Oracle) found that 49% of respondents listed security and operational tool maturity as their chief concerns with containers.
In order to protect and leverage current investment in virtualization while exploring the potential of containers, the near-term strategy is to host your emerging container infrastructure on virtual machines. Hosting a container on a VM may at first seem redundant and resource wasteful, but it is the best way to take advantage of containers while ensuring enterprise-level security, reliability, availability, and scalability.
- Get started with containers. Set as a strategic goal the creation of a container-ready infrastructure that will meet both the requirements of developers and apps managers and the availability, recoverability, and security requirements of the enterprise.
- Start with hosting containers on VMs. In the short term, the best solution is likely to be hosting containers on container-ready VMs running Linux and a container engine like Docker. This may not be optimal for performance but will be optimal for securing and assuring availability for the underlying infrastructure.
- Look to less hypervisor dependence in the future. Longer term, enterprises should pilot running containers on bare metal to become familiar with the emerging tools for managing containers. The future is likely a hybrid of virtualized infrastructure and bare-metal container infrastructure.
Abstraction in the form of virtualization and software defined isn’t going anywhere. But one form of abstraction, the server hypervisor, has peaked in terms of market penetration and mainstream adoption. Future infrastructures will be 100% software defined but that doesn’t mean 100% of servers will need hypervisors. Your container strategy should focus on a hybrid future to bridge from legacy to new style virtualization.
Want to Know More?
My Firewall Is Smarter Than Your Firewall
Next-generation firewalls were smarter than previous firewalls, able to deeply analyze traffic and integrate with complementary security solutions. Today our needs are more complex, however, with a 742% increase in software supply chain attacks over the past three years. Sonatype Nexus Firewall has been paying attention and claims its firewall product is smarter about these attacks.
Your Internet Secret Service, Otherwise Known as External Attack Surface Management (EASM)
Have you ever thought of what else you could do to take your security operations center (SOC) to the next level and focus on prevention? Look no further – external attack surface management (EASM) was a popular managed service and topic of discussion at RSA Conference 2023.
Can Hillstone Networks Position Its StoneOS to Take Firewalls Beyond the Next Generation?
Hillstone Networks has positioned itself as a robust and feature-rich provider of not only hardware but also security solutions. With its ZTNA 3.0 release and support for centralized management of IoT assets and incident response, the company embodies a next-generation firewall.
Acronis Offers a Unique Endpoint Protection and Data Recovery Package Tailored for the Small to Medium-Sized Business
Acronis hopes to overtake many competitors in the data recovery and endpoint protection solution space by forging partnerships with many MSSPs and appealing to the SMB market. The company has doubled down by hiring the former CEO of GoDaddy, who is committed to reinvesting in its technology and increasing and improving its product line.
Zoho Announces Trident to Power Workplace’s UCaaS Capabilities
Zoho, a multinational software and web-based business tool provider, has announced the launch of Trident – a hub that brings Zoho’s pre-existing and new unified communications capabilities into a single pane of glass. How will Trident’s addition to Workplace impact customer migrations from Microsoft and Google?
Next-Gen EDR/MDR/XDR – Field Effect Covalence
Field Effect Covalence is an EDR/MDR/XDR offering that translates chaos into order.
Will Avaya’s Five-Step Transformation Strategy Generate a Stronger Outlook for 2023?
To revitalize and strengthen business transformation, Avaya has outlined a five-step plan for restructuring its product lines, go-to-market strategy, and balance sheet. This tech note evaluates these five steps, highlighting the main contingencies for each step’s successful rollout.
Informatica World 2022 Highlights
On May 24-25, Informatica held its annual conference in Las Vegas – the first time “in-person” since the beginning of the COVID-19 pandemic.
Are You In or Out? How to Source Application Development
Custom application development is a strategic differentiator in the digital economy. Organizations need to make good decisions on how to insource or outsource that development or they risk bad software … and worse results.