Tips and Tricks to Setting Up and Configuring Your Kubernetes Cluster to Orchestrate Containers
This note outlines some tips and tricks that you should be aware of when embarking on the installation and configuration of a Kubernetes cluster. Such an endeavor should only be attempted if an enterprise-grade container orchestration solution is truly required.
Kubernetes is an excellent solution for those who need to quickly scale workloads on the cloud, or between clouds, effectively, efficiently, and with high availability. However, there is a prerequisite: all the applications, services, data, and workloads that need to scale in the cloud must first be containerized. These components may also need to be developed or modified to operate asynchronously and to account for the ephemeral nature of storage in a container environment.
Administration and Upgrades
- When a Kubernetes feature is deprecated, plan to replace that functionality before it is removed from a future release. Otherwise there is a risk of a loss of functionality, or worse, a complete loss of container orchestration.
- Windows Server 2019 is the only Windows variant supported for worker nodes, and control plane nodes must run Linux. Kubernetes was built to work most effectively on Linux platforms such as Ubuntu.
- Kubernetes releases often introduce changes that break older versions. Use “sudo apt-mark hold kubelet kubeadm kubectl” to prevent routine package updates from pulling the latest versions down before you are ready to upgrade.
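A minimal sketch of pinning the Kubernetes packages on a Debian/Ubuntu node, assuming they were installed via apt:

```shell
# Pin the Kubernetes packages so a routine 'apt-get upgrade'
# cannot pull in a new, potentially breaking version.
sudo apt-mark hold kubelet kubeadm kubectl

# Verify which packages are currently held.
apt-mark showhold

# When ready for a controlled upgrade: release the hold,
# upgrade deliberately, then re-apply the hold.
sudo apt-mark unhold kubelet kubeadm kubectl
```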
- You can use “kubectl auth can-i” before commands in a script to test whether the user has access before executing them. This is useful for automated scripts to verify that permissions exist before attempting an operation.
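One way to gate a script on permissions; a sketch assuming kubectl is already configured for the target cluster (the namespace and deployment names here are hypothetical):

```shell
# 'kubectl auth can-i' exits 0 if the action is allowed, non-zero otherwise,
# so it can gate the destructive command that follows.
if kubectl auth can-i delete deployments --namespace production >/dev/null 2>&1; then
  kubectl delete deployment legacy-app --namespace production
else
  echo "Insufficient permissions: skipping deployment cleanup" >&2
  exit 1
fi
```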
- The “kubeadm init” command creates a bootstrap token, valid for 24 hours by default, that is used to join worker nodes to the control plane. Once this expires, generate a new one before connecting additional nodes.
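Generating a replacement token is a one-liner on the control plane node:

```shell
# Create a fresh bootstrap token and print the complete 'kubeadm join'
# command to run on each new worker node.
kubeadm token create --print-join-command

# List existing tokens and their remaining TTLs.
kubeadm token list
```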
- You can deploy an ephemeral container into a running pod purely for troubleshooting purposes. It can carry a troubleshooting toolchain and access the filesystem and resources of the other containers in that pod.
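A sketch of attaching an ephemeral debug container with “kubectl debug” (the pod and container names are hypothetical):

```shell
# Attach an interactive ephemeral container to a running pod, targeting
# a specific container so its process namespace and filesystem are
# visible to the troubleshooting tools in the debug image.
kubectl debug -it my-pod --image=busybox:1.36 --target=my-container
```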
Implementing Specific Patterns
- With the right network connectivity, you could run a single Kubernetes cluster across multiple clouds, enabling elegant FinOps functionality.
- It is good practice to set resource requests and limits on containers; these are in turn passed down to the container runtime.
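A hypothetical container spec showing both settings; requests inform scheduling decisions, while limits are enforced by the container runtime:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:          # guaranteed minimum, used by the scheduler
          cpu: "250m"
          memory: "128Mi"
        limits:            # hard ceiling, enforced by the runtime
          cpu: "500m"
          memory: "256Mi"
```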
- Scripts need to handle HTTP 409 Conflict errors in case two users issue conflicting commands against the same resource at the same time.
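A rough retry sketch: refetch the object on each attempt so the replace carries a fresh resourceVersion, and retry if a concurrent change causes a conflict (the deployment name and edit here are hypothetical, and the sed edit is illustrative only):

```shell
# Retry a replace up to three times; a 409 Conflict means another
# writer changed the object between our get and our replace.
for attempt in 1 2 3; do
  if kubectl get deployment web -o yaml \
       | sed 's/replicas: .*/replicas: 5/' \
       | kubectl replace -f - ; then
    break
  fi
  echo "Conflict or error on attempt ${attempt}, retrying..." >&2
  sleep 1
done
```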
- Use CronJobs to scale to zero overnight and hot-start with a set number of pods in the morning; the Horizontal Pod Autoscaler can take it from there.
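A hypothetical scale-down CronJob; a mirror job scheduled for the morning would scale the deployment back up before the Horizontal Pod Autoscaler takes over. The service account shown would need RBAC permission to scale deployments:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-down-overnight
spec:
  schedule: "0 22 * * *"              # 22:00 every day
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: scaler  # hypothetical; needs scale RBAC
          restartPolicy: OnFailure
          containers:
            - name: scale
              image: bitnami/kubectl:latest
              command: ["kubectl", "scale", "deployment/web", "--replicas=0"]
```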
- One could configure the cluster to run a minimum number of small or extra-small worker nodes at night when resource needs are low, then scale to larger nodes in the day when resource needs are higher.
- If you need persistence that outlives an individual container, use a volume scoped to the pod, such as emptyDir. The data will persist for the life of the pod, not the container.
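A minimal pod-scoped volume example: the emptyDir survives container restarts but is deleted when the pod is removed from the node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-pod
spec:
  containers:
    - name: app
      image: redis:7
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    - name: scratch
      emptyDir: {}     # lives and dies with the pod, not the container
```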
- Use a traffic policy of “Local” to prioritize endpoints running on the same node, which reduces latency when local endpoints are available. To guarantee a local endpoint exists on every node, use a DaemonSet to schedule one pod of that type per node.
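A sketch of a service that routes only to same-node endpoints; a DaemonSet (not shown) would guarantee one matching endpoint per node. The service name, selector, and port are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-local-cache
spec:
  internalTrafficPolicy: Local  # route only to endpoints on the calling node
  selector:
    app: cache
  ports:
    - port: 6379
```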
- Use podAntiAffinity to spread replicas of the same application across nodes for higher availability.
- Use podAffinity to bring together pods that are particularly chatty with each other.
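The two affinity rules above can be sketched in a single pod spec; the labels and images here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: web        # never place two web replicas on one node
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: cache    # prefer nodes already running the chatty cache
  containers:
    - name: web
      image: nginx:1.25
```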
- You can use “taints” on nodes, paired with tolerations on pods, to control which applications can run on which nodes. You might use this to reserve a GPU-enabled node, for example.
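A sketch of the pattern, with a hypothetical node name and taint key:

```shell
# Taint a GPU node so that only pods tolerating the taint land there.
kubectl taint nodes gpu-node-1 hardware=gpu:NoSchedule
```

Pods that should run on the reserved node then carry a matching toleration:

```yaml
tolerations:
  - key: "hardware"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
```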
While Kubernetes is an excellent solution for container orchestration, one should understand the underlying architecture to take full advantage of the solution. As there is no complete architecture diagram in the official Kubernetes documentation, here is a good meta model with most of the main elements defined:
Want to Know More?
Please book a call with an analyst to talk about this in more detail.