
Tips and Tricks to Setting Up and Configuring Your Kubernetes Cluster to Orchestrate Containers

This note outlines tips and tricks to be aware of when installing and configuring a Kubernetes cluster. Undertake this effort only if you genuinely need an enterprise-grade container orchestration solution.

Recommendations

Kubernetes is an excellent solution for organizations that need to scale workloads quickly on one cloud or across clouds, efficiently and with high availability. A prerequisite, however, is that every application, service, data store, and workload to be scaled must first be containerized. These components also need to be developed or modified to operate asynchronously and to account for the ephemeral storage model of the container environment.

Administration and Upgrades

  • When a Kubernetes feature is deprecated, plan to replace that functionality before it is removed from a future release. Otherwise you risk losing that functionality or, worse, losing container orchestration entirely.
  • Windows Server 2019 is the only Windows variant supported for worker nodes. Kubernetes was built to work most effectively on Linux platforms such as Ubuntu.
  • Kubernetes upgrades often introduce changes that break older versions. Use `apt-mark hold kubelet kubeadm kubectl` to stop the package manager from automatically pulling down the latest versions.
  • You can run `kubectl auth can-i` before commands in a script to test whether the user has the required access before executing them. This is useful for automated scripts that should verify permissions exist before attempting an operation.
  • The `kubeadm init` command creates a bootstrap token, used to join worker nodes to the control plane, that expires after 24 hours by default. Once it expires, generate a new one with `kubeadm token create` before joining additional nodes.
  • You can deploy an ephemeral container into a running pod purely for troubleshooting purposes. Give it an image with a troubleshooting toolchain, and it can access the filesystem and processes of the other containers in that pod.
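
The commands referenced above can be sketched as follows. These require a live cluster to run, and the pod, container, and namespace names are illustrative:

```shell
# Pin the Kubernetes packages so a routine apt upgrade cannot
# pull in a version that breaks the cluster.
sudo apt-mark hold kubelet kubeadm kubectl

# In an automation script, verify permissions before acting.
if kubectl auth can-i delete pods --namespace production; then
  kubectl delete pod my-batch-worker --namespace production
fi

# The bootstrap token from `kubeadm init` eventually expires;
# generate a fresh token and join command for new worker nodes.
kubeadm token create --print-join-command

# Attach an ephemeral debug container (with a toolchain image)
# to a running pod for troubleshooting.
kubectl debug -it my-pod --image=busybox --target=my-container
```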

Configuration

  • You can use Rancher as a single pane of glass if you run Docker Swarm, Mesos, Kubernetes, or a mixture of two or more of these container orchestrators.
  • Use Kompose to translate native Docker Compose files into Kubernetes manifests.
  • Kubespray uses Ansible to install Kubernetes, making it a familiar and compatible infrastructure-as-code component.
  • Because scheduling is based on requests rather than limits, you can create pods whose combined CPU and memory limits exceed a node's allocatable resources (overcommit). When capacity is exhausted, some pods will be left Pending, and which ones is not deterministic.
  • Kubernetes' native resource limits on containers, pods, and namespaces cover only CPU and memory – not IOPS. GPUs can be requested as extended resources through device plugins, and taints and tolerations can be used to reserve specialized nodes for the workloads that need them.
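
A quick sketch of the Kompose and taint tips above; the file and node names are illustrative:

```shell
# Convert a Docker Compose file in the current directory into
# Kubernetes manifests (deployments, services, etc.).
kompose convert -f docker-compose.yaml

# Reserve a GPU node: taint it so only pods carrying a matching
# toleration (i.e. GPU workloads) can be scheduled there.
kubectl taint nodes gpu-node-1 gpu=true:NoSchedule
```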

Implementing Specific Patterns

  • With the right network connectivity in place, you could run a single Kubernetes cluster across multiple clouds, enabling workloads to be placed wherever they run most cost-effectively – an elegant FinOps capability.
  • It is good practice to set resource requests and limits on containers; these are in turn passed to the container runtime.
  • Scripts need to handle HTTP 409 Conflict errors in case two users issue conflicting commands against the same resource at the same time.
  • Use CronJobs to scale to zero overnight and hot-start with a set number of pods in the morning; the Horizontal Pod Autoscaler can take it from there.
  • You could also configure the cluster to run a minimum number of small or extra-small worker nodes at night when resource needs are low, then scale out to larger nodes during the day when demand is higher.
  • If you need data to survive container restarts, attach storage at the pod level (e.g. an emptyDir volume). The data will persist for the life of the pod, not the container.
  • Set a service's traffic policy to Local (e.g. `internalTrafficPolicy: Local`) to prefer endpoints on the same node, where possible, and reduce latency. Because this requires a local endpoint on every node, pair it with a DaemonSet to ensure one pod of that type is scheduled on each node.
  • Use podAntiAffinity to spread replicas across nodes for higher availability.
  • Use podAffinity to bring together pods that are particularly chatty with each other.
  • You can use taints, with matching tolerations, to specify which applications can run on which nodes. You might use this to reserve GPU-enabled nodes, for example.
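
Several of the patterns above come together in a single Deployment manifest. This is a sketch only – the names, images, and resource values are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:          # spread replicas across nodes for HA
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: web
                topologyKey: kubernetes.io/hostname
      tolerations:                # allow scheduling onto tainted GPU nodes
        - key: gpu
          operator: Equal
          value: "true"
          effect: NoSchedule
      containers:
        - name: web
          image: example/web:1.0
          resources:
            requests:             # used by the scheduler
              cpu: 250m
              memory: 256Mi
            limits:               # enforced by the container runtime
              cpu: 500m
              memory: 512Mi
          volumeMounts:
            - name: scratch
              mountPath: /data
      volumes:
        - name: scratch
          emptyDir: {}            # survives container restarts, not pod deletion
```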

Extending Kubernetes

  • Labels are used to connect Kubernetes-native objects to one another, while annotations can attach arbitrary non-identifying metadata – for example, to bind Kubernetes objects to systems external to the Kubernetes environment.
  • You can use a service mesh such as Istio, or another external project, to better route traffic between pods when a large organization runs a very large number of them. This is only needed once the complexity gets out of hand and latency becomes a problem.
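
A minimal sketch of the labels-versus-annotations distinction; the annotation key and ticket value are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: billing-api
  labels:
    app: billing          # selectors and services match on labels
  annotations:
    example.com/change-ticket: "CHG-12345"  # free-form metadata for external tools
spec:
  containers:
    - name: billing
      image: example/billing:2.3
```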

Bottom Line

While Kubernetes is an excellent solution for container orchestration, you should understand the underlying architecture in order to take advantage of the solution's full power. As there is no complete architecture diagram in the Kubernetes documentation from the Linux Foundation, here is a good meta model with most of the main elements defined:

[Figure: meta model of the main Kubernetes architecture elements]
