<div><p>Containers offer great flexibility in how we deploy applications in the cloud. And a good thing, too: modern applications require unprecedented levels of scale. They must scale up rapidly in the face of heavy traffic, and scale back down as the traffic lessens. It's not uncommon for such applications to run on tens, hundreds or even thousands of servers in the cloud, responding to spikes in traffic, automatically healing themselves when failures occur, and being updated on the fly.</p><div class="bigdata-services-area p-5 mb-5 bg-eef6fd"><h4>How do you handle all this complexity?</h4><p>Container orchestration helps IT professionals manage fleets of containerized applications deployed on one or many servers (a cluster). An orchestration engine handles the placement of container instances (deciding which server each one runs on); scaling up and down; the security of the cluster and its containers; and recovery from failures.</p><p>In effect, an orchestration engine allows IT professionals to manage a cluster of cloud servers as if it were a single computing resource. In that sense, I think of it as an operating system for the cloud. With a single script, a DevOps team can deploy code across an entire cluster of servers, and can manage it the same way, through command-line utilities or a web-based portal.</p><p>Let's imagine such a system, with a series of container instances spread across three servers. Spanning those servers is the orchestration engine, and beneath it sits a "command post" that monitors and manages the placement of containers across the servers (the scheduler) and maintains configuration information, including the "desired state" (the configuration manager).</p><p>Today, several such orchestration systems are available, including Docker Swarm, Rancher, Apache Mesos and others.</p><p>The fastest-growing platform today, however, with the largest ecosystem of partners and developers, is Kubernetes. First introduced by Google in 2014, Kubernetes (Greek for "helmsman" or "pilot") is an open-source project with over 35,000 individual contributors to date and over 148,000 code commits.<sup>1</sup> Kubernetes is the flagship project of the Cloud Native Computing Foundation, which seeks to advance the state of the art in cloud-native technologies.<sup>2</sup> And as a member of the CNCF, Company is working with the organization to drive growth in this space.</p></div><div class="bigdata-services-area p-5 mb-5 bg-eef6fd"><h4>Introduction to Kubernetes</h4><p>In Kubernetes, the servers that host the application-level containers are called worker nodes, and the "command post" is the master node.</p><div style="padding:20px 0px;"><img src="/uploads/kub1_7a03e97bc2.png" alt="" caption=""></div></div><h4>Worker nodes</h4><p>Pods are the units of execution in Kubernetes, and they run on the worker nodes. Each pod typically contains a single container instance (although for specific purposes it can wrap more than one container), and each pod is assigned its own IP address within the cluster.</p><p>Kubernetes places two components on each worker node: the kubelet, which communicates with the master node and manages the containers running on its node; and the kube-proxy, which manages networking services on each node.</p>
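<p>To make the pod concept concrete, here is a minimal sketch of a pod manifest. The <code>hello-web</code> name, the label and the <code>nginx</code> image are hypothetical placeholders chosen for illustration; Kubernetes assigns the pod its IP address automatically when the pod is scheduled.</p><pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hello-web             # hypothetical pod name
  labels:
    app: hello-web            # label used to select this pod
spec:
  containers:
    - name: web               # the single container this pod wraps
      image: nginx:1.25       # illustrative container image
      ports:
        - containerPort: 80   # port the container listens on
</code></pre>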
<div class="bigdata-services-area p-5 mb-5 bg-eef6fd"><h4>The master node</h4><p>The master node, as the name implies, controls the Kubernetes cluster. Controllers, which run inside the kube-controller-manager, manage Kubernetes resources (such as pods). A job controller, for example, launches one or more pods to accomplish a task and lets them run to completion. A deployment controller launches a set of identical pods (a replica set) and keeps it at the desired size, and so on. (The controller pattern is extensible, enabling developers to create custom controllers; a sample deployment manifest appears at the end of this article.)</p><p>The Kubernetes scheduler finds the best worker node for a new pod to run on. If several worker nodes could host the pod, the scheduler scores each candidate and places the pod on the highest-scoring node.</p><p>The cluster's configuration and state, including the desired state, are stored in a distributed key-value database called etcd.</p><p>Here is an important point: the master node as shown in the illustration is a single point of failure. If the master node fails, it becomes impossible to issue commands to the cluster or to manage it. Effort should therefore be invested in ensuring high availability for the master node.</p></div><h4>Kubectl</h4><p>Finally, a command-line tool called kubectl accepts commands from an administrator and communicates with the master node through the Kubernetes API. Because the API is open, many third-party and add-on vendors have been able to build web-based GUI consoles as well.</p></div>
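<p>To tie these pieces together, here is a sketch of a deployment manifest, illustrating the "desired state" idea described above. The names and image are hypothetical placeholders. The deployment controller continuously compares the actual state of the cluster against this specification and starts or stops pods until the two match.</p><pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web             # hypothetical deployment name
spec:
  replicas: 3                 # desired state: three identical pods
  selector:
    matchLabels:
      app: hello-web          # manage pods carrying this label
  template:                   # pod template the controller stamps out
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # illustrative container image
          ports:
            - containerPort: 80
</code></pre>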
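<p>And a few representative kubectl commands show how an administrator might drive the cluster, assuming the hypothetical manifest above has been saved as <code>hello-web.yaml</code>:</p><pre><code># Submit the desired state to the master node's API
kubectl apply -f hello-web.yaml

# List the pods and the worker nodes they were scheduled onto
kubectl get pods -o wide

# Change the desired state; the deployment controller reconciles
kubectl scale deployment hello-web --replicas=5

# Compare current and desired replica counts
kubectl get deployment hello-web
</code></pre>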
<p>If you are interested in exploring this topic further, please get in touch with us at insights@fintinc.com.</p>