Cluster Autoscaling Is Not A Trivial Task

To keep the cluster ready for any load, with new nodes joining and leaving as needed, you must implement autoscaling. It ensures that your applications automatically receive the resources they need, in the amount they need.

Autoscaling applications within a cluster is possible on any infrastructure – this is done with standard Kubernetes tools. But cluster autoscaling, which automatically adds and removes nodes as the load changes, works on Bare Metal only by purchasing additional servers: you order them and wait, so it will not happen instantly.
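To illustrate the application-level autoscaling that Kubernetes itself provides, the Horizontal Pod Autoscaler scales replica counts from the ratio of a current metric to its target. A minimal sketch of its documented core formula (the function name is our own):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA core formula:
    desired = ceil(currentReplicas * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods
print(desired_replicas(4, 90, 60))  # 6
```

Note that this only changes the number of pods; if the cluster's nodes have no free capacity for them, the new pods stay pending – which is exactly why cluster-level autoscaling is the harder problem discussed here.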

Plus, with Self-Hosted on Bare Metal, every server you might need to run applications under peak load has to be kept in working order and paid for continuously.

If the Self-Hosted cluster is deployed on IaaS, the picture is similar: an engineer adds a new virtual machine and joins it to the cluster. Another option is to use the provider’s API, if one is available: connect the Kubernetes cluster to it and teach the cluster to launch new servers for itself, implementing autoscaling that way. But this means developing a custom solution – a complex task requiring a high level of expertise in both Kubernetes and clouds.
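A custom solution of this kind usually boils down to a control loop: watch for pods that cannot be scheduled, decide how many nodes to add or remove, then call the provider’s API to create or delete VMs. A minimal sketch of the planning step only – all names here are illustrative assumptions, not a real provider SDK:

```python
from dataclasses import dataclass

@dataclass
class ClusterState:
    pending_pods: int    # pods unschedulable for lack of capacity
    pods_per_node: int   # rough capacity of one node
    idle_nodes: int      # nodes currently running no workload

def plan_scaling(state: ClusterState) -> int:
    """Return a node delta: positive = VMs to create, negative = VMs to delete."""
    if state.pending_pods > 0:
        # Round up so every pending pod gets a place to run.
        return -(-state.pending_pods // state.pods_per_node)
    return -state.idle_nodes  # release unused capacity

# 25 pending pods, ~10 pods per node -> ask the provider for 3 more VMs
print(plan_scaling(ClusterState(pending_pods=25, pods_per_node=10, idle_nodes=0)))  # 3
```

The real difficulty lies outside this function: reliably detecting unschedulable pods, draining nodes before deletion, and handling provider API failures – which is why this path demands serious expertise.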

In addition, to scale a Self-Hosted cluster on IaaS quickly, you have to reserve the required amount of provider resources in advance and create new virtual machines from them as needed. And you will have to pay for those reserved resources: VMware resellers commonly charge even for disabled resources. On our platform, you pay only for the disks of stopped VMs, not for their compute. In some Managed solutions, autoscaling is enabled with a single button; check whether your provider offers this option.

Pitfalls Of Self-Hosted Kubernetes

  1. To operate the cluster independently, you need a full-time specialist who knows the technology well and understands how everything works inside Kubernetes.
  2. You will need to configure monitoring, logging, load balancing, and much more in the cluster.
  3. A particular problem is deploying and integrating a storage system with the cluster.
  4. Ensuring the cluster’s failover requires many additional servers or virtual machines – another cost item.
  5. Scaling the cluster under load requires a reserve of servers or virtual machines – yet another additional cost.

Assess your capabilities at the start of the project. The resources your company has, your team’s background and skills, and other details strongly influence the choice of solution: whether it is worthwhile to deploy Kubernetes on your own or better to use a ready-made cloud service. And do not forget the central question of all Kubernetes projects: does your project need this technology at all?

Also Read: Kubernetes Needs Pumping: It Doesn’t Run On Its Own
