How To Use Gatekeeper In Production And Avoid Rookie Mistakes
What Makes Working With Clusters Safe?

Companies have different security policies, but most of them share similar requirements:

  1. Deny access to host namespaces such as host PID and host IPC. This prevents unauthorized access and preserves container isolation.
  2. Use reliable images from trusted sources. Such images usually have fewer vulnerabilities and provide better security.
  3. Prohibit the `latest` tag in images. This makes image deployment more predictable and controlled.
  4. Require resource limits and requests. This lets you manage resources effectively and prevents resource exhaustion and container conflicts.
  5. Restrict the launch of privileged containers. They have elevated access rights and can pose a potential threat.
  6. Maintain a policy for protection against network attacks (including DDoS protection) and an action plan in case they occur.
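Several of these requirements can be illustrated in a single pod manifest. The sketch below is illustrative (the pod name, registry, and resource values are assumptions, not from the article): it disables host namespaces, pins the image tag, declares requests and limits, and forbids privileged mode.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: policy-compliant-pod                # illustrative name
spec:
  hostPID: false                            # 1. no access to the host PID namespace
  hostIPC: false                            # 1. no access to the host IPC namespace
  containers:
    - name: app
      image: registry.example.com/app:1.4.2 # 2, 3. trusted registry, pinned tag (not :latest)
      resources:
        requests:                           # 4. what the scheduler reserves for the pod
          cpu: "250m"
          memory: "256Mi"
        limits:                             # 4. hard ceiling enforced at runtime
          cpu: "500m"
          memory: "512Mi"
      securityContext:
        privileged: false                   # 5. no privileged mode
```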

In addition, companies often create their own security rules and policies. To manage them and ensure security in Kubernetes clusters, they use tools that can work with these policies, validate actions, and configure not only Kubernetes, but also other services and applications.

We were looking for a universal tool that would secure work with clusters for our clients – Gatekeeper met our requirements.

How OPA (Open Policy Agent) And Gatekeeper Work

Open Policy Agent (OPA) is a universal tool for managing policies and access control. It can be integrated with various services and applications.

What is Gatekeeper? It is a specialized version of OPA created for managing security in Kubernetes. It acts as a kind of “guard” between the Kubernetes API server and the Open Policy Agent: it accepts requests that come to the Kubernetes cluster and relate to creating any objects (for example, pods or services) from the kube-apiserver component.

With Gatekeeper installed in a cluster, the admission flow looks like this.

  1. When a new object is created in Kubernetes, a request with its data goes to the kube-apiserver.
  2. The server checks what rights the sender has. This is done through authentication and authorization.
  3. Once identity and permissions have been verified, the object can be modified by mutating admission: new data can be added or existing fields changed.
  4. The object is checked against the expected schema and data structure: for example, whether all the required keys are present and the values are correct.
  5. The validation phase applies the Gatekeeper rules that the user specifies. For example, is it prohibited to create containers with special access rights or to use outdated image versions?
  6. If the object passes the checks and meets the rules, it is persisted in the cluster store (etcd), where it can be managed further.
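As a sketch of step 5, here is roughly what a Gatekeeper ConstraintTemplate banning the `latest` tag could look like, paired with a Constraint that applies it to Pods. The template name, Rego package, and message text are illustrative; the overall shape follows Gatekeeper's ConstraintTemplate API.

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdisallowlatesttag          # illustrative name
spec:
  crd:
    spec:
      names:
        kind: K8sDisallowLatestTag
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdisallowlatesttag

        # Flag any container whose image ends with the :latest tag.
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          endswith(container.image, ":latest")
          msg := sprintf("container <%v> must not use the :latest tag", [container.name])
        }
---
# The Constraint instantiates the template and scopes it to Pods.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDisallowLatestTag
metadata:
  name: disallow-latest-tag
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```

With these two resources applied, any request to create a Pod with a `:latest` image is rejected at the validation phase before it ever reaches etcd.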

Gatekeeper is universal and copes with the main task of monitoring compliance with security policies. However, some nuances are worth considering. Let’s sort them out.

Why Is It Important To Follow The Limits/Requests Practice?

The load on nodes in clusters was distributed unevenly. Users ran priority pods on particular nodes, but over time, those pods were “evicted” to other nodes. The root of this problem is that the client needs to control what runs and where.

We analyzed these problems and learned that, in most cases, they occur because users do not set restrictions (limits) on resource use and do not specify how many resources they need (requests). As a result, the servers became overloaded and inefficient. Gatekeeper helped us solve these problems by automatically enforcing rules and restrictions, ensuring a more even load distribution and more efficient use of resources in Kubernetes.

Kubernetes has a KubeScheduler component. It decides which server (node) in the cluster will host containers (pods). It does this through a specific method that involves three steps.

  1. KubeScheduler finds a list of available nodes with enough resources to run the desired pod.
  2. KubeScheduler then ranks the nodes based on the amount of available resources. Nodes with ample free resources receive more “points.”
  3. The new pod is placed on the node with the most points.

It is worth understanding that KubeScheduler receives information about node load from a particular data store – etcd. This store contains information about how many resources other containers on each node use.

If resource requests and limits are not specified for the pods, KubeScheduler will not know how many resources they need. Consequently, placement works blindly, which can lead to uneven loading of nodes and, therefore, problems.

It is important to properly configure pod resource limits so that KubeScheduler can distribute the load more efficiently across the Kubernetes cluster.
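This is exactly the kind of rule Gatekeeper can enforce automatically. The Gatekeeper policy library ships a `container-limits` template; assuming that template is installed, a Constraint capping what containers may declare could look roughly like this (the constraint name and values are illustrative):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sContainerLimits              # from the Gatekeeper policy library's container-limits template
metadata:
  name: container-must-have-limits    # illustrative name
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    cpu: "2"                          # maximum CPU limit a container may declare
    memory: "2Gi"                     # maximum memory limit a container may declare
```

Pods whose containers omit limits, or declare limits above these ceilings, are rejected at admission, so KubeScheduler always has the data it needs to place them sensibly.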