We translated this checklist for running Redis inside a Kubernetes cluster. It is worth familiarizing yourself with it before putting Redis under a real workload.
Redis is a popular open-source in-memory data store and cache. It has become an essential building block of scalable microservice systems. Many cloud providers offer fully managed Redis services: Amazon ElastiCache, Azure Cache for Redis, GCP Memorystore (MCS also offers such a managed service – translator’s note).
However, you can quickly deploy Redis to Kubernetes yourself if you need more control over its configuration. Out of the box it already performs decently, but before putting Redis under a workload, first check that every item on this checklist is covered.
As with many other databases, Redis’s performance depends on the characteristics of the underlying virtual machines. Create a node pool of memory-optimized machines with high network bandwidth to minimize latency between clients and Redis servers. Redis is a single-threaded database, so fast processors with large caches (for example, virtual machines based on Intel Skylake or Cascade Lake) perform better, while adding more cores improves performance only indirectly.
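A minimal sketch of pinning Redis pods to such a node pool with a `nodeSelector` (the pool name below is an assumption, and `cloud.google.com/gke-nodepool` is GKE’s node-pool label; other providers use different labels):

```yaml
# Pod spec fragment: schedule Redis only on the memory-optimized pool.
# "redis-memory-optimized" is a hypothetical node-pool name.
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: redis-memory-optimized
```

With a Helm-based deployment, the same selector is usually passed through the chart’s values rather than edited into the pod spec directly.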
If your workload consists primarily of tiny objects (less than 10 KB), then memory size and bandwidth are not critical to optimizing Redis performance. Read more about the speed of Redis on different hardware here.
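One way to check throughput on your own hardware is the `redis-benchmark` tool that ships with Redis; for example, to measure SET/GET performance with ~1 KB values (the host name below is an assumption):

```shell
# -d: value size in bytes, -t: commands to test, -n: total requests
redis-benchmark -h my-redis-master -p 6379 -d 1024 -t set,get -n 100000
```

Run it from a pod inside the cluster so the numbers include realistic network latency between clients and the Redis servers.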
Choosing a Deployment Method
You can deploy a Redis cluster to Kubernetes using the Bitnami Redis Helm chart or a Redis operator. Although I usually advocate for Kubernetes operators, there does not seem to be a Redis operator as popular and mature as the Bitnami Helm chart. Redis Labs, the company behind Redis, offers an official Redis Enterprise Kubernetes operator. Still, if you want a purely open-source version, you can choose between the operators from Spotahome and Amadeus IT Group (alpha). I haven’t worked with them, but there is a good article on problems encountered with the Spotahome Redis operator.
Bitnami supports two Redis deployments: a master-slave cluster with Redis Sentinel and a sharded Redis Cluster topology. If you have a large read load, the master-slave cluster helps offload read operations to the slave pods, and Sentinel pods are configured to promote a slave pod to master in the event of a failure. A Redis Cluster shards data across multiple instances and is an excellent fit when memory requirements exceed what a single master can hold (more than 100 GB) or the processor becomes a bottleneck. A Redis Cluster also maintains high availability: each master is connected to one or more slave pods, and when a master pod fails, one of its slaves becomes the new master.
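As a rough sketch, the two Bitnami topologies can be installed like this (the release names are placeholders, and the value names reflect recent chart versions; check `helm show values` for the chart version you actually use):

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami

# Master-slave with Sentinel-managed failover
helm install my-redis bitnami/redis \
  --set sentinel.enabled=true \
  --set replica.replicaCount=3

# Sharded Redis Cluster (6 nodes: 3 masters, each with 1 replica)
helm install my-redis-cluster bitnami/redis-cluster \
  --set cluster.nodes=6 \
  --set cluster.replicas=1
```

Note that these are two separate charts: `bitnami/redis` for the master-slave/Sentinel topology and `bitnami/redis-cluster` for the sharded one.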
Redis keeps its data in memory, but persistence backed by persistent volumes is required for high availability. Redis offers two persistence options:
- RDB (Redis Database File): point-in-time snapshots;
- AOF (Append Only File): logs every Redis write operation.
You can combine both types, but you need to understand their trade-offs to achieve the best performance.
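For reference, a minimal `redis.conf` sketch enabling both mechanisms (the thresholds below are illustrative, not tuned recommendations):

```
# RDB: snapshot if at least 1 key changed in 900 s, or 10 keys in 300 s
save 900 1
save 300 10

# AOF: log every write, fsync once per second
appendonly yes
appendfsync everysec

# Rewrite the AOF in the background once it doubles in size (min 64 MB)
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```

With the Bitnami chart, these directives are typically passed through the chart’s configuration values rather than by editing `redis.conf` by hand.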
RDB is a compact point-in-time snapshot of the database, optimized for typical backup operations. Snapshots have minimal impact on Redis’s performance because the parent process forks a child process that writes the backup. In disaster recovery, RDB restores faster than AOF because the files are smaller. But since RDB is a snapshot, any data written between snapshots is lost in case of failure.
AOF, on the other hand, saves every operation and is more reliable than RDB because it can be configured to fsync every second or on every request. On failure, Redis can replay the AOF log operation by operation. Redis can also automatically and safely rewrite the AOF in the background if it gets too large. The disadvantages of AOF are file size and speed: with replication enabled, a slave sometimes cannot sync with the master fast enough to retrieve all of the data, and AOF can be slower than RDB depending on the fsync policy.