On Mon, Nov 28, 2022 at 10:49:28AM +0100, Aurelien Bompard wrote:
> > - You'll need to share the same redis password across several projects.

> Redis does have users and permissions, at least from a quick look at their docs: https://docs.redis.com/latest/rc/security/database-security/passwords-users-...
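Right, since redis 6 there are ACLs, so each app could get its own user restricted to its own key prefix instead of one shared password. A rough sketch of an ACL file (user names, prefixes and passwords here are made up):

```
# users.acl -- one user per application, scoped to its own key prefix
# (illustrative names; real deployments need strong generated passwords)
user default off
user app1 on >app1-secret ~app1:* +@read +@write
user app2 on >app2-secret ~app2:* &app2.* +@read +@write
```

Loaded by pointing redis at it in redis.conf with `aclfile /etc/redis/users.acl`. The `~pattern` limits which keys a user can touch, and `&pattern` (redis 6.2+) limits pub/sub channels.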
> > - Since you'll use an emptyDir (in-memory storage), every restart will flush the cache for all connected applications.

> I was thinking of running Redis in a VM, not in OpenShift. Sorry if that wasn't clear in my initial message.
> > - Application owners lose control over the redis instance in case they want to do some fancy stuff with it, or just general debugging.

> True.
> > It avoids a single point of failure for a bunch of services.

> Right, but that's what our PostgreSQL host is at the moment already.
Yeah, true. I have thought about splitting that out too, but I'm not sure it's a good idea to run databases in OpenShift, and making them VMs adds the overhead of more VMs.
> > Contention/resource problems. (ie, one app is hammering the shared instance and starving other apps for resources).

> True, true.
> OK, that makes sense. The good thing about having a central Redis DB was, in my mind, to have persistent storage. What happens if I store a lot of data in the Redis OpenShift pod? Won't that hit a memory limit? I think our current usage of Redis has been pubsub and light cache, but we haven't stored a lot of data in there yet.
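On the memory limit question: yes, eventually it hits either redis's own maxmemory or the container memory limit, and what happens then is a redis.conf policy choice. Something like this (values illustrative):

```
# redis.conf -- cap memory and choose eviction behaviour (values made up)
maxmemory 256mb
# for a pure cache: evict least-recently-used keys when the cap is hit
maxmemory-policy allkeys-lru
# for data that must not silently disappear, "noeviction" instead:
# writes then fail with an error once the limit is reached
```

So for pubsub and light cache the defaults are probably fine, but anything you actually want to keep needs eviction thought through.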
Well, we can actually do persistent storage in the ocp4 cluster. ;)
There are nfs volumes, but there's also local ceph storage (using disk on the compute nodes). I'm not sure how slow/fast it might be, but it is there...
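As a sketch, claiming that storage for a redis pod would look roughly like this (the storageClassName is a guess, check what the cluster actually exposes with `oc get storageclass`):

```yaml
# pvc.yaml -- persistent volume claim for redis data
# "ocs-storagecluster-ceph-rbd" is an assumed class name; verify first.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: ocs-storagecluster-ceph-rbd
```

Mount that at redis's data dir and an RDB/AOF file would survive pod restarts.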
kevin