I am running version-checker on a single-node, fairly small cluster with ~60 pods. So far it is working nicely, but I do not understand its memory behavior.
I'm basically running the sample deployment file, plus the `--test-all-containers` flag and some CPU and memory limits (as confirmed with `kubectl get pod -o yaml`):

```yaml
resources:
  requests:
    cpu: 10m
    memory: 32M
  limits:
    cpu: 50m
    memory: 128M
```
Over time, I see that version-checker approaches the memory limit and then stays near ~99% of it for a while. After some time, the kernel kills the container due to OOM and Kubernetes restarts the pod.
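In case it's useful, this is roughly how I confirm that the restarts really are OOM kills rather than crashes (the pod name and namespace below are placeholders for my setup):

```shell
# Why was the container last terminated? "OOMKilled" means the kernel
# killed it at the memory limit. Pod name and namespace are examples.
kubectl -n version-checker get pod version-checker-abc123 \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# How many times has it been restarted so far:
kubectl -n version-checker get pod version-checker-abc123 \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
```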

However, I do not see anything alarming in the logs, other than some failures and the expected permission errors.
This doesn't seem to have any functional impact, but it does fire some alerts and doesn't look great on my dashboards :)
Is this behavior intended, and/or is there any way to prevent it?