What happened:
When a pod with a blob CSI PV mount triggers a node scale-up and is scheduled on the new node, the Kubernetes CSI layer attempts the mount before the blob CSI driver is ready on that node, leading to transient FailedMount events like:
kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name blob.csi.azure.com not found in the list of registered CSI drivers
What you expected to happen:
Kubernetes waits for the blob CSI driver's liveness probe to report ready before attempting to use the driver on the node.
How to reproduce it:
Schedule a pod with a blob CSI PV mount that triggers a node scale-up.
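A minimal repro sketch, assuming the AKS built-in azureblob-fuse-premium storage class and a cluster autoscaler; the names and the CPU request (sized to exceed free capacity on existing nodes so a new node is provisioned) are hypothetical:

```yaml
# Hypothetical repro manifest: a PVC backed by the blob CSI driver, plus a pod
# whose resource request forces the autoscaler to add a node. On the new node,
# kubelet may attempt the mount before blob.csi.azure.com has registered.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blob-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: azureblob-fuse-premium  # assumed built-in AKS class
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: blob-pod
spec:
  containers:
    - name: app
      image: mcr.microsoft.com/azurelinux/base/core:3.0
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: "7"  # assumption: large enough to not fit on existing nodes
      volumeMounts:
        - name: blob
          mountPath: /mnt/blob
  volumes:
    - name: blob
      persistentVolumeClaim:
        claimName: blob-pvc
```

Once the pod is scheduled on the freshly created node, the transient FailedMount events appear in kubectl describe pod blob-pod until the driver finishes registering.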
Anything else we need to know?:
I suspect this might be expected (if annoying) behaviour from Kubernetes; kubelet retries the mount, and it succeeds once the driver registers. Raising it in case there is an issue with the liveness probe or driver registration behaviour.
Environment:
- CSI Driver version: 1.27.3
- Kubernetes version (use kubectl version): 1.34.0 (AKS)
- OS (e.g. from /etc/os-release): azurelinux3
- Kernel (e.g. uname -a):
- Install tools: Helm
- Others: