Bug ID 1638629: "Unhealthy" kubevirt pod due to internal networking issue with blade

Last Modified: Jun 19, 2025

Affected Product(s):
F5OS F5OS-C, Install/Upgrade (all modules)

Known Affected Versions:
F5OS-C 1.8.0

Fixed In:
F5OS-C 1.8.1

Opened: Sep 04, 2024

Severity: 2-Critical

Symptoms

Some kubevirt pods are in a "CrashLoopBackOff" state following a live upgrade. The output of the 'show cluster' command shows that the kubevirt status is unhealthy.
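
As an illustration only, the symptom can be confirmed from two places; this is a minimal check, assuming CLI access to the F5OS-C system and an authenticated 'oc' client (the grep pattern is illustrative, and pod names and namespaces may differ):

    # From the F5OS-C CLI, verify the reported kubevirt status:
    show cluster

    # From a shell with 'oc' access, list pods and look for CrashLoopBackOff:
    oc get pods --all-namespaces | grep -i kubevirt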

Impact

This might affect tenant deployment and traffic on the affected blade.

Conditions

The exact conditions are unknown, and the issue occurs rarely. It was encountered during internal testing after a live upgrade.

Workaround

There are two workarounds for this issue:

1. Reboot the affected blade.

2. Unschedule and then reschedule the affected node (see the command sketch after these steps).

Steps for workaround #2:

'oc adm cordon <node>' marks <node> as unschedulable.

'oc adm drain <node> --delete-local-data --ignore-daemonsets' safely evicts all pods from the specified node, preparing it for maintenance or decommissioning.

'oc adm uncordon <node>' marks the node as schedulable again. After maintenance is complete, use this command to allow new pods to be scheduled onto the node.
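
As an illustration only, the full sequence for workaround #2 might look like the following, assuming shell access with an authenticated 'oc' client; the node name blade-1 is hypothetical:

    # Hypothetical node name; substitute the affected blade's node name.
    NODE=blade-1

    # Mark the node as unschedulable so no new pods are placed on it.
    oc adm cordon "$NODE"

    # Safely evict all pods from the node; DaemonSet-managed pods are skipped.
    # Note: newer oc/kubectl releases rename --delete-local-data to
    # --delete-emptydir-data.
    oc adm drain "$NODE" --delete-local-data --ignore-daemonsets

    # After maintenance, mark the node as schedulable again.
    oc adm uncordon "$NODE"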

Fix Information

This issue is fixed in F5OS-C 1.8.1. On affected versions, follow the workaround steps and contact F5 Support if you need further assistance.

Behavior Change

Guides & references

K10134038: F5 Bug Tracker Filter Names and Tips