Last Modified: May 05, 2026
Affected Product(s):
F5OS VELOS
Known Affected Versions:
F5OS-C 1.8.0, F5OS-C 1.8.1, F5OS-C 1.8.2
Opened: Nov 20, 2025 Severity: 2-Critical
A tenant's disk may be removed from the compute nodes even though the tenant appears to be running in the partition ConfD CLI. As a result, the tenant's VM pods cannot start because the disk is gone. The orchestration software notices this and recreates the tenant with a new disk, but all configuration and other data associated with the original tenant is lost.
When one controller node fails and the orchestration software on the other node takes over, it may run its tenant reconciliation logic before the database read has completed and/or before the Kubernetes layer is ready for queries. If the reconciliation tenant list comes back empty, the orchestration software misinterprets this as meaning there are no tenants and removes the existing ones, even though the database still contains them. Subsequent reconciliation iterations detect that a tenant needs to be created, but by that point the original disk is gone and a new, empty disk is created.
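The failure mode above amounts to reconciling against an incomplete view of desired state. A minimal sketch of the defensive pattern that avoids it is shown below; all names (`reconcile`, `db_read_complete`, `k8s_ready`) are hypothetical illustrations, not F5OS source code:

```python
# Illustrative sketch only: the function and parameter names are
# hypothetical and are not taken from the F5OS orchestration software.

def reconcile(desired_tenants, actual_tenants, db_read_complete, k8s_ready):
    """Return (to_create, to_delete) for one reconciliation pass.

    Deletions are only allowed when the desired-state read is known to be
    complete AND the kubernetes layer is ready for queries. Otherwise an
    empty desired list is ambiguous ("no tenants" vs. "state not loaded
    yet"), and acting on it wipes healthy tenants -- the race described
    in this article.
    """
    if not (db_read_complete and k8s_ready):
        # State sources are not trustworthy yet: take no action this pass.
        return [], []
    to_create = [t for t in desired_tenants if t not in actual_tenants]
    to_delete = [t for t in actual_tenants if t not in desired_tenants]
    return to_create, to_delete

# During the failover race: the DB read is not done, so nothing is deleted.
print(reconcile([], ["tenant1"], db_read_complete=False, k8s_ready=True))
# Once both sources are ready, reconciliation proceeds normally.
print(reconcile(["tenant1", "tenant2"], ["tenant1"],
                db_read_complete=True, k8s_ready=True))
```

Without the readiness guard, the first call would return `tenant1` in the delete list, which is exactly the misinterpretation that causes the tenant wipe.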
- A power cycle is performed on one controller, forcing the partition to fail over to the controller that stays up (if it was not already active there).
- A fault is then caused on the controller that is up (for example, unplugging its mgmt interface cable), so that the power-cycled controller takes over again as soon as it is able to.
- When the controller that was down takes over (because the controller that had become active is now at fault), the tenants in the partition may be wiped.

This issue is intermittent because it is caused by a race condition at failover time: when the failover occurs, the partition instance on the newly active controller takes a number of seconds to become active and take over the Kubernetes layer.
Because this is a race condition that occurs only when a double fault happens (a power cycle on one controller node while the mgmt interface is down on the other controller node), the only way to avoid it is to stay ahead of the faults reported by the system.
None