Last Modified: Jan 06, 2020
Known Affected Versions:
13.1.0, 13.1.1, 14.0.0, 14.0.1, 14.1.0, 15.0.0
Opened: Feb 16, 2018
-- clusterd restarts on secondary blades.
-- Messages similar to the following are logged in each secondary blade's /var/log/ltm file as clusterd restarts:
Management IP (<guest_management_ip>) already in use by (vcmp guest <guest_name>)
-- Messages similar to the following are logged in the primary blade's /var/log/ltm file when clusterd restarts on a secondary blade:
notice clusterd: 013a0006:5: Hello from slot 1.
notice clusterd: 013a0006:5: Informing MCP about slot ID 1 member status.
notice clusterd: 013a0006:5: Goodbye from slot 1.
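To check whether a secondary blade is logging this symptom, you can search its /var/log/ltm file for the message; the search string below is only an illustrative fragment of the logged text:

grep 'already in use by' /var/log/ltm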
Secondary slot on VIPRION hypervisor is in 'INOPERATIVE' state.
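One way to confirm the slot state from the vCMP host (assuming shell access to the primary blade) is to display the cluster status, for example:

tmsh show sys cluster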
-- Power-cycling a blade reproduces the issue most of the time.
-- The issue may be platform-specific:
+ It has been seen on multiple hardware platforms, including B2100, B2150, B2250, and PB300.
+ It does not reproduce under the same conditions on a VIPRION 4800.
On the vCMP host, copy the file /shared/db/cluster.conf from the primary blade to each secondary cluster member. For each secondary blade's slot, use a command similar to the following:
scp /shared/db/cluster.conf slot<slot number>:/shared/db/cluster.conf
Note: Implementing the workaround does not prevent the issue from recurring. Upgrading to a version not affected by this issue is recommended.
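For a chassis with several secondary blades, a small loop can push the file to each populated slot. This is a minimal sketch that assumes the primary blade is in slot 1 and that slots 2 through 4 hold secondary blades; adjust the slot list to match the blades actually installed:

# Run from the primary blade on the vCMP host.
# Slot numbers 2 3 4 are examples only; list your secondary slots here.
for slot in 2 3 4; do
    scp /shared/db/cluster.conf slot${slot}:/shared/db/cluster.conf
done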