Last Modified: Nov 07, 2022
Affected Product(s):
BIG-IP TMOS
Known Affected Versions:
11.5.1, 11.5.1 HF1, 11.5.1 HF10, 11.5.1 HF11, 11.5.1 HF2, 11.5.1 HF3, 11.5.1 HF4, 11.5.1 HF5, 11.5.1 HF6, 11.5.1 HF7, 11.5.1 HF8, 11.5.1 HF9, 11.5.10, 11.5.2, 11.5.2 HF1, 11.5.3, 11.5.3 HF1, 11.5.3 HF2, 11.5.4, 11.5.4 HF1, 11.5.4 HF2, 11.5.4 HF3, 11.5.4 HF4, 11.5.5, 11.5.6, 11.5.7, 11.5.8, 11.5.9, 11.6.0, 11.6.0 HF1, 11.6.0 HF2, 11.6.0 HF3, 11.6.0 HF4, 11.6.0 HF5, 11.6.0 HF6, 11.6.0 HF7, 11.6.0 HF8, 11.6.1, 11.6.1 HF1, 11.6.1 HF2, 11.6.2, 11.6.2 HF1, 11.6.3, 11.6.3.1, 11.6.3.2, 11.6.3.3, 11.6.3.4, 11.6.4, 11.6.5, 11.6.5.1, 11.6.5.2, 11.6.5.3, 12.0.0, 12.0.0 HF1, 12.0.0 HF2, 12.0.0 HF3, 12.0.0 HF4, 12.1.0, 12.1.0 HF1, 12.1.0 HF2, 12.1.1, 12.1.1 HF1, 12.1.1 HF2, 12.1.2, 12.1.2 HF1, 12.1.2 HF2, 12.1.3, 12.1.3.1, 12.1.3.2, 12.1.3.3, 12.1.3.4, 12.1.3.5, 12.1.3.6, 12.1.3.7, 12.1.4, 12.1.4.1, 12.1.5, 12.1.5.1, 12.1.5.2, 12.1.5.3, 12.1.6
Opened: Jul 19, 2016 Severity: 3-Major
Symptoms:
You notice an unusually high amount of sync traffic when changing many pool members at once. In extreme cases, mcpd may run out of memory and crash.
Impact:
An unusually high amount of sync traffic occurs between devices in a sync group with auto-sync enabled. In extreme cases, mcpd can crash, disrupting traffic while mcpd restarts.
Conditions:
In the GUI, you can view a large list of pool members, select them all, and enable or disable them with a single button press. Rather than sending all of the operations in a single transaction, the GUI updates each pool member one by one. When there are many pool members and auto-sync is in use, this can cause race conditions that generate a large number of transactions from the local device to the remote device.
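For contrast, the same bulk change can be issued from the command line as one operation. This is a minimal sketch, assuming a pool named example_pool and two members (both names are hypothetical); a single tmsh command listing multiple members is sent to mcpd as one update rather than one per member:

    # Disable two pool members with one command; the braced list is
    # applied as a single update instead of one update per member.
    # Pool name and member addresses are hypothetical.
    tmsh modify ltm pool example_pool members modify { 10.0.0.1:80 { session user-disabled } 10.0.0.2:80 { session user-disabled } }

To re-enable the members, replace session user-disabled with session user-enabled.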
Workaround:
If you frequently need to enable or disable many pool members at once, there are a couple of options:
1. Switch to manual sync for the duration of the operation (see the sketch after this list).
2. Minimize the number of pool members altered at once. The issue was observed when changing more than 300 pool members at once.
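A minimal sketch of option 1, assuming a device group named example_dg (the group name is hypothetical; the commands are standard tmsh):

    # Temporarily switch the device group to manual sync.
    tmsh modify cm device-group example_dg auto-sync disabled
    # ... perform the bulk pool member changes here ...
    # Push a single config sync to the peers, then restore auto-sync.
    tmsh run cm config-sync to-group example_dg
    tmsh modify cm device-group example_dg auto-sync enabled

Syncing once after the changes replaces the many small auto-sync transactions with one full sync.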
Fix Information:
None