Last Modified: Sep 13, 2023
Affected Product(s):
BIG-IP LTM
Known Affected Versions:
11.2.1, 11.3.0, 11.4.0, 11.4.1, 11.5.0, 11.5.1, 11.5.2, 11.5.3, 11.5.4, 11.5.5, 11.5.6, 11.5.7, 11.5.8, 11.5.9, 11.5.10, 11.6.0, 11.6.1, 11.6.2, 11.6.3, 11.6.3.1, 11.6.3.2, 11.6.3.3, 11.6.3.4, 11.6.4, 11.6.5, 11.6.5.1, 11.6.5.2, 11.6.5.3, 12.1.0 HF1, 12.1.0 HF2, 12.1.1 HF1, 12.1.1 HF2, 12.1.2 HF1, 12.1.2 HF2
Fixed In:
12.0.0
Opened: Feb 10, 2015
Severity: 3-Major
Related Article: K16369
Monitored node/pool member flaps even when the services are not down.
Monitored node/pool member might flap even when the services are not down. This might require the bigd process to be killed and restarted.
With many thousands of monitor instances, bigd might reach a state where it cannot keep up with the load.
Increase monitor probe intervals (decreasing probe frequency), or switch to simpler monitor types (for example, ICMP or TCP Half-Open instead of HTTP or HTTPS).
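For illustration only, the workaround can be applied from tmsh. The monitor names (http_monitor, tcp_half_open_monitor) and the interval/timeout values below are example values, not values taken from this article:

# Lower the probe frequency of an existing HTTP monitor (example values)
tmsh modify ltm monitor http http_monitor interval 30 timeout 91

# Or create a lighter-weight TCP Half-Open monitor to assign in place of HTTP/HTTPS
tmsh create ltm monitor tcp-half-open tcp_half_open_monitor interval 10 timeout 31

tmsh save sys config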
The monitoring daemon, bigd, can now run as multiple processes to distribute the monitoring load. Previously, bigd (the primary monitoring daemon) ran as a single instance per BIG-IP system. By default, the system now runs multiple bigd processes per BIG-IP system if there are enough processor cores to support doing so. Monitor instances are divided among the processes, allowing each to do a subset of the monitoring work. A new sys db variable, Bigd.NumProcs, has been added to control this behavior. It defaults to 0, which instructs the system to select a reasonable default. When set to 1, bigd runs as a single process, as it did previously. Any value greater than 1, and less than or equal to the number of available processor cores, causes that number of bigd processes to be started. Note that bigd must be restarted with bigstart whenever this variable is changed.
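As a sketch, the variable can be changed and bigd restarted from the command line; the value 2 here is illustrative only, and the appropriate value depends on the hardware:

# Set the number of bigd processes (example value) and save the configuration
tmsh modify sys db bigd.numprocs value 2
tmsh save sys config

# bigd must be restarted for the change to take effect
bigstart restart bigd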
Previously, bigd (the primary monitoring daemon) ran as a single instance per BIG-IP system. By default, the system now runs multiple bigd processes per BIG-IP system if there are enough processor cores to support doing so. Monitor instances are divided evenly among the processes, allowing each to handle a subset of the monitoring work. A new sys db variable, Bigd.NumProcs, has been added to control this behavior. It defaults to 0, which instructs the system to select a reasonable default: 60% of the number of physical CPUs on hyperthreaded systems, and 50% on non-hyperthreaded systems. Any fractional component of the calculation is discarded (e.g., 1.6 is treated as 1). When set to 1, bigd runs in single-process mode, with all monitors handled by a single CPU core. When set to 2 or higher, bigd spawns that number of processes, up to an upper limit of the number of physical CPUs in the system. For example, a hyperthreaded system with 2 physical CPUs (4 logical cores) spawns 1 bigd process by default (60% of 2, rounded down), but this can be increased to 2 by setting Bigd.NumProcs to 2. Forcing a higher-than-default number of bigd processes is supported, but may result in poorer performance from other control-plane processes such as the GUI and REST API. Note that bigd must be restarted whenever this variable is changed. Restarting bigd is a non-service-impacting procedure (monitoring is paused while it restarts).
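To confirm the current setting and how many bigd processes are actually running, something like the following can be used (the bracketed grep pattern simply excludes the grep command itself from the output):

tmsh list sys db bigd.numprocs
ps -ef | grep '[b]igd'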