Bug ID 1517469: Database monitor daemon process memory and CPU consumption increases over time

Last Modified: May 30, 2024

Affected Product(s):
BIG-IP DNS, GTM, LTM (all modules)

Known Affected Versions: 14.1.3, 14.1.4, 14.1.5, 15.1.5, 15.1.6, 15.1.7, 15.1.8, 15.1.9, 15.1.10, 16.0.0, 16.0.1, 16.1.0, 16.1.1, 16.1.2, 16.1.3, 16.1.4, 17.0.0, 17.1.0, 17.1.1

Opened: Feb 12, 2024

Severity: 1-Blocking


Symptoms

When monitoring pool members using the LTM or GTM mssql (Microsoft SQL Server) monitor, memory and CPU consumption by the database monitor daemon (DBDaemon) process may increase over time. The increase in memory consumption by the DBDaemon process may be gradual and relatively steady over a long period, until memory consumption nears a resident set size (RSS) of approximately 150MB. At that point, CPU consumption may begin increasing rapidly. These increases may continue until the DBDaemon process restarts, restoring normal memory and CPU consumption until the cycle begins again.


Impact

As more objects accumulate in memory in the DBDaemon process, database monitor query operations may complete more slowly, which may cause pool members to be marked Down incorrectly. As memory and CPU consumption reach critical levels, more pool members may be marked Down. While the DBDaemon process restarts, all pool members monitored by database monitors (mssql, mysql, oracle, postgresql) may be marked Down until the restart is complete and normal operation resumes.


Conditions

This issue may occur when using the mssql (Microsoft SQL Server) monitor to monitor LTM or GTM pools/members. Affected BIG-IP versions use the MS SQL JDBC (Java Database Connectivity) driver v6.4.0 to enable the DBDaemon process to connect to Microsoft SQL Server databases. This issue is not observed with other database types, which use different vendor-specific JDBC drivers, or with more recent versions of the MS SQL JDBC driver.

The time required for memory and CPU consumption to reach critical levels depends on the number of pool members being monitored, the probe interval of the configured mssql monitors, and whether the mssql monitors are configured to perform a database query (checking the results against a configured recv string) or to make a simple TCP connection with no query (send and recv strings) configured. In one example, a configuration with 600 monitored pool members, a mix of monitors with and without queries, and a probe interval of 10 seconds was observed to reach critical memory and CPU consumption levels, and restart to recover, after approximately 24 hours of continuous operation.

To view the memory and CPU usage for the DBDaemon process as recorded over time in the tmstats tables, use the following commands:

-- To obtain the process ID (PID) of the DBDaemon process, note the numeric first element of the output of the following command:

ps ax | grep -v grep | grep DB_monitor

-- To view memory and CPU usage for the DBDaemon process, use the PID obtained from the previous command in the following command:

tmctl -D /shared/tmstat/snapshots/blade0/ -s time,cpu_usage_5mins,rss,vsize,pid proc_pid_stat pid=pid_from_above_command

The output of this command displays statistics at one-hour intervals for the preceding 24 hours, then statistics at 24-hour intervals for prior days. The "cpu_usage_5mins" and "rss" columns display, respectively, the CPU and resident memory usage of the specified DBDaemon process. Gradual increases in "rss" toward a critical upper limit near 150MB, followed by sharp increases in CPU usage as that memory limit is reached, indicate that this problem is occurring.
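The two diagnostic commands above can be combined into a single sketch that captures the PID and feeds it to tmctl. This is an illustrative assumption, not an F5-documented script: it must run on the BIG-IP system itself, and the snapshot path and column names are taken verbatim from the commands in this article.

```shell
# Hedged sketch: capture the DBDaemon PID and dump its tmstats history.
# Assumes a BIG-IP shell where tmctl and the snapshot path below exist.
pid=$(ps ax | grep -v grep | grep DB_monitor | awk '{print $1}' | head -1)

if [ -n "$pid" ]; then
    # Show time, 5-minute CPU usage, RSS, and virtual size for this PID.
    tmctl -D /shared/tmstat/snapshots/blade0/ \
        -s time,cpu_usage_5mins,rss,vsize,pid \
        proc_pid_stat pid="$pid"
else
    echo "DBDaemon (DB_monitor) process not found"
fi
```

Watch the "rss" column in the output: a steady climb toward roughly 150MB, followed by a jump in "cpu_usage_5mins", matches the failure pattern described above.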


Workaround

To prevent memory and CPU consumption from reaching critical levels, you can manually restart the DBDaemon process at a time of your choosing (for example, during a scheduled maintenance window).
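A manual restart during a maintenance window can be sketched as follows. This is an assumption for illustration, not an F5-documented procedure: the DB_monitor process pattern comes from the ps command shown earlier, and the sketch assumes BIG-IP respawns the daemon automatically after it exits, as the recovery cycle described above implies. Verify the correct restart procedure for your version before relying on this.

```shell
# Hedged sketch: manually restart DBDaemon during a maintenance window.
# Assumes the system respawns the daemon after it exits; verify before use.
pid=$(ps ax | grep -v grep | grep DB_monitor | awk '{print $1}' | head -1)

# Guard against matching this shell's own command line ($$).
if [ -n "$pid" ] && [ "$pid" != "$$" ]; then
    echo "Restarting DBDaemon (PID $pid)"
    kill "$pid"      # send SIGTERM; the daemon should be respawned
else
    echo "DBDaemon (DB_monitor) process not found"
fi
```

After the daemon respawns, memory and CPU consumption should return to normal levels, restarting the cycle from a clean baseline.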

Fix Information


Behavior Change

Guides & references

K10134038: F5 Bug Tracker Filter Names and Tips