Last Modified: Sep 15, 2025
Affected Product(s):
BIG-IP TMOS
Known Affected Versions:
15.1.0, 15.1.0.1, 15.1.0.2, 15.1.0.3, 15.1.0.4, 15.1.0.5, 15.1.1, 15.1.2, 15.1.2.1, 15.1.3, 15.1.3.1, 15.1.4, 15.1.4.1, 15.1.5, 15.1.5.1, 15.1.6, 15.1.6.1, 15.1.7, 15.1.8, 15.1.8.1, 15.1.8.2, 15.1.9, 15.1.9.1, 15.1.10, 15.1.10.2, 15.1.10.3, 15.1.10.4, 15.1.10.5, 15.1.10.6, 16.1.0, 16.1.1, 16.1.2, 16.1.2.1, 16.1.2.2, 16.1.3, 16.1.3.1, 16.1.3.2, 16.1.3.3, 16.1.3.4, 16.1.3.5, 16.1.4, 16.1.4.1, 16.1.4.2, 16.1.4.3, 16.1.5, 16.1.5.1, 16.1.5.2, 16.1.6, 17.1.0, 17.1.0.1, 17.1.0.2, 17.1.0.3, 17.1.1, 17.1.1.1, 17.1.1.2, 17.1.1.3, 17.1.1.4, 17.1.2, 17.1.2.1, 17.1.2.2
Opened: Mar 04, 2025
Severity: 3-Major
Symptoms:
A restjavad memory leak develops on the standby device but may persist on the active device. Restjavad may fail and restart with an error similar to the following log snippet (in /var/log/restjavad.0.log if the failure is recent): 'DieOnUncaughtErrorHandler Uncaught Error causing restjavad to exit.'

The leak may also trigger frequent, CPU-intensive garbage collection, such as many invocations of 'Full GC'. These collections cannot reclaim the leaked memory, which may be observable in the GC logs as only small drops in restjavad heap size when a Full GC runs. Restarting restjavad may not clear the issue fully or for long, and the issue may persist after an upgrade.

/var/log/restjavad-api-usage.json has a large file size. It is typically tens of kilobytes before the leak develops, and it eventually grows to megabytes or tens of MB.
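A quick way to gauge whether this leak is developing is to check the size of the stats file and scan the most recent restjavad log for the error above. The commands below are an illustrative sketch using only the paths mentioned in this article; the GC log location varies by configuration, so the last command uses a placeholder path.

From bash:

Check the size of the API usage stats file (tens of KB is typical; MB or tens of MB suggests the leak has developed):
# ls -lh /var/log/restjavad-api-usage.json

Look for the uncaught error that precedes a restjavad restart:
# grep 'DieOnUncaughtErrorHandler' /var/log/restjavad.0.log

Count recent 'Full GC' invocations if a restjavad GC log is available (substitute the actual GC log path on your system):
# grep -c 'Full GC' <path-to-restjavad-GC-log>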
Impact:
Restjavad exits and restarts, possibly repeatedly. High CPU use may occur due to frequent, intensive garbage collection.
Conditions:
A restjavad instance that fails or exhibits these issues will have spent a long time on the standby device in a high availability (HA) cluster (DSC), but the device may not be standby at the time of failure.
Workaround:
The memory leak can be cleared with a straightforward procedure that should have low impact on a service that does not require constant availability of the REST API. For systems that are more dependent on that availability, such as SSL Orchestrator (SSLO), you may want to restrict use of Procedure A to standby devices.

Procedure A

Use this procedure to clear a well-developed leak, which shows as a medium to large /var/log/restjavad-api-usage.json file (approximately 500 KB to tens of MB). It can also be used to clear smaller leaks. The REST API is unavailable while restjavad is stopped.

Note: qkviews typically truncate this file to 5 MB or less if it is larger. The qkview_run.data file in the top level of the qkview shows its true size.

From bash:

Stop restjavad:
# bigstart stop restjavad

Truncate the file to empty it:
# truncate -s 0 /var/log/restjavad-api-usage.json

Confirm the file is now empty (zero bytes long):
# ls -l /var/log/restjavad-api-usage.json

Start restjavad (if iAppsLX or guided configuration is in use, restart restnoded as well about 15 seconds afterwards):
# bigstart start restjavad

Procedure B

This procedure can be used to clear smaller leaks without stopping restjavad. If used on larger leaks, it may cause restjavad to run out of memory and restart, or to become very busy and possibly impact REST API service.

# /bin/curl -u admin: -X DELETE http://localhost/mgmt/shared/diagnostics/api-usage > /dev/null

This procedure can be automated on a regular basis to prevent the leak from growing by turning it into a cron job as shown below, which runs the command around 3am every day.

From bash, create a file with the command above, make it executable, and move it so that cron runs it daily:

==copy and paste below=========================================================================
cat > /var/tmp/restjavad-stats-cleaner.sh<<END
#!/bin/sh
/bin/curl -u admin: -X DELETE http://localhost/mgmt/shared/diagnostics/api-usage > /dev/null
END
chmod ug+x /var/tmp/restjavad-stats-cleaner.sh
mv /var/tmp/restjavad-stats-cleaner.sh /etc/cron.daily/restjavad-stats-cleaner.sh
==copy and paste above=========================================================================
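After running either procedure, a brief check can confirm that restjavad is running and that the stats file has been reset. This is a suggested verification sketch, not part of the official procedure.

From bash:

Confirm restjavad is running:
# bigstart status restjavad

Confirm the stats file is now empty or small:
# ls -lh /var/log/restjavad-api-usage.json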
Fix Information:
None