Last Modified: Sep 18, 2019
Known Affected Versions:
13.1.0, 13.1.1, 13.1.3, 14.1.0, 14.1.2, 15.0.0, 15.0.1
Opened: Aug 29, 2017
TMM is killed with SIGABRT by the SOD process, which monitors the health of all processes. TMM misses a keep-alive, so SOD restarts it. The stack trace shows that TMM was killed while waiting on a memory-map (sys_mmap_obj) call.
Traffic disrupted while TMM restarts.
The memory-map call is known to take a long time to complete when the disk IO subsystem is very slow. On a BIG-IP Virtual Edition running on a busy hypervisor, disk IO can become overloaded when all VMs are IO-active at the same time, choking the IO subsystem.
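The delay described above can be illustrated with a small sketch: timing how long a memory-map call takes to return. This is a hypothetical illustration, not BIG-IP code; on a healthy disk subsystem the call completes almost instantly, while under heavy hypervisor IO contention it can stall long enough for a process to miss a keep-alive.

```python
import mmap
import os
import tempfile
import time

def time_mmap(path, size=1024 * 1024):
    """Measure how long a single mmap() call takes, in seconds.

    Hypothetical example: the mapping is backed by a real file so the
    kernel may touch the disk IO path, analogous to the sys_mmap_obj
    wait seen in the TMM stack trace.
    """
    with open(path, "wb") as f:
        f.write(b"\0" * size)          # back the mapping with a real file
    fd = os.open(path, os.O_RDWR)
    try:
        start = time.monotonic()
        mm = mmap.mmap(fd, size)       # the call that can block on slow IO
        elapsed = time.monotonic() - start
        mm.close()
    finally:
        os.close(fd)
    return elapsed

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    print(f"mmap took {time_mmap(path) * 1000:.3f} ms")
    os.unlink(path)
```

On an overloaded IO subsystem, the elapsed time measured here would grow from microseconds to seconds, which is the window in which SOD's keep-alive check fails.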
This problem is not likely to persist after a TMM service restart, so no user intervention is required. If the problem recurs, review the IO resources used by the various VMs provisioned, monitor disk IOPS in vSphere, and ensure the system can sustain a basic level of disk IOPS.
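Alongside monitoring in vSphere, disk IOPS can also be observed from inside a Linux guest. The sketch below is a hypothetical example (not an F5 tool) that samples /proc/diskstats twice and reports completed reads and writes per second per device; the field positions follow the Linux kernel's iostats documentation.

```python
import time

def sample_diskstats():
    """Read cumulative per-device IO counters from /proc/diskstats.

    Per the Linux iostats format, field 4 is reads completed and
    field 8 is writes completed (1-indexed; device name is field 3).
    """
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            dev = fields[2]
            reads, writes = int(fields[3]), int(fields[7])
            stats[dev] = (reads, writes)
    return stats

def iops(interval=1.0):
    """Return {device: (reads/sec, writes/sec)} over a sampling interval."""
    before = sample_diskstats()
    time.sleep(interval)
    after = sample_diskstats()
    return {
        dev: ((r2 - before[dev][0]) / interval,
              (w2 - before[dev][1]) / interval)
        for dev, (r2, w2) in after.items()
        if dev in before
    }

if __name__ == "__main__":
    for dev, (r, w) in sorted(iops().items()):
        print(f"{dev}: {r:.0f} reads/s, {w:.0f} writes/s")
```

If sustained IOPS on the virtual disks stays far below what the workload demands, that points to hypervisor-level IO contention of the kind described in the conditions above.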