Last Modified: Jan 08, 2020
Known Affected Versions:
13.1.0, 13.1.1, 13.1.3, 14.1.0, 14.1.2, 15.0.0, 15.0.1
Opened: Aug 29, 2017
TMM was killed with SIGABRT by the SOD process, which monitors the health of all processes. TMM missed its keep-alive, hence the restart. The stack trace shows that TMM was killed while waiting on a memory-map (sys_mmap_obj) call.
Traffic disrupted while TMM restarts.
The memory-map call is known to take a long time to complete when the disk IO subsystem is very slow. High IO can also result from memory starvation accompanied by intensive paging.
This problem is not likely to persist after the TMM service restarts, so no user intervention is required. If the problem occurs repeatedly, examine the IO resources in use at the time of the configuration load or reload, and see whether IO load can be reduced.
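As a rough way to check whether high disk IO and paging coincide with a load or reload, the kernel's /proc counters can be sampled. This is a hypothetical diagnostic sketch for a standard Linux shell, not an F5-documented procedure; the /proc file layouts are the standard Linux ones.

```shell
#!/bin/sh
# Sketch: sample paging and per-device IO counters from /proc.
# Run before and during a config load/reload and compare the values.

# Cumulative swap-in/swap-out page counts. If these grow quickly
# between samples, the system is paging heavily (memory starvation).
grep -E '^(pswpin|pswpout) ' /proc/vmstat

# Per-device IO: field 3 is the device name, field 13 is the total
# time (ms) the device spent doing IO. A value that grows at nearly
# wall-clock rate means the device is saturated.
awk '{print $3, $13}' /proc/diskstats
```

Taking two samples a few seconds apart and diffing the counters gives the paging and IO rates over that interval.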