The Decoder service was killed and restarted by the oom-killer with the following messages:
Sep 23 13:13:12 DCFDECSECHW kernel: Out of memory: Kill process 68930 (NwDecoder) score 658 or sacrifice child
Sep 26 10:19:45 DCFDECSECHW kernel: Out of memory: Kill process 206836 (NwDecoder) score 661 or sacrifice child
Nov 29 16:47:06 DCFDECSECHW kernel: Out of memory: Kill process 393689 (NwDecoder) score 666 or sacrifice child
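To check how often the Decoder is being hit, you can pull the oom-killer events for NwDecoder out of the kernel log. The following is a minimal Python sketch; the /var/log/messages path and the message format are assumptions based on the excerpts above and may differ on your system.

#!/usr/bin/env python3
"""Sketch: list oom-killer events that targeted NwDecoder."""
import re

LOG_PATH = "/var/log/messages"  # assumption: syslog-style kernel log location
OOM_RE = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]+)\s\S+\skernel: Out of memory: "
    r"Kill process (?P<pid>\d+) \((?P<name>\S+)\) score (?P<score>\d+)"
)

with open(LOG_PATH, errors="replace") as fh:
    for line in fh:
        m = OOM_RE.search(line)
        if m and m.group("name") == "NwDecoder":
            print(f'{m.group("ts")}  pid={m.group("pid")}  score={m.group("score")}')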
The following graphs of capture.rate and memory.process cover the period when the oom-killer was triggered: the capture rate did not increase, but memory.process increased significantly right before the oom-killer fired.
The msearch query (search operation) issued to search raw packet content has high memory usage and caused the Decoder to be killed by the OOM killer. The timestamps of these queries align with the spikes in process memory on the graphs.
The following are the queries; each one scans a very large number of sessions (limit=1000000).
Sep 23 13:01:32 DCFDECSECHW NwDecoder[68930]: [SDK-MSearch] [audit] User admin (session 1209, 10.40.20.72:37180) has issued msearch (channel 8945) (thread 226783): flags=sp,sm,ci,ds search="JSP SHELL" limit=1000000 where packets=5606487487p348877716288,5606487496p348877724814,...
Sep 26 10:06:33 DCFDECSECHW NwDecoder[206836]: [SDK-MSearch] [audit] User admin (session 1227, 10.40.20.72:54448) has issued msearch (channel 1248) (thread 265025): flags=sp,ci,ds search="<% Runtime.getRuntime().exec(request.getParameter(“cmd”) %>" limit=1000000 where packets=5765218353p358815600312,5765218288p358815600350,...
Nov 29 16:30:29 DCFDECSECHW NwDecoder[393689]: [SDK-MSearch] [audit] User admin (session 11001, 10.40.20.72:59788) has issued msearch (channel 18841) (thread 393900): flags=sp,ci,ds search=tuyenttk@fpt.com.vn limit=1000000 where packets=8830186220p557504962933,8830186203p557504962934,...
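To identify which searches are the expensive ones, you can extract the SDK-MSearch audit entries and report each query's search string, its limit, and how many packet references its where clause carries. This is a minimal Python sketch that assumes the same /var/log/messages path and the audit line format shown above; adjust the path and regex for your environment.

#!/usr/bin/env python3
"""Sketch: summarize SDK-MSearch audit entries from the decoder's syslog output."""
import re

LOG_PATH = "/var/log/messages"  # assumption: decoder audit lines land here
MSEARCH_RE = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]+).*\[SDK-MSearch\] \[audit\] "
    r"User (?P<user>\S+).*search=(?P<search>\".*?\"|\S+) "
    r"limit=(?P<limit>\d+) where packets=(?P<packets>\S+)"
)

with open(LOG_PATH, errors="replace") as fh:
    for line in fh:
        m = MSEARCH_RE.search(line)
        if m:
            # number of packet references listed in the where clause
            n_packets = m.group("packets").count(",") + 1
            print(f'{m.group("ts")}  user={m.group("user")}  '
                  f'limit={m.group("limit")}  packets={n_packets}  '
                  f'search={m.group("search")}')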
msearch is used by the Events view text search.
The following options can help resolve OOM issues caused by msearch.