If a customer is using many custom Lua parsers, the index slice (chunk) size can grow unexpectedly, as shown below, eventually filling the index filesystem.
# du -h /var/netwitness/concentrator/index/
12G /var/netwitness/concentrator/index/managed-values-310
12G /var/netwitness/concentrator/index/managed-values-311
12G /var/netwitness/concentrator/index/managed-values-312
0 /var/netwitness/concentrator/index/reindex
0 /var/netwitness/concentrator/index/assimilate
8.0G /var/netwitness/concentrator/index/managed-values-313
235M /var/netwitness/concentrator/index/managed-values-314
3.1G /var/netwitness/concentrator/index/managed-values-315
45G /var/netwitness/concentrator/index/
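To see which per-meta index files contribute most to a slice's size, the following shell sketch can help; the path assumes a standard Concentrator install and can be overridden via IDX.

```shell
# Rank per-meta index files (*.nwindex) by size across all index slices,
# so the heaviest meta keys stand out. IDX is overridable for non-default installs.
IDX="${IDX:-/var/netwitness/concentrator/index}"
find "$IDX" -name '*.nwindex' -printf '%s\t%p\n' 2>/dev/null \
  | sort -rn \
  | head -20 \
  | awk -F'\t' '{printf "%.1fM\t%s\n", $1 / 1048576, $2}'
```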
If those Lua parsers are implemented incorrectly, for example by generating too much meta or by tagging large field values to a meta key, they can drive up the index chunk size.
As an example, you may observe that the following meta keys show a large size in each index slice.
-rw-r--r--. 1 root root 29M Sep 16 17:23 sld.nwindex
-rw-r--r--. 1 root root 30M Sep 18 03:21 sld.nwindex
-rw-r--r--. 1 root root 24M Sep 20 09:08 sld.nwindex
-rw-r--r--. 1 root root 28M Sep 22 06:47 sld.nwindex
-rw-r--r--. 1 root root 30M Sep 23 16:23 sld.nwindex
-rw-r--r--. 1 root root 30M Sep 24 22:22 sld.nwindex
-rw-r--r--. 1 root root 26M Sep 26 23:10 sld.nwindex
-rw-r--r--. 1 root root 32M Sep 27 19:05 sld.nwindex
sld is a default out-of-the-box (OOTB) meta key, Second Level Domain, and is a text field.
One factor influencing its size is the valueMax parameter set for the key in the index-concentrator-custom.xml file.
A badly written custom parser, or a custom modification of the OOTB parsers, can also tag large text values to this meta key, which further increases the size.
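To verify what valueMax (if any) is configured for the sld key, you can grep the custom index file; the path below is the usual Concentrator location, but it is an assumption and may differ per deployment.

```shell
# Look for a custom sld entry and its valueMax setting; XML is overridable
# in case the custom index file lives elsewhere on this host.
XML="${XML:-/etc/netwitness/ng/index-concentrator-custom.xml}"
grep -n 'name="sld"' "$XML" 2>/dev/null || echo "no custom entry for sld in $XML"
```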
-rw-r--r--. 1 root root 17M Sep 15 13:59 word.nwindex
-rw-r--r--. 1 root root 17M Sep 16 17:23 word.nwindex
-rw-r--r--. 1 root root 17M Sep 18 03:21 word.nwindex
-rw-r--r--. 1 root root 17M Sep 20 09:08 word.nwindex
-rw-r--r--. 1 root root 17M Sep 23 16:23 word.nwindex
-rw-r--r--. 1 root root 17M Sep 23 16:23 word.nwindex
-rw-r--r--. 1 root root 17M Sep 24 22:22 word.nwindex
-rw-r--r--. 1 root root 17M Sep 26 23:10 word.nwindex
The word meta index is consistently 17 MB in size, which implies parsing-related issues are causing many word meta values to be generated.
Normally, with well-written parsers, only a few word meta values are generated.
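A rough way to quantify this across all slices is to total each meta's index footprint; the snippet below is a sketch against the standard index path (overridable via IDX), and metas whose totals dominate are the first candidates for parser or valueMax review.

```shell
# Sum each meta's .nwindex footprint across every slice and sort the
# per-meta totals, largest first.
IDX="${IDX:-/var/netwitness/concentrator/index}"
find "$IDX" -name '*.nwindex' -printf '%f\t%s\n' 2>/dev/null \
  | awk -F'\t' '{sz[$1] += $2} END {for (m in sz) printf "%.1fM\t%s\n", sz[m] / 1048576, m}' \
  | sort -rn
```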
To reduce index chunk size, lower the /index/config/save.session.count config value from auto (which defaults to 200 million) to, for example, 100000000 (100 million); the right value depends on the customer's environment.
This may carry a slight performance penalty, since queries open a few extra index slices, but loading one 11+ GB file into memory is more problematic than opening three files of around 4 GB each.
You can look up the /index/stats/sessions.since.save stat value to see how close it is to 200 million.
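One way to read that stat remotely is over the Concentrator's REST interface (port 50105 by default). The hostname and credentials below are placeholders, and the exact REST syntax can vary by NetWitness version, so treat this as a sketch.

```shell
# Build the REST URL for the sessions.since.save stat; NW_HOST is a
# placeholder hostname and admin:netwitness are the well-known defaults.
HOST="${NW_HOST:-concentrator.example.local}"
URL="http://${HOST}:50105/index/stats/sessions.since.save?msg=get"
echo "$URL"
# curl -s -u admin:netwitness "$URL"   # uncomment to query a live Concentrator
```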
For your information, you can also trigger a manual index save (Concentrator -> Config -> Index -> Save) if you want to flush the current index chunk data.