2024-08-18 08:56 PM
I'm interested in learning more about Meta and Raw log compression and would like to understand what the performance impacts of enabling it would be.
2024-08-19 05:06 PM
Compression is really only suggested for Archivers. The reason for this is that it takes time to decompress the needed meta/log information if you are using compression. Archivers should never be used as Investigation sources. Instead they should only be used as reporting sources due to the time it takes to decompress the underlying database files. This decompression can last longer than the Investigation UI's timeout window and provide an unsatisfactory experience. Some customers have successfully used Investigation against Archivers but this is not tested or supported due to the variability in performance.
If you do use compression on a concentrator and/or log decoder and they are getting close to their maximum throughput, the decompression of these files could cause processing backups when returning Investigation results. These backups may take the form of delayed pulling of logs from event sources, missed UDP syslog messages, or delayed results in the concentrators. If you have virtual log collectors, they can help buffer the log decoders if there is slowness, but this will only help so much and won't do much for concentrators with performance issues.
Based on all of this, it really comes down to this: your mileage may vary based on the situation of your environment. We don't have statistics, that I am aware of, that show the impact of compressed versus uncompressed data on our log decoders or concentrators. All I can tell you is that it is best to experiment cautiously to determine if your environment can work effectively with compression turned on. Just remember that even if it works now, this does not mean you won't run into performance issues in the future if you continue to add event sources to your NetWitness environment.
I know this is probably not the exact answer you were hoping for, but I wanted to make sure you understood the difference between what is possible and what is appropriate for your NetWitness environment.
2024-08-19 07:06 PM
Thanks @JohnKisner for the reply. It does answer my question.
Obviously I want to use the least amount of disk space possible to get the most retention, but if it means I can't respond effectively because the system is taking too much time to compress/decompress the data, then that's not what I want either.
2024-08-20 10:43 AM - edited 2024-08-20 10:43 AM
You are welcome. The only way that I know of that can help with maximizing storage space is to make sure you are only keeping what is important. Some event sources can produce logs of little forensic value. You can use application rules to remove them at the time of capture, reducing space usage for both the raw logs and the meta they generate. The nice thing about this approach is that you can still use any meta generated from a log once it is captured to inform other application rules, and then drop the original log and its meta. This is the process of using an application rule to filter those logs and make the meta from those logs transient. I'm not sure if this is something you might want to look into, but if it is then you can take a look at the following link for more information: https://community.netwitness.com/t5/netwitness-platform-online/configure-application-rules/ta-p/669140
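As a purely hypothetical illustration of the filtering approach described above (the meta key, value, and rule name here are made up, and the exact fields and syntax depend on your NetWitness version, so verify everything against the application rules documentation linked above), a Log Decoder application rule might be sketched like this:

```
# Hypothetical application rule sketch -- confirm syntax for your NetWitness version.
# Rule Name: drop_noisy_heartbeats        (example name, not a shipped rule)
# Condition: device.type = 'examplesource' && msg.id = 'heartbeat'
# Session option: Filter  -> the matching raw log and its meta are dropped at capture
```

The condition uses the same meta keys that parsing generates, which is why, as noted above, meta from a captured log can drive the decision to discard it. Marking specific meta keys as transient (evaluated in memory but not written to disk) is a separate configuration step in the decoder's custom index/table-map settings rather than part of the rule itself.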