Database Configuration Nodes

This topic describes database configuration nodes: advanced configuration items of the NetWitness Core database that do not change frequently.

packet.dir, meta.dir, session.dir

This is the primary configuration entry for each database (also known as the Hot tier). It controls where in the file system the respective databases are stored. This configuration entry understands a complex syntax for specifying many directories as storage locations.

Configuration syntax:

config-value = directory, { ";" , directory } ;
directory    = path, [ ( "=" | "==" ) , size ] ;
path         = ? linux filesystem path ? ;
size         = number size_unit ;
size_unit    = "t" | "TB" | "g" | "GB" | "m" | "MB" ;
number       = ? decimal number ? ;				


/var/lib/netwitness/decoder/packetdb=10 t;/var/lib/netwitness/decoder0/packetdb=20.5 t				

The size values are optional. If set, they indicate the maximum total size of files stored there before databases roll over. If the size is not present, the database does not automatically roll over, but its size can be managed using other mechanisms.
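As an illustration of the syntax above, here is a minimal sketch of a parser for the directory configuration value. The function name, the returned tuple shape, and the parsing details are hypothetical; this is not the product's actual parser.

```python
import re

def parse_db_dirs(config_value):
    """Parse a directory config value such as
    '/var/a/packetdb=10 t;/var/b/packetdb==20.5 t'.
    Returns a list of (path, auto_create, max_bytes) tuples;
    max_bytes is None when no size cap is given.
    Illustrative sketch only."""
    units = {"t": 2**40, "tb": 2**40, "g": 2**30, "gb": 2**30,
             "m": 2**20, "mb": 2**20}
    result = []
    for entry in config_value.split(";"):
        entry = entry.strip()
        if "==" in entry:
            path, _, size = entry.partition("==")
            auto_create = False       # '==' means the service never creates the directory
        elif "=" in entry:
            path, _, size = entry.partition("=")
            auto_create = True        # '=' lets the service create the directory
        else:
            path, size, auto_create = entry, "", True
        max_bytes = None
        m = re.match(r"\s*([\d.]+)\s*([a-zA-Z]+)\s*$", size)
        if m:
            max_bytes = int(float(m.group(1)) * units[m.group(2).lower()])
        result.append((path.strip(), auto_create, max_bytes))
    return result
```

For example, the config value shown earlier yields one auto-created 10 TB directory and one 20.5 TB directory that must already exist.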

The use of = or == is significant. By default, the databases automatically create the specified directories when the Core service starts. You can override this behavior with the == syntax: if == is used, the service does not create any directories, and if the directories do not exist when the service starts, the service does not start processing. This makes the service resilient against file systems that are missing or unmounted when the host boots.

If you modify the size of a directory in use, the change takes effect immediately as long as the new size is larger. If the new size is smaller, it is ignored when it is more than 10 percent below the existing size. This prevents an accidental mistype from causing an enormous loss of data. For example, if the packet database was configured for 12 TB and someone mistyped it as 12 GB, the database would delete over 11 TB of data to shrink down to just 12 GB. Instead, the database ignores the 12 GB setting and logs a warning, so the error can be caught quickly.

If the smaller size really is correct and differs from the existing size by more than 10 percent, the only way to make it take effect is to restart the service. On startup, the service assumes the size is correct and adjusts the database to the new size by rolling out the oldest data until the new size is reached.

To adjust the size downward by more than 10 percent without restarting the service, modify the size multiple times, each time reducing it by less than 10 percent. Watch the service logs to know when the database has adjusted to the new size; the total database size is adjusted only when the latest file being written has been closed.
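The live-resize rule described above can be sketched as follows. The function name and return values are hypothetical; only the 10 percent threshold comes from the documentation.

```python
def apply_size_change(current_bytes, requested_bytes):
    """Sketch of the documented live-resize rule: growth applies
    immediately; a shrink is ignored when it is more than 10 percent
    below the current size (a restart is required in that case).
    Illustrative only."""
    if requested_bytes >= current_bytes:
        return requested_bytes, "applied"
    if requested_bytes < current_bytes * 0.9:
        # Shrinking past the 10 percent guard is ignored and logged.
        return current_bytes, "ignored: shrink exceeds 10 percent; restart required"
    return requested_bytes, "applied"

TB = 2**40
GB = 2**30
```

The 12 TB to 12 GB mistype from the example above would fall into the "ignored" branch, while a 12 TB to 11 TB change (about 8 percent smaller) would apply immediately.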

If directories are added or removed (semicolon separated), the change does not take effect until the service restarts.

packet.dir.warm, meta.dir.warm, session.dir.warm

These settings are optional and are used for Warm tier storage on an Archiver. By default, they are blank and unused. If configured, they follow the same format and behavior as packet.dir, meta.dir, and session.dir (see packet.dir, meta.dir, session.dir above). When configured, the oldest file on the Hot tier moves to the Warm tier when no available space remains in the Hot tier.
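The roll-over behavior can be simulated with a small sketch: while the Hot tier is over its capacity, its oldest file moves to the Warm tier. The tuple layout and function name are hypothetical.

```python
def roll_to_warm(hot_files, hot_capacity, warm_files):
    """Sketch of Warm-tier roll-over: while the Hot tier holds more
    data than its capacity, the oldest file moves to the Warm tier.
    Files are (name, size_bytes, created_ts) tuples; illustrative only."""
    hot = sorted(hot_files, key=lambda f: f[2])       # oldest first
    while hot and sum(f[1] for f in hot) > hot_capacity:
        warm_files.append(hot.pop(0))                 # move the oldest file
    return hot, warm_files
```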

packet.dir.cold, meta.dir.cold, session.dir.cold

These settings are optional and are used to move files from either a Hot or Warm tier storage system to the specified Cold tier directory. This setting is simply a directory; there are no size specifiers. However, the defined path name supports a few special format specifiers that you can use to name the directory with the date of the data in it.

%y   = The year of the data being moved to the cold tier
%m   = The month of the data being moved to the cold tier
%d   = The day of the data being moved to the cold tier
%h   = The hour of the data being moved to the cold tier
%##r = A block of time within a day. So %12r would create two blocks, 00 and 01: 00 for all data in the AM, 01 for all PM data

Example setting:

packet.dir.cold = /var/lib/netwitness/archiver/database1/alldata/cold-storage-%y-%m-%d-%8r				

For the setting above, if a log database file created on 2014-03-02 15:00:00 was about to be moved to cold storage, it would be moved to the following directory on the Cold tier:

/var/lib/netwitness/archiver/database1/alldata/cold-storage-2014-03-02-01
The last number 01 needs some explanation. The %8r specifier breaks the hours of the day into 24 / 8 = 3 parts. The first eight hours of the day would be block 00 , so 12 a.m. to 8 a.m. The next eight hours are from 8 a.m. to 4 p.m. and are assigned block 01 . Since the data being moved to cold storage was created at 3 p.m., it falls into block 01 . The %r format specifier is useful for backing up files with a granularity somewhere between a day %d and a single hour %h . The Cold storage directory is created on demand and is defined by the data being moved when the format specifiers are used.
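Assuming the specifiers behave as described above, the path expansion can be sketched in Python. The function name is hypothetical; this is not the service's implementation.

```python
import re
from datetime import datetime

def cold_dir(template, created):
    """Expand the cold-tier path format specifiers (%y %m %d %h and
    %##r) for a file's creation time. Illustrative sketch only."""
    def repl_r(m):
        hours = int(m.group(1))                  # e.g. %8r -> 8-hour blocks
        return "%02d" % (created.hour // hours)  # block index within the day
    out = re.sub(r"%(\d+)r", repl_r, template)
    out = (out.replace("%y", "%04d" % created.year)
              .replace("%m", "%02d" % created.month)
              .replace("%d", "%02d" % created.day)
              .replace("%h", "%02d" % created.hour))
    return out

print(cold_dir("/var/lib/netwitness/archiver/database1/alldata/cold-storage-%y-%m-%d-%8r",
               datetime(2014, 3, 2, 15, 0, 0)))
# -> /var/lib/netwitness/archiver/database1/alldata/cold-storage-2014-03-02-01
```

Data created at 15:00 lands in block 01 because 15 // 8 = 1, matching the explanation above.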

The ability to add a date to the path of the data is just a convenience added for backup and restore. It is a way of tagging the data with a date in the path.

packet.file.size, meta.file.size, session.file.size

This controls the size of the files created with each database. It is normally not necessary to change these values as the default values typically work well. This setting takes effect immediately for subsequent files.

packet.files, meta.files, session.files

This setting controls the number of files held open by the database. You can increase this value to improve performance; however, the operating system has an overall limit on the number of files the service can keep open. If this limit is exceeded, an error is reported and the service does not function. This setting takes effect immediately.
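On Linux, you can check the operating system's per-process open-file limit with a generic snippet like the following. This is standard POSIX resource querying, not a NetWitness tool.

```python
import resource

# The OS caps how many files a process may hold open; raising the
# *.files settings past this limit can make the service fail.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file soft limit: {soft}, hard limit: {hard}")
```

The soft limit is what applies to the running process; the hard limit is the ceiling to which the soft limit can be raised.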

In versions 10.6 and later, the default value for packet.files, meta.files, and session.files is auto, and the service manages the number of open files based on these criteria:

  1. Number of collections
  2. Amount of system memory

When set to auto, the number is dynamic and you can view it in the logs when it changes. NetWitness recommends that you leave this value as auto and do not change it to a specific number.

This setting provides a safety limit on the minimum free space that must exist on the paths specified by the packet.dir, meta.dir, and session.dir directories, respectively. It prevents the service from running out of space if other programs fill up the space that should be dedicated to each of the databases. This setting takes effect immediately.

This setting controls how frequently packet ID locations and meta ID locations are indexed. This setting can be increased to reduce the amount of space needed by each packet or meta nwindex file, but increasing the setting reduces the speed at which individual packets or meta items can be located. This setting takes effect immediately.

The session database does not have a fidelity setting because it does not generate index files.
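The fidelity trade-off can be illustrated with a sparse-index sketch: recording every Nth item's file offset shrinks the index, but locating one specific item then requires a longer sequential scan from the nearest indexed entry. All names and structures here are hypothetical, not the nwindex format.

```python
def build_sparse_index(offsets, fidelity):
    """Record every Nth item's (id, offset). A higher fidelity value
    means a smaller index but longer scans. Illustrative sketch of
    the space/speed trade-off only."""
    return [(i, off) for i, off in enumerate(offsets) if i % fidelity == 0]

def locate(index, offsets, item_id, fidelity):
    """Find item_id: jump to the nearest indexed entry at or before it,
    then scan forward sequentially. Returns (offset, items_scanned)."""
    base_id, base_off = index[item_id // fidelity]
    pos = base_id
    while pos < item_id:
        pos += 1                  # read the next item sequentially
    return offsets[pos], pos - base_id
```

Doubling the fidelity roughly halves the index size while doubling the worst-case scan length.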

packet.integrity.flush, meta.integrity.flush, session.integrity.flush

This setting controls whether the database forces a sync operation on the file system when it is finished writing a file. The default value is sync , which means when a file is closed there will be a significant delay while the data writes to non-volatile storage. It may be necessary to set this to normal in order to achieve higher sustained write rates, especially on a Decoder. This setting takes effect on the next file created. Therefore, it is expected that at least one more sync will happen if the value was just changed to normal .
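The difference between sync and normal corresponds to whether an explicit fsync is issued before a file is closed. Here is a generic POSIX sketch of that behavior (not the service's actual code; the function name is hypothetical):

```python
import os

def close_with_sync(path, data, sync=True):
    """Write a file and, when sync=True, force the data to
    non-volatile storage before closing -- analogous to
    integrity.flush=sync. With sync=False the OS flushes lazily,
    analogous to 'normal'. Illustrative sketch only."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        if sync:
            os.fsync(fd)   # blocks until the data is on stable storage
    finally:
        os.close(fd)
```

The fsync call is the source of the close-time delay mentioned above, in exchange for a durability guarantee.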

If packet drops are occurring and packet.integrity.flush is set to sync , set it to normal and monitor. Keep the session and meta flush settings on sync . If packet drops are still problematic, then set all three to normal and monitor.

packet.write.block.size, meta.write.block.size, session.write.block.size

The block size represents how much data is allocated at a time within each database file. Larger block sizes can provide higher throughput and compression ratios, and can improve the rate at which items can be retrieved from the database sequentially. However, larger block sizes have a detrimental impact on random read speed for compressed packet and meta items. This setting takes effect immediately.

packet.compression, meta.compression

These parameters control whether the databases compress data. Compression reduces the amount of storage needed by each database, but it can have a major detrimental impact on the speed at which items are written to and retrieved from the database. Changes take effect on the next file created. Note that the pcapng packet database format cannot be compressed.

As of version 10.4, the valid values for this parameter are gzip , bzip2 , lzma , or none . gzip is the preferred algorithm when compression is used, because it provides a good balance between performance and space savings. Both bzip2 and lzma can achieve better space savings, but the tradeoff in speed is substantial and likely should only be considered for low ingest speeds and when storage space is at a premium.
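A quick way to see the space trade-off is to compress the same sample data with each algorithm using Python's standard-library implementations. Exact ratios and speeds depend heavily on the data; this only illustrates why gzip is the usual middle ground.

```python
import bz2, gzip, lzma

# Highly repetitive sample data; real packet/meta data compresses
# differently, so treat these numbers as illustrative only.
sample = b"session meta packet " * 5000

sizes = {
    "none":  len(sample),
    "gzip":  len(gzip.compress(sample)),
    "bzip2": len(bz2.compress(sample)),
    "lzma":  len(lzma.compress(sample)),
}
for name, size in sizes.items():
    print(f"{name:5s}: {size} bytes")
```

On typical data, bzip2 and lzma compress further than gzip but cost substantially more CPU time per file.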

packet.compression.level, meta.compression.level

You can use these settings to further refine how the compression algorithms behave. They have no effect when compression is disabled. The valid values are 0 through 9. The default value of zero lets the software pick the best balance of speed and compression. The values 1 through 9 form a sliding scale between performance (1) and compression (9). A value of 9 typically gives the best compression for a given algorithm, but the worst performance. The best setting is usually somewhere in the middle, which is what zero picks.
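The performance/compression scale can be illustrated with zlib, the algorithm behind gzip. The sample data and chosen levels are illustrative only.

```python
import zlib

# Level 1 sits at the performance end of the scale, level 9 at the
# compression end; both shrink the data, but level 9 spends more CPU
# searching for a smaller output.
data = b"packet data block " * 20000
fast = zlib.compress(data, 1)    # performance-oriented
best = zlib.compress(data, 9)    # compression-oriented
print(len(data), len(fast), len(best))
```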


This setting controls how the database files are hashed. The default value is none , so no hashing is performed. The valid values are none , sha256 , sha1 , or md5 . Database files can be hashed to provide evidence that they have not been tampered with since they were closed. Hashing is time intensive and affects ingest performance when enabled. This change takes effect immediately.


This setting controls which databases are hashed. Valid values are session , meta , and packet and are comma separated when hashing multiple databases. This change takes effect immediately.


This setting is normally empty, which means the hash file is created in the same directory as the database file that was hashed. If this setting is defined, the hash file is written to the directory specified instead. This could be some form of write-once storage for resilience against hash tampering.

Hash files are small XML files containing the hex encoded hash along with metadata about the database file that was hashed.
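To verify a database file against the hex-encoded digest stored in its hash file, the digest can be recomputed with a generic sketch like the one below. The function name is hypothetical, and the hash file's XML layout is not reproduced here; sha256, sha1, and md5 are the algorithms named above.

```python
import hashlib

def hash_db_file(path, algorithm="sha256"):
    """Recompute a file's digest in chunks for comparison against the
    hex digest recorded in its companion hash file. Generic sketch."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large database files are not
        # loaded into memory all at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

If the recomputed digest matches the stored one, the file has not changed since it was closed.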