“GCS/Pub/Sub … authentication failed”

This error message appears when there are authentication errors with the service account configured for the plugin.

In this scenario, do the following:
- If a credentials file is not used for explicit authentication, ensure the service account is attached to the VM with full access to Cloud APIs.
- Confirm that the service account has been onboarded with read access to the relevant GCP resources.
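To sanity-check access, a minimal sketch using the google-cloud-storage Python client is shown below; the bucket name is a hypothetical placeholder, and the client picks up either the attached VM service account or an explicit credentials file.

```python
# Minimal sketch: confirm the active service account can list objects in
# the bucket. Assumes the google-cloud-storage library; the bucket name
# is a hypothetical placeholder.
from google.cloud import storage

client = storage.Client()  # uses attached VM credentials or GOOGLE_APPLICATION_CREDENTIALS
blobs = list(client.list_blobs("example-pan-bucket", max_results=5))
print("Read access OK; sample objects:", [b.name for b in blobs])
```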
“Error encountered when downloading file”

This error message appears when the plugin could not retrieve a file from the bucket.

In this scenario, do the following:
- Check the log for a best-effort description of the error encountered.
- Verify that the file still exists. In many cases the file may have been deleted, because files are retained in the bucket for only 3 days. If the Pub/Sub message retention exceeds the file retention period, this can cause the error.
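Whether the object still exists can be checked directly; a minimal sketch assuming the google-cloud-storage library, with hypothetical bucket and object names:

```python
# Minimal sketch: check whether the object named in a Pub/Sub message is
# still in the bucket. Bucket and object names are hypothetical.
from google.cloud import storage

blob = storage.Client().bucket("example-pan-bucket").blob("path/from/message.zip")
print("exists" if blob.exists() else "gone (likely past the 3-day retention)")
```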
“Failed to process due to extraction error”

This error message appears if the plugin fails to unzip the ZIP file it received from the bucket.

During plugin recovery, inspect the staging folder for files with invalid ZIP content or headers, which is often caused by downloads truncated when the plugin crashes.
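One way to find such files is to test every ZIP in the staging folder; a sketch using Python's standard zipfile module, with a hypothetical staging path (substitute your instance's staging_folder):

```python
# Minimal sketch: flag ZIP files in the staging folder with bad headers
# or truncated content. The staging path is a hypothetical placeholder.
import zipfile
from pathlib import Path

staging = Path("/var/netwitness/decoder/hosted/paloalto/INSTANCE_NAME/staging")
for path in staging.glob("*.zip"):
    try:
        with zipfile.ZipFile(path) as zf:
            bad = zf.testzip()  # first corrupt member, or None
        if bad:
            print(f"{path}: corrupt member {bad}")
    except zipfile.BadZipFile as exc:
        print(f"{path}: invalid ZIP ({exc}), possibly a truncated download")
```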
“Failed to process due to stream error”

This error message appears when the plugin could not stream processed packet data to the Decoder’s streamer adapter.

Do the following:
- Investigate whether the packet data is invalid or corrupt.
- Note that the default behavior of the Decoder is to drop and re-establish the connection when such exceptions occur.
- Ensure that the plugin attempts to reconnect to the Decoder after a connection drop.
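The expected reconnect pattern is a simple retry loop; an illustrative sketch (not the plugin's actual code) in which connect_to_decoder and stream_packets are hypothetical stand-ins for the streamer-adapter calls:

```python
# Illustrative reconnect-with-backoff loop; connect_to_decoder() and
# stream_packets() are hypothetical stand-ins, not plugin APIs.
import time

def stream_with_reconnect(max_backoff: float = 60.0) -> None:
    backoff = 1.0
    while True:
        try:
            conn = connect_to_decoder()   # hypothetical
            backoff = 1.0                 # reset after a successful connect
            stream_packets(conn)          # hypothetical; raises when the Decoder drops the connection
        except ConnectionError:
            time.sleep(backoff)
            backoff = min(backoff * 2, max_backoff)
```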
“JSON decryption ran into an error”

This error message appears if the plugin could not decrypt the key data provided in the ZIP file in question.

Verify that the public-private keys are paired correctly.
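Whether two PEM files form a pair can be checked by comparing their public numbers; a sketch assuming RSA keys and the Python cryptography library (the key type and file paths are assumptions, not confirmed by this guide):

```python
# Minimal sketch: confirm a private key matches a public key by comparing
# public numbers. Assumes RSA keys in PEM files at hypothetical paths.
from cryptography.hazmat.primitives import serialization

with open("private_key.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)
with open("public_key.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())

paired = private_key.public_key().public_numbers() == public_key.public_numbers()
print("keys are a pair" if paired else "key mismatch")
```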
Received message count is increasing, but no PCAP is transferred

The plugin is receiving messages from the subscription, but the files are not being processed correctly.

In this scenario, do the following:
- Identify any errors occurring during the processing of the PCAP files.
- Check for issues with message payload processing, such as key mismatches or missing payloads, which may require attention from the Engineering team.
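Message payloads can be inspected by pulling a few messages without acknowledging them (unacknowledged messages are redelivered); a sketch assuming the google-cloud-pubsub library, with hypothetical project and subscription names:

```python
# Minimal sketch: pull a few messages to inspect their payloads. Not
# acknowledging them leaves them in the subscription for redelivery.
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path("example-project", "example-pan-subscription")
response = subscriber.pull(request={"subscription": sub_path, "max_messages": 5})
for received in response.received_messages:
    print(received.message.data[:200])  # look for the expected keys in the payload
```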
Unresolved Python modules

This error message appears if the plugin cannot resolve some of its code dependencies.

Verify whether a packaging or build issue during the release prevented the module from being installed correctly. If so, escalate the matter to the NetWitness Support team.
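A quick check is to import the suspect modules in the interpreter the plugin runs under; the module names below are hypothetical examples, not the plugin's actual dependency list:

```python
# Minimal sketch: verify that dependencies resolve. The module names are
# hypothetical examples.
import importlib

for name in ("google.cloud.storage", "google.cloud.pubsub_v1"):
    try:
        importlib.import_module(name)
        print(f"{name}: OK")
    except ImportError as exc:
        print(f"{name}: unresolved ({exc})")
```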
Files dropped when max_queue_size is full

If the Decoder cannot process files from the bucket quickly enough as the download queue grows, the queue can become full, and the oldest file in the queue is dropped and replaced with a newer one.

By default, the plugin downloads at most 10 files per download round (max_file_downloads) and maintains a backlog of 50 downloaded files (max_queue_size). This warning appears when the configured maximum queue size is met or exceeded, which happens when the Decoder cannot offload and process files from the backlog as fast as new files are downloaded from the bucket.
- The config option max_queue_sleep controls what happens when the maximum queue size is reached.
- The option is disabled by default, in which case the oldest files in the queue are removed and replaced with the newer files from the latest download rounds.
- If old files are replaced, the impact is greatest when there is a high message backlog from plugin downtime. During the recovery period following a plugin restart, many old files may be dropped and never processed by the Decoder.
- When the plugin is not capturing, the subscription continues to receive messages from the PAN topic and retains them according to the configured retention policies.
- Instead of dropping old files, the plugin can be configured (by enabling max_queue_sleep) to sleep until the backlog is no longer full, delaying new file downloads from the bucket. The two behaviors are sketched after this list.
Refer to the image below for an example of a subscription with a large backlog of messages.
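The drop-oldest versus sleep behaviors can be pictured as a bounded queue; an illustrative sketch (not the plugin's actual code), where sleep_when_full stands in for the max_queue_sleep option:

```python
# Illustrative sketch of the two backlog behaviors; not plugin code.
# sleep_when_full=False mimics the default (drop the oldest file);
# True mimics max_queue_sleep (wait for the Decoder to drain the queue).
import collections
import time

def enqueue(queue: collections.deque, item: str,
            max_queue_size: int = 50, sleep_when_full: bool = False) -> None:
    while len(queue) >= max_queue_size:
        if sleep_when_full:
            time.sleep(1)                      # delay new downloads until space frees up
        else:
            print("dropped", queue.popleft())  # oldest file is lost
    queue.append(item)
```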
Disk usage advisory for max_queue_size

The max_queue_size setting impacts disk space usage.

The default value of 50 files at up to 200 MB per file amounts to up to 10 GB on disk in the staging directory under /var/netwitness, so the value should be configured based on the disk space available for the staging and working directories.
- Files from the bucket vary in size but are at most 200 MB; a file is cut off at 200 MB or at the last 5 minutes of traffic, whichever comes first.
- The plugin uses staging and working folders located in the Decoder’s data directory (/var/netwitness/decoder/hosted/paloalto/%INSTANCE_NAME%/...).
- The staging area holds the downloaded ZIP files (not yet extracted), while the working area holds the extracted packet data ready for streaming to the Decoder.
- The maximum queue size is configured with max_queue_size and can be raised to accommodate more downloaded files staged simultaneously.
- The staging and working disk locations can be adjusted with staging_folder and working_folder if the default location in the data directory is not suitable.
- If the Decoder’s data directory is used, any minimum free space settings for the packet, meta, and session databases must be carefully considered when configuring the queue size and working locations.
- Because the PCAP files vary in size, there are no size restrictions based on disk footprint; any guardrails must be enforced through the configurable queue and working-area settings. A worked footprint calculation follows this list.
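The worst-case staging footprint follows directly from the queue size and the per-file cutoff; a worked example using the documented defaults:

```python
# Worked example: worst-case staging footprint = max_queue_size * max file size.
MAX_QUEUE_SIZE = 50      # default backlog of downloaded files
MAX_FILE_SIZE_MB = 200   # per-file cutoff from the bucket

worst_case_gb = MAX_QUEUE_SIZE * MAX_FILE_SIZE_MB / 1024
print(f"Worst-case staging footprint: {worst_case_gb:.1f} GB")  # ~9.8 GB (~10 GB)
```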