Configure Logstash Event Sources in NetWitness
You can configure the Logstash collection protocol.
IMPORTANT:
- Do not change the logstash.yml file, as doing so breaks the functionality.
- Do not change the sincedb_path input configuration. If you change the sincedb_path, the backup and restore functionality breaks.
- Do not modify any pipeline configuration yml files.
To configure a Logstash Event Source:
- Go to (Admin) > Services from the NetWitness menu.
- Select a Log Collection service.
- Under Actions, select > View > Config to display the Log Collection configuration parameter tabs.
- Click the Event Sources tab.
- In the Event Sources tab, select Logstash/Config from the drop-down menu.
- In the Event Categories panel toolbar, click . The Available Event Source Types dialog is displayed.
- Select the event source type and click OK. The newly added event source type is displayed in the Event Categories panel.
- Select the new type in the Event Categories panel and click in the Sources toolbar. The Add Source dialog is displayed.
- Fill in the fields, based on the Logstash event source you are adding. General details about the available parameters are described below in Logstash Collection Parameters.
- Click OK.
Logstash Collection Parameters
The following tables provide descriptions of the Logstash collection source parameters.
Note: Items that are followed by an asterisk (*) are required.
Basic Parameters
Custom Event Source Parameters
The following table lists the custom event source parameters.
Parameter | Description
--- | ---
Name * | Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.
Enabled | Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.
Description | Enter a text description for the event source.
Input Configuration * | An input plugin enables a specific source of events to be read by Logstash. Paste in the input details for your Logstash event source (a sample input and filter configuration follows this table).
Filter Configuration * | A filter plugin performs intermediary processing on an event. Paste in any filter details for your Logstash event source.
Device Type * | Enter the device parser type used to parse the data. Note: When saving an existing instance that has no device type specified, enter the device type to enable OK; otherwise, click Cancel. The device type can have 3 to 30 characters from a-z, 1-9, or underscore, and must start with a-z.
Source Address | Enter the IP address, host name, or other identifier of the event source.
Message ID | Enter the message group ID used to bypass header parsing.
Message Prefix | Enter the prefix added to each message to assist parsing.
Event Destination * | Select the NetWitness Log Collector or Log Decoder to which events need to be sent from the drop-down list.
Test Configuration | Checks the configuration parameters specified in this dialog to make sure they are correct.
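For reference, here is a minimal sketch of the kind of configuration you might paste into the Input Configuration and Filter Configuration fields. The plugin choice (a TCP listener), the port 5555, and the field names are assumptions for illustration only; substitute the input and filter plugins your source actually requires, and follow the same wrapping style (with or without the enclosing input { } / filter { } blocks) as your existing working configurations.

    # Hypothetical Input Configuration: read newline-delimited events from a TCP listener
    input {
      tcp {
        port => 5555            # assumed port; use the port your event source sends to
        type => "custom_app"    # hypothetical tag used to identify this stream
      }
    }

    # Hypothetical Filter Configuration: light intermediary processing before forwarding
    filter {
      mutate {
        add_field => { "app_name" => "custom_app" }   # hypothetical field added to every event
        strip => ["message"]                          # trim leading/trailing whitespace from the message
      }
    }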
Beats Event Source Parameters
The following table lists the beats event source parameters.
Parameter | Description
--- | ---
Name * | Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.
Enabled | Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.
Description | Enter a text description for the event source.
Port Number * | Enter the port number (for example, 5044) that you configured for your event sources.
Linux-Audit | Select the checkbox to enable processing for Linux audit.
Linux-System | Select the checkbox to enable processing for Linux system.
Nginx | Select the checkbox to enable processing for Nginx.
Event Destination * | Select the NetWitness Log Collector or Log Decoder to which events need to be sent from the drop-down list.
Test Configuration | Checks the configuration parameters specified in this dialog to make sure they are correct.
Export Connector Event Source Parameters
The following table lists the export connector event source parameters.
Parameter | Description
--- | ---
Name * | Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.
Enabled | Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.
Description | Enter a text description for the event source.
Host * | Select the hostname of the Decoder or Log Decoder for data aggregation from the drop-down list.
Username * | Username used to access the Decoder or Log Decoder for data aggregation.
Authentication | Select the authentication type used for data aggregation. If you select trusted authentication, the SSL field is enabled by default. Note: If you upgrade from NetWitness Platform 11.6.0.0 to 11.6.1.0, an automatic key is generated and stored in keystore management for the password set in the Logstash pipeline configuration. You can view the key instead of the password in the Authentication field. Note: For trusted authentication, make sure you add the PEM file at /etc/pki/nw/node/node-cert.pem to the source Decoder's REST APIs (/sys/trustpeer and /sys/caupload).
SSL | Select the checkbox to communicate using SSL. The security of data transmission is managed by encrypting information and providing authentication with SSL certificates. The SSL option is enabled by default if you select the trusted authentication type in the Authentication field.
Decoder Type * | Decoder Type is a read-only field and is auto-populated when you select the Host.
Output Configuration * | The Logstash pipeline output configuration that forwards events received from the input stream to one or more defined destinations. The output plugin sends the processed event data to the data warehouse destinations. You can use the standard Logstash output plugins to send the data; a basic sample TCP output plugin is shown after this table. To understand more, see Work Flow of NetWitness Export Connector.
Test Configuration | Checks the configuration parameters specified in this dialog to make sure they are correct.

A basic sample Logstash TCP output plugin (for the Output Configuration field) looks like the following:

    output {
      tcp {
        id => "nw-output-tcp"
        host => "10.10.1.2"
        port => 514
      }
    }

You can use any of the plugins listed on the Logstash Output plugins page. We recommend you check with your respective vendor to know the input receiver type (such as TCP or HTTP) they support.
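If your destination expects HTTP rather than TCP, a comparable sketch using the standard Logstash http output plugin is shown below. The URL, plugin id, and JSON format are assumptions; adjust them to whatever your vendor's receiver requires.

    output {
      http {
        id => "nw-output-http"                         # hypothetical plugin id
        url => "https://receiver.example.com/events"   # assumed vendor endpoint
        http_method => "post"                          # send events with HTTP POST
        format => "json"                               # encode each event as JSON
      }
    }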
HTTP Receiver Event Source Parameters
The following table lists the HTTP receiver event source parameters.
Parameter | Description
--- | ---
Name * | Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.
Enabled | Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.
Description | Enter a text description for the event source.
Port Number * | Enter the port number that you configured for your event sources. The default port number is 8080.
Device Type * | Enter the device parser type used to parse the data.
Message ID | Enter the message group ID used to bypass header parsing.
Message Prefix | Enter the prefix added to each message to assist parsing.
Event Destination * | Select the NetWitness Log Collector or Log Decoder to which events need to be sent from the drop-down list.
HTTP Receiver SSL | Select the checkbox to communicate using HTTP receiver SSL. The security of data transmission is managed by encrypting information and providing authentication with HTTP receiver SSL certificates. This checkbox is not selected by default. Note: If you select the checkbox, the event source accepts SSL connections only. Also, if you change this setting, you must stop and restart Syslog collection for the change to become effective.
Certificate | Select the name of the HTTP receiver server's SSL certificate.
Key | Select the name of the HTTP receiver server's SSL key.
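For context only, an HTTP receiver of this kind corresponds conceptually to a Logstash http input listening on the configured port, along the lines of the sketch below. This is not the pipeline that NetWitness generates, and the SSL option names vary between Logstash versions; it is included only to illustrate what the Port Number, Certificate, and Key parameters feed into.

    input {
      http {
        port => 8080   # matches the Port Number parameter above
        # When HTTP Receiver SSL is selected, the generated configuration also supplies the
        # selected certificate and key (option names differ across Logstash versions).
      }
    }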
IPFIX Event Source Parameters
The following table lists the IPFIX event source parameters.
Parameter | Description
--- | ---
Name * | Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.
Enabled | Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.
Description | Enter a text description for the event source.
Port Number | Enter the port number that you configured for your event source. The default port number is 4739.
Event Destination * | Select the NetWitness Log Collector or Log Decoder to which events need to be sent from the drop-down list.
Test Configuration | Checks the configuration parameters specified in this dialog to ensure they are correct.
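Conceptually, IPFIX collection is a UDP listener paired with a NetFlow/IPFIX-aware codec, roughly as sketched below. This is an illustration only (the actual pipeline is managed by NetWitness) and assumes the logstash-codec-netflow plugin, which decodes IPFIX in addition to NetFlow v5/v9.

    input {
      udp {
        port => 4739       # default IPFIX port from the table above
        codec => netflow   # decodes NetFlow/IPFIX records into structured events
      }
    }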
Kubernetes Event Source Parameters
The following table lists the Kubernetes receiver event source parameters.
Parameter | Description
--- | ---
Name * | Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.
Enabled | Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.
Description | Enter a text description for the event source.
Port Number * | Enter the port number that you configured for your event sources. The default port number is 5044.
Event Destination * | Select the NetWitness Log Collector or Log Decoder to which events need to be sent from the drop-down list.
Test Configuration | Checks the configuration parameters specified in this dialog to ensure they are correct.
JDBC Oracle 11g Auditing Event Source Parameters
The following table lists the JDBC Oracle 11g auditing event source parameters.
Parameter | Description
--- | ---
Name * | Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.
Enabled | Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.
Description | Enter a text description for the event source.
Host ID * | Enter the IP address of the machine where the Oracle 11g database server is installed.
Port Number * | Enter the port number that you configured for your event sources. The default port number is 1521.
Database Name * | Enter the name of the database where the audit tables exist.
User ID * | Enter the username of the Oracle 11g database.
Password * | Enter the password to log in to the Oracle 11g database.
Polling Interval * | The polling interval is specified in minutes. Based on the value entered, the pipeline pulls data from the database at that interval. For example, if the polling interval is 1, the pipeline pulls data from the database every 1 minute; if it is 2, every 2 minutes. This field accepts values between 1 and 60.
Event Destination * | Select the NetWitness Log Collector or Log Decoder to which events need to be sent from the drop-down list.
Test Configuration | Checks the configuration parameters specified in this dialog to ensure they are correct.
JDBC Oracle 12c Auditing Event Source Parameters
The following table lists the JDBC Oracle 12c auditing event source parameters.
Parameter | Description
--- | ---
Name * | Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.
Enabled | Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.
Description | Enter a text description for the event source.
Host ID * | Enter the IP address of the machine where the Oracle 12c database server is installed.
Port Number * | Enter the port number that you configured for your event sources. The default port number is 1521.
Database Name * | Enter the name of the database where the audit tables exist.
User ID * | Enter the username of the Oracle 12c database.
Password * | Enter the password to log in to the Oracle 12c database.
Polling Interval * | The polling interval is specified in minutes. Based on the value entered, the pipeline pulls data from the database at that interval. For example, if the polling interval is 1, the pipeline pulls data from the database every 1 minute; if it is 2, every 2 minutes. This field accepts values between 1 and 60.
Event Destination * | Select the NetWitness Log Collector or Log Decoder to which events need to be sent from the drop-down list.
Test Configuration | Checks the configuration parameters specified in this dialog to ensure they are correct.
JDBC Oracle 18c Auditing Event Source Parameters
The following table lists the JDBC Oracle 18c auditing event source parameters.
Parameter | Description
--- | ---
Name * | Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.
Enabled | Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.
Description | Enter a text description for the event source.
Host ID * | Enter the IP address of the machine where the Oracle 18c database server is installed.
Port Number * | Enter the port number that you configured for your event sources. The default port number is 1521.
Database Name * | Enter the name of the database where the audit tables exist.
User ID * | Enter the username of the Oracle 18c database.
Password * | Enter the password to log in to the Oracle 18c database.
Polling Interval * | The polling interval is specified in minutes. Based on the value entered, the pipeline pulls data from the database at that interval. For example, if the polling interval is 1, the pipeline pulls data from the database every 1 minute; if it is 2, every 2 minutes. This field accepts values between 1 and 60.
Event Destination * | Select the NetWitness Log Collector or Log Decoder to which events need to be sent from the drop-down list.
Test Configuration | Checks the configuration parameters specified in this dialog to ensure they are correct.
JDBC Oracle 19c Auditing Event Source Parameters
The following table lists the JDBC Oracle 19c auditing event source parameters.
Parameter | Description
--- | ---
Name * | Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.
Enabled | Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.
Description | Enter a text description for the event source.
Host ID * | Enter the IP address of the machine where the Oracle 19c database server is installed.
Port Number * | Enter the port number that you configured for your event sources. The default port number is 1521.
Database Name * | Enter the name of the database where the audit tables exist.
User ID * | Enter the username of the Oracle 19c database.
Password * | Enter the password to log in to the Oracle 19c database.
Polling Interval * | The polling interval is specified in minutes. Based on the value entered, the pipeline pulls data from the database at that interval. For example, if the polling interval is 1, the pipeline pulls data from the database every 1 minute; if it is 2, every 2 minutes. This field accepts values between 1 and 60.
Event Destination * | Select the NetWitness Log Collector or Log Decoder to which events need to be sent from the drop-down list.
Test Configuration | Checks the configuration parameters specified in this dialog to ensure they are correct.
JDBC IBMDB2 Event Source Parameters
The following table lists the JDBC IBMDB2 event source parameters.
Parameter | Description
--- | ---
Name * | Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.
Enabled | Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.
Description | Enter a text description for the event source.
Host ID * | Enter the IP address of the machine where the IBM DB2 database server is installed.
Port Number * | Enter the port number that you configured for your event source. The default port number is 50000.
Database Name * | Enter the name of the database where the audit table exists.
User ID * | Enter the username of the IBM DB2 database.
Password * | Enter the password to log in to the IBM DB2 database.
Polling Interval * | The polling interval is specified in minutes. Based on the value entered, the pipeline pulls data from the database at that interval. For example, if the polling interval is 1, the pipeline pulls data from the database every 1 minute; if it is 2, every 2 minutes. This field accepts values between 1 and 60.
Event Destination * | Select the NetWitness Log Collector or Log Decoder to which events need to be sent from the drop-down list.
Test Configuration | Checks the configuration parameters specified in this dialog to ensure they are correct.
JDBC Custom Event Source Parameters
The following table lists the JDBC Custom event source parameters. A sketch showing how these fields map onto a standard Logstash jdbc input follows the table.
Parameter | Description
--- | ---
Name * | Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.
Enabled | Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.
Description | Enter a text description for the event source.
JDBC Driver Library * | Enter the path to the JDBC driver library used to interact with the database.
JDBC Driver Class * | Enter the JDBC driver class used to interact with the database.
Host ID * | Enter the IP address of the machine where the Oracle server is installed.
Port Number * | Enter the port number that you configured for your event source. The default port number is 1521.
Database Name * | Enter the name of the database where the audit table exists.
User ID * | Enter the username of the Oracle database.
Password * | Enter the password to log in to the Oracle database.
Polling Interval * | The polling interval is specified in minutes. Based on the value entered, the pipeline pulls data from the database at that interval. For example, if the polling interval is 1, the pipeline pulls data from the database every 1 minute; if it is 2, every 2 minutes. This field accepts values between 1 and 60.
SQL Statement * | Enter the SQL statements based on your requirement. This text area field also supports new custom queries.
Device Type * | Enter the device type of the event source used for parsing.
Message ID | Enter the message group ID to bypass header parsing.
Message Prefix | Enter the prefix added to each message to assist parsing.
Event Destination * | Select the NetWitness Log Collector or Log Decoder to which events need to be sent from the drop-down list.
Test Configuration | Checks the configuration parameters specified in this dialog to ensure they are correct.
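As noted above, here is a sketch of how the JDBC Custom fields map onto a standard Logstash jdbc input. It is illustrative only; NetWitness builds the actual pipeline from the dialog, and the driver path, class, connection string, credentials, schedule, and query shown are hypothetical values for an Oracle source.

    input {
      jdbc {
        jdbc_driver_library => "/opt/drivers/ojdbc8.jar"                        # JDBC Driver Library
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"            # JDBC Driver Class
        jdbc_connection_string => "jdbc:oracle:thin:@10.10.1.5:1521/ORCLPDB1"   # Host ID, Port Number, Database Name
        jdbc_user => "audit_reader"                                             # User ID
        jdbc_password => "changeme"                                             # Password
        schedule => "*/5 * * * *"                                               # Polling Interval (here, every 5 minutes)
        statement => "SELECT * FROM unified_audit_trail"                        # SQL Statement
      }
    }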
Advanced Parameters
Click next to Advanced to view and edit the advanced parameters, if necessary.
Parameter | Description
--- | ---
Debug | Caution: Only enable debugging (set this parameter to On or Verbose) if you have a problem with an event source and you need to investigate it. Enabling debugging adversely affects the performance of the Log Collector. Enables or disables debug logging for the event source. Valid values are: Off = (default) disabled; On = enabled; Verbose = enabled in verbose mode, which adds thread information and source context information to the messages. This parameter is designed to debug and monitor isolated event source collection issues. If you change this value, the change takes effect immediately (no restart required). The debug logging is verbose, so limit the number of event sources to minimize performance impact.
Beats SSL (applicable only for the beats typespec) | Select this checkbox to communicate using Beats SSL. The security of data transmission is managed by encrypting information and providing authentication with SSL certificates. This checkbox is not selected by default. Note: Ensure that you copy the server SSL certificate and key (generated in your system) to /etc/logstash/pki on the Log Collector; these are used during the SSL connection. /etc/logstash/pki is a path on the Log Collector node.
Kubernetes SSL (applicable only for the Kubernetes typespec) | Select this checkbox to communicate using Kubernetes SSL. The security of data transmission is managed by encrypting information and providing authentication with SSL certificates. This checkbox is not selected by default. Note: Ensure that you copy the server SSL certificate and key (generated in your system) to /etc/logstash/pki on the Log Collector; these are used during the SSL connection. /etc/logstash/pki is a path on the Log Collector node.
Certificate (applicable only for the beats and Kubernetes typespecs) | Select the name of a server SSL certificate located at /etc/logstash/pki.
Key (applicable only for the beats and Kubernetes typespecs) | Select the name of a server SSL key located at /etc/logstash/pki.
SSL Enabled | For the custom typespec: select the checkbox to communicate using SSL. The security of data transmission is managed by encrypting information and providing authentication with SSL certificates. This checkbox is selected by default. For the beats typespec: select the checkbox to communicate using SSL; this checkbox is selected by default. For the export_connector typespec: select the checkbox for Logstash to communicate with the Packet Decoder or Log Decoder over SSL; when cleared, communication is in non-SSL mode. This checkbox is not selected by default.
Export Logs (applicable only for the export_connector typespec) | Select the checkbox to export logs from Log Decoders.
Starting Session (not applicable for the export_connector typespec) | Specify the session ID from which you want to start the aggregation instead of the default.
Include Metas (not applicable for the export_connector typespec) | Specify a comma-separated list of metas that you want to aggregate. The default metas, such as time, did, and sessionid, are collected in addition to the metas you add for aggregation.
Exclude Metas (not applicable for the export_connector typespec) | Specify a comma-separated list of metas that you want to exclude from aggregation.
Query (not applicable for the export_connector typespec) | Specify a query so that only the data matching the query is aggregated. For example, select * where user.dst = 'john'.
Enable Metrics Collection (applicable only for the export_connector typespec) | Enables metrics collection in Elastic. IMPORTANT: If you enable the metrics collection, you must provide the Elastic host, username, and password.
Elastic Host (applicable only for the export_connector typespec) | Specify the Elastic host to which metrics are forwarded.
Elastic Port (applicable only for the export_connector typespec) | Port number through which metrics are forwarded.
Elastic Username (applicable only for the export_connector typespec) | Specify the Elasticsearch username.
Elastic Password (applicable only for the export_connector typespec) | Specify the Elasticsearch password. Note: If you upgrade from NetWitness Platform 11.6.0.0 to 11.6.1.0, an automatic key is generated and stored in keystore management for the password set in the Logstash pipeline configuration. You can view the key instead of the password in the Elastic Password field.
Minutes Back (applicable only for the export_connector typespec) | Starts collecting data from the last xx minutes. For example, if the value is set to 30 minutes, the Log Collector starts collecting the logs and metas from the last 30 minutes.
Persistent Queue (applicable only for the export_connector typespec) | The persistent queue stores the message queue on disk to avoid data loss. By default, this option is not enabled.
Additional Custom Configuration | Use this text box for any additional configuration, in case you have multiple inputs or another set of outputs to send data somewhere in addition to a NetWitness Log Collector or Log Decoder. For example, you can configure the data to be sent to Elasticsearch; in that case, each event that is sent to NetWitness Platform is also sent to Elasticsearch (a sample is shown after this table).
Required Plugins | Specify the required plugins in a comma-separated list. Note: Backup and restore is not supported for custom plugins. If the test connection fails because a required plugin is not installed, you must install the required plugin. For more information, see Install or Manage Logstash Plugin.
Pipeline Workers | Number of pipeline worker threads allocated for the Logstash pipeline.
Ports (applicable only for the custom and beats typespecs) | Enter a port number (for example, 5000, UDP:5000, or TCP:5000), and ensure the checkbox is checked. This allows the plugins to collect logs over the network (for example, UDP or TCP). IMPORTANT: If you are configuring a beats event source, ensure that you provide the beats event source port (for example, 5044) in the advanced configuration even if you have updated the port in the basic parameters.
Required Ports (not applicable for the custom and beats typespecs) | Enter the list of ports required for external access.
Destination SSL (applicable only for the custom, http_receiver, ipfix, kubernetes, jdbc_oracle_11g_auditing, jdbc_oracle_18c_auditing, jdbc_oracle_19c_auditing, ibmdb2, and jdbc_custom typespecs) | Select the checkbox to communicate using destination SSL.
Custom SQL Statement (applicable only for the jdbc_oracle_11g_auditing, jdbc_oracle_12c_auditing, jdbc_oracle_18c_auditing, and jdbc_oracle_19c_auditing typespecs) | By default, this is an empty text area field. This field also supports new custom queries.
Legacy Codec (applicable only for the custom and beats typespecs) | Select the checkbox to send events to the Log Decoder using the legacy format. When selected, the Log Collector sends events to the Log Decoder embedded in JSON-encoded Logstash events. When not selected, the Log Collector sends events in the collected format.
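As an example of the Additional Custom Configuration referenced in the table above, the sketch below adds a second output so that each event also goes to an Elasticsearch cluster alongside the NetWitness destination. The host, index pattern, and credentials are assumptions to adapt to your environment.

    output {
      elasticsearch {
        hosts => ["https://es.example.com:9200"]        # assumed Elasticsearch endpoint
        index => "netwitness-logstash-%{+YYYY.MM.dd}"   # hypothetical daily index pattern
        user => "logstash_writer"                       # assumed credentials
        password => "changeme"
      }
    }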
Note: You can monitor the health and throughput of the Logstash pipeline using the Logstash Input Plugin Overview dashboard. For more information, see the "New Health and Wellness Dashboards" topic in the System Maintenance Guide.
Note: JDBC pipelines are supported only from version 12.3 onwards.
View Logstash Collection in the Investigation > Events View
To view Logstash collection in the Events view:
- Go to the Investigation > Events view.
- Select a Log Decoder service (for example, LD1) which collects Logstash events, from the drop-down list.
- Select the time range.
- Click .
- Look for events with a device.type value that matches the one defined in the Logstash pipeline.
Note: Though some meta are parsed by default, a custom parser or Log Parser Rules are required for full parsing.
Install or Manage Logstash Plugin
By default, Logstash related plugins are installed when Logstash is installed. In addition, you can add or customize the plugins based on your requirement.
List All Logstash Plugins
You can list all the Logstash plugins available in your environment using the following command:
/usr/share/logstash/bin/logstash-plugin list --verbose
Install New Logstash Plugin
You can install a new plugin using the following command:
/usr/share/logstash/bin/logstash-plugin install <plugin-name>
For example, see the following command:
/usr/share/logstash/bin/logstash-plugin install logstash-input-github
Manage Logstash Plugin
You can manage the existing plugins using the following commands:
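The exact set of supported commands depends on your Logstash version; for reference, the standard logstash-plugin utility provides remove and update subcommands, for example:
/usr/share/logstash/bin/logstash-plugin remove <plugin-name>
/usr/share/logstash/bin/logstash-plugin update <plugin-name>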
Configure Keystore Management
Keystore management allows you to securely store secret values (key and password). These keys are used as placeholders for authentication credentials within a Logstash pipeline configuration (see the example after the following procedure).
To configure keystore management:
- Go to (ADMIN) > Services from the NetWitness Platform menu.
- Select a Log Collection service.
- Under Actions, select > View > Config to display the Log Collection configuration parameter tabs.
- Click the Event Sources tab.
- In the Event Sources tab, select Logstash > Keystore management from the drop-down menu.
- Click .
- In the key field, enter the name of the key.
- In the password field, enter the password.
- Click Save.
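To illustrate how a stored key is typically consumed, assuming keys are exposed to the pipeline the same way the standard Logstash keystore is (through ${KEY_NAME} placeholders), a configuration field can reference a key named ES_PASSWORD (a hypothetical name saved in the steps above) instead of a plaintext password:

    output {
      elasticsearch {
        hosts => ["https://es.example.com:9200"]   # assumed endpoint
        user => "logstash_writer"                  # assumed username
        password => "${ES_PASSWORD}"               # resolved from the stored key rather than written in plain text
      }
    }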