You must configure the Logstash configuration file to process NetWitness Platform events.

- Run the following command:

  cd /etc/logstash/conf.d/metaexport/

- Modify the metaexport_pipelinename.conf file as needed.
- A Logstash configuration file can have three separate sections, one for each type of plugin that you want to add to the event processing pipeline: the first section is for the Input plugin (NetWitness Export Connector), the second section is for the Filter plugin (optional), and the third section is for the syslog Output plugin. To configure the NetWitness Export Connector plugin, add the parameter settings in the first section of the Logstash configuration file.
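The overall shape of such a configuration file is shown in the following minimal sketch (the section bodies are placeholders):

input {
  # First section: NetWitness Export Connector settings.
}
filter {
  # Second section (optional): standard Logstash filter plugins.
}
output {
  # Third section: syslog Output plugin settings.
}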
The following is an example of the NetWitness Export Connector input section with a block of required and optional parameter settings that fetches data from a Decoder.
input {
  netwitness_export_connector {
    # Mandatory fields.
    name => "<Pipelinename>"
    host => "<IP/Host>"
    username => "<Username>"
    password => "${KEY}"
    decoder_type => "decoder"
    # Optional fields.
    meta_include => ""
    meta_exclude => ""
    query => ""
    minutes_back => 0
    multi_value => ""
    start_session => 0
    # SSL configuration.
    # ssl_enable => true
    # ssl_certificate_path => "/etc/pki/nw/trust/truststore.pem"
    # ssl_client_certificate_path => "/etc/pki/nw/node/node-cert.pem"
    # ssl_key_path => "/etc/pki/nw/node/node-key.pem"
  }
}
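In this example, password => "${KEY}" resolves the Decoder password from the Logstash keystore instead of storing it in plain text. The following is a sketch of adding that key, assuming a standard Logstash RPM install (the tool path and the --path.settings location may differ in your deployment):

# Create the keystore once, if it does not already exist.
/usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
# Store the Decoder password under the key name KEY, matching ${KEY} above.
/usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add KEY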
The following are the parameters accepted by the NetWitness Export Connector.

| Parameter | Description | Type | Default |
|-----------|-------------|------|---------|
| name | A unique name to identify the Logstash export connector pipeline. | String | N/A |
| host | IP address or hostname of the Decoder (mandatory). | String | N/A |
| username | Username used to access the Decoder (mandatory). | String | N/A |
| password | Logstash keystore key for accessing the Decoder (mandatory). | String | ${KEY} |
| decoder_type | Accepts only 'decoder' (mandatory). | String | decoder |
| meta_include | Aggregates only the meta keys listed in this parameter setting. Accepts comma-separated values (CSV) format. | String | NIL |
| meta_exclude | Excludes the meta keys listed in this parameter setting from aggregation. Accepts comma-separated values (CSV) format. | String | NIL |
| query | Takes any NetWitness Platform query as input. Note: Only indexed meta keys must be part of the query. For example, select * where country.src='Country Name'. | String | select * |
| minutes_back | How far back in time to go on a fresh start. Accepts values in minutes. | Number | 0 |
| multi_value | Meta keys to be treated as multi-valued, for example, action, alias.host, alias.ip, alias.ipv6, email, and username. | String | N/A |
| start_session | Session from which the aggregation starts. Setting the value to 0 starts the aggregation from the last.session.id in the Decoder. | Number | 0 |
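For illustration, the following sketch combines several of the optional parameters above; the pipeline name, host, credentials, meta key list, and query are placeholder values, not recommendations:

input {
  netwitness_export_connector {
    name => "web_traffic_export"
    host => "<IP/Host>"
    username => "<Username>"
    password => "${KEY}"
    decoder_type => "decoder"
    # Aggregate only these meta keys (CSV format).
    meta_include => "ip.src,ip.dst,alias.host"
    # Restrict aggregation with a query on an indexed meta key.
    query => "select * where service=80"
    # On a fresh start, begin aggregation 60 minutes back.
    minutes_back => 60
    # Treat alias.host as a multi-valued meta key.
    multi_value => "alias.host"
  }
}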
Filter Configuration (Optional):
You can configure a Logstash Filter plugin to add, remove, or modify specific input events from the Decoder. To configure the Filter plugin, add the Filter plugin parameter settings in the second section of the Logstash configuration file. The plugin modifies the events based on the parameter settings. You can use the existing standard Logstash filter plugins for these parameter settings.
For more information on existing Logstash standard filter plugins, see Filter Plugins.
The configuration of the plugin must consist of the plugin name followed by a block of parameter settings for that plugin.
Example:
filter {
  if [country_src] == "country name" {
    # If this condition holds true, the entire session is dropped.
    drop {}
  }
}
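As another sketch, the standard mutate filter can rename or remove fields before they reach the output; the field names below are illustrative, not fields your pipeline is guaranteed to carry:

filter {
  mutate {
    # Rename a meta field to match the receiver's schema.
    rename => { "alias_host" => "hostname" }
    # Remove a field that the third-party application does not need.
    remove_field => [ "event_source_id" ]
  }
}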
Output Configuration:
You must configure the Logstash syslog Output plugin to send the input events to a third-party application.
The syslog Output plugin sends the processed event data to the third-party application where the syslog receiver is configured.
For more information on existing Logstash standard output plugins, see Output Plugins.
Example:
output {
  syslog {
    id => "plugin_id"
    host => "<server_IP>"
    port => 514
    sourcehost => "%{did}"
    protocol => "tcp"
    rfc => "rfc5424"
    codec => "json"
    # ssl_cert => "/path/to/server.crt"
    # ssl_key => "/path/to/server.key"
    # ssl_cacert => "/path/to/ca.crt"
  }
}
The following are the parameters accepted by the syslog Output plugin.

| Parameter | Description |
|-----------|-------------|
| id | A unique ID for the plugin configuration. If no ID is specified, Logstash generates one. It is strongly recommended to set this ID in your configuration, particularly when you have two or more plugins of the same type. |
| host | Syslog server address used to connect to the third-party application, such as Splunk, Palo Alto Cortex, and so on. |
| port | Syslog server port used to connect to the third-party application. Use 6154 for an SSL connection. |
| sourcehost | Source host for the syslog message. The default value is the Decoder ID. |
| protocol | Syslog server protocol. Use ssl-tcp for an SSL connection. |
| rfc | Syslog message format. You can choose between rfc3164 and rfc5424. |
| codec | Encodes the data to JSON format if set to json. |
| ssl_cacert | The SSL CA certificate, chain file, or CA path. The system CA path is automatically included. |
| ssl_cert | The SSL certificate path. |
| ssl_key | The SSL key path. There is no default value for this setting. |
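Putting these parameters together, the following is a sketch of an SSL-enabled syslog output; the plugin ID, server address, and certificate paths are placeholders:

output {
  syslog {
    id => "syslog_ssl_out"
    host => "<server_IP>"
    # 6154 is the SSL port noted in the table above.
    port => 6154
    sourcehost => "%{did}"
    # ssl-tcp enables the SSL connection.
    protocol => "ssl-tcp"
    rfc => "rfc5424"
    codec => "json"
    ssl_cert => "/path/to/server.crt"
    ssl_key => "/path/to/server.key"
    ssl_cacert => "/path/to/ca.crt"
  }
}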
Configure Multiple Pipelines:
Follow these steps to configure multiple pipelines.
- To create multiple pipelines, copy the metaexport_pipelinename.conf file inside /etc/logstash/conf.d/metaexport/ to a new file with a different name. The names of the configuration files must be unique.

If you need to run more than one pipeline in the same process, Logstash provides a way to do this through a configuration file called pipelines.yml. This file must be placed in the /etc/logstash/ folder and follows this structure:

- pipeline.id: pipeline-1
  path.config: "/etc/logstash/conf.d/metaexport/metaexport_pipelinename.conf"
- Add one more configuration block to the same pipelines.yml file with a unique pipeline ID and the path to the newly created configuration file.
Example:
- pipeline.id: pipeline-1
  path.config: "/etc/logstash/conf.d/metaexport/metaexport_pipelinename.conf"
- pipeline.id: pipeline-2
  path.config: "/etc/logstash/conf.d/metaexport/metaexport_pipelinename1.conf"
For more information on multiple pipelines, see Multiple Pipelines.
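After updating pipelines.yml, you can validate each configuration file and restart Logstash so the changes take effect. The following is a sketch, assuming a standard RPM install where Logstash runs as a systemd service:

# Validate a configuration file without starting the pipeline.
/usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  --config.test_and_exit -f /etc/logstash/conf.d/metaexport/metaexport_pipelinename.conf
# Restart Logstash to load the pipelines defined in pipelines.yml.
systemctl restart logstash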