This section provides information about possible issues when using NetWitness UEBA.

Task Failure Issues in Airflow

Problem

The userId_output_entities task fails when the username contains a backslash.

Cause

When events with usernames containing a backslash character are passed through UEBA, the userId_output_entities task fails.

Solution

To resolve this issue, contact Customer Success to obtain the relevant files, and then perform the following steps:

  • Stop the airflow-scheduler service.
  • Remove all MongoDB documents in the "aggr", "accm", and "input" collections that contain a context.userId with a backslash. These documents can be located using the FindCollecionsContainsBackslash.js script.
  • Replace the /var/netwitness/presidio/flume/conf/adapter/transformers/authentication.json file with the updated authentication.json file.
  • Restart the airflow-scheduler service.
  • Validate that the next run of the userId_output_entities task completes successfully.

 

Problem

The AUTHENTICATION_userId_build_feature_historical_data task fails when the username contains a hashtag.

Cause

When events with usernames containing a hashtag character are passed through UEBA, the AUTHENTICATION_userId_build_feature_historical_data task fails.

Solution

To resolve this issue, contact Customer Success to obtain the relevant files, and then perform the following steps:

  • Stop the airflow-scheduler service.
  • Remove all MongoDB documents in the "aggr", "accm", and "input" collections that contain a context.userId with a hashtag. These documents can be located using the FindCollecionsContainsHashtagContextUserId.js script.
  • Replace the /var/netwitness/presidio/flume/conf/adapter/transformers/authentication.json file with the updated authentication.json file.
  • Restart the airflow-scheduler service.
  • Validate that the next run of the AUTHENTICATION_userId_build_feature_historical_data task completes successfully.
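Both failures follow the same pattern: a context.userId value containing a character the pipeline cannot handle. As a rough illustration (this helper is not part of the product; the document shape is assumed from the MongoDB collections named above), the problematic records can be flagged like this:

```python
# Hypothetical helper: flag documents whose context.userId contains a
# character known to break the UEBA Airflow tasks (backslash or hashtag).
# The field name and problem characters come from the steps above; the
# function itself is illustrative only.

PROBLEM_CHARS = ("\\", "#")

def find_problem_user_ids(documents):
    """Return the documents whose context.userId contains a backslash or hashtag."""
    flagged = []
    for doc in documents:
        user_id = doc.get("context", {}).get("userId", "")
        if any(ch in user_id for ch in PROBLEM_CHARS):
            flagged.append(doc)
    return flagged

docs = [
    {"context": {"userId": "DOMAIN\\alice"}},  # backslash: fails userId_output_entities
    {"context": {"userId": "bob#admin"}},      # hashtag: fails the historical-data task
    {"context": {"userId": "carol"}},          # clean
]
print(len(find_problem_user_ids(docs)))  # 2
```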

User Interface Inaccessible

Problem

The User Interface is not accessible.

Cause

You have more than one NetWitness UEBA service in your NetWitness deployment; only one NetWitness UEBA service is supported per deployment.
Solution

Complete the following steps to remove the extra NetWitness UEBA service.

    1. SSH to the NW Server and run the following command to query the list of installed NetWitness UEBA services.
      # orchestration-cli-client --list-services|grep presidio-airflow
      ... Service: ID=7e682892-b913-4dee-ac84-ca2438e522bf, NAME=presidio-airflow, HOST=xxx.xxx.xxx.xxx:null, TLS=true
      ... Service: ID=3ba35fbe-7220-4e26-a2ad-9e14ab5e9e15, NAME=presidio-airflow, HOST=xxx.xxx.xxx.xxx:null, TLS=true
    2. From the list of services, determine which instance of the presidio-airflow service should be removed (by looking at the host addresses).

    3. Run the following command to remove the extra service from Orchestration (use the matching service ID from the list of services):
      # orchestration-cli-client --remove-service --id <ID-for-presidio-airflow-from-previous-output>

Note: Run the following command to update the NW Server and restore NGINX:
# orchestration-cli-client --update-admin-node

    4. Log in to NetWitness, go to Admin > Hosts, and remove the extra NetWitness UEBA host.
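The service IDs needed in step 3 can be extracted from the step 1 output programmatically. The following sketch assumes the line format shown in the sample output above; the host addresses are invented placeholders:

```python
import re

# Parse `orchestration-cli-client --list-services` output lines of the form
# "... Service: ID=<uuid>, NAME=presidio-airflow, HOST=<addr>, TLS=true".
# The format is assumed from the sample output above, not a documented schema.
LINE_RE = re.compile(r"ID=(?P<id>[0-9a-f-]+), NAME=(?P<name>[\w-]+), HOST=(?P<host>[^,]+)")

def parse_services(output):
    """Yield (service_id, host) pairs for every presidio-airflow entry."""
    for match in LINE_RE.finditer(output):
        if match.group("name") == "presidio-airflow":
            yield match.group("id"), match.group("host")

sample = """\
... Service: ID=7e682892-b913-4dee-ac84-ca2438e522bf, NAME=presidio-airflow, HOST=10.0.0.1:null, TLS=true
... Service: ID=3ba35fbe-7220-4e26-a2ad-9e14ab5e9e15, NAME=presidio-airflow, HOST=10.0.0.2:null, TLS=true
"""
for service_id, host in parse_services(sample):
    print(service_id, host)
```

Comparing the host addresses then tells you which ID to pass to `--remove-service`.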

Get UEBA Configuration Parameters

Issue

How to get UEBA configuration parameters?

Explanation

To get the main UEBA configuration parameters, run the curl http://localhost:8888/application-default.properties command from the UEBA machine.
ParaConf_1453x584.png
The main parameters returned are the following:

  • uiIntegration.brokerId: The Service ID of the NW data source (Broker / Concentrator)
  • dataPipeline.schemas: List of schemas processed by the UEBA
  • dataPipeline.startTime: The date the UEBA started consuming data from the NW data source
  • outputForwarding.enableForwarding: The UEBA Forwarder status
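As an illustration, the returned properties can be parsed with a few lines of code. This sketch assumes standard Java-style key=value properties lines, matching the parameter names listed above; the sample values are invented:

```python
# Minimal sketch of reading the main UEBA parameters out of the
# application-default.properties response. Assumes one "key=value" pair
# per line; comment lines starting with "#" are skipped.
def parse_properties(text):
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

# Invented sample values, shaped like the parameters listed above.
sample = """\
uiIntegration.brokerId=12345678-aaaa-bbbb-cccc-1234567890ab
dataPipeline.schemas=AUTHENTICATION,FILE,ACTIVE_DIRECTORY
dataPipeline.startTime=2023-01-01T00:00:00Z
outputForwarding.enableForwarding=true
"""
props = parse_properties(sample)
print(props["dataPipeline.schemas"].split(","))
```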
Resolution

See the resolution for these statistics in the Troubleshooting UEBA Configurations section.

 

Check UEBA Progress Status Using Airflow

Issue

How to check UEBA progress status using Airflow?

 
  1. Navigate to https://<UEBA-host-name>/admin. Enter the admin username and the deploy-admin password. The following image shows the Airflow home page when the system is working as expected.
    UEBAArFlw_531x269.png
  2. Make sure that no red or yellow circles appear on the main page:

    • A red circle indicates that a task has failed.
    • A yellow circle indicates that a task has failed and is awaiting a retry.
    • If a “failed” or “up-for-retry” task appears, investigate the root cause of the problem.
  3. Make sure the system continues to run.

  4. Tap the Browse button and select Task Instance.

  5. Add the following filters: State = running and Pool = spring_boot_jar_pool.
    The Task Instance page is displayed.
    UEBAStatus_403x117.png

The Execution Date column shows the current time window for each running task. Make sure that the execution date is later than the UEBA start date and that new tasks with updated dates are added to the table.
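The comparison described above can be sketched as follows. The dates are invented examples; in practice they come from the Airflow Task Instance page and the dataPipeline.startTime configuration parameter:

```python
from datetime import datetime, timezone

# Illustrative check: each running task's Execution Date should be later
# than the UEBA start date. This is a sketch of the manual check above,
# not a product API.
def is_task_current(execution_date, ueba_start_date):
    return execution_date > ueba_start_date

start = datetime(2023, 1, 1, tzinfo=timezone.utc)        # invented start date
execution = datetime(2023, 6, 1, 13, 0, tzinfo=timezone.utc)  # invented execution date
print(is_task_current(execution, start))  # True
```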

Resolution
 

 

Check if Data is Received on the UEBA Using Kibana

Issue

How to check if data is received on the UEBA using Kibana?

Explanation

Navigate to https://<UEBA-host-name>/kibana and enter the admin username and password. To verify that data is flowing to the UEBA, go to the Adapter Dashboard:

  1. Tap the Dashboard tab in the left menu.
  2. Tap Adapter Dashboard in the right menu.
  3. Select the relevant time range in the top bar.

The charts on this dashboard present the data already fetched by the UEBA.
adapter_dashboard_411x180.png

Scaling Limitation Issue

When installed on a Virtual Machine, UEBA can process up to 20 million network events per day. Based on this limitation, you may encounter the following issues.

Issue

How to determine whether the current scale of network events exceeds the UEBA limitation.

Solution

To check the network data volume, perform the following:

  • Run the query on the Broker or Concentrator that connects to UEBA using NetWitness UI:

service=443 && direction='outbound' && analysis.service!='quic' && ip.src exists && ip.dst exists && tcp.srcport!=443

Calculate the total number of events for the selected days (including weekdays with standard workload). If the average is above 20 million events per day, UEBA’s supported scale is exceeded.
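The capacity check above amounts to a simple daily average against the 20 million events/day limit for virtual-machine deployments. The daily counts below are invented sample values:

```python
# Sketch of the capacity check: average the per-day event counts returned
# by the query and compare against the virtual-machine limit stated above.
DAILY_LIMIT = 20_000_000

def exceeds_ueba_scale(daily_event_counts):
    """Return True if the average daily event count exceeds the supported scale."""
    average = sum(daily_event_counts) / len(daily_event_counts)
    return average > DAILY_LIMIT

# Invented counts for five weekdays with standard workload.
week = [18_500_000, 22_000_000, 21_000_000, 19_000_000, 23_500_000]
print(exceeds_ueba_scale(week))  # average is 20.8 million/day -> True
```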

 

Issue

Can UEBA for Packets be used if UEBA's supported scale is exceeded?

Solution

You must create or choose a Broker that is connected to a subset of Concentrators that does not exceed the supported limit.

To check the network data volume, perform the following:

  • Run the query on the Concentrator that connects to UEBA using NetWitness UI:

service=443 && direction='outbound' && analysis.service!='quic' && ip.src exists && ip.dst exists && tcp.srcport!=443

Calculate the total number of events for the selected days (including weekdays with standard workload). If the average is above 20 million events per day, UEBA’s supported scale is exceeded.

Note: The Broker must have access to all the required data, such as logs, endpoint, and network (packets). UEBA packets models are based on the whole environment. Hence, make sure that the data parsed from the subset of Concentrators is consistent.

UEBA Policy Issue

Issue

After you create a rule under a UEBA policy, duplicate values are displayed in the Statistics drop-down.
Solution

To remove the duplicate values, perform the following:

  1. Log in to MongoDB using the following command: mongo admin -u deploy_admin -p {Enter the password}
  2. Run the following command on MongoDB:
    use sms;
    db.getCollection('sms_statdefinition').find({componentId :"presidioairflow"})
    db.getCollection('sms_statdefinition').deleteMany({componentId :"presidioairflow"})
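For illustration, the deleteMany call above removes every statistic definition whose componentId is "presidioairflow". An in-memory sketch of the same filter logic, with invented stand-in documents:

```python
# In-memory illustration of what the deleteMany filter above does: drop every
# document matching {componentId: "presidioairflow"}. The documents are
# invented stand-ins for the sms_statdefinition collection.
def delete_many(collection, component_id):
    """Return (remaining documents, number removed) after applying the filter."""
    kept = [doc for doc in collection if doc.get("componentId") != component_id]
    removed = len(collection) - len(kept)
    return kept, removed

collection = [
    {"componentId": "presidioairflow", "stat": "cpu"},
    {"componentId": "presidioairflow", "stat": "cpu"},  # duplicate entry
    {"componentId": "other-service", "stat": "memory"},
]
kept, removed = delete_many(collection, "presidioairflow")
print(removed)  # 2
```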

Troubleshoot Using Kibana

Issue

After you deploy NetWitness UEBA, the connection between NetWitness and NetWitness UEBA is successful, but there are very few or no events in the Users > OVERVIEW tab.

  1. Log in to Kibana.
  2. Go to Table of Content > Dashboards > Adapter Dashboard.
  3. Adjust the Time Range on the top-right corner of the page and review the following:
    • Whether new events are flowing.
    • In the Saved Events Per Schema graph, see the number of successful events per schema per hour.
    • In the Total Events vs. Success Events graph, see the total number of events and the number of successful events. The number of successful events should increase every hour.

    For example, in an environment with 1000 users or more, there should be thousands of authentication and file access events and more than 10 Active Directory events. If there are very few events, there is likely an issue with Windows auditing.

Solution

You must identify the missing events and reconfigure the Windows auditing.

  1. Go to INVESTIGATE > Navigate.
  2. Filter by device.type = “winevent_snare” or device.type = “winevent_nic”.
  3. Review the events using the reference.id meta key to identify the missing events.
  4. Reconfigure the Windows auditing. For more information, see the NetWitness UEBA Windows Audit Policy topic.

 

Issue

The historical load is complete and events appear in the Adapter dashboard, but no alerts are displayed in the Users > OVERVIEW tab.
Solution
  1. Go to Kibana > Table of Content > Scoring and model cache.
  2. Adjust the Time Range from the top-right corner of the page, and see if the events are scored.

 

Issue

The historical load is complete but no alerts are displayed in the Investigate > Users tab.
Solution
  1. Go to Kibana > Dashboard > Overview.

  2. Adjust the Time Range from the top-right corner of the page, and see how many users are analyzed and if any anomalies are found.

Troubleshoot Using Airflow

Issue

After UEBA starts running, you cannot remove a data source during the run process; otherwise, the process stops.
Solution

You must either allow the process to run until it completes, or remove the required data source from UEBA and rerun the process.

 

Issue

After you deploy UEBA, no events are displayed in Kibana > Table of Content > Adapter dashboard even though Airflow has already processed the hours. This is due to a communication issue.
Solution

You must check the logs and resolve the issue.

  1. Log in to Airflow.
  2. Go to Admin > REST API Plugin.
  3. In the Failed Tasks Logs, click execute.
    A zip file is downloaded.
  4. Unzip the file and open the log file to view and resolve the error.
  5. In the DAGs > reset_presidio, click Trigger Dag.
    This deletes all the data and computes all the alerts from the beginning.

Note: During initial installation, if the hours are processed successfully but there are no events, you must click reset_presidio after fixing the data in the Broker. Do not reset if there are alerts.

Troubleshoot Using Elasticsearch Migration Tool

Issue

If you upgrade the UEBA host to 12.1 or 12.1.x.x without performing an Elasticsearch data backup (using the Elasticsearch migration tool), data such as Users, Entities, Alerts, and Indicators is lost.
Solution

If you did not perform an Elasticsearch data backup and lost the data after upgrading the UEBA host to 12.1 or 12.1.x.x, follow these steps to recover the lost data.

  1. Go to /var/netwitness/ after upgrading the UEBA host to 12.1 or 12.1.x.x.

  2. Copy the backup-elasticsearch-xxxx (mandatory) and backup-elasticsearch-recover-xxxx (if available) directories to /var/netwitness/ in your UEBA lab (on version 12.0 or an older version).

    Note:
    - If you do not have a UEBA lab setup for version 12.0 or older versions, contact NetWitness.
    - NetWitness requires your permission to copy the directories to /var/netwitness/ in NetWitness UEBA lab for recovering the lost data.

  3. Rename the backup-elasticsearch-xxxx directory as elasticsearch by running the following commands:

    cd /var/netwitness

    rm -r elasticsearch

    mv backup-elasticsearch-xxxx elasticsearch

    chown -R elasticsearch:elasticsearch elasticsearch

    systemctl start elasticsearch

  4. Rename the backup-elasticsearch-recover-xxxx directory as elasticsearch-recover by running the following commands:

    IMPORTANT: If the backup-elasticsearch-recover-xxxx directory is not available, do not run the following commands.

    cd /var/netwitness

    rm -r elasticsearch-recover

    mv backup-elasticsearch-recover-xxxx elasticsearch-recover

    chown -R root:elasticsearch elasticsearch-recover

    systemctl start elasticsearch

  5. Restart Elasticsearch 5.5.0 by running the following command:

    systemctl restart elasticsearch.service

  6. Follow the steps provided in the Elasticsearch migration tool guide to export Elasticsearch presidio data. For more information, see the Upgrade Preparation Tasks section in the NetWitness Upgrade guide for 12.1 and NetWitness Upgrade guide for 12.1.1.

    Note: The ueba_es_migration_tool (Elasticsearch migration tool) allows you to migrate presidio Elasticsearch data from Elasticsearch version 5.5.0 to 7.15.2 while upgrading the UEBA host to 12.1 or 12.1.x.x from 12.0 or an older version. The tool contains the elk-migration-script.sh script and the presidio-elk-migration-1.0.0.jar file, and it can be downloaded from https://community.netwitness.com/t5/netwitness-platform-downloads/ueba-elasticsearch-migration-tool/ta-p/687496.

  7. Once the Elasticsearch presidio data export operation is complete, copy the backup directories to your UEBA host.

  8. Import the Elasticsearch presidio data. For more information, see the Post Upgrade Tasks topic in the NetWitness Upgrade guide for 12.1 and the NetWitness Upgrade guide for 12.1.1.