This section provides information about possible issues when using NetWitness UEBA.
Task Failure Issues in Airflow
Problem | The userId_output_entities task fails when the username contains a backslash. |
Cause | When events with usernames containing a backslash character are passed through UEBA, the userId_output_entities task fails. |
Solution | To resolve this issue, contact Customer Success to obtain the relevant files, and then execute the following steps: |
Problem | The AUTHENTICATION_userId_build_feature_historical_data task fails when the username contains a hashtag. |
Cause | When events with usernames containing a hashtag character are passed through UEBA, the AUTHENTICATION_userId_build_feature_historical_data task fails. |
Solution | To resolve this issue, contact Customer Success to obtain the relevant files, and then execute the following steps: |
Problem | The output_forwarding_task task fails in the Airflow UI for the userId_hourly_ueba_flow DAG due to an Elasticsearch 'too many clauses' exception. |
Cause | The failure is caused by an Elasticsearch exception with the following message: "caused_by":{"type":"too_many_clauses","reason":"maxClauseCount is set to 1024"}. The too_many_clauses error occurs when the number of clauses in an Elasticsearch query exceeds the maximum limit set by the system. In this case, the maximum number of clauses is set to 1024, and the output_forwarding_task exceeded this limit, which caused the failure. |
Solution | To increase the max clause count value, execute the following steps: |
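A minimal sketch of the change, assuming Elasticsearch runs locally on the UEBA host with its configuration at /etc/elasticsearch/elasticsearch.yml and that a limit of 4096 is sufficient for your environment; adjust the path and value to your deployment.

    # Append a higher clause limit to the Elasticsearch configuration (assumed path).
    echo 'indices.query.bool.max_clause_count: 4096' >> /etc/elasticsearch/elasticsearch.yml
    # Restart Elasticsearch so the new limit takes effect, then clear the failed task in Airflow.
    systemctl restart elasticsearch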
Problem | The TLS model is taking too long to complete tasks. |
Cause | The TLS_raw_model_task and TLS_aggr_model tasks take more than 20 hours to process the data. |
Solution | To improve the processing time, enable the configuration in the application.properties file by executing the following steps. Note: This is only applicable to the 12.3 version. |
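A minimal sketch of the general pattern only; the file path, property name, and service name below are placeholders, so use the exact values provided in the steps for your 12.3 deployment.

    # All names below are placeholders shown only to illustrate the pattern.
    vi /path/to/application.properties          # hypothetical location of the UEBA properties file
    # Add or uncomment the property that enables the improved TLS processing, for example:
    #   tls.improved.processing.enabled=true    # placeholder property name
    systemctl restart airflow-scheduler         # assumed scheduler service name, so the change is picked up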
Problem | Out of Memory exception in Model DAG. |
Cause | When you encounter an out of memory exception in any model DAG, it may be caused by I/O operations with the database running in a single thread, or by insufficient memory allocated to the respective operator of the task. |
Solution | To prevent out of memory exceptions in any of the model DAGs, add or modify the configuration in the application.properties file. Follow these steps to make the required changes. Note: This is only applicable to the 12.3.1 version. |
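Before changing the configuration, you can confirm which task hit the exception. A minimal check, assuming the Airflow task logs are stored under /var/netwitness/presidio/airflow/logs on the UEBA host; adjust the path to your deployment.

    # List the model DAG task logs that contain out-of-memory errors to identify the affected operator.
    grep -ril "OutOfMemoryError" /var/netwitness/presidio/airflow/logs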
Problem | Invalid username and password errors are displayed in the Adapter DAGs after upgrading UEBA to the 12.3.1 version. |
Cause | After upgrading UEBA to version 12.3.1, UEBA is unable to retrieve data from the Broker because the Broker password contains special characters such as (@/! ), which prevent it from being parsed. |
Solution | Complete the following steps to reset the password of the Broker service. Note: NetWitness recommends that you use only alpha-numeric characters and no special characters for the Broker service password. |
Problem | Task failure in the root DAG due to an Airflow issue. |
Cause | In the root DAG, one of the tasks fails unexpectedly due to an existing issue with the Airflow system. |
Solution | Whenever this issue occurs, examine the logs of the specific failed task and verify whether the following log entry is present: dagrun.py:465} INFO - (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "task_instance_pkey". In such situations, you can safely ignore this issue as it does not require further action. This issue does not impact the functionality of UEBA. |
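A minimal check, assuming the Airflow task logs are stored under /var/netwitness/presidio/airflow/logs on the UEBA host; adjust the path to your deployment.

    # Search the failed task's logs for the benign duplicate-key entry.
    grep -r 'duplicate key value violates unique constraint "task_instance_pkey"' /var/netwitness/presidio/airflow/logs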
MongoDB I/O Operations Slowness Issue
Problem | Increased execution time for DAGs with MongoDB I/O operations. |
Cause | Some of the DAGs in the system experience increased execution time due to slow MongoDB Input/Output (I/O) operations. |
Solution | To increase the MongoDB cache size in the MongoDB configuration file, execute the following steps: |
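A minimal sketch, assuming MongoDB on the UEBA host reads its configuration from /etc/mongod.conf and that a 4 GB WiredTiger cache suits the memory available on the appliance; choose a value appropriate for your deployment.

    # In /etc/mongod.conf, set the WiredTiger cache size (value in GB), for example:
    #   storage:
    #     wiredTiger:
    #       engineConfig:
    #         cacheSizeGB: 4
    vi /etc/mongod.conf
    # Restart MongoDB so the new cache size takes effect.
    systemctl restart mongod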
User Interface Inaccessible
Problem | The User Interface is not accessible. |
Cause | You have more than one NetWitness UEBA service in your NetWitness deployment, and you can have only one NetWitness UEBA service in your deployment. |
Solution | Complete the following steps to remove the extra NetWitness UEBA service. Note: Run the following command to update the NW Server to restore NGINX: |
Get UEBA Configuration Parameters
Issue | How to get the UEBA configuration parameters? |
Explanation | To get the main UEBA configuration parameters, run the curl http://localhost:8888/application-default.properties command from the UEBA machine, as shown in the example after this table. |
Resolution | See the resolution for these statistics in the Troubleshooting UEBA Configurations section. |
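A minimal example, assuming the UEBA configuration server listens on localhost:8888 as in the command above; the grep filter is only an illustration of how to narrow the output to the parameters you need.

    # Fetch all default configuration parameters from the local configuration server.
    curl -s http://localhost:8888/application-default.properties
    # Filter for a specific area of interest, for example anything containing "elasticsearch" (illustrative filter).
    curl -s http://localhost:8888/application-default.properties | grep -i elasticsearch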
Check UEBA Progress Status Using Airflow
Issue | How to check UEBA progress status using Airflow? |
Explanation | Note: To access the Airflow UI, you must use the deploy_admin credentials. The Execution Date column shows the current time window for each running task. Make sure the execution date is later than the UEBA start date and that new tasks with an updated date are added to the table. A command-line alternative is sketched after this table. |
Resolution | |
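If the Airflow UI is not convenient, you can check progress from the UEBA host. A minimal sketch, assuming the Airflow CLI is available on the path and uses Airflow 2.x command syntax.

    # List recent runs of the hourly flow DAG together with their execution dates and states.
    airflow dags list-runs -d userId_hourly_ueba_flow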
Check if Data is Received on the UEBA Using Kibana
Issue | How to check if data is received on the UEBA using Kibana? |
Explanation | Note: To access the Kibana UI, you must use the deploy_admin credentials. Navigate to https://<UEBA-host-name>/kibana and enter the admin username and the deploy_admin password. To check that the data is flowing to the UEBA, go to the Adapter Dashboard: click the Dashboard tab in the left menu, click Adapter Dashboard in the right menu, and select the relevant time range in the top bar. The charts on this dashboard present the data that has already been fetched by the UEBA. |
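As an additional check from the command line, a minimal sketch assuming Elasticsearch listens on localhost:9200 on the UEBA host and that the UEBA (presidio) indices carry a presidio prefix; adjust to your deployment.

    # List UEBA-related indices and their document counts; growing counts indicate data is arriving.
    curl -s 'http://localhost:9200/_cat/indices?v' | grep -i presidio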
Scaling Limitation Issue
When installed on a Virtual Machine, you can determine the number of network events to be processed by referring to the latest version of the Learning Period Per Scale topic.
Note: If the scaling limits are exceeded, NetWitness recommends provisioning the UEBA on a physical appliance.
Issue | How to determine the scale of network events currently available, to know if it exceeds the UEBA limitation? |
Solution | To know the network data limit, perform the following: Run the query service=443 && direction='outbound' && analysis.service!='quic' && ip.src exists && ip.dst exists && tcp.srcport!=443 and calculate the total number of events for the selected days (including weekdays with standard workload). To determine the number of network events to be processed on a virtual machine for your environment, always refer to the latest version of the Learning Period for Scale topic. |
Issue | Can UEBA for Packets be used if UEBA's supported scale is exceeded? |
Solution | You must create or choose a Broker that is connected to a subset of Concentrators that does not exceed the supported limit. To know the network data limit, perform the following: Run the query service=443 && direction='outbound' && analysis.service!='quic' && ip.src exists && ip.dst exists && tcp.srcport!=443 and calculate the total number of events for the selected days (including weekdays with standard workload). If the average is above 20 million events per day, UEBA's supported scale is exceeded; see the sketch after the note below. |
Note: The Broker must query all the available and needed data, such as logs, endpoint, and network (packets). UEBA packets models are based on the whole environment. Hence, make sure that the data parsed from the subset of Concentrators is consistent.
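A minimal sketch of the averaging step, using placeholder daily counts purely for illustration; substitute the event counts returned by the query for your own representative days.

    # Placeholder daily event counts for 7 representative days (replace with your values).
    DAILY_COUNTS="18500000 19200000 21000000 17800000 20400000 16900000 19600000"
    TOTAL=0
    for COUNT in $DAILY_COUNTS; do TOTAL=$((TOTAL + COUNT)); done
    AVERAGE=$((TOTAL / 7))
    echo "Average events per day: $AVERAGE"
    # Above 20 million events per day, UEBA's supported scale is exceeded.
    [ "$AVERAGE" -gt 20000000 ] && echo "UEBA supported scale is exceeded"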
UEBA Policy Issue
Issue | After you create a rule under UEBA policy, duplicate values are displayed in the Statistics drop-down. |
Solution | To remove the duplicate values, perform the following: |
Troubleshoot Using Kibana
Issue | After you deploy NetWitness UEBA, the connection between NetWitness and NetWitness UEBA is successful, but there are very few or no events in the Users > OVERVIEW tab. |
Solution | You must identify the missing events and reconfigure the Windows auditing. |
Issue | The historical load is complete and the events are coming from the Adapter dashboard, but no alerts are displayed in the Users > OVERVIEW tab. |
Solution | |
Issue | The historical load is complete but no alerts are displayed in the Investigate > Users tab. |
Solution | |
Troubleshoot Using Airflow
Issue | After you start running UEBA, you cannot remove a data source during the run process; otherwise, the process stops. |
Solution | You must either continue the process until it completes, or remove the required data source from UEBA and rerun the process. |
Issue | After you deploy UEBA, no events are displayed in the Kibana > Table of Content > Adapter dashboard even though Airflow has already processed the hours. This is due to a communication issue. |
Solution | You must check the logs and resolve the issue. Note: During initial installation, if the hours are processed successfully but there are no events, you must click reset_presidio after fixing the data in the Broker. Do not reset if there are alerts. |