About Thresholds
Thresholds define the conditions under which a validator considers a metric value to be a data quality incident or anomaly. When the validator detects data that breaches the defined threshold, it creates an incident that you can inspect on the validator details page. Optionally, you can define rules that notify you about identified incidents and route the notifications to different channels, such as Slack or webhooks.
When creating a Validator, you must configure a threshold to identify data quality incidents. Validio supports the following types of thresholds:
- Fixed Threshold–Performs comparison operations between the metric and a specified numeric threshold. For example, you can define a fixed threshold to check that no values in the field "Age" are less than zero. For more information, see Configuring Fixed Thresholds.
- Dynamic Threshold–Automatically calculates thresholds for numeric metrics based on statistical methods and lets you adjust the sensitivity, which controls the range of accepted values. For example, you can define a dynamic threshold to track the daily average of sales and detect anomalies. For more information, see Configuring Dynamic Thresholds.
- Difference Threshold–Monitors a metric and alerts when the metric value deviates from a specified absolute value or percentage value for consecutive windows. For example, you can track the mean of a numeric value and alert when the metric value has decreased by X percentage over two consecutive days. For more information, see Configuring Difference Thresholds.
For more information about reviewing and managing validator incidents, see About Validator Details and About Validator Incidents. For information on how to configure rules and channels to receive notifications when validator incidents occur, see About Notifications.
Configuring Fixed Thresholds
Fixed thresholds perform comparison operations between numeric metrics and a specified value. For example, you can define a fixed threshold to check that no values in the field “Age” are less than zero. When the validator detects data matching the conditions, it creates an incident and sends an alert to any configured channels.
The following table lists the parameters for configuring a fixed threshold. Configuration options differ depending on the type of validator.
Parameter name | Parameter value | Validator Type |
---|---|---|
Operator | Equal to, Not equal to, Less than, Less than or equal to, Greater than, Greater than or equal to | All |
Unit | Second, Minute, Hour, Day, Week, Month, Year | Freshness, Relative Time |
Value | Numeric value | All |
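As a rough illustration of how a fixed threshold is evaluated, the sketch below applies a comparison operator between a metric value and the configured threshold and flags an incident when the condition matches, following the description above. This is not the Validio API; the function name and the sample values are hypothetical.

```python
import operator

# Illustrative mapping of the fixed-threshold operators to Python comparisons.
OPERATORS = {
    "EQUAL_TO": operator.eq,
    "NOT_EQUAL_TO": operator.ne,
    "LESS_THAN": operator.lt,
    "LESS_THAN_OR_EQUAL_TO": operator.le,
    "GREATER_THAN": operator.gt,
    "GREATER_THAN_OR_EQUAL_TO": operator.ge,
}

def breaches_fixed_threshold(metric_value: float, op: str, threshold: float) -> bool:
    """Return True when the metric value matches the configured condition (hypothetical helper)."""
    return OPERATORS[op](metric_value, threshold)

# Example: flag an incident if the minimum of "Age" in the current window is less than zero.
min_age = -3.0  # hypothetical metric value for the current window
if breaches_fixed_threshold(min_age, "LESS_THAN", 0.0):
    print("Incident: minimum Age is below zero")
```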
Configuring Dynamic Thresholds
Dynamic thresholds use a combination of smart algorithms to automatically detect anomalies in your data. The threshold model infers trends, seasonality, and peaks, and adapts to shifts in your data. It learns from historical data and is continuously retrained on new data, improving as more data is read.
When applied to a backfilled source, dynamic thresholds can detect anomalies right away, without any training period. This means you get incidents and insights immediately, even if you lack the domain knowledge to define appropriate thresholds yourself. You can also provide input to improve the anomaly detection algorithm. For more information, see Model Retraining.
Dynamic thresholds continuously track your data and automatically update when they detect shifts in seasonality and trends. You can use dynamic thresholds to monitor sources where you expect your data to change over time. For more information, see Seasonality Detection.
The following table lists the parameters for configuring a dynamic threshold. All validator types have the same configuration options.
Parameter name | Parameter value | Validator Type |
---|---|---|
Decision Bounds | Upper and lower, Upper, Lower | All |
Sensitivity (preset) | Wide (1.2), Default (2), Narrow (3.2), Custom (any positive floating-point value) | All |
Decision Bounds Type
The decision bounds type of a dynamic threshold specifies whether the boundaries for anomaly detection are two-sided or one-sided:
- Upper and lower–Detects both upper and lower anomalies.
- Upper–Treats only upward deviations as anomalies. This is the default for freshness validators, for example, because you typically want to be alerted when your data is late rather than when it is too fresh.
- Lower–Treats only downward deviations as anomalies.
Sensitivity
Sensitivity defines the accepted range of values for the dynamic threshold.
- Higher sensitivity (lower threshold)–The accepted range of values is narrower, so the model identifies more data quality incidents or anomalies, leading to more alerts. Higher sensitivity is best suited for your most important tables.
- Lower sensitivity (higher threshold)–The accepted range of values is wider, resulting in fewer incidents and alerts. Lower sensitivity is ideal for less important tables that have historically produced noisy incidents.
Setting the right sensitivity is often an iterative process: you balance false positives and alert fatigue against false negatives and missed real errors. A typical starting value for testing is between 2 and 3. The default sensitivity in Validio is 2.0.
The following table maps the numeric value of Validio sensitivity presets to standard deviations:
Sensitivity Preset Options | Validio Sensitivity Values | Standard Deviations |
---|---|---|
Narrow | 3.2 | 2.5 |
Default | 2.0 | 4 |
Wide | 1.2 | 5.5 |
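As a simplified mental model of how sensitivity and decision bounds interact, the sketch below treats the bounds as a band of a given number of standard deviations around the mean of recent values, using the preset-to-standard-deviation mapping from the table above. This is only a minimal illustration under that assumption; Validio's actual model also infers trends, seasonality, and shifts, and the function and data below are hypothetical.

```python
from statistics import mean, stdev

# Hypothetical mapping from the preset table above: a higher Validio sensitivity
# value corresponds to fewer standard deviations, that is, a tighter band.
PRESET_STD_DEVS = {"Narrow": 2.5, "Default": 4.0, "Wide": 5.5}

def dynamic_bounds(history, preset="Default", bound_type="UPPER_AND_LOWER"):
    """Compute illustrative decision bounds around the historical mean."""
    center = mean(history)
    band = PRESET_STD_DEVS[preset] * stdev(history)
    upper = center + band if bound_type in ("UPPER_AND_LOWER", "UPPER") else float("inf")
    lower = center - band if bound_type in ("UPPER_AND_LOWER", "LOWER") else float("-inf")
    return lower, upper

# Example: daily average sales; flag a value that falls outside the band.
history = [102.0, 98.5, 101.2, 99.8, 100.4, 97.9, 103.1]
lower, upper = dynamic_bounds(history, preset="Narrow")
new_value = 131.0
if not (lower <= new_value <= upper):
    print(f"Incident: {new_value} is outside [{lower:.1f}, {upper:.1f}]")
```

With the Narrow preset the band is only 2.5 standard deviations wide on each side, so the same data point is more likely to be flagged than with the Default or Wide presets.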
Model Retraining
You can help improve anomaly detection on dynamic thresholds by giving feedback on detected incidents. To give feedback, change the triage state of the incident to “False Positive”, which indicates that the incident is not a real anomaly. This feedback is used to retrain the threshold model so that it becomes more precise.
For example, when you resolve an incident (detected at any point in the past) as a False Positive, the threshold model is less likely to wrongly detect future data points as incidents if they appear in a similar context and have a similar value. Retraining the model in this way helps minimize alert fatigue and the number of falsely detected anomalies.
In the figure for the "TOTAL_SALES_AMOUNT" validator, for instance, three data points exceeded the dynamic threshold, resulting in "High" severity alerts. By marking these incidents as “False Positive”, the model learns from these cases and adjusts the bounds so that values in a similar context and of a similar magnitude are less likely to be flagged in future detections.
Each "False Positive" feedback recalibrates threshold bounds based on the current data, reducing sensitivity for similar patterns. However, major shifts in data may need further feedback.
Note
You cannot undo the feedback to the threshold model after setting an incident status to False Positive. However, you can clear the retraining by deleting and recreating the validator or by resetting the source. For more information, see Managing Incidents.
The changes are progressive: as more feedback is provided, the threshold becomes less sensitive to repeated false positives.
Seasonality Detection
Dynamic thresholds can automatically adapt to calendar-related seasonality patterns that appear in your data. You do not have to enable or configure this feature. When there is enough evidence in your data to support a detected pattern, the dynamic threshold adapts and does not trigger an incident that is caused by the seasonality.
- Calendric Seasonality–Seasonal patterns that appear in your data because of the calendar. Calendric seasonality often relates to business processes where work is planned and reviewed in regular weekly, bi-weekly, or monthly cycles, and this behavior is reflected in your data. One example of calendric seasonality is recognizing that a Volume validator returns 0 on all days except the days when the pipeline runs and ingests data.
Metric Support
Dynamic thresholds include functionality to estimate the support of the metric by partitioning the sample space (where data can appear) into negative values, zeros, and positive values. Depending on how frequently values fall into each partition, the metric is assigned an estimated support of positive, non-negative, or unbounded. The estimated support is not static–it can change over time.
Depending on the estimated support, the lower decision bound is adapted:
- If the estimated support is positive, values which are zero or negative are considered incidents.
- If the estimated support is non-negative, negative values are considered incidents.
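The sketch below illustrates the idea in a simplified form. It is not the Validio implementation, which estimates support statistically from how frequently values fall into each partition and can revise the estimate over time; the function names and data are hypothetical.

```python
def estimate_support(values):
    """Classify the metric's support from observed values (illustrative only)."""
    has_negative = any(v < 0 for v in values)
    has_zero = any(v == 0 for v in values)
    if has_negative:
        return "unbounded"      # negative values observed: no support-based lower bound
    if has_zero:
        return "non_negative"   # zeros and positives observed, but no negatives
    return "positive"           # only positive values observed

def lower_bound_incident(value, support):
    """Apply the support-based lower decision bound."""
    if support == "positive":
        return value <= 0       # zero or negative values are incidents
    if support == "non_negative":
        return value < 0        # negative values are incidents
    return False                # unbounded support: no support-based lower bound

history = [12.0, 7.5, 0.0, 3.2]                       # hypothetical metric history
support = estimate_support(history)                   # -> "non_negative"
print(support, lower_bound_incident(-1.0, support))   # -> non_negative True
```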
Configuring Difference Thresholds
Difference thresholds monitor a metric and alert when the difference in the metric value between windows deviates from a specified absolute value or percentage for a number of consecutive windows.
The following table lists the parameters for configuring a difference threshold. Configuration options differ depending on the type of validator.
Parameter Name | Parameter Value | Validator Type |
---|---|---|
Difference Type | Absolute, Percentage | All |
Operator | Decreasing (<=), Increasing (>=), Strictly decreasing (<), Strictly increasing (>) | All |
Unit | Second, Minute, Hour, Day, Week, Month, Year | Freshness, Relative Time |
Value | Numeric value. For Absolute, the value is in the scale of the metric; for Percentage, the value is in percentage points. | All |
Number of Windows | Number of consecutive windows for which the threshold event must happen before an incident is created. | All |
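As an illustrative sketch of the evaluation logic, the code below computes the window-over-window change as either an absolute value or a percentage, and reports an incident once the configured condition has held for the required number of consecutive windows. This is not the Validio implementation; the function names, the reading of the operators, and the sample data are assumptions.

```python
def window_change(previous, current, difference_type):
    """Change between two consecutive window values (illustrative only)."""
    if difference_type == "ABSOLUTE":
        return current - previous
    # PERCENTAGE: change relative to the previous window, in percentage points.
    return (current - previous) / previous * 100.0

def breaches(change, op, value):
    """One plausible reading of the difference-threshold operators."""
    if op == "DECREASING":
        return change <= -value
    if op == "STRICTLY_DECREASING":
        return change < -value
    if op == "INCREASING":
        return change >= value
    return change > value  # STRICTLY_INCREASING

def incident(values, difference_type, op, value, num_windows):
    """True when the condition holds for `num_windows` consecutive window pairs."""
    streak = 0
    for prev, curr in zip(values, values[1:]):
        streak = streak + 1 if breaches(window_change(prev, curr, difference_type), op, value) else 0
        if streak >= num_windows:
            return True
    return False

# Example: alert when the daily mean decreases by at least 10 percent on two consecutive days.
daily_mean = [200.0, 175.0, 150.0, 149.0]
print(incident(daily_mean, "PERCENTAGE", "DECREASING", 10.0, 2))  # -> True
```

In this example the mean drops by 12.5 percent and then by roughly 14.3 percent on two consecutive days, so the condition has held for two consecutive windows and an incident is reported.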