Category: Seminars and Conferences
Status: Archived
1 March 2021 – 5:30 pm ONLINE

Quantifying the confidence of anomaly detectors in their example-wise predictions


Anomaly detection focuses on identifying examples in the data that somehow deviate from what is expected or typical. Algorithms for this task usually assign a score to each example that represents how anomalous the example is.
Then, a threshold on the scores turns them into concrete predictions.
However, each algorithm assigns its scores in a different way, which makes the scores difficult to interpret and can quickly erode a user's trust in the predictions.
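As a minimal sketch of this generic score-then-threshold pipeline (not the speaker's method), consider a toy distance-based detector where the contamination rate and all names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 2))   # hypothetical training data
X_test = np.array([[0.1, -0.2], [4.0, 4.5]])    # one typical point, one outlier

def anomaly_scores(X, reference):
    """Toy scorer: distance from the training mean (higher = more anomalous)."""
    center = reference.mean(axis=0)
    return np.linalg.norm(X - center, axis=1)

train_scores = anomaly_scores(X_train, X_train)
contamination = 0.05                             # assumed expected anomaly rate
threshold = np.quantile(train_scores, 1.0 - contamination)

test_scores = anomaly_scores(X_test, X_train)
predictions = test_scores > threshold            # True = predicted anomaly
print(test_scores, threshold, predictions)
```

Note that the threshold here is simply a quantile of the training scores fixed by the assumed contamination rate; any real detector could be substituted for the toy scorer without changing the pipeline.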
To overcome this limitation, we introduce an approach for assessing the reliability of any anomaly detector’s example-wise predictions.
To do so, we propose a Bayesian approach for converting anomaly scores into probability estimates. This enables the anomaly detector to assign each prediction a confidence score that captures its uncertainty in that prediction. Specifically, the confidence measures the probability that an example's predicted class would change if a different training set were observed. We theoretically analyze the convergence behaviour of our confidence estimate.
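One simple way to instantiate this idea is sketched below; it is an illustrative assumption, not necessarily the talk's exact construction. The unknown CDF value of a score under the training distribution is treated as Beta-distributed (uniform prior), and the confidence of a prediction is the posterior probability that the predicted class would survive a resampled training set:

```python
import numpy as np
from scipy.stats import beta

def example_wise_confidence(score, train_scores, contamination=0.05):
    """Sketch: Bayesian score-to-probability conversion and confidence.

    With k of n training scores at or below `score`, the posterior over the
    CDF value F(score) is Beta(1 + k, 1 + n - k). The example is predicted
    anomalous when its estimated tail mass exceeds the contamination-based
    cutoff; the confidence is the posterior probability of the predicted
    class, so 1 - confidence is the chance the prediction would flip.
    """
    n = len(train_scores)
    k = int(np.sum(train_scores <= score))
    cutoff = 1.0 - contamination
    # Posterior probability that F(score) exceeds the cutoff quantile.
    p_anomaly = beta.sf(cutoff, 1 + k, 1 + n - k)
    predicted_anomaly = p_anomaly >= 0.5
    confidence = p_anomaly if predicted_anomaly else 1.0 - p_anomaly
    return predicted_anomaly, confidence

rng = np.random.default_rng(1)
train_scores = rng.exponential(1.0, size=1000)   # hypothetical score sample
for s in (0.5, 3.0, 8.0):
    pred, conf = example_wise_confidence(s, train_scores)
    print(f"score={s:4.1f} -> anomaly={pred}, confidence={conf:.3f}")
```

Scores far from the decision boundary yield confidences near 1, while scores close to the cutoff quantile yield confidences near 0.5, matching the intuition that borderline predictions are the least reliable.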
Empirically, we demonstrate the effectiveness of the framework in quantifying a detector’s confidence in its predictions on a large benchmark of datasets.

Speaker: Lorenzo Perini
Location: Microsoft Teams – click here to join