Traditionally, machine learning models are assessed using methods that estimate average performance on samples drawn from a particular distribution. Examples include using cross-validation or hold-out estimation to measure classification error, F-score, precision, and recall.
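As an illustration of the kind of aggregate assessment described above, the following sketch estimates classification error by k-fold cross-validation. It is a minimal, self-contained example; the toy data, the `fit`/`predict` callables, and the simple threshold classifier are assumptions for demonstration, not part of the dissertation.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_error(xs, ys, fit, predict, k=5):
    """Average held-out classification error over k folds."""
    folds = k_fold_indices(len(xs), k)
    errors = []
    for fold in folds:
        test = set(fold)
        train_x = [x for i, x in enumerate(xs) if i not in test]
        train_y = [y for i, y in enumerate(ys) if i not in test]
        model = fit(train_x, train_y)
        wrong = sum(predict(model, xs[i]) != ys[i] for i in fold)
        errors.append(wrong / len(fold))
    return sum(errors) / k

# Toy 1-D, two-class data: label 1 when x > 0.5.
xs = [i / 20 for i in range(20)]
ys = [1 if x > 0.5 else 0 for x in xs]
# Hypothetical classifier: use the training mean as the decision threshold.
fit = lambda tx, ty: sum(tx) / len(tx)
predict = lambda thr, x: 1 if x > thr else 0
print(cross_val_error(xs, ys, fit, predict))  # small error on this nearly separable set
```

Note that the single number this returns says nothing about which individual samples the model handles well, which is exactly the gap the dissertation addresses.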
While these measures provide valuable information, they do not tell us a model's certainty relative to particular regions of the input space. Typically there are regions where the model can differentiate the classes with certainty, and regions where the model is much less certain about its predictions.
In this dissertation we explore several approaches for quantifying uncertainty in the individual predictions made by supervised machine learning models. We develop an uncertainty measure we call minimum prediction deviation, which can be used to assess the quality of the individual predictions made by supervised two-class classifiers. We show how minimum prediction deviation can be used to differentiate between the samples that a model predicts credibly and the samples for which further analysis is required.
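The triage idea described above can be sketched generically: score each sample's uncertainty and route high-uncertainty samples to further analysis. The margin-based score below is a hypothetical stand-in chosen for illustration; minimum prediction deviation itself is defined in the dissertation, and the threshold value here is an assumption.

```python
def margin_uncertainty(p_class1):
    """Generic per-sample uncertainty for a two-class classifier:
    1.0 when the predicted probability is 0.5, 0.0 when it is 0 or 1.
    A stand-in only, not the dissertation's minimum prediction deviation."""
    return 1.0 - abs(2.0 * p_class1 - 1.0)

def triage(probs, threshold=0.4):
    """Split sample indices into credibly predicted vs. needing review."""
    credible, review = [], []
    for i, p in enumerate(probs):
        (review if margin_uncertainty(p) > threshold else credible).append(i)
    return credible, review

probs = [0.02, 0.97, 0.55, 0.48, 0.88]  # hypothetical class-1 probabilities
credible, review = triage(probs)
print(credible, review)  # [0, 1, 4] [2, 3]
```

Samples whose probability sits near 0.5 land in the review set; confident predictions near 0 or 1 are accepted as-is.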
machine learning, uncertainty quantification
Sandia National Laboratories LDRD Program
Electrical and Computer Engineering
Darling, Michael C. "Using Uncertainty To Interpret Supervised Machine Learning Predictions." (2019). https://digitalrepository.unm.edu/ece_etds/485