Electrical and Computer Engineering ETDs
Publication Date
Fall 11-1-2019
Abstract
Traditionally, machine learning models are assessed using methods that estimate average performance over samples drawn from a particular distribution. Examples include the use of cross-validation or hold-out sets to estimate classification error, F-score, precision, and recall.
While these measures provide valuable information, they do not tell us how certain a model is in particular regions of the input space. Typically, there are regions where the model can differentiate the classes with certainty, and regions where it is much less certain about its predictions.
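As a minimal sketch of the averaged evaluation described above, the following uses scikit-learn's cross_validate on a synthetic dataset (the dataset and model are placeholders, not those studied in the dissertation); note that each score is a single fold-averaged number that says nothing about where in the input space the errors occur.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Placeholder two-class problem and model for illustration only.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000)

# Each metric is an average over held-out folds; classification error
# is simply 1 - accuracy.
scores = cross_validate(model, X, y, cv=5,
                        scoring=["accuracy", "f1", "precision", "recall"])
for name in ["accuracy", "f1", "precision", "recall"]:
    vals = scores[f"test_{name}"]
    print(f"{name}: {vals.mean():.3f} +/- {vals.std():.3f}")
```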
In this dissertation we explore numerous approaches for quantifying uncertainty in the individual predictions made by supervised machine learning models. We develop an uncertainty measure we call minimum prediction deviation, which can be used to assess the quality of the individual predictions made by supervised two-class classifiers. We show how minimum prediction deviation can be used to differentiate between the samples that the model predicts credibly and the samples for which further analysis is required.
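Minimum prediction deviation itself is defined in the dissertation; as a generic stand-in, the sketch below illustrates the kind of per-sample triage the abstract describes by scoring each test sample with the disagreement of a bootstrap ensemble of classifiers, where low-spread samples are treated as credible and high-spread samples are flagged for further analysis. The threshold and model are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# Placeholder data split for illustration only.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, y_train = X[:800], y[:800]
X_test = X[800:]

rng = np.random.RandomState(0)
probs = []
for _ in range(50):
    # Refit the model on a bootstrap resample of the training data.
    Xb, yb = resample(X_train, y_train, random_state=rng)
    clf = LogisticRegression(max_iter=1000).fit(Xb, yb)
    probs.append(clf.predict_proba(X_test)[:, 1])
probs = np.vstack(probs)  # shape: (n_models, n_test_samples)

# Per-sample uncertainty: spread of the ensemble's class-1 probabilities.
# This is a generic ensemble-disagreement score, not the dissertation's
# minimum prediction deviation measure.
uncertainty = probs.std(axis=0)
credible = uncertainty < 0.05  # arbitrary threshold for this sketch
print(f"{credible.sum()} credible predictions, "
      f"{(~credible).sum()} flagged for further analysis")
```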
Keywords
machine learning, uncertainty quantification
Sponsors
Sandia National Laboratories LDRD Program
Document Type
Dissertation
Language
English
Degree Name
Computer Engineering
Level of Degree
Doctoral
Department Name
Electrical and Computer Engineering
First Committee Member (Chair)
Don Hush
Second Committee Member
Ramiro Jordan
Third Committee Member
Trilce Estrada
Fourth Committee Member
Chouki Abdallah
Fifth Committee Member
David Stracuzzi
Recommended Citation
Darling, Michael C. "Using Uncertainty To Interpret Supervised Machine Learning Predictions." (2019). https://digitalrepository.unm.edu/ece_etds/485