Computer Science ETDs

Publication Date

Spring 5-8-2018

Abstract

When reviewing the performance of Intelligent Virtual Assistants (IVAs), it is desirable to prioritize conversations involving misunderstood human inputs. These conversations uncover errors in natural language understanding and help prioritize and expedite improvements to the IVA. As human reviewer time is valuable and manual analysis is time consuming, prioritizing the conversations where misunderstanding has likely occurred reduces costs and speeds improvement. A system for measuring the post-hoc risk of missed intent associated with a single human input is presented. Numerous indicators of risk are explored and implemented. These indicators are combined using various means and evaluated on real-world data. In addition, the system's ability to adapt to different domains of language is explored. Finally, the system's performance in identifying errors in IVA understanding is compared to that of human reviewers, and multiple aspects of system deployment for commercial use are discussed.

Language

English

Keywords

Intelligent Virtual Assistants, Natural Language Understanding, Natural Language Processing, Human-Computer Interfaces

Document Type

Dissertation

Degree Name

Computer Science

Level of Degree

Doctoral

Department Name

Department of Computer Science

First Committee Member (Chair)

Abdullah Mueen

Second Committee Member

George Luger

Third Committee Member

Lance Williams

Fourth Committee Member

Paul De Palma

Fifth Committee Member

Charles Wooters
