Computer Science ETDs

Publication Date

Summer 5-7-2021

Abstract

Question answering systems are models that can perform natural language processing (NLP) on a question, retrieve an answer from a data source, and communicate it to a user. It is important for such systems to learn an underlying representation of a piece of text. Many systems have achieved remarkable accuracy on question answering datasets such as the Stanford Question Answering Dataset (SQuAD), but they often encode their knowledge in a manner that is impossible to verify. Many current models would benefit more from verifiability than from marginal accuracy improvements.

We propose a method for learning representations of a piece of text in a manner that is human-auditable. The model accomplishes this by leveraging modern transformer neural networks and a unique dataset, yielding a model that is both accurate and interpretable.
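For context only, the sketch below shows the kind of transformer-based extractive question answering the abstract refers to; it is not the model developed in this thesis. It assumes the Hugging Face transformers library, and the model name, question, and context strings are illustrative placeholders.

    # Illustrative only: extractive QA with a transformer fine-tuned on SQuAD.
    from transformers import pipeline

    # Load a question-answering pipeline; the model name is an assumption,
    # not the model proposed in this thesis.
    qa = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

    context = ("The Stanford Question Answering Dataset (SQuAD) is a reading "
               "comprehension dataset built from Wikipedia articles.")
    question = "What is SQuAD built from?"

    # The pipeline extracts an answer span from the context and returns it
    # with a confidence score and character offsets.
    result = qa(question=question, context=context)
    print(result["answer"], result["score"])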

Language

English

Keywords

natural language processing, transformers, NLP, intermediate representations

Document Type

Thesis

Degree Name

Computer Science

Level of Degree

Masters

Department Name

Department of Computer Science

First Committee Member (Chair)

Lydia Tapia

Second Committee Member

George Luger

Third Committee Member

Leah Buechley
