Computer Science ETDs

Publication Date

Fall 11-15-2018


Abstract

Though computational models typically assume that every program step executes flawlessly, that does not imply every step is equally important when a failure does occur. In the "Constrained Reliability Allocation" problem, sufficient resources are guaranteed for operations whose failures prompt eventual program termination, while operations whose failures only corrupt the output share a limited budget of some vital resource, insufficient to ensure correct operation for all of them.

In this dissertation, I present a novel representation of failures based on their timing and location, combined with criticality assessments---a method used to predict the behavior of systems operating outside their design criteria. I observe that strictly correct error measures hide interesting failure relationships, that failure importance is often determined by failure timing, and that recursion plays an important role in structuring output error. Building on these observations, I improve the output error of two matrix multiplication methods through an economization procedure that moves failures from worse to better locations, providing a possible solution to the constrained reliability allocation problem. I show a 38% to 63% decrease in absolute value error on matrix multiplication algorithms, despite nearly identical failure counts between control and experimental studies. Finally, I show that efficient sorting algorithms are less robust at large scale than less efficient sorting algorithms.
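The idea that failure location and timing, rather than failure count alone, determine output error can be illustrated with a toy fault-injection sketch. This is not the dissertation's actual procedure; the matrices, the fault magnitude, and the two injection points are illustrative assumptions. A single perturbation applied to an input entry before multiplication fans out across an entire row of the product, while the same perturbation applied to a finished output entry stays local:

```python
import copy

def matmul(A, B):
    """Naive matrix multiplication of A (n x p) and B (p x m)."""
    n, p, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for k in range(p):
            a = A[i][k]
            for j in range(m):
                C[i][j] += a * B[k][j]
    return C

def abs_error(C, R):
    """Sum of absolute element-wise differences between C and reference R."""
    return sum(abs(c - r) for cr, rr in zip(C, R) for c, r in zip(cr, rr))

# Illustrative 4x4 inputs: A is all ones, B is all twos.
A = [[1.0] * 4 for _ in range(4)]
B = [[2.0] * 4 for _ in range(4)]
ref = matmul(A, B)  # fault-free reference product

# "Early" fault: corrupt one input entry by 5.0 before multiplying.
# The corruption multiplies through and spreads across a whole output row.
A_bad = copy.deepcopy(A)
A_bad[0][0] += 5.0
early = abs_error(matmul(A_bad, B), ref)

# "Late" fault: corrupt one entry of the finished product by the same 5.0.
# The corruption stays confined to a single output element.
C_bad = copy.deepcopy(ref)
C_bad[0][0] += 5.0
late = abs_error(C_bad, ref)

print(early, late)  # identical fault magnitude, very different output error
```

One failure in each run, yet the early-injected fault produces eight times the absolute error of the late one here; moving failures from worse to better locations, as the economization procedure does, exploits exactly this kind of asymmetry.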




Keywords

Criticality, Failure Interface, Failure Shaping, Fault Tolerance, Quantified Correctness, Scalable Robustness

Document Type

Dissertation

Degree Name

Computer Science

Level of Degree

Doctoral

Department Name

Department of Computer Science

First Committee Member (Chair)

David H. Ackley

Second Committee Member

Lance R. Williams

Third Committee Member

Thomas P. Caudell

Fourth Committee Member

George F. Luger

Fifth Committee Member

Darko Stefanovic