Electrical and Computer Engineering ETDs

Publication Date

Spring 5-14-2022


Abstract

Four microcontrollers were programmed to execute a simple counting program. Pulsed RF signals – also known as Intentional ElectroMagnetic Interference (IEMI) – were injected into the clock input of the microcontrollers. At the same time, the output lines were monitored to determine whether the IEMI signal altered the output of the counting program – referred to as an upset. A state-of-the-art automated testing apparatus was used to collect and process 120,960 samples of IEMI upset data. The data were used to perform a traditional upset trends study and to train a series of machine learning (ML) techniques – k-Nearest Neighbors (k-NN), Support Vector Machines, and Decision Trees – to predict IEMI upset using information about the IEMI waveform and injection time. Comparisons of the traditional and classifier-based trends showed that same-architecture devices shared remarkably similar trends, while the different-architecture device showed similar trends that were offset in a way suggesting higher resistance to IEMI upset. The Weighted k-NN technique was identified as the best overall method, having the highest prediction accuracy and second-lowest training time compared to multiple variations of Support Vector Machines, Decision Trees, and k-NN algorithms. Ten features derived from the IEMI waveform characteristics, such as frequency, power, and pulse width, were used to train a Weighted k-NN classifier. MATLAB provided the means to train, validate, and export 1023 different classifiers using the ten features in all possible combinations, such that the relative importance of each feature and the best feature combinations could be determined. Key results include:

1) Classifiers trained with data from a single microcontroller can make reasonably accurate (P > 85%) predictions when validated against the other devices' datasets, even when the microcontrollers have different architectures.
2) The optimal training set used data from all four devices, resulting in an average accuracy of P = 91.58%.
3) Using data from only MCU1 and MCU2 – which are different instances of the same device – resulted in a median accuracy of 91.27% across all four devices.
4) There was a ~2% decrease in prediction accuracy when only 10% of the entire dataset (randomly chosen) was used as training input, and a ~10% decrease when using only 1% (randomly chosen) of the data.

These results suggest that meaningful upset predictions across multiple architectures can be made using k-NN classifiers, even with sparse datasets.
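The two core ideas in the abstract – a distance-weighted k-NN vote and the exhaustive enumeration of all 2^10 − 1 = 1023 non-empty feature subsets – can be sketched generically in Python. This is an illustrative formulation, not the thesis's MATLAB implementation, and the feature names below are hypothetical placeholders for the ten IEMI waveform characteristics:

```python
import itertools
import math

# Hypothetical names standing in for the ten IEMI waveform features
# (the abstract names frequency, power, and pulse width explicitly).
FEATURES = ["frequency", "power", "pulse_width", "rise_time", "fall_time",
            "duty_cycle", "injection_time", "pulse_count", "prf", "energy"]

# All non-empty feature subsets: 2**10 - 1 = 1023 classifier variants.
subsets = [c for r in range(1, len(FEATURES) + 1)
           for c in itertools.combinations(FEATURES, r)]
assert len(subsets) == 1023

def weighted_knn_predict(train, labels, query, k=3):
    """Distance-weighted k-NN vote: each of the k nearest training
    points votes for its label with weight 1/distance, so closer
    neighbors count more (the standard 'weighted k-NN' scheme)."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = {}
    for d, y in dists[:k]:
        votes[y] = votes.get(y, 0.0) + 1.0 / (d + 1e-9)
    return max(votes, key=votes.get)
```

For example, a query point lying close to an "upset" training sample is classified as an upset even when "no_upset" samples outnumber it among the k neighbors, because the inverse-distance weighting dominates the vote.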


Keywords

Upset, IEMI, Effects, Microcontrollers, k-Nearest Neighbors, Machine Learning

Document Type




Degree Name

Electrical Engineering

Level of Degree


Department Name

Electrical and Computer Engineering

First Committee Member (Chair)

Christos Christodoulou

Second Committee Member

Edl Schamiloglu

Third Committee Member

Nathan Jackson

Third Advisor

Sameer Hemmady

Fourth Committee Member

Timothy Clarke

Fifth Committee Member

F. Mark Lehr