Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant in the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, most of them based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations. These include low accuracy, high execution times, and low scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equations for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach for solving PDEs using quantum computing for engineering and scientific applications.
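To make the finite-difference starting point concrete, here is a minimal classical sketch (not the defended algorithm itself) of the kind of linear system an FDM discretization produces for the 2-D Poisson equation; the grid size and source term are illustrative assumptions.

# Minimal sketch: 2-D Poisson equation -laplacian(u) = f discretized by
# second-order finite differences on the unit square (illustrative only).
import numpy as np

def laplacian_2d(n):
    """Discrete 2-D Laplacian on an n x n interior grid via Kronecker sums."""
    # 1-D second-difference matrix with Dirichlet boundaries
    T = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    I = np.eye(n)
    return np.kron(I, T) + np.kron(T, I)

n = 4                           # 4 x 4 interior grid -> 16 x 16 linear system
h = 1.0 / (n + 1)               # grid spacing
L = laplacian_2d(n) / h**2
f = np.ones(n * n)              # constant source term (assumed)
u = np.linalg.solve(-L, f)      # classical reference solution
print(u.reshape(n, n))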


Prashanthi Mallojula

On the Security of Mobile and Auto Companion Apps

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Bo Luo, Chair
Alex Bardas
Fengjun Li
Hongyang Sun
Huazhen Fang

Abstract

The rapid development of mobile apps on modern smartphone platforms has raised critical concerns regarding user data privacy and the security of app-to-device communications, particularly with companion apps that interface with external IoT or cyber-physical systems (CPS). In this dissertation, we investigate two major aspects of mobile app security: the misuse of permission mechanisms and the security of app-to-device communication in automotive companion apps.

Mobile apps seek user consent for accessing sensitive information such as location and personal data. However, users often blindly accept these permission requests, allowing apps to abuse this mechanism. As long as a permission is requested, state-of-the-art security mechanisms typically treat it as legitimate. This raises a critical question: Are these permission requests always valid? To explore this, we validate permission requests using statistical analysis on permission sets extracted from groups of functionally similar apps. We identify mobile apps with abusive permission access and quantify the risk of information leakage posed by each app. Through a large-scale statistical analysis of permission sets from over 200,000 Android apps, our findings reveal that approximately 10% of the apps exhibit highly risky permission usage. 
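As a rough illustration of the statistical idea, the sketch below flags permissions that are rare within a group of functionally similar apps; the rarity threshold and the toy data are illustrative assumptions, not the dissertation's actual parameters.

# Sketch: flag permissions requested by only a small fraction of a group
# of functionally similar apps (threshold and data are made up).
from collections import Counter

def flag_unusual_permissions(group, rarity_threshold=0.4):
    counts = Counter(p for perms in group.values() for p in set(perms))
    n = len(group)
    flagged = {}
    for app, perms in group.items():
        rare = [p for p in set(perms) if counts[p] / n < rarity_threshold]
        if rare:
            flagged[app] = rare
    return flagged

# Toy group of functionally similar "flashlight" apps (made-up data)
group = {
    "flashlight_a": {"android.permission.CAMERA"},
    "flashlight_b": {"android.permission.CAMERA"},
    "flashlight_c": {"android.permission.CAMERA",
                     "android.permission.READ_CONTACTS"},  # outlier request
}
print(flag_unusual_permissions(group))  # flags READ_CONTACTS in flashlight_c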

Next, we present a comprehensive study of automotive companion apps, a rapidly growing yet underexplored category of mobile apps. These apps are used for vehicle diagnostics, telemetry, and remote control, and they often interface with in-vehicle networks via OBD-II dongles, exposing users to significant privacy and security risks. Using a hybrid methodology that combines static code analysis, dynamic runtime inspection, and network traffic monitoring, we analyze 154 publicly available Android automotive apps. Our findings uncover a broad range of critical vulnerabilities. Over 74% of the analyzed apps exhibit vulnerabilities that could lead to private information leakage, property theft, or even real-time safety risks while driving. Specifically, 18 apps were found to connect to open OBD-II dongles without requiring any authentication, accept arbitrary CAN bus commands from potentially malicious users, and transmit those commands to the vehicle without validation. 16 apps were found to store driving logs in external storage, enabling attackers to reconstruct trip histories and driving patterns. We demonstrate several real-world attack scenarios that illustrate how insecure data storage and communication practices can compromise user privacy and vehicular safety. Finally, we discuss mitigation strategies and detail the responsible disclosure process undertaken with the affected developers.
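To illustrate the class of weakness described (not any specific app's code), the sketch below shows how an ELM327-style Wi-Fi OBD-II dongle that requires no authentication can be driven over a bare TCP socket. The address and port are common dongle defaults assumed for illustration; such commands should only ever be sent to hardware you own.

# Sketch: unauthenticated access to an ELM327-style Wi-Fi OBD-II dongle.
# The address/port are assumed defaults; this is illustrative only.
import socket

DONGLE_ADDR = ("192.168.0.10", 35000)   # common default, assumed here

def send_cmd(sock, cmd):
    sock.sendall((cmd + "\r").encode("ascii"))
    return sock.recv(1024).decode("ascii", errors="replace")

with socket.create_connection(DONGLE_ADDR, timeout=5) as s:
    print(send_cmd(s, "ATZ"))      # reset: the dongle answers with no login
    print(send_cmd(s, "ATI"))      # identify the adapter
    print(send_cmd(s, "0100"))     # standard OBD request, relayed to the CAN bus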


Past Defense Notices


SIVA PRAMOD BOBBILI

Static Disassembly of Binary using Symbol Table Information

When & Where:


250 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Andy Gill
Jerzy Grzymala-Busse


Abstract

Static binary analysis is an important challenge with many applications in security and performance optimization. One of the main challenges in analyzing an executable file statically is discovering all the instructions in the binary executable. It is often difficult to discover all program instructions due to a well-known limitation in static binary analysis called the code discovery problem. Some of the main contributors to the code discovery problem are variable-length CISC instructions, data interspersed with code, padding bytes for branch target alignment, and indirect jumps. All of these problems manifest themselves in x86 binary files, which is unfortunate since x86 is the most popular architecture in desktop and server domains.
Although much recent research has suggested that the symbol table might help overcome the difficulties of code discovery, the extent to which it can actually help with the code discovery problem is still in question. This work focuses on assessing the benefit of using symbol table information to overcome the limitations of the code discovery problem and identify more, or even all, instructions in x86 binary executable files. We discuss the details, extent, limitations, and impact of instruction discovery with and without symbol table information in this work.
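As a minimal sketch of this idea (assuming pyelftools and Capstone, and a 64-bit x86 ELF binary whose function symbols lie in .text), one can treat each STT_FUNC symbol as a trusted anchor and disassemble exactly the byte range the symbol covers:

# Sketch: use ELF symbol-table entries as disassembly anchors.
# Requires pyelftools and capstone; the input path is illustrative.
from elftools.elf.elffile import ELFFile
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

md = Cs(CS_ARCH_X86, CS_MODE_64)

with open("a.out", "rb") as f:
    elf = ELFFile(f)
    symtab = elf.get_section_by_name(".symtab")
    text = elf.get_section_by_name(".text")
    base, data = text["sh_addr"], text.data()
    for sym in symtab.iter_symbols():
        if sym["st_info"]["type"] != "STT_FUNC" or sym["st_size"] == 0:
            continue
        start = sym["st_value"] - base          # assumes the symbol is in .text
        code = data[start : start + sym["st_size"]]
        for insn in md.disasm(code, sym["st_value"]):
            print(f"{sym.name}: 0x{insn.address:x} {insn.mnemonic} {insn.op_str}")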


JONATHAN LUTES

SafeExit: Exit Node Protection for TOR

When & Where:


2001B Eaton Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Prasad Kulkarni


Abstract

TOR is one of the most important networks for providing anonymity over the internet. However, in some cases its exit node operators open themselves up to various legal challenges, which discourages participation in the network. In this paper, we propose a mechanism that allows some users to be voluntarily verified by trusted third parties, providing a means by which an exit node can verify that it is not the true source of traffic. This is done by extending TOR's anonymity model to include another class of user, and by using a web-of-trust mechanism to create chains of trust.
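A minimal sketch of the chain-of-trust check, with an illustrative trust graph standing in for the paper's actual verification protocol:

# Sketch: an exit node accepts a user's verification if an endorsement
# chain exists from a third party it already trusts (toy data, assumed).
TRUST_EDGES = {
    "exit_node":  {"verifier_A"},        # third parties the node trusts directly
    "verifier_A": {"verifier_B"},
    "verifier_B": {"user_123"},          # endorsement of a verified user
}

def has_trust_chain(src, dst, edges, max_depth=4):
    """Depth-limited search for an endorsement chain from src to dst."""
    if max_depth < 0:
        return False
    if dst in edges.get(src, ()):
        return True
    return any(has_trust_chain(nxt, dst, edges, max_depth - 1)
               for nxt in edges.get(src, ()))

print(has_trust_chain("exit_node", "user_123", TRUST_EDGES))  # True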


KAVYASHREE PILAR

Digital Down Conversion and Compression of Radar Data

When & Where:


317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Shannon Blunt
Glenn Prescott


Abstract

Storage and handling of the huge volume of received data samples is one of the major challenges in radar system design. Radar data samples potentially have high temporal and spatial correlation depending on the target characteristics and radar settings. This correlation can be exploited to compress them without any loss of sensitivity in post-processed products. This project focuses on reducing the storage requirement of a radar used for remote sensing of ice sheets. At the front end of the radar receiver, the data sample rate can be reduced in real time by performing frequency down-conversion and decimation of the incoming data. The decimated signal can be further compressed by applying a suitable data compression algorithm. The project implements a digital down-converter, a decimator, and a data compression module on an FPGA. A literature survey suggests that considerable research has been directed toward developing customized radar data compression algorithms. This project analyzes the possibility of using general-purpose algorithms such as GZIP and lossless JPEG 2000 to compress radar data. It also considers a simple floating-point compression technique that converts 16-bit data to 8-bit data, guaranteeing a 50% reduction in data size. The project implements the 16-to-8-bit conversion, lossless JPEG 2000, and GZIP algorithms in MATLAB and compares their SNR performance on radar data. Simulations suggest that all of them have similar SNR performance, but the JPEG 2000 and GZIP algorithms offer compression ratios of over 90%. However, the 16-to-8-bit compression is implemented in this project because of its simplicity.
A hardware test bed is implemented to integrate the digital radar electronics with the MATLAB Simulink simulation tools in a hardware-in-the-loop (HIL) configuration. The digital down-converter, decimator, and data compression module are prototyped in Simulink. The design is then implemented on an FPGA using Verilog. The functionality is tested at various stages of development using ModelSim simulations, Altera DSP Builder's HDL import, HIL co-simulation, and SignalTap. This test bed can also be used for future development efforts.
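For illustration, a minimal software model of the down-conversion chain might look as follows; the sample rates, test signal, and the 16-to-8-bit truncation are simplified stand-ins for the FPGA implementation described above.

# Sketch: digital down-conversion -> decimation -> 16-to-8-bit truncation.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.signal import decimate

fs = 100e6            # sample rate (assumed)
fc = 25e6             # carrier frequency to remove (assumed)
t = np.arange(4096) / fs
x = np.cos(2 * np.pi * fc * t)                # stand-in for received radar data

baseband = x * np.exp(-2j * np.pi * fc * t)   # frequency down-conversion
iq = decimate(baseband, 8)                    # anti-alias filter + decimate by 8

# Simple 16-to-8-bit compression: keep the top 8 bits (fixed 50% reduction)
samples16 = np.int16(np.real(iq) * 32767)
samples8 = (samples16 >> 8).astype(np.int8)
print(len(x), "samples ->", len(samples8), "samples at half the bytes each")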


SURYA TEJ NIMMAKAYALA

Exploring Causes of Performance Overhead during Dynamic Binary Translation

When & Where:


250 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Fengjun Li
Bo Luo


Abstract

Dynamic binary translators (DBTs) have applications ranging from program portability and instrumentation to optimization and improved software security. To achieve these goals and maintain control over the application's execution, DBTs translate and run the original source/guest programs in a sandboxed environment. DBT systems apply several optimization techniques, such as code caching and trace creation, to reduce the translation overhead and enhance program performance at run time. However, even with these optimizations, DBTs typically impose a significant performance overhead, especially for short-running applications. This performance penalty has restricted the wider adoption of DBT technology, in spite of its obvious benefits.

The goal of this work is to determine the different factors that contribute to the performance penalty imposed by dynamic binary translators. In this thesis, we describe the experiments we designed to achieve this goal and report our results and observations. We use a popular and sophisticated DBT, DynamoRIO, as our test platform, and employ the industry-standard SPEC CPU2006 benchmarks to capture run-time statistics. Our experiments find that DynamoRIO executes a large number of additional instructions compared to native application execution. We further measure that this increase in the number of executed instructions is caused by the DBT frequently exiting the code cache to perform various management tasks at run time, including code translation, indirect branch resolution, and trace formation. We also find that the performance loss experienced by the DBT is directly proportional to the number of code cache exits. We discuss the details of the experiments, results, observations, and analysis in this work.
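A sketch of a measurement harness in the spirit of these experiments, assuming Linux perf and DynamoRIO's drrun launcher are installed and on PATH, might look like:

# Sketch: compare retired-instruction counts native vs. under DynamoRIO.
# Assumes `perf` and `drrun` exist; the workload is illustrative.
import subprocess

def count_instructions(cmd):
    out = subprocess.run(
        ["perf", "stat", "-e", "instructions", "-x", ","] + cmd,
        capture_output=True, text=True).stderr
    # `perf stat -x,` emits CSV on stderr: value,unit,event,...
    # (a real harness would also handle "<not counted>" values)
    return int(out.strip().split(",")[0])

app = ["/bin/ls"]                         # illustrative workload
native = count_instructions(app)
dbt = count_instructions(["drrun", "--"] + app)
print(f"native={native} dbt={dbt} overhead={dbt / native:.2f}x")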


XUN WU

A Global Discretization Approach to Handle Numerical Attributes as Preprocessing

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Heechul Yun


Abstract

Discretization is a common technique for handling numerical attributes in data mining; it divides continuous values into several intervals by defining multiple thresholds. Decision tree learning algorithms, such as C4.5 and random forests, deal with numerical attributes by applying a discretization technique and transforming them into nominal attributes based on an impurity-based criterion, such as information gain or Gini gain. However, a considerable number of distinct values inevitably end up in the same interval after discretization, and the information delivered by those original continuous values is lost.
In this thesis, we propose a global discretization method that is able to keep the information within the original numerical attributes by expanding them into multiple nominal ones, one for each candidate cut-point value. The discretized data set, which includes only nominal attributes, is derived from the original data set. We analyze the problem by applying two decision tree learning algorithms, C4.5 and random forests, to each of twelve pairs of data sets (original and discretized) and evaluating the performance (prediction accuracy) of the resulting classification models in the Weka Experimenter. This is followed by two separate Wilcoxon tests (one per learning algorithm) to decide whether there is a statistically significant difference between these paired data sets. Results of both tests indicate that there is no clear difference in performance between the discretized data sets and the original ones.
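A minimal sketch of the expansion step: each candidate cut-point (a midpoint between consecutive distinct sorted values) yields one binary nominal attribute, so no cut-point's information is discarded. The attribute naming and toy data are illustrative.

# Sketch: expand one numerical attribute into one binary nominal
# attribute per candidate cut-point (midpoints between distinct values).
def expand_numeric(values):
    distinct = sorted(set(values))
    cuts = [(a + b) / 2 for a, b in zip(distinct, distinct[1:])]
    return {f"cut_{c}": ["low" if v < c else "high" for v in values]
            for c in cuts}

ages = [23, 35, 35, 50]            # toy attribute values
for name, column in expand_numeric(ages).items():
    print(name, column)
# cut_29.0 ['low', 'high', 'high', 'high']
# cut_42.5 ['low', 'low', 'low', 'high']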


YUFEI CHENG

Future Internet Routing Design for Massive Failures and Attacks

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Fengjun Li
Gary Minden
Michael Vitevitch

Abstract

With the increasing frequency of natural disasters and intentional attacks that challenge optical networks, vulnerability to cascading and regionally correlated challenges is escalating. Given the high complexity and large traffic load of optical networks, these correlated challenges pose great damage to reliable network communication. We start our research by proposing a critical region identification mechanism and study different vulnerability scales using real-world physical network topologies. We further propose geographical diversity and incorporate it into a new graph resilience metric, cTGGD (compensated Total Geographical Graph Diversity), which is capable of characterizing and differentiating the resilience levels of different physical networks. We formulate the path geodiversity problem (PGD) and propose two heuristics that solve it with lower complexity than the optimal algorithm. The geodiverse paths are optimized with a delay-skew optimization formulation for optimal traffic allocation. We implement GeoDivRP in ns-3 to employ the optimized paths and demonstrate their effectiveness compared to OSPF Equal-Cost Multi-Path routing (ECMP) in terms of both throughput and overall link utilization. From the attacker's perspective, we analyze the mechanisms attackers could use to maximize attack impact with a limited budget, and we demonstrate the effectiveness of different network restoration plans.
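As a rough illustration of one geodiverse-path heuristic (not the dissertation's exact PGD formulation), the sketch below finds a first shortest path and then penalizes links geographically close to it before computing a second path; the toy topology and Euclidean node coordinates are assumptions.

# Sketch: two geographically diverse paths via a penalty heuristic.
# Assumes networkx; topology and coordinates are illustrative.
import math
import networkx as nx

def geodiverse_pair(G, pos, src, dst, D=1.0, penalty=1e6):
    """Return a shortest path and a second path steered away from it."""
    p1 = nx.shortest_path(G, src, dst, weight="weight")
    # Nodes within distance D of any interior node of the first path
    near = {n for n in G
            if any(math.dist(pos[n], pos[q]) < D for q in p1[1:-1])}
    H = G.copy()
    for u, v in H.edges:
        if u in near or v in near:
            H[u][v]["weight"] += penalty   # discourage geographic overlap
    p2 = nx.shortest_path(H, src, dst, weight="weight")
    return p1, p2

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "z", 1),
                           ("a", "c", 2), ("c", "z", 2)])
pos = {"a": (0, 0), "b": (1, 0.1), "c": (1, 3), "z": (2, 0)}
print(geodiverse_pair(G, pos, "a", "z"))  # (['a','b','z'], ['a','c','z'])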


DARSHAN RAMESH

Configuration of Routing Protocols on Routers using Quagga

When & Where:


246 Nichols Hall

Committee Members:

Joseph Evans, Chair
Victor Frost
Glenn Prescott


Abstract

With the increasing number of devices being connected to the network, efficiently connecting those devices is very important. Routing protocols have evolved over time. I have used Mininet and Quagga to implement the routing protocols in a topology with ten routers and eleven host machines. Initially, basic configuration of the routers is performed to bring their interfaces administratively up, and IP addresses are assigned. Static routes are configured on the routers using the Quagga zebra daemon. Given the amount of overhead observed, static routing is replaced with RIPv2 using the Quagga ripd daemon, and RIPv2 features such as MD5 authentication and split horizon are implemented. RIPv2 is then replaced with the OSPF routing protocol, and the differences between static and dynamic routing are observed. More complex OSPF applications are implemented using the Quagga ospfd daemon, and the best route to neighboring routers is changed using the OSPF cost attribute. Next, the networks in the lab are assumed to belong to different autonomous systems, and BGP is implemented using the Quagga bgpd daemon. The routing updates are filtered using access-list attributes. The path to neighboring routers is changed using BGP metrics such as MED, weight, AS_PATH, and local_pref. Load balancing is also implemented, and the results are verified using traceroute and routing tables.
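For illustration, one of the configuration steps above can be scripted against Quagga's vtysh shell as sketched below; the interface name, subnet, and cost value are illustrative, and the ospfd daemon must already be running.

# Sketch: push a minimal OSPF configuration through Quagga's vtysh.
# Interface, subnet, and cost are illustrative assumptions.
import subprocess

OSPF_COMMANDS = [
    "configure terminal",
    "router ospf",
    "network 10.0.1.0/24 area 0",     # advertise this subnet in area 0
    "interface eth1",
    "ip ospf cost 50",                # steer the best path via link cost
    "end",
]

args = ["vtysh"]
for cmd in OSPF_COMMANDS:
    args += ["-c", cmd]               # vtysh runs the -c commands in order
subprocess.run(args, check=True)

# Verify the resulting routing table
subprocess.run(["vtysh", "-c", "show ip route ospf"], check=True)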


RUXIN XIE

Single-Fiber-Laser-Based Multimodal Coherent Raman System

When & Where:


250 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Shannon Blunt
Victor Frost
Carey Johnson

Abstract

A single-fiber-laser-based coherent Raman scattering (CRS) spectroscopy and microscopy system can automatically maintain frequency synchronization between the pump and Stokes beams, which dramatically simplifies the setup configuration. The Stokes frequency shift is generated by soliton self-frequency shift (SSFS) through a photonic crystal fiber. The impact of pulse chirping on the signal power reduction of coherent anti-Stokes Raman scattering (CARS) and stimulated Raman scattering (SRS) has been investigated through theoretical analysis and experiment, and strategies for system optimization are discussed.
Our multimodal system provides measurement diversity among CARS, SRS, and photothermal imaging, which can be used for comparison and for offering complementary information. The distribution of hemoglobin in human red blood cells and of lipids in sliced mouse brain samples has been imaged, and the frequency and power dependence of the photothermal signal is characterized.
Instead of using an intensity-modulated pump, the polarization-switched SRS method is applied to our system by switching the polarization of the pump. Based on the polarization dependence of the third-order susceptibility of the material, this method is able to eliminate the nonresonant photothermal signal from the resonant SRS signal. Red blood cells and sliced mouse brain samples were imaged to demonstrate the capability of the proposed technique. The results show that polarization-switched SRS removes most of the photothermal signal.


VENU GOPAL BOMMU

Performance Analysis of Various Implementations of Machine Learning Algorithms

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Luke Huan
Bo Luo


Abstract

Rapid development in technologies and database systems has resulted in the production and storage of large amounts of data. With such an enormous increase in data over the last few decades, data mining has become a useful tool for discovering the knowledge hidden in large data sets. Domain experts often use machine learning algorithms to find theories that explain their data.
In this project, we compare the Weka implementations of CART and C4.5 with their original implementations on different data sets from the University of California, Irvine (UCI) repository. Comparisons of these implementations have been carried out in terms of accuracy, decision tree complexity, and area under the ROC curve (AUC). Results from our experiments show that the decision tree complexity of C4.5 is much higher than that of CART, and that the original implementations of these algorithms perform slightly better than their corresponding Weka implementations in terms of accuracy and AUC.
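As an illustrative analogue of this evaluation methodology (using scikit-learn's CART-style tree in Python rather than the Weka or original implementations actually compared), one comparison run might look like:

# Sketch: train a CART-style tree and report accuracy, AUC, and tree
# complexity (node count). Assumes scikit-learn; dataset is illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, tree.predict(X_te))
auc = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])
print(f"accuracy={acc:.3f} AUC={auc:.3f} nodes={tree.tree_.node_count}")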


SRI HARSHA KOMARINA

System Logging and Analysis using Time Series Databases

When & Where:


2001B Eaton Hall

Committee Members:

Joseph Evans, Chair
Prasad Kulkarni
Bo Luo


Abstract

Logging system information and metrics provides a valuable resource for monitoring a system for unusual activity and understanding the various factors affecting its performance. Although several tools are available to log and analyze a system locally, it is inefficient to analyze every system individually, and doing so is seldom effective in the case of hardware failure. Having centralized access to this information aids system administrators in performing their operational tasks. Here we present a centralized logging solution for system logs and metrics using time series databases (TSDBs). We provide reliable storage and efficient access to system information by storing the parsed system logs and metrics in a TSDB. In this project, we develop a solution that stores the system's default logs (syslog) as well as system metrics such as CPU load, disk load, and network traffic load in a TSDB. We further extend our ability to monitor and analyze the data in our TSDB by using an open-source graphing tool.
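A minimal sketch of the metrics-ingestion step, assuming the psutil and influxdb Python packages and a local InfluxDB instance; the database name and field names are illustrative:

# Sketch: sample CPU and disk load and write the point to InfluxDB.
# Database name and fields are illustrative assumptions.
import psutil
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="sysmetrics")
client.create_database("sysmetrics")     # idempotent in InfluxDB 1.x

point = {
    "measurement": "system_load",        # server assigns the timestamp
    "fields": {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "disk_percent": psutil.disk_usage("/").percent,
    },
}
client.write_points([point])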