Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defense through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree-requirements check and post the presentation announcement online.

Upcoming Defense Notices

Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling many physical phenomena. Many quantum techniques for solving PDEs are currently available, mainly based on variational quantum circuits. However, existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations: low accuracy, long execution times, and poor scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equations for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, better scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and on actual quantum hardware, employing noise-mitigation techniques. This work establishes a practical and effective approach to solving PDEs with quantum computing for engineering and scientific applications.
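To make the classical front end of the pipeline concrete, the sketch below illustrates only the FDM stage on the 1-D Poisson equation -u''(x) = f(x) with zero boundary conditions; it is a classical illustration under our own simplifications and says nothing about the C2Q encoding or the quantum solver itself.

```python
# Classical illustration of the FDM stage: discretize -u''(x) = f(x),
# u(0) = u(1) = 0, on n interior points of a uniform grid, then solve
# the resulting tridiagonal system with the Thomas algorithm.

def solve_poisson_1d(f, n):
    h = 1.0 / (n + 1)
    # Central differences give (2 u_i - u_{i-1} - u_{i+1}) / h^2 = f(x_i)
    a = [-1.0] * n                               # sub-diagonal
    b = [2.0] * n                                # main diagonal
    c = [-1.0] * n                               # super-diagonal
    d = [f((i + 1) * h) * h * h for i in range(n)]

    # Forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]

    # Back substitution
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# f = 2 has the exact solution u(x) = x(1 - x); second-order central
# differences reproduce a quadratic exactly.
u = solve_poisson_1d(lambda x: 2.0, 9)
```

The same discretization generalizes to multidimensional grids, where the system matrix becomes block-tridiagonal and far larger, which is where the quantum encoding is aimed.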


Prashanthi Mallojula

On the Security of Mobile and Auto Companion Apps

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Bo Luo, Chair
Alex Bardas
Fengjun Li
Hongyang Sun
Huazhen Fang

Abstract

The rapid development of mobile apps on modern smartphone platforms has raised critical concerns regarding user data privacy and the security of app-to-device communications, particularly with companion apps that interface with external IoT or cyber-physical systems (CPS). In this dissertation, we investigate two major aspects of mobile app security: the misuse of permission mechanisms and the security of app-to-device communication in automotive companion apps.

Mobile apps seek user consent for accessing sensitive information such as location and personal data. However, users often blindly accept these permission requests, allowing apps to abuse this mechanism. As long as a permission is requested, state-of-the-art security mechanisms typically treat it as legitimate. This raises a critical question: Are these permission requests always valid? To explore this, we validate permission requests using statistical analysis on permission sets extracted from groups of functionally similar apps. We identify mobile apps with abusive permission access and quantify the risk of information leakage posed by each app. Through a large-scale statistical analysis of permission sets from over 200,000 Android apps, our findings reveal that approximately 10% of the apps exhibit highly risky permission usage. 
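As an illustration of the peer-group idea, the sketch below scores an app's permissions by how rarely its functionally similar peers request them; the scoring function and the example data are hypothetical simplifications, not the dissertation's actual statistical method.

```python
# Hypothetical sketch: score an app by how rare its requested
# permissions are within a peer group of functionally similar apps.

def rare_permission_score(app_perms, peer_perm_sets):
    """Average 'rarity' (1 - peer request rate) of an app's permissions."""
    n = len(peer_perm_sets)
    counts = {}
    for perms in peer_perm_sets:
        for p in perms:
            counts[p] = counts.get(p, 0) + 1
    rarity = [1.0 - counts.get(p, 0) / n for p in app_perms]
    return sum(rarity) / len(rarity) if rarity else 0.0

peers = [
    {"INTERNET", "CAMERA"},
    {"INTERNET", "CAMERA"},
    {"INTERNET"},
    {"INTERNET", "CAMERA", "READ_CONTACTS"},
]
# A location request none of the peers make drives the score up.
suspect = {"INTERNET", "READ_CONTACTS", "ACCESS_FINE_LOCATION"}
score = rare_permission_score(suspect, peers)  # higher = riskier
```

A real analysis would work over clusters of many similar apps and a proper statistical test rather than this simple average.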

Next, we present a comprehensive study of automotive companion apps, a rapidly growing yet underexplored category of mobile apps. These apps are used for vehicle diagnostics, telemetry, and remote control, and they often interface with in-vehicle networks via OBD-II dongles, exposing users to significant privacy and security risks. Using a hybrid methodology that combines static code analysis, dynamic runtime inspection, and network traffic monitoring, we analyze 154 publicly available Android automotive apps. Our findings uncover a broad range of critical vulnerabilities. Over 74% of the analyzed apps exhibit vulnerabilities that could lead to private information leakage, property theft, or even real-time safety risks while driving. Specifically, 18 apps were found to connect to open OBD-II dongles without requiring any authentication, accept arbitrary CAN bus commands from potentially malicious users, and transmit those commands to the vehicle without validation. 16 apps were found to store driving logs in external storage, enabling attackers to reconstruct trip histories and driving patterns. We demonstrate several real-world attack scenarios that illustrate how insecure data storage and communication practices can compromise user privacy and vehicular safety. Finally, we discuss mitigation strategies and detail the responsible disclosure process undertaken with the affected developers.


Past Defense Notices


EVAN AUSTIN

Theorem Provers as Libraries — An Approach to Formally Verifying Functional Programs

When & Where:


246 Nichols Hall

Committee Members:

Perry Alexander, Chair
Arvin Agah
Andy Gill
Prasad Kulkarni
Erik Van Vleck

Abstract

Property-directed verification of functional programs tends to take one of two paths. 
First is the traditional testing approach, where properties are expressed in the original programming language and checked with a collection of test data. 
Alternatively, for those desiring a more rigorous approach, properties can be written and checked with a formal tool; typically, an external proof system. 
This dissertation details a hybrid approach that captures the best of both worlds: the formality of a proof system paired with the native integration of an embedded, domain specific language (EDSL) for testing. 

At the heart of this hybridization is the titular concept: a theorem prover as a library. 
The verification capabilities of this prover, HaskHOL, are introduced to a Haskell development environment as a GHC compiler plugin. 
Operating at the compiler level provides for a comparatively simpler integration and allows verification to co-exist with the numerous other passes that stand between source code and program. 

The logical connection between language and proof library is formalized, and the open problems related to this connection are documented. 
Additionally, the resultant novel verification workflow is applied to two major classes of problems, type class laws and polymorphic test cases, to judge the real-world feasibility of compiler-directed verification. 
These applications and the formalization serve to position this work relative to existing work and to highlight potential future extensions.


CAMERON LEWIS

Ice Shelf Melt Rates and 3D Imaging

When & Where:


317 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Chris Allen
Carl Leuschen
Fernando Rodriguez-Morales
Rick Hale

Abstract

Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves plays an important role in both the mass balance of the ice sheet and the global climate system. Airborne- and satellite-based remote sensing systems can measure the thickness of ice shelves. Time-separated repeat flight tracks over ice shelves of interest generate data sets from which basal melt rates can be derived using traditional glaciological techniques. Many previous melt-rate studies have relied on surface elevation data gathered by airborne- and satellite-based altimeters. These systems infer melt rates by assuming hydrostatic equilibrium, an assumption that may not be accurate, especially near an ice shelf’s grounding line. Moderate-bandwidth VHF ice-penetrating radar has been used to measure ice shelf profiles at relatively coarse resolution. This study presents the application of an ultra-wide-bandwidth (UWB) UHF ice-penetrating radar to obtain finer-resolution data on the ice shelves. These data reveal significant details about the basal interface, including the locations and depths of bottom crevasses and deviations from hydrostatic equilibrium. While our single-channel radar provides new insight into ice shelf structure, it images only a small swath of the shelf, which is assumed to represent the average behavior of the whole shelf. This study takes an additional step by investigating the application of a 3-D imaging technique to a data set collected with a ground-based multichannel version of the UWB radar. The intent is to show that the UWB radar could provide a wider-swath 3-D image of an ice shelf. The 3-D images can then be used to obtain a more complete estimate of ice shelf bottom melt rates.
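As a minimal illustration of the traditional glaciological approach mentioned above, the sketch below turns two time-separated thickness measurements at the same location into a thinning rate attributed to basal melt. It deliberately ignores horizontal advection, surface accumulation, and firn compaction, all of which a real analysis must account for.

```python
# Minimal repeat-track melt-rate estimate: given ice-shelf thickness
# from two time-separated radar surveys at the same location, attribute
# the observed thinning to basal melt. A real glaciological analysis
# must also correct for advection, accumulation, and firn compaction.

def basal_melt_rate(thickness_t0_m, thickness_t1_m, dt_years):
    """Thinning rate in metres of ice per year (positive = melting)."""
    return (thickness_t0_m - thickness_t1_m) / dt_years

# Illustrative numbers only: two surveys two years apart.
rate = basal_melt_rate(412.0, 405.5, 2.0)
```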


RALPH BAIRD

Isomorphic Routing Protocol

When & Where:


250 Nichols Hall

Committee Members:

Victor Frost, Chair
Bo Luo
Hossein Saiedian


Abstract

A mobile ad-hoc network (MANET) routing algorithm defines the path packets take to reach their destination using measurements of attributes such as adjacency and distance. Graph theory is increasingly applied in many fields of research to model the properties of data on a graph plane. In networking, graph theory is applied to form structures from patterns of nodes. Conventional MANET protocols are often based on path measurements inherited from wired-network algorithms and do not implement mechanisms to mitigate route entropy, defined as the progression of a converged path to a path-loss state as a result of increasing random movement. Graph isomorphism measures equality beginning with individual nodes and extending to sets of nodes and edges. In this research, the measurement of isomorphism is applied to form paths from an aggregate set of route inputs, such as adjacency, cardinality to impending nodes in a path, and network width. A routing protocol based on the presence of isomorphism in a MANET topology is then tested to measure its performance.


DAIN VERMAAK

Application of Half Spaces in Bounding Wireless Internet Signals for use in Indoor Positioning

When & Where:


246 Nichols Hall

Committee Members:

Joseph Evans, Chair
Jim Miller
Gary Minden


Abstract

The problem of outdoor positioning has been largely solved via the use of GPS. This thesis addresses the problem of determining position in areas where GPS is unavailable. No clear solution exists for indoor localization and all approximation methods offer unique drawbacks. To mitigate the drawbacks, robust systems combine multiple complementary approaches. In this thesis, fusion of wireless internet access points and inertial sensors was used to allow indoor positioning without the need for prior information regarding surroundings. Implementation of the algorithm involved development of three separate systems. The first system simply combines inertial sensors on the Android Nexus 7 to form a step counter capable of providing marginally accurate initial measurements. Having achieved reliable initial measurements, the second system receives signal strength from nearby wireless internet access points, augmenting the sensor data in order to generate half-planes. The half-planes partition the available space and bound the possible region in which each access point can exist. Lastly, the third system addresses the tendency of the step counter to lose accuracy over time by using the recently established positions of the access points to correct flawed values. The resulting process forms a simple feedback loop.
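The half-plane construction can be made concrete with a small geometric sketch. The code below reflects one simplified reading of the method: if the signal from an access point is stronger at position B than at position A, the access point is assumed to lie on B's side of the perpendicular bisector of AB, and intersecting many such half-planes bounds its possible region. The midpoint-plus-normal representation and the monotonic path-loss assumption are ours, not necessarily the thesis's.

```python
# Simplified half-plane construction: a stronger RSSI at B than at A
# places the access point on B's side of the perpendicular bisector of
# AB (assuming signal strength decreases monotonically with range).

def rssi_half_plane(pos_a, rssi_a, pos_b, rssi_b):
    """Return (point_on_boundary, inward_normal) for the half-plane
    that must contain the access point."""
    mid = ((pos_a[0] + pos_b[0]) / 2.0, (pos_a[1] + pos_b[1]) / 2.0)
    # The normal points toward the position with the stronger signal.
    if rssi_b >= rssi_a:
        n = (pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])
    else:
        n = (pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])
    return mid, n

def contains(half_plane, point):
    (mx, my), (nx, ny) = half_plane
    return (point[0] - mx) * nx + (point[1] - my) * ny >= 0.0

# Walking from (0, 0) to (4, 0) the signal strengthens, so the access
# point must lie in the half-plane x >= 2.
hp = rssi_half_plane((0.0, 0.0), -70.0, (4.0, 0.0), -55.0)
```

Each new pair of step-counter positions with RSSI readings yields another half-plane, and the intersection of all of them shrinks the region in which the access point can exist.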


ANDREW FARMER

HERMIT: Mechanized Reasoning during Compilation in the Glasgow Haskell Compiler

When & Where:


250 Nichols Hall

Committee Members:

Andy Gill, Chair
Perry Alexander
Prasad Kulkarni
Jim Miller
Chris Depcik

Abstract

It is difficult to write programs which are both correct and fast. A promising approach, functional programming, is based on the idea of using pure, mathematical functions to construct programs. With effort, it is possible to establish a connection between a specification written in a functional language, which has been proven correct, and a fast implementation, via program transformation. 

When practiced in the functional programming community, this style of reasoning is still typically performed by hand, by either modifying the source code or using pen-and-paper. Unfortunately, performing such semi-formal reasoning by directly modifying the source code often obfuscates the program, and pen-and-paper reasoning becomes outdated as the program changes over time. Even so, this semi-formal reasoning prevails because formal reasoning is time-consuming, and requires considerable expertise. Formal reasoning tools often only work for a subset of the target language, or require programs to be implemented in a custom language for reasoning. 

This dissertation investigates a solution, called HERMIT, which mechanizes reasoning during compilation. HERMIT can be used to prove properties about programs written in the Haskell functional programming language, or transform them to improve their performance. 
Reasoning in HERMIT proceeds in a style familiar to practitioners of pen-and-paper reasoning, and mechanization allows these techniques to be applied to real-world programs with greater confidence. HERMIT can also re-check recorded reasoning steps on subsequent compilations, enforcing a connection with the program as the program is developed. 

HERMIT is the first system capable of directly reasoning about the full Haskell language. The design and implementation of HERMIT, motivated both by typical reasoning tasks and HERMIT's place in the Haskell ecosystem, is presented in detail. Three case studies investigate HERMIT's capability to reason in practice. These case studies demonstrate that semi-formal reasoning with HERMIT lowers the barrier to writing programs which are both correct and fast. 


JAY McDANIEL

Design, Integration, and Miniaturization of a Multichannel Ultra-Wideband Snow Radar Receiver and Passive Microwave Components

When & Where:


129 Nichols

Committee Members:

Carl Leuschen, Chair
Stephen Yan
Prasad Gogineni


Abstract

To meet the demand for additional snow characterization from the Intergovernmental Panel on Climate Change (IPCC), a new airborne, multichannel, quad-polarized 2-18 GHz snow radar has been proposed. With tight size and weight constraints from the airborne platforms deploying with the Naval Research Laboratory (NRL), integrated and miniaturized receivers are crucial for cost and size reduction in future deployments. 

A set of heterodyne microwave receivers was developed to enable snow thickness measurements from survey altitudes of 500 to 5000 feet while nadir-looking, and estimation of snow water equivalent (SWE) from polarimetric backscattered signals at a low elevation angle of 30 degrees off nadir. The individual receiver has undergone a five-fold size reduction with respect to the initial prototype design while achieving an average sensitivity of -125 dBm across the 2-18 GHz bandwidth, enabling measurements with a vertical range resolution of 1.64 cm in snow. A compact enclosure was designed to accommodate up to 18 individual receiver modules, allowing multichannel quad-polarized measurements over the entire 16 GHz bandwidth. The receiver bank was tested individually and with the entire system in a full multichannel loop-back measurement using a 2.95 μs optical delay line, resulting in a beat frequency of 200 MHz with 20 dB range sidelobes. Due to the multi-angle, multi-polarization, and multi-frequency content of the data, the number of free parameters in the SWE estimation can be significantly reduced. 
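The loop-back numbers above can be sanity-checked against the standard FMCW radar relation between beat frequency, sweep slope, and delay, f_beat = (B / T_chirp) · τ. The chirp duration is not stated in the abstract, so the sketch below derives the value these figures would imply; treat it as an illustrative back-of-the-envelope check, not a system specification.

```python
# Back-of-the-envelope check of the loop-back measurement using the
# standard FMCW relation f_beat = (B / T_chirp) * tau. The chirp
# duration is not given in the abstract; this derives what the quoted
# numbers would imply.

B = 16e9        # sweep bandwidth, Hz (2-18 GHz)
tau = 2.95e-6   # optical delay line, s
f_beat = 200e6  # measured beat frequency, Hz

T_chirp = B * tau / f_beat   # implied chirp duration, s (~236 us)
```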

Design equations have been derived, and a new method for modeling suspended substrate stripline (SSS) filters in ADS for rapid prototyping has been developed. Two SSS filters were designed: an optimized Chebyshev SSS low-pass filter (LPF) with an 18 GHz cutoff frequency and a broadside-coupled SSS high-pass filter (HPF) with a 2 GHz cutoff frequency. Also, a 2-18 GHz three-port transverse electromagnetic (TEM) mode hybrid 8:1 power combiner was designed and modeled at CReSIS. This design will be integrated into the dual-polarized Vivaldi antenna array with 8 active dual-polarized elements to implement a lightweight and compact array structure, eliminating cable and connector cost and losses. 


VADIRAJ HARIBAL

Modelling of ATF-38143 P-HEMT Driven Resistive Mixer for VHF KNG P-150 Portable Radios

When & Where:


250 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Alessandro Salandrino


Abstract

FET resistive mixers play a key role in providing high linearity and a low noise figure. HEMT technology with low threshold voltage has popularized the mobile phone market and millimeter-wave technologies. This project analyzes the operation of a down-conversion VHF FET resistive mixer designed using the ultra-low-noise ATF-38143 P-HEMT. It is widely used in the KNG-P150 portable mobile radios manufactured by RELM Wireless Corporation. The mixer is designed to operate over an RF frequency range of 136-174 MHz at an IF frequency of 51.50 MHz. The Statz model has been used to simulate the behavior of the P-HEMT under normal conditions. The transfer functions of the matching circuits at each port have been obtained using Simulink modelling. The effect of changes in the Q factor at the RF and IF ports has been considered. Analytical modelling of the mixer is performed, and simulated results are compared with experimental data obtained at a constant 5 dBm LO power. The IF transfer function has been modelled to closely match the practical circuit by applying adequate amplitude damping to the response of the LC circuits at the RF port, in order to provide the required IF bandwidth and conversion gain. The effects of stray capacitances and inductances have been neglected during the modelling, and the series resistances of the inductors at the RF and IF ports have been adjusted to match experimental results.


MOHAMMED ALENAZI

Network Resilience Improvement and Evaluation Using Link Additions

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Lingjia Liu
Bo Luo
Tyrone Duncan

Abstract

Computer networks are increasingly involved in providing services for most of our daily activities related to education, business, health care, social life, and government. Publicly available computer networks are prone to targeted attacks and natural disasters that could disrupt normal operation and services. Building highly resilient networks is therefore an important aspect of their design and implementation. For existing networks, resilience against such challenges can be improved by adding more links. In fact, adding links to form a full mesh yields the most resilient network, but at an infeasibly high cost. In this research, we investigate improving the resilience of real-world networks by adding a cost-efficient set of links. Finding the optimal set of links to add via exhaustive search is impracticable for large networks. Using a greedy algorithm, a feasible solution is obtained by adding a set of links that improves network connectivity by increasing a graph robustness metric such as algebraic connectivity or total path diversity. We use a graph metric called flow robustness as a measure of network resilience. To evaluate the improved networks, we apply three centrality-based attacks and study the networks' resilience. The flow robustness results under these attacks show that the improved networks are more resilient than the non-improved networks. 
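The greedy improvement loop described above can be sketched compactly. The code below is a hedged illustration, not the dissertation's implementation: it uses the fraction of connected node pairs as a stand-in flow-robustness metric and adds, one link at a time, the missing link that most increases it; the actual work also considers metrics such as algebraic connectivity and total path diversity.

```python
# Hedged sketch of greedy link addition: repeatedly add the single
# missing link that most increases flow robustness, taken here as the
# fraction of node pairs that remain connected.

from itertools import combinations

def flow_robustness(nodes, edges):
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    connected, seen = 0, set()
    for start in nodes:
        if start in seen:
            continue
        comp, stack = set(), [start]   # explore one component
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        connected += len(comp) * (len(comp) - 1) // 2
    total = len(nodes) * (len(nodes) - 1) // 2
    return connected / total

def greedy_add_links(nodes, edges, budget):
    edges = set(frozenset(e) for e in edges)
    for _ in range(budget):
        candidates = [frozenset(p) for p in combinations(nodes, 2)
                      if frozenset(p) not in edges]
        best = max(candidates,
                   key=lambda e: flow_robustness(nodes, edges | {e}))
        edges.add(best)
    return [tuple(e) for e in edges]

# Example: two disconnected pairs; one added link reconnects the network.
improved = greedy_add_links([0, 1, 2, 3], [(0, 1), (2, 3)], budget=1)
```

Each greedy step evaluates every missing link, so for large networks the inner metric evaluation is where cost-efficiency arguments and cheaper surrogate metrics come in.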


WENRONG ZENG

Content-Based Access Control

When & Where:


250 Nichols Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Jerzy Grzymala-Busse
Prasad Kulkarni
Alfred Tat-Kei

Abstract

In conventional databases, the most popular access control model specifies policies explicitly and manually for each role of every user against each data object. In today's large-scale, content-centric data sharing, such conventional approaches can be impractical due to the explosive growth of data and the sensitivity of data objects. Moreover, conventional database access control policies cannot function when the semantic content of the data is expected to play a role in access decisions. Users are often over-privileged, and ex post facto auditing is enforced to detect misuse of privileges. Unfortunately, it is usually difficult to reverse the damage, as (large amounts of) data have been disclosed already. In this dissertation, we first introduce Content-Based Access Control (CBAC), an innovative access control model for content-centric information sharing. As a complement to conventional access control models, the CBAC model makes access control decisions automatically, based on the content similarity between user credentials and data content. In CBAC, a meta-rule allows each user to access "a subset" of the designated data objects of a content-centric database, while the boundary of the subset is dynamically determined by the textual content of the data objects. We then present an enforcement mechanism for CBAC that exploits Oracle's Virtual Private Database (VPD) to implement row-wise access control and to prevent data objects from being abused through unnecessary access admission. To further improve the performance of the proposed approach, we introduce a content-based blocking mechanism that improves the efficiency of CBAC enforcement and reveals a more relevant portion of the data objects compared with using only the user credentials and data content. We also utilize several tagging mechanisms for more accurate textual content matching of short text snippets (e.g., short VarChar attributes), extracting topics rather than pure word occurrences to represent the content of the data. With the tagging mechanism, content similarity is calculated not purely from word occurrences but from the semantic topics underlying the text. Experimental results show that CBAC makes accurate access control decisions with a small overhead.
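As a toy illustration of content-based decisions, the sketch below grants access when the cosine similarity between a user's credential text and a data object's text exceeds a threshold. It is a hypothetical simplification: CBAC's actual enforcement uses Oracle VPD and richer topic tagging, and the threshold meta-rule here is ours.

```python
# Toy content-based access decision: grant access when the cosine
# similarity between the user's credential text and the data object's
# textual content exceeds a threshold (bag-of-words, no topic model).

import math
from collections import Counter

def cosine_sim(text_a, text_b):
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cbac_allows(credentials, document, threshold=0.2):
    return cosine_sim(credentials, document) >= threshold

cred = "oncology clinical trial researcher"
doc_ok = "clinical trial results for oncology patients"
doc_no = "quarterly payroll and accounting records"
```

The word-count vectors stand in for the dissertation's semantic-topic representation; swapping in topic vectors changes the similarity function, not the decision structure.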


RANJITH KRISHNAN

The Xen Hypervisor: Construction of a Test Environment and Validation by Performance Evaluation of Native Linux versus Xen Guests

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Bo Luo
Heechul Yun


Abstract

Modern computers are powerful enough to comfortably run multiple operating systems at the same time. Enabling this is the Xen hypervisor, an open-source tool that is one of the most widely used system virtualization solutions on the market. Xen enables guest virtual machines to run at near-native speeds using a concept called paravirtualization. The primary goal of this project is to construct a development/test environment in which we can investigate the different types of virtualization Xen supports. We start from a base of Fedora, onto which Xen is built and installed. Once Xen is running, we configure both paravirtualized and hardware-virtualized guests. 
The second goal of the project is to validate the constructed test environment through a performance evaluation. Various performance benchmarks are run on native Linux, the Xen host, and the two important types of Xen guests. As expected, our results show that the performance of the Xen guest machines is close to that of native Linux. We also see why virtualization-aware paravirtualization outperforms hardware virtualization, which runs without any knowledge of the underlying virtualization infrastructure.