Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date, so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow large amounts of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques for solving PDEs are currently available, mainly based on variational quantum circuits. However, existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations: low accuracy, long execution times, and poor scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equation for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, better scalability, and faster execution compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach for solving PDEs with quantum computing for engineering and scientific applications.
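The classical front end of both variants is a standard finite-difference discretization. As a minimal, purely classical illustration (not the author's code), the sketch below solves a 1D Poisson problem with second-order central differences and a direct tridiagonal (Thomas) solve:

```python
import math

def poisson_1d_fdm(f, n, u0=0.0, u1=0.0):
    """Solve -u''(x) = f(x) on (0, 1) with u(0)=u0, u(1)=u1, using
    second-order central differences on n interior grid points.
    The tridiagonal system (diagonal 2, off-diagonals -1, scaled by
    1/h^2) is solved with the Thomas algorithm."""
    h = 1.0 / (n + 1)
    rhs = [h * h * f((i + 1) * h) for i in range(n)]
    rhs[0] += u0           # fold boundary values into the right-hand side
    rhs[-1] += u1
    cp = [0.0] * n         # modified super-diagonal
    dp = [0.0] * n         # modified right-hand side
    cp[0] = -0.5
    dp[0] = rhs[0] / 2.0
    for i in range(1, n):  # forward elimination
        m = 2.0 + cp[i - 1]
        cp[i] = -1.0 / m
        dp[i] = (rhs[i] + dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u
```

In the C2Q pipeline described above, the resulting linear system (rather than this classical solve) is what gets encoded into quantum states.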


Prashanthi Mallojula

On the Security of Mobile and Auto Companion Apps

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Bo Luo, Chair
Alex Bardas
Fengjun Li
Hongyang Sun
Huazhen Fang

Abstract

The rapid development of mobile apps on modern smartphone platforms has raised critical concerns regarding user data privacy and the security of app-to-device communications, particularly with companion apps that interface with external IoT or cyber-physical systems (CPS). In this dissertation, we investigate two major aspects of mobile app security: the misuse of permission mechanisms and the security of app-to-device communication in automotive companion apps.

Mobile apps seek user consent for accessing sensitive information such as location and personal data. However, users often blindly accept these permission requests, allowing apps to abuse this mechanism. As long as a permission is requested, state-of-the-art security mechanisms typically treat it as legitimate. This raises a critical question: Are these permission requests always valid? To explore this, we validate permission requests using statistical analysis on permission sets extracted from groups of functionally similar apps. We identify mobile apps with abusive permission access and quantify the risk of information leakage posed by each app. Through a large-scale statistical analysis of permission sets from over 200,000 Android apps, our findings reveal that approximately 10% of the apps exhibit highly risky permission usage. 
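As a toy stand-in for the large-scale statistical validation described above (the function name and threshold are invented for this sketch), one can flag permissions that an app requests but that are rare within its group of functionally similar peer apps:

```python
from collections import Counter

def risky_permissions(app_perms, peer_perm_sets, threshold=0.05):
    """Return permissions requested by the app but requested by fewer
    than `threshold` of its functionally similar peer apps. A schematic
    of peer-group permission analysis, not the dissertation's method."""
    counts = Counter(p for perms in peer_perm_sets for p in set(perms))
    n = len(peer_perm_sets)
    return sorted(p for p in app_perms if counts[p] / n < threshold)
```

A permission flagged this way is not necessarily abusive, only statistically unusual for the app's functional category.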

Next, we present a comprehensive study of automotive companion apps, a rapidly growing yet underexplored category of mobile apps. These apps are used for vehicle diagnostics, telemetry, and remote control, and they often interface with in-vehicle networks via OBD-II dongles, exposing users to significant privacy and security risks. Using a hybrid methodology that combines static code analysis, dynamic runtime inspection, and network traffic monitoring, we analyze 154 publicly available Android automotive apps. Our findings uncover a broad range of critical vulnerabilities. Over 74% of the analyzed apps exhibit vulnerabilities that could lead to private information leakage, property theft, or even real-time safety risks while driving. Specifically, 18 apps were found to connect to open OBD-II dongles without requiring any authentication, accept arbitrary CAN bus commands from potentially malicious users, and transmit those commands to the vehicle without validation. Another 16 apps store driving logs in external storage, enabling attackers to reconstruct trip histories and driving patterns. We demonstrate several real-world attack scenarios that illustrate how insecure data storage and communication practices can compromise user privacy and vehicular safety. Finally, we discuss mitigation strategies and detail the responsible disclosure process undertaken with the affected developers.


Past Defense Notices

SIDDHARTHA BISWAS

MBProtector: Dynamic Memory Bandwidth Protection Tool

When & Where:


246 Nichols Hall

Committee Members:

Heechul Yun, Chair
Victor Frost
Prasad Kulkarni
Bo Luo

Abstract

Modern computer systems have moved from unicore to multicore platforms, which offer higher performance and efficiency. However, when multiple programs execute in parallel on different cores of a multicore platform, performance isolation among the programs is difficult to achieve because of contention for shared hardware resources. This is problematic for real-time applications, where a certain performance guarantee must be provided.

In this work, we first present a case study that depicts the difficulties faced by a memory-intensive real-time application, WebRTC---an open-source, plugin-free communication framework that provides Real-Time Communications (RTC) capability to browsers and mobile applications---when running on a multicore platform alongside other memory-intensive co-running applications. We then present a tool, MBProtector, that dynamically protects the performance of memory-intensive code sections in real-time applications. MBProtector uses BWLOCK, a mechanism for memory bandwidth control, and Pin, a binary instrumentation framework, to automatically insert BWLOCKs into memory-intensive code sections in the program binary. Our evaluation shows that the tool achieves up to a 60% performance improvement in WebRTC.


MEENAKSHI MISHRA

Task Relationship Modeling in Lifelong Multitask Learning

When & Where:


246 Nichols Hall

Committee Members:

Luke Huan, Chair
Arvin Agah
Swapan Chakrabarti
Ron Hui
Zhou Wang

Abstract

Multitask learning with task relationship modeling is a learning framework that identifies and shares training information among multiple related tasks to improve the generalization error of each task. The use of task relationships in the static multitask learning framework, where all tasks are known beforehand and all data is available before training, has been studied in considerable detail over the past several years. However, in lifelong multitask learning, where tasks arrive in an online fashion and information about all tasks is not known beforehand, modeling the task relationships is very challenging. The main contribution of this thesis is a framework for modeling task relationships in lifelong multitask learning. Task relationship models in lifelong multitask learning need to be flexible and dynamic, so that they can be easily updated as each new task arrives, and a new task must be able to readily learn its position in the existing task network using the task relationship model. Traditionally, task relationships are represented using fixed-size matrices that describe the task network. These matrices cannot change dynamically with each incoming task and can be rather expensive to update. Here, we propose learning functions to represent the relationships between tasks, which is faster and computationally less expensive. The functions partition the task space so that similar tasks remain in the same region and are encouraged to depend on similar features. Both the task parameters and the relationships are learned in a supervised manner. In this thesis, we show that the algorithm we developed provides significantly better accuracy and is much faster than the state-of-the-art lifelong learning algorithm. For some datasets, our algorithm provides better accuracy than even the static multitask learning method.


ERIK HORNBERGER

Partially Constrained Adaptive Beamforming

When & Where:


246 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Erik Perrins
James Stiles


Abstract

The ReIterative Super-Resolution (RISR) algorithm was developed from an iterative implementation of the Minimum Mean Squared Error (MMSE) estimator. Building on existing work on RISR, a novel approach to direction-of-arrival estimation, coined partially constrained beamforming, is introduced. First, RISR is rederived with the addition of a unit gain constraint, with the result dubbed Gain Constrained RISR (GC-RISR); because the outcome exhibits some loss in resolution, a middle ground is sought between GC-RISR and RISR. By taking advantage of the similar structure of RISR and GC-RISR, the two can be combined using a geometric mean, and a weighting term is added to form a partially constrained version of RISR, which we denote PC-RISR. Simulations are used to characterize PC-RISR's performance, showing that the geometric weighting term can be used to control convergence. It is also demonstrated that this weighting term enables increased super-resolution capability compared to RISR, improved robustness to low sample support when super-resolving signals with low SNR, and the ability to detect and super-resolve signals with an SNR as low as -10 dB given higher sample support.
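The combining idea itself, an elementwise weighted geometric mean of two estimates, can be sketched as below. This is only a schematic of how the weighting term trades between the two endpoints, not the actual PC-RISR covariance recursion:

```python
def partially_constrained_update(p_unconstrained, p_constrained, alpha):
    """Elementwise weighted geometric mean of two (positive) spatial power
    estimates. alpha = 1 recovers the unconstrained estimate, alpha = 0
    the gain-constrained one; intermediate alpha interpolates between them.
    Function and argument names are invented for this illustration."""
    return [a ** alpha * b ** (1.0 - alpha)
            for a, b in zip(p_unconstrained, p_constrained)]
```

The geometric (rather than arithmetic) mean keeps the blend multiplicative, so a near-zero value in either estimate suppresses the combined output.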


THERESA STUMPF

A Wideband Direction of Arrival Technique for Multibeam, Wide-Swath Imaging of Ice Sheet Basal Morphology

When & Where:


317 Nichols Hall

Committee Members:

Prasad Gogineni, Chair
Carl Leuschen
John Paden


Abstract

Multichannel ice sounder data can be processed to map ice sheet bed topography and basal reflectivity in three dimensions using tomographic imaging techniques. When ultra-wideband (UWB) signals are used to interrogate a glaciological target, fine-resolution maps can be obtained. These data sets facilitate both process studies of ice sheet dynamics and the continental-scale ice sheet modeling needed to predict future sea level. The socioeconomic importance of these data, as well as the cost and logistical challenge of procuring them, justifies the need to image ice sheet basal morphology over a wider swath. Imaging wide swaths with UWB signals poses challenges for the array processing methods that have been used to localize scattering in the cross-track dimension. Both MUltiple SIgnal Classification (MUSIC) and the Maximum Likelihood Estimator (MLE) have been applied to the ice sheet tomography problem. These techniques are formulated assuming a narrowband model of the array that breaks down in wideband signal environments as the direction of arrival (DOA) increases further off nadir.
The Center for Remote Sensing of Ice Sheets (CReSIS) developed a UWB multichannel SAR with a large cross-track array for sounding and imaging polar ice from a Basler BT-67 aircraft. In 2013, this sensor collected data in a multibeam mode over the West Antarctic Ice Sheet to demonstrate wide swath imaging. To reliably estimate the arrival angles of echoes from the edges of the swath, a parametric space-time direction of arrival estimator was developed that obtains an estimate of the DOA by fitting the observed space-time covariance structure to a model. This thesis focuses on the development and optimization of the algorithm and describes its predicted performance based on simulation. Its measured performance is analyzed with 3D tomographic basal maps of an ice stream in West Antarctica that were generated using the technique. 


AKSHATHA RAO

Fountain Codes

When & Where:


250 Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Victor Frost
Jonathan Brumberg

Abstract

Fountain codes are forward error-correcting codes suitable for erasure channels. A binary erasure channel is a memoryless channel in which symbols are either transmitted correctly or erased. The advantage of fountain codes is that decoding requires only slightly more encoded symbols than source symbols, and the source symbols can be recovered from essentially any sufficiently large set of encoded symbols. Since fountain codes are rateless, they can adapt to changing channel conditions, which makes them beneficial for broadcasting and multicasting applications where channels have different erasure probabilities. 
The project involves the implementation of two different fountain codes: the LT code and the Raptor code. 
The goal of the project is to measure the performance of each code by how many encoded symbols are required for successful decoding. The encoders and decoders for the two codes are designed in MATLAB. The number of encoded symbols required to decode the source symbols is plotted for different degree distributions. 
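A minimal sketch of LT encoding and peeling-based decoding (in Python rather than the project's MATLAB, with invented helper names and a toy degree distribution) looks like this:

```python
import random

def lt_encode(source, degree_weights, rng):
    """Produce one LT-encoded symbol: draw a degree d from the given
    weights, pick d distinct source indices, and XOR their values."""
    k = len(source)
    d = rng.choices(range(1, k + 1), weights=degree_weights[:k])[0]
    neighbors = set(rng.sample(range(k), d))
    value = 0
    for i in neighbors:
        value ^= source[i]
    return neighbors, value

def lt_decode(k, encoded):
    """Peeling decoder: repeatedly resolve degree-1 encoded symbols and
    substitute decoded values into the remaining equations."""
    encoded = [(set(n), v) for n, v in encoded]
    decoded = [None] * k
    progress = True
    while progress:
        progress = False
        for n, v in encoded:                 # resolve degree-1 symbols
            if len(n) == 1:
                i = next(iter(n))
                if decoded[i] is None:
                    decoded[i] = v
                    progress = True
        for idx, (n, v) in enumerate(encoded):   # peel known symbols
            for i in list(n):
                if decoded[i] is not None:
                    n.discard(i)
                    v ^= decoded[i]
            encoded[idx] = (n, v)
    return decoded
```

In practice the degree distribution (e.g., the robust soliton distribution for LT codes) is what determines how few encoded symbols suffice, which is exactly the quantity the project measures.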


QI SHI

Application of Split-Step Fourier Method and Gaussian Noise Model in the Calculation of Nonlinear Interference in Uncompensated Optical Coherent WDM System

When & Where:


246 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Erik Perrins


Abstract

Wavelength division multiplexing (WDM) is a technology that combines a number of independent information-carrying signals of different wavelengths into the same fiber. This enables several channels of high-quality, large-capacity optical signals to be transmitted simultaneously in a single fiber. WDM is the most popular long-distance transmission solution today, widely utilized in terrestrial backbone and intercontinental undersea fiber-optic transmission systems. An effective and efficient analysis method for WDM systems is indispensable for two reasons. First, deploying a WDM system is usually a time- and money-consuming project, so an accurate design is required before construction. Second, optical network routing protocols depend on fast and accurate real-time evaluation and prediction of network performance. Two main phenomena affecting overall WDM system performance are amplified spontaneous emission (ASE) noise accumulation and nonlinear interference (NLI) due to the Kerr effect. ASE noise is already well understood, but the calculation of NLI is complicated. A popular approach, the split-step Fourier (SSF) method, directly solves the nonlinear propagation equation numerically and is widely used to understand pulse propagation in nonlinear dispersive media. Though the SSF method provides an accurate result for NLI, its high computational expense prevents it from meeting the efficiency requirement mentioned above. Fortunately, the Gaussian noise (GN) model, which to a large extent resolves this issue, has been proposed and developed in recent years.
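The SSF method alternates a linear dispersion step applied in the frequency domain with a Kerr nonlinearity step applied in the time domain. A minimal sketch (naive DFT, toy parameters, not the simulation code used in this work; sign conventions for the dispersion operator vary across references):

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform, adequate for a small demo grid."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * math.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def split_step(a, dz, steps, beta2, gamma, dt):
    """Propagate a sampled field envelope a(t) over `steps` segments of
    length dz, alternating dispersion (frequency domain, coefficient beta2)
    with Kerr nonlinearity (time domain, coefficient gamma)."""
    n = len(a)
    # Angular-frequency grid in FFT ordering
    omega = [2 * math.pi * (k if k < n // 2 else k - n) / (n * dt) for k in range(n)]
    disp = [cmath.exp(0.5j * beta2 * w * w * dz) for w in omega]
    for _ in range(steps):
        spec = dft(a)
        spec = [s * d for s, d in zip(spec, disp)]          # dispersion step
        a = dft(spec, inverse=True)
        a = [v * cmath.exp(1j * gamma * abs(v) ** 2 * dz)   # Kerr phase step
             for v in a]
    return a
```

Both half-steps are unitary (pure phase rotations), so total pulse energy is conserved, which makes a handy sanity check on any SSF implementation.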


RAKSHA GANESH

Structured-Irregular Repeat Accumulate Codes

When & Where:


250 Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Ron Hui


Abstract

There is a strong need for efficient and reliable communication systems today. To design an efficient transmission system, the errors that occur during transmission should be minimized; this can be achieved by channel encoding. Irregular repeat-accumulate (IRA) codes are a class of serially concatenated codes that have a linear-time encoding algorithm, flexibility in code rate and code length, and good performance. 

Here we implement a design technique for structured irregular repeat-accumulate (S-IRA) codes. S-IRA codes can be decoded reliably at low error rates using the iterative log-likelihood (sum-product) decoding algorithm. We perform encoding, decoding, and performance analysis of S-IRA codes of different code rates and codeword lengths and compare their performance on the AWGN channel. In this project we also design codes with different column weights for the parity-check matrices and compare their performance on the AWGN channel with that of the previously designed codes. 
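The basic repeat-accumulate pipeline underlying IRA codes (repeat, permute, accumulate) can be sketched as below. Real IRA and S-IRA codes use irregular repetition degrees and a carefully designed permutation, so this fixed-rate version only illustrates the encoder structure:

```python
def ra_encode(info_bits, q, interleaver):
    """Basic repeat-accumulate encoding: repeat each information bit q
    times, permute the repeated stream with the given interleaver, then
    pass it through an accumulator (running XOR) to produce parity bits.
    Illustrative only; not the S-IRA construction studied in the project."""
    repeated = [b for b in info_bits for _ in range(q)]
    permuted = [repeated[i] for i in interleaver]
    parity, acc = [], 0
    for b in permuted:
        acc ^= b          # accumulator: each output is the running XOR
        parity.append(acc)
    return parity
```

The accumulator is what gives RA-family codes their linear-time encoder, one of the advantages noted above.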


MADHURI MORLA

Effect of SOA Nonlinearities on CO-OFDM System

When & Where:


2001B Eaton Hall

Committee Members:

Ron Hui, Chair
Victor Frost
Erik Perrins


Abstract

The use of a Semiconductor Optical Amplifier (SOA) for amplification in Coherent Optical-Orthogonal Frequency Division Multiplexing (CO-OFDM) systems has been of interest in recent studies. The gain saturation of the SOA induces inter-channel crosstalk. This effect is analyzed by simulation and compared with recent experimental results. Performance of the optical transmission system is measured using the Error Vector Magnitude (EVM), which quantifies the deviation of received symbols from their ideal positions in the constellation diagram. EVM as a function of input power to the SOA is investigated. In the linear region, EVM improves as the input power increases, owing to the increase of the Optical Signal-to-Noise Ratio (OSNR). In the nonlinear region, increasing the input optical power to the SOA degrades the EVM due to the nonlinear saturation of the SOA. The effect of gain saturation on EVM as a function of the number of subcarriers is also investigated. 
The relation between different evaluation metrics, namely Bit Error Rate (BER), SNR, and EVM, is also presented. EVM is analytically estimated from OSNR by considering the ideal case of additive white Gaussian noise (AWGN) without nonlinearities, and BER is estimated from the analytical and simulated EVM. The role of the Peak-to-Average Power Ratio (PAPR) in the degradation of EVM in the nonlinear region is also studied through numerical simulation. 
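For reference on the metric itself (this is the standard RMS EVM definition, not the project's simulation code), EVM can be computed from received and ideal constellation points as:

```python
import math

def evm_rms(received, ideal):
    """RMS error vector magnitude: the RMS length of the symbol error
    vectors, normalized by the RMS magnitude of the ideal constellation.
    `received` and `ideal` are equal-length sequences of complex symbols."""
    err = sum(abs(r - s) ** 2 for r, s in zip(received, ideal))
    ref = sum(abs(s) ** 2 for s in ideal)
    return math.sqrt(err / ref)
```

Under the AWGN-only assumption used in the analytical estimate above, the error vectors are noise samples, and SNR is approximately 1/EVM².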


SAMBHAV SETHIA

Sentiment Analysis on Wikipedia People Pages Using Enhanced Naive Bayes Model

When & Where:


246 Nichols Hall

Committee Members:

Fengjun Li, Chair
Bo Luo
Jerzy Grzymala-Busse
Prasad Kulkarni

Abstract

Sentiment analysis involves capturing the viewpoint or opinion expressed by people on various objects. These objects are a diverse set of things such as a movie, an article, a person of interest, or a product: basically anything we can opine about. The opinions expressed can take different forms, such as a review of a movie, feedback on a product, a newspaper article expressing the author's sentiment on a given topic, or even a Wikipedia page on a person. The key challenge of sentiment analysis is to classify the underlying text into the correct class, i.e., positive, negative, or neutral. Sentiment analysis also deals with the computational treatment of opinion, sentiment, and subjectivity in text. 
Wikipedia provides a large repository of pages about people around the world. This project conducts a large-scale experiment using a popular sentiment analysis tool modeled on an enhanced version of Naïve Bayes. A sentence-by-sentence sentiment analysis is done for each biographical page retrieved from Wikipedia, and the overall sentiment of a person is then calculated by averaging the sentiment values of all sentences related to that person. There are advantages to this type of analysis. First, the results are calibrated on a decimal scale, which provides a clearer distinction of the sentiment value associated with an individual than the standard tri-scale (positive, negative, neutral) result provided by the tool. Second, it allows us to understand statistically the viewpoint of Wikipedia on those people. Finally, it enables large-scale temporal and geographical analysis, e.g., examining the overall sentiment associated with the people of each state, and thus helps us analyze opinion trends. 
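The page-level aggregation step described above can be sketched as follows; the thresholds and function name are illustrative, not those of the actual tool:

```python
def overall_sentiment(sentence_scores, pos=0.05, neg=-0.05):
    """Average per-sentence sentiment scores (assumed to lie in [-1, 1])
    into a page-level score, then map the average to the usual tri-scale
    label. The +/-0.05 cutoffs are invented for this sketch."""
    if not sentence_scores:
        return 0.0, "neutral"
    avg = sum(sentence_scores) / len(sentence_scores)
    label = "positive" if avg > pos else "negative" if avg < neg else "neutral"
    return avg, label
```

Keeping the decimal-scale average alongside the tri-scale label is what enables the finer-grained comparisons across people, states, and time periods mentioned above.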


XIAOMENG SU

A Comparison of the Quality of Rule Induction from Inconsistent Data Sets and Incomplete Data Sets

When & Where:


246 Nichols Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Zongbo Wang


Abstract

In data mining, decision rules induced from known examples are used to classify unseen cases. There are various rule induction algorithms, such as LEM1 (Learning from Examples Module version 1), LEM2 (Learning from Examples Module version 2), and MLEM2 (Modified Learning from Examples Module version 2). In the real world, many data sets are imperfect: either inconsistent or incomplete. The idea of lower and upper approximations, or more generally the probabilistic approximation, provides an effective way to induce rules from inconsistent and incomplete data sets, but the accuracy of rule sets induced from imperfect data sets is expected to be lower. The objective of this project is to investigate which kind of imperfect data set (inconsistent or incomplete) is worse in terms of the quality of the induced rule set. In this project, experiments were conducted on eight inconsistent data sets and eight incomplete data sets with lost values. We implemented the MLEM2 algorithm to induce certain and possible rules from inconsistent data sets, and the local probabilistic version of MLEM2 to induce certain and possible rules from incomplete data sets. A program called Rule Checker was also developed to classify unseen cases with the induced rules and measure the classification error rate. Ten-fold cross validation was carried out, and the average error rate was used as the criterion for comparison. The Mann-Whitney nonparametric test was performed to compare incompleteness with inconsistency, separately for certain and possible rules. The results show that there is no significant difference between inconsistent and incomplete data sets in terms of the quality of rule induction.
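The comparison statistic used above is straightforward to compute. A minimal Mann-Whitney U implementation (counting ties as 1/2; assessing significance against the null distribution, via tables or a normal approximation, is left out of this sketch):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples: the number
    of pairs (a, b) with a from x, b from y, such that a > b, counting
    ties as 1/2. Here x and y would be the per-fold error rates from the
    two families of data sets."""
    u = 0.0
    for a in x:
        for b in y:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u
```

A useful sanity check is the identity U(x, y) + U(y, x) = len(x) * len(y).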