Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, most of them based on variational quantum circuits. However, existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations: low accuracy, high execution times, and poor scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equation for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach for solving PDEs using quantum computing for engineering and scientific applications.
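
To make the encoding step concrete, below is a minimal Python sketch of the FDM discretization and the classical-to-quantum (C2Q) amplitude-encoding idea for a 1-D Poisson problem. It is an illustration under assumed conventions (power-of-two grid, amplitude encoding with the norm tracked classically), not the authors' implementation.

```python
# Minimal sketch of the finite-difference + classical-to-quantum (C2Q)
# encoding step for a 1-D Poisson equation, -u''(x) = f(x), u(0)=u(1)=0.
# Illustration of the encoding idea only, not the authors' full algorithm;
# the padding/normalization conventions are assumptions.
import numpy as np

n = 8                                  # grid points (power of 2 for qubit mapping)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.sin(np.pi * x)                  # example right-hand side

# Standard second-order FDM Laplacian (tridiagonal) scaled by 1/h^2.
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u_classical = np.linalg.solve(A, f)    # classical reference solution

# C2Q step: embed f as the amplitudes of a log2(n)-qubit state.
# Amplitude encoding requires a unit-norm vector, so the norm is kept
# separately and restored after measurement/post-processing.
norm = np.linalg.norm(f)
psi = f / norm                         # |psi> with amplitudes f_i / ||f||
print("qubits needed:", int(np.log2(n)), " encoded state:", psi)
```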


Past Defense Notices


DANIEL HEIN

Detecting Attack Prone Software Using Architecture and Repository Mined Change Metrics

When & Where:


2001B Eaton Hall

Committee Members:

Hossein Saiedian, Chair
Arvin Agah
Perry Alexander
Prasad Kulkarni
Reza Barati

Abstract

Billions of dollars are lost every year to successful cyber attacks that are fundamentally enabled by software vulnerabilities. Modern cyber attacks increasingly threaten individuals, organizations, and governments, causing service disruption, inconvenience, and costly incident response. Given that such attacks are primarily enabled by software vulnerabilities, this work examines whether existing change metrics, along with architectural modularity and maintainability metrics, can be used to predict the modules and files that should be analyzed or tested further to excise vulnerabilities prior to release.
The problem addressed by this research is the residual vulnerability problem: vulnerabilities that evade detection and persist in released software. Many modern software projects are over a million lines of code, composed of reused components of varying maturity. The sheer size of modern software, along with the reuse of existing open source modules, complicates the questions of where to look, and in what order to look, for residual vulnerabilities. Prediction models based on various code and change metrics (e.g., churn) have shown promise as indicators of vulnerabilities at the file level.
This work investigates whether change metrics, along with architectural metrics quantifying modularity and maintainability, can be used to identify attack-prone modules. In addition to identifying or predicting attack-prone files, this work also examines prioritizing and ranking those predictions.
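
As a rough illustration of this kind of prediction pipeline, the following Python sketch trains a classifier on placeholder change/architecture metrics and ranks files by predicted attack-proneness. The feature set, model choice, and data are assumptions, not the thesis's actual setup.

```python
# Hedged sketch: ranking files by predicted attack-proneness from change
# and modularity metrics. Features and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy feature matrix: [churn, committers, coupling, maintainability_index]
X = rng.random((200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + 0.2 * rng.random(200) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]        # P(file is vulnerability-prone)
ranking = np.argsort(scores)[::-1]           # inspect highest-risk files first
print("top-5 files to review:", ranking[:5])
```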


BEN PANZER

Estimating Geophysical Properties of Snow and Sea Ice from Data Collected by an Airborne, Ultra-Wideband Radar

When & Where:


317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Chris Allen
Prasad Gogineni
Fernando Rodriguez-Morales
Richard Hale

Abstract

Large-scale spatial observations of global sea ice thickness and distribution rely on multiple satellite-based altimeters. Laser altimeters, such as the GLAS instrument aboard ICESat-1 and the ATLAS instrument aboard ICESat-2, measure freeboard, the snow and ice thickness above mean sea level. Deriving sea-ice thickness from these data requires estimating the snow depth on the sea ice. Current means of estimating the snow depth are climatological history, daily precipitation products, and/or data from passive microwave sensors such as AMSR-E. Radar altimeters, such as SIRAL aboard CryoSat-2, do not have sufficient vertical range resolution to resolve both the air-snow and snow-ice interfaces over sea ice. Additionally, there is significant ambiguity in the location of the peak return due to penetration into the snow cover. Regardless of the sensor, any error in snow-depth estimation amplifies sea-ice thickness errors through the assumption of hydrostatic equilibrium used in deriving sea-ice thickness. There is a clear need for an airborne sensor to provide spatially large-scale measurements of the snow cover in both polar regions to improve the accuracy of sea-ice thickness estimates and provide validation for the satellite-based sensors.
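
For intuition on the error amplification, here is a small Python sketch of the standard hydrostatic-equilibrium relation between total freeboard, snow depth, and ice thickness. The density values are typical textbook assumptions, not numbers from this work.

```python
# Standard hydrostatic-equilibrium relation used to convert laser-altimeter
# (total) freeboard F and snow depth h_s to sea-ice thickness h_i:
#   rho_i*h_i + rho_s*h_s = rho_w*(h_i - (F - h_s))
# Densities below are typical assumed values, not ones from this thesis.
RHO_W, RHO_I, RHO_S = 1024.0, 915.0, 320.0   # kg/m^3: water, ice, snow

def ice_thickness(freeboard_m, snow_depth_m):
    return (RHO_W * freeboard_m + (RHO_S - RHO_W) * snow_depth_m) / (RHO_W - RHO_I)

h0 = ice_thickness(0.40, 0.20)
h1 = ice_thickness(0.40, 0.21)               # +1 cm snow-depth error
print(f"thickness: {h0:.2f} m; a 1 cm snow error shifts it by {abs(h1-h0)*100:.1f} cm")
```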
The Snow Radar was developed at the Center for Remote Sensing of Ice Sheets and deployed as part of NASA Operation IceBridge since 2009 to directly measure snow thickness over sea ice. The radar is an ultra-wideband, frequency-modulated, continuous-wave radar now working over the frequency range of 2 GHz to 8 GHz, resulting in a vertical range resolution of approximately 4 cm after post-processing. The radar has been shown to be capable of detecting snow depth over sea ice from 10 cm to more than 2 meters and results from the radar compare well to multiple in-situ measurements and passive-microwave measurements. 
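
A quick sanity check of the quoted resolution, under assumed values for snow permittivity and window broadening (neither taken from this work):

```python
# For an FMCW radar, vertical resolution is roughly c / (2 * B * sqrt(eps_r)),
# broadened by the sidelobe window applied in processing. The snow
# permittivity and window factor are assumptions for illustration.
C = 3e8                      # m/s
B = 6e9                      # Hz (2-8 GHz sweep)
EPS_SNOW = 1.7               # assumed relative permittivity of dry snow
WINDOW_FACTOR = 2.0          # assumed window-broadening factor

res = WINDOW_FACTOR * C / (2 * B * EPS_SNOW**0.5)
print(f"vertical resolution in snow: ~{res*100:.1f} cm")   # ~3.8 cm, near the quoted ~4 cm
```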
The focus of the proposed research is the estimation of useful geophysical properties of snow-covered sea ice beyond snow depth, and the subsequent refinement and validation of the snow-depth extraction. Geophysical properties of interest are: snow density and wetness, air-snow and snow-ice surface roughness, and sea-ice temperature and salinity. Through forward modeling of the radar backscatter response and the corresponding inversion, large-scale estimation of these properties may be possible.


GOUTHAM SELVAKUMAR

Constructing an Environment and Providing a Performance Assessment of Android's Dalvik Virtual Machine on x86 and ARM

When & Where:


250 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Victor Frost
Xin Fu


Abstract

Android is one of the most popular operating systems (OS) for mobile touchscreen devices, including smart-phones and tablet computers. Dalvik is a process virtual machine (VM) that provides an abstraction layer over the Android OS and runs the Java-based Android applications. The first goal of this project is to construct a development environment for conveniently investigating the properties of Android's Dalvik VM on contemporary x86 and ARM architectures. The normal development environment restricts the Dalvik VM to run on top of Android, and requires an updated Android image to be built and installed on the target device after any change to the Dalvik code. This update-build-install process unnecessarily slows down any Dalvik VM exploration. We have now discovered an (undisclosed) configuration that enables us to study the Dalvik VM as a stand-alone application on top of the Linux OS.
The second goal of this project is to understand the translation/compilation sub-system in the Dalvik VM, experiment with various modifications to determine the best translation parameters, and compare the Dalvik VM's just-in-time (JIT) compilation characteristics (such as the quality of generated code and compilation time) on x86 and ARM systems with a state-of-the-art Java VM. As expected, we find that JIT compilation is able to significantly improve application performance over basic interpretation. Comparing Dalvik's generated code quality with the Java HotSpot VM, we observe that Dalvik's ARM target is much more mature than Dalvik-x86. However, Dalvik's simple trace-based compilation generates code of much lower quality than HotSpot's. Finally, our experiments also reveal the most effective JIT compilation parameters for the Dalvik VM, and their effect on benchmark performance and memory usage.
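
The measurement loop implied by this comparison might look like the Python harness below, which times a benchmark on a stand-alone dalvikvm in interpreter-only versus JIT mode. The binary path, classpath, and benchmark class are hypothetical placeholders; the stand-alone configuration itself is undisclosed.

```python
# Hedged sketch of an interpreter-vs-JIT timing harness for a stand-alone
# dalvikvm. Paths, classpath, and benchmark name are hypothetical.
import subprocess, time

DALVIKVM = "/usr/local/bin/dalvikvm"         # hypothetical install path
BENCH_JAR = "benchmarks.jar"                 # hypothetical dex-converted jar

def run(mode_flags):
    start = time.perf_counter()
    subprocess.run([DALVIKVM, *mode_flags, "-cp", BENCH_JAR, "Benchmark"],
                   check=True)
    return time.perf_counter() - start

t_interp = run(["-Xint:fast"])               # fast interpreter, no JIT
t_jit = run(["-Xint:jit"])                   # trace-based JIT enabled
print(f"interpreter: {t_interp:.2f}s  JIT: {t_jit:.2f}s  "
      f"speedup: {t_interp / t_jit:.2f}x")
```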


ADAM CRIFASI

Framework of Real-Time Optical Nyquist-WDM Receiver using Matlab and Simulink

When & Where:


2001B Eaton Hall

Committee Members:

Ron Hui, Chair
Shannon Blunt
Erik Perrins


Abstract

I investigate an optical Nyquist-WDM bit error rate (BER) detection system. A transmitter and receiver system is simulated, using Matlab and Simulink, to form a working algorithm and to study the effects of the different stages of the data chain. The inherent lack of phase information in the N-WDM scheme presents unique challenges and requires a precise phase recovery system to accurately decode a message. Furthermore, resource constraints are imposed by a cost-effective Field Programmable Gate Array (FPGA). To compensate for the speed, gate, and memory constraints of a budget FPGA, several techniques are employed to design the best possible receiver. I study the resource-intensive operations and vary their resource utilization to discover their effect on the BER. To conclude, a full VHDL design is delineated, including peripheral initialization, input data sorting and storage, timing synchronization, state machine and control signal implementation, N-WDM demodulation, phase recovery, QAM decoding, and BER calculation.
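
As a floating-point stand-in for the BER-calculation stage at the end of that chain, the Python sketch below decodes noisy Gray-coded QPSK symbols by nearest constellation point and counts bit errors. The real receiver's timing synchronization, phase recovery, and fixed-point FPGA constraints are omitted.

```python
# Minimal BER measurement: hard-decision decoding of noisy QPSK (4-QAM)
# symbols against the transmitted sequence. Pure floating-point model.
import numpy as np

rng = np.random.default_rng(1)
const = np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)  # Gray-coded QPSK
bits_per_sym = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

tx_idx = rng.integers(0, 4, 10_000)
noise = 0.25 * (rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000))
rx = const[tx_idx] + noise

rx_idx = np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)  # hard decision
errors = sum(a != b
             for ti, ri in zip(tx_idx, rx_idx)
             for a, b in zip(bits_per_sym[ti], bits_per_sym[ri]))
print(f"BER ~ {errors / (2 * len(tx_idx)):.4f}")
```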


TIANCHEN LI

Radar Cross-Section Enhancement of a 40 Percent Yak54 Unmanned Aerial Vehicle

When & Where:


2001B Eaton Hall

Committee Members:

Chris Allen, Chair
Ken Demarest
Ron Hui


Abstract

With the increasing civilian use of unmanned aerial vehicles (UAVs), the flight safety of these unmanned devices in populated areas has become one of the foremost concerns among operators and users. To reduce collision rates, anti-collision systems based on airborne radar and enhanced autopilot programs have been developed. However, because most civilian UAVs are made of non-metallic materials with a considerably low radar cross-section (RCS), these UAVs are very hard or even impossible for radars to detect. This project aims to design a lightweight, UAV-mounted RCS enhancement device that increases the visibility of the UAV to airborne radars operating in the frequency band near 1.445 GHz. A 40% YAK54 radio-controlled UAV is used as the subject aircraft. The report concentrates on the design of a passive Van Atta array reflector approach.
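
For scale, the peak RCS that a flat, phase-coherent aperture (the bound a Van Atta array approaches over a wide angular range) can present is sigma = 4*pi*A^2/lambda^2. The Python sketch below evaluates this at 1.445 GHz for an assumed aperture size, not the report's actual design dimensions.

```python
# Peak RCS of a phase-coherent flat aperture: sigma = 4*pi*A^2 / lambda^2.
# The aperture size is an assumed example value.
import math

f = 1.445e9                        # radar frequency, Hz
lam = 3e8 / f                      # wavelength, ~0.208 m
A = 0.30 * 0.30                    # assumed 30 cm x 30 cm array aperture, m^2

sigma = 4 * math.pi * A**2 / lam**2
print(f"peak RCS ~ {sigma:.2f} m^2 ({10*math.log10(sigma):.1f} dBsm)")
```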


REID CROWE

Development and Implementation of a VHF High Power Amplifier for the Multi-Channel Coherent Radar Depth Sounder/Imager System

When & Where:


317 Nichols Hall

Committee Members:

Fernando Rodriguez-Morales, Chair
Chris Allen
Carl Leuschen


Abstract

This thesis presents the implementation and characterization of a VHF high power amplifier developed for the Multi-channel Coherent Radar Depth Sounder/Imager (MCoRDS/I) system. MCoRDS/I is used to collect data on the thickness and basal topography of polar ice sheets, ice sheet margins, and fast-flowing glaciers from airborne platforms. Previous surveys have indicated that higher transmit power is needed to improve the performance of the radar, particularly when flying over challenging areas. 
The VHF high power amplifier system presented here consists of a 50-W driver amplifier and a 1-kW output stage operating in Class C. Its performance was characterized and optimized to obtain the best tradeoff between linearity, output power, efficiency, and conducted and radiated noise. A waveform pre-distortion technique to correct for gain variations (dependent on input power and operating frequency) was demonstrated using digital techniques. 
The amplifier system is a modular unit that can be expanded to handle a larger number of transmit channels as needed for future applications. The system can support sequential transmit/receive operations on a single antenna by using a high-power circulator and a duplexer circuit composed of two 90° hybrid couplers and anti-parallel diodes. The duplexer is advantageous over PIN-diode-based switches due to its moderately high power-handling capability and fast switching time. The system presented here is also smaller and lighter than previous implementations with comparable output power levels.
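
The pre-distortion idea can be sketched as inverting a measured AM/AM curve, as in the Python example below. The gain table is synthetic and the frequency dependence mentioned above is ignored; this illustrates the principle, not the implemented correction.

```python
# Waveform pre-distortion sketch: measure amplifier output vs. drive level,
# then warp the transmit envelope by the inverse curve so the radiated
# chirp has flat amplitude. The AM/AM table below is synthetic.
import numpy as np

# Synthetic AM/AM measurement: normalized output vs. normalized input drive.
drive = np.linspace(0.05, 1.0, 20)
out = np.tanh(2.2 * drive) / np.tanh(2.2)     # stand-in for Class-C compression

def predistort(desired_out):
    # Invert the measured AM/AM curve by interpolation (output -> required drive).
    return np.interp(desired_out, out, drive)

envelope = np.linspace(0.1, 1.0, 8)           # desired flat-ramp envelope
print("required drive levels:", np.round(predistort(envelope), 3))
```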


KENNETH DEWAYNE BROWN

A Mobile Wireless Channel State Recognition Algorithm

When & Where:


2001B Eaton Hall

Committee Members:

Glenn Prescott, Chair
Chris Allen
Gary Minden
Erik Perrins
Richard Hale

Abstract

The scope of this research is a blind mobile wireless channel state recognition (CSR) algorithm that detects channel time and frequency dispersion. Hidden Markov models (HMMs) are utilized to represent the statistical relationship between the hidden channel dispersive state process and an observed received waveform process. The HMMs provide sufficient sensitivity to detect the hidden channel dispersive state process. First-order and second-order statistical features are assumed to be sufficient to discriminate channel state from the received waveform observations. Hard state decisions provide sufficient information, and can be combined, to increase the reliability of a time-block channel state estimate. To investigate the feasibility of the proposed CSR algorithm, this research effort has architected, designed, and verified a blind statistical feature recognition process capable of detecting whether a mobile wireless channel is coherent, time dispersive, frequency dispersive, or dual dispersive. Channel state waveforms are utilized to compute the transition and output probability parameters for a set of feature recognition HMMs. Time and frequency statistical features are computed from consecutive sample blocks and input into the set of trained HMMs, which compute a state sequence conditional probability for each feature. The conditional probabilities identify how well the input waveform statistically agrees with the previous training waveforms. Hard decisions are produced from each feature state probability estimate and combined to produce a single output channel dispersive state estimate for each input time block. To verify the CSR algorithm performance, combinations of state sequence blocks were input to the process and state recognition accuracy was characterized. Initial results suggest that CSR based on blind waveform statistical feature recognition is feasible.
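
A minimal Python sketch of this recognition chain, using the hmmlearn package: one Gaussian HMM per channel state is trained on block statistics, and a hard decision is taken by maximum log-likelihood. The features, model sizes, and synthetic data are placeholders for the thesis's actual design.

```python
# Illustrative CSR chain: per-state HMMs scored on first/second-order block
# statistics; the state with the highest log-likelihood wins. All data here
# is synthetic.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
states = ["coherent", "time-dispersive", "freq-dispersive", "dual-dispersive"]

def block_features(waveform, block=64):
    blocks = waveform[: len(waveform) // block * block].reshape(-1, block)
    # First- and second-order statistics per block (mean power, variance).
    return np.column_stack([np.mean(np.abs(blocks)**2, axis=1),
                            np.var(np.abs(blocks), axis=1)])

# Train one HMM per state on (synthetic) training waveforms.
models = {}
for i, s in enumerate(states):
    train = rng.standard_normal(4096) * (1 + 0.3 * i)   # stand-in training data
    models[s] = GaussianHMM(n_components=2, n_iter=25).fit(block_features(train))

test = rng.standard_normal(4096) * 1.3                  # unknown channel block
scores = {s: m.score(block_features(test)) for s, m in models.items()}
print("hard decision:", max(scores, key=scores.get))
```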


WENRONG ZENG

Content-Based Access Control

When & Where:


250 Nichols Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Jerzy Grzymala-Busse
Prasad Kulkarni
Alfred Tat-Kei Ho

Abstract

In conventional database access control models, access control policies are explicitly and manually specified for each role against each data object. In today's large-scale content-centric data sharing, such conventional approaches can be impractical due to the exponential explosion of role-object policies and the sensitivity of data objects. In this proposal, we first introduce Content-Based Access Control (CBAC), an innovative access control model for content-centric information sharing. As a complement to conventional access control models, the CBAC model makes access control decisions automatically, based on the content similarity between user credentials and data content. In CBAC, each user is allowed by a meta-rule to access "a subset" of the designated data objects of the whole database, while the boundary of the subset is dynamically determined by the textual content of the data objects. We then present an enforcement mechanism for CBAC that exploits Oracle's Virtual Private Database (VPD). To further improve the performance of the proposed approach, we introduce a content-based blocking mechanism that makes CBAC enforcement more efficient by revealing only the parts of the data objects most relevant to the user's credentials. We also utilize a tagging mechanism for more accurate textual content matching on short text snippets (e.g., short VarChar attributes), extracting topics rather than pure word occurrences to represent the content of data. Experimental results show that CBAC makes reasonable access control decisions with a small overhead.
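
The core similarity decision in CBAC can be illustrated with a few lines of Python: compute the cosine similarity between a user's credential text and each record's content, and allow access above a threshold. The documents, credential, and threshold are invented for illustration; the meta-rule, blocking mechanism, and Oracle VPD enforcement are omitted.

```python
# Content-similarity access decision: allow when the cosine similarity
# between credential text and record content clears a threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

records = ["quarterly oncology trial results for pediatric patients",
           "facility maintenance schedule for the loading dock"]
credential = "pediatric oncology researcher, clinical trials division"

vec = TfidfVectorizer().fit(records + [credential])
sims = cosine_similarity(vec.transform([credential]), vec.transform(records))[0]

THRESHOLD = 0.2                      # arbitrary illustrative cutoff
for doc, s in zip(records, sims):
    print(f"{'ALLOW' if s >= THRESHOLD else 'DENY '} (sim={s:.2f}): {doc}")
```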


MARIANNE JANTZ

Detecting and Understanding Dynamically Dead Instructions for Contemporary Machines

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Xin Fu
Man Kong


Abstract

Instructions executed by the processor are dynamically dead if the values they produce are not used by the program. Researchers have discovered that a surprisingly large fraction of executed instructions are dynamically dead. Dynamically dead instructions (DDI) can potentially slow down program execution and waste power. Unfortunately, although the issue of DDI is well-known, there has not been any comprehensive study to understand and explain the occurrence of DDI, evaluate its performance impact, and resolve the problem, especially for contemporary architectures.
The goals of our research are to quantify and understand the properties of DDI, as well as to systematically characterize them for existing state-of-the-art compilers and popular architectures, in order to develop compiler and/or architectural techniques to avoid their execution at runtime. In this thesis, we describe our GCC-based framework to instrument binary programs to generate control-flow and data-flow (registers and memory) traces at runtime. We present the percentage of DDI in our benchmark programs, as well as the characteristics of the DDI. We show that context information can have a significant impact on the probability that an instruction will be dynamically dead. We show that a low percentage of static instructions actually contribute to the overall DDI in our benchmark programs. We also describe the outcome of our manual study to analyze and categorize the instances of dead instructions in our x86 benchmarks into seven distinct categories. We briefly describe our plan to develop compiler- and architecture-based techniques to eliminate each category of DDI in future programs. Finally, we find that x86 and ARM programs, compiled with GCC, generally contain a significant amount of DDI. However, x86 programs present fewer DDI than the ARM benchmarks, which display percentages of DDI similar to those reported by earlier research for other architectures. Therefore, we suggest that the ARM architecture observes a non-negligible fraction of DDI and should be examined further. Overall, we believe that a close synergy between static code generation and program execution techniques may be the most effective strategy to eliminate DDI.
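
To make the notion of dynamic deadness concrete, here is a simplified Python sketch that scans a register def-use trace and flags definitions overwritten (or never read) before any use. It ignores memory operands and transitive deadness, both of which the actual framework must handle.

```python
# Simplified dead-instruction detection on a register def-use trace: an
# instruction is dynamically dead if the value it defines is overwritten
# (or never read) before any use.
trace = [  # (instruction id, registers read, register written)
    (0, [],     "r1"),
    (1, ["r1"], "r2"),
    (2, [],     "r2"),   # overwrites r2 before any read -> inst 1 is dead
    (3, ["r2"], "r3"),
]

last_def = {}                 # register -> id of pending (unused) definition
dead = set()
for inst_id, reads, write in trace:
    for r in reads:
        last_def.pop(r, None)         # the pending definition was used
    if write in last_def:
        dead.add(last_def[write])     # redefined before use: old def is dead
    last_def[write] = inst_id
dead |= set(last_def.values())        # defs never read by the trace's end
print("dynamically dead instructions:", sorted(dead))
```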


YUHAO YANG

Protecting Attributes and Contents in Online Social Networks

When & Where:


2001B Eaton Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Luke Huan
Prasad Kulkarni
Alfred Tat-Kei Ho

Abstract

With the extreme popularity of online social networks, security and privacy issues become critical. In particular, it is important to protect user privacy without preventing users from normal socialization. User privacy in the context of data publishing and structural re-identification attacks has been well studied. However, the protection of attributes and data content has been mostly neglected by the research community. While social network data is rarely published, billions of messages are shared in various social networks on a daily basis. Therefore, it is more important to protect attributes and textual content in social networks.

We first study the vulnerabilities of user attributes and contents, in particular, the identifiability of users when the adversary learns a small piece of information about the target. We have presented two attribute re-identification attacks that exploit information retrieval and web search techniques. We have shown that large portions of users with an online presence are highly identifiable, even with a small piece of seed information, and even when the seed information is inaccurate.
To protect user attributes and content, we will adopt the social circle model derived from the concepts of “privacy as user perception” and “information boundary”. Users have different social circles and share different information in different circles. We propose to automatically identify social circles based on three observations: (1) friends in the same circle are connected and share many friends in common; (2) friends in the same circle are more likely to interact; (3) friends in the same circle tend to have similar interests and share similar content. We propose to adopt multi-view clustering to model and integrate these observations to identify implicit circles in a user's personal network. Moreover, we propose an evaluation mechanism that assesses the quality of the clusters (circles).
Furthermore, we propose to exploit such circles for cross-site privacy protection for users: new messages (blogs, micro-blogs, updates, etc.) will be evaluated and distributed to the most relevant circle(s). We monitor the information distributed to each circle to protect users against information aggregation attacks, and also enforce circle boundaries to prevent sensitive information leakage.
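
A hedged sketch of the multi-view circle-identification idea in Python: build one affinity matrix per observation (shared friends, interactions, content similarity), combine them, and cluster the combined affinity. Simple averaging plus spectral clustering stands in for the proposal's actual multi-view clustering formulation, and all data is synthetic.

```python
# Multi-view circle detection sketch: one affinity matrix per view
# (structure, interaction, content), averaged and clustered. Synthetic data.
import numpy as np
from sklearn.cluster import SpectralClustering

n = 12                                        # friends in the personal network
rng = np.random.default_rng(42)

def random_affinity(n):                       # placeholder symmetric view
    M = rng.random((n, n)); M = (M + M.T) / 2
    np.fill_diagonal(M, 1.0)
    return M

views = [random_affinity(n) for _ in range(3)]   # structure, interaction, content
combined = sum(views) / len(views)               # naive view integration

labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(combined)
print("circle assignment per friend:", labels)
```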