Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant in the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, most of them based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations. These include low accuracy, high execution times, and low scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equations for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach to solving PDEs with quantum computing for engineering and scientific applications.
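For readers unfamiliar with the classical side of this pipeline, the following is a minimal sketch of the finite-difference setup that a quantum PDE solver would encode, assuming a 1D Poisson problem with Dirichlet boundary conditions; the grid size, boundary values, and source term are illustrative assumptions, and the C2Q encoding and decomposition steps described in the abstract are not reproduced here.

```python
# Minimal sketch (illustrative only): finite-difference discretization of the
# 1D Poisson equation -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
# A quantum PDE solver would encode the resulting linear system A u = f;
# here we only build and solve it classically with NumPy as a reference.
import numpy as np

n = 7                      # interior grid points (assumed; 2^k - 1 suits qubit registers)
h = 1.0 / (n + 1)          # grid spacing
x = np.linspace(h, 1 - h, n)

# Tridiagonal FDM Laplacian: (2, -1, -1) stencil scaled by 1/h^2
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

f = np.sin(np.pi * x)      # assumed source term with exact solution sin(pi x) / pi^2
u = np.linalg.solve(A, f)  # classical reference solution

print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))  # discretization error
```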


Syed Abid Sahdman

Soliton Generation and Pulse Optimization using Nonlinear Transmission Lines

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Alessandro Salandrino, Chair
Shima Fardad
Morteza Hashemi


Abstract

Nonlinear Transmission Lines (NLTLs) have gained significant interest due to their ability to generate ultra-short, high-power RF pulses, which are valuable in applications such as ultrawideband radar, space vehicles, and battlefield communication disruption. The waveforms generated by NLTLs offer frequency diversity not typically observed in High-Power Microwave (HPM) sources based on electron beams. Nonlinearity in lumped element transmission lines is usually introduced using voltage-dependent capacitors due to their simplicity and widespread availability. The periodic structure of these lines introduces dispersion, which broadens pulses. In contrast, nonlinearity causes higher-amplitude regions to propagate faster. The interaction of these effects results in the formation of stable, self-localized waveforms known as solitons.
Soliton propagation in NLTLs can be described by the Korteweg-de Vries (KdV) equation. In this thesis, the Bäcklund Transformation (BT) method has been used to derive both single and two-soliton solutions of the KdV equation. This method links two different partial differential equations (PDEs) and their solutions to produce solutions for nonlinear PDEs. The two-soliton solution is obtained from the single soliton solution using a nonlinear superposition principle known as Bianchi’s Permutability Theorem (BPT). Although the KdV model is suitable for NLTLs where the capacitance-voltage relationship follows that of a reverse-biased p-n junction, it cannot generally represent arbitrary nonlinear capacitance characteristics.
To address this limitation, a Finite Difference Time Domain (FDTD) method has been developed to numerically solve the NLTL equation for soliton propagation. To demonstrate the pulse-sharpening and RF generation capability of a varactor-loaded NLTL, a 12-section lumped-element circuit has been designed and simulated in LTspice and verified against the calculated results. In airborne radar systems, operational constraints such as range, accuracy, data rate, environment, and target type require flexible waveform design, including variation in pulse widths and pulse repetition frequencies. A gradient descent optimization technique has been employed to generate pulses with varying amplitudes and frequencies by optimizing the NLTL parameters. This work provides a theoretical analysis and numerical simulation to study soliton propagation in NLTLs and demonstrates the generation of tunable RF pulses through optimized circuit design.
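For context on the soliton solutions discussed above, here is a minimal sketch in the standard KdV normalization u_t + 6 u u_x + u_xxx = 0 (the thesis may use a different scaling tied to the NLTL parameters); it evaluates the textbook single-soliton solution and checks that the pulse translates at its amplitude-dependent speed without changing shape.

```python
# Minimal sketch (standard KdV normalization u_t + 6 u u_x + u_xxx = 0, assumed):
# the single soliton u(x, t) = (c/2) * sech^2( sqrt(c)/2 * (x - c*t) ) propagates
# without changing shape at a speed c proportional to its amplitude.
import numpy as np

def soliton(x, t, c):
    """Single KdV soliton of speed c (peak amplitude c/2)."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

x = np.linspace(-20.0, 40.0, 2001)
c = 4.0                                   # assumed soliton speed
u0 = soliton(x, t=0.0, c=c)
u1 = soliton(x, t=5.0, c=c)

peak_shift = x[np.argmax(u1)] - x[np.argmax(u0)]
print(peak_shift)                         # ~ c * 5 = 20: pulse moved at speed c
print(np.max(u0), np.max(u1))             # equal peak amplitudes c/2: shape preserved
```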


Past Defense Notices


YI ZHU

Matrix and Tensor-based ESPRIT Algorithm for Joint Angle and Delay Estimation in 2D Active Massive MIMO Systems and Analysis of Direction of Arrival Estimation Algorithms for Basal Ice Sheet Tomography

When & Where:


246 Nichols Hall

Committee Members:

Lingjia Liu, Chair
Shannon Blunt
John Paden
Erik Perrins

Abstract

In this thesis, we apply and analyze three direction of arrival (DoA) algorithms to tackle two distinct problems: one belongs to wireless communication, the other to radar signal processing. Though the essence of both problems is DoA estimation, their formulations, underlying assumptions, and application scenarios are entirely different. Hence, we treat them separately, with the ESPRIT algorithm the focus of Part I and MUSIC and MLE detailed in Part II.

For the wireless communication scenario, mobile data traffic is expected to grow exponentially in the future. In “massive MIMO” systems, a base station relies on the uplink sounding signals from mobile stations to extract the spatial information needed to perform MIMO beamforming. Accordingly, multi-dimensional parameter estimation of a ray-based multipath wireless channel becomes crucial for such systems to realize the predicted capacity gains. We study joint angle and delay estimation for such systems, and the results suggest that the dimension of the antenna array at the base station plays an important role in determining the estimation performance. These insights will be useful for designing practical “massive MIMO” systems in future mobile wireless communications.

For the problem of radar sensing of ice sheet topography, one of the key requirements for deriving more realistic ice sheet models is to obtain a good set of basal measurements that enables accurate estimation of bed roughness and conditions. For this purpose, 3D tomography of the ice bed has been successfully implemented with the help of DoA estimation. The SAR-focused datasets provide a good case study. For the antenna array geometry and sample support used in our tomographic application, MUSIC initially performs better in a cross-over analysis, where the estimated topography from crossing flight lines is compared for consistency. However, after several improvements are applied to MLE, MLE outperforms MUSIC. We observe that spatial bottom smoothing, which aims to remove artifacts introduced by the MLE algorithm, is the most essential step in the post-processing procedure. The 3D tomography we obtained lays a good foundation for further analysis and modeling of ice sheets.
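As background for the MUSIC results discussed above, the following is a minimal sketch of a textbook MUSIC pseudospectrum for a uniform linear array with half-wavelength spacing; the array size, source angles, noise level, and snapshot count are illustrative assumptions and do not reflect the radar array geometry or SAR-focused data used in the thesis.

```python
# Minimal sketch (illustrative assumptions): textbook MUSIC direction-of-arrival
# estimation for a uniform linear array with half-wavelength element spacing.
import numpy as np

rng = np.random.default_rng(0)
m, snapshots = 8, 200                 # array elements, time snapshots (assumed)
true_deg = np.array([-20.0, 15.0])    # assumed source directions
k = len(true_deg)

def steering(theta_deg, m):
    # Half-wavelength spacing: phase step of pi*sin(theta) between elements.
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(theta)))

A = steering(true_deg, m)                                 # m x k steering matrix
S = rng.standard_normal((k, snapshots)) + 1j * rng.standard_normal((k, snapshots))
N = 0.1 * (rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + N                                             # received snapshots

R = X @ X.conj().T / snapshots                            # sample covariance
eigval, eigvec = np.linalg.eigh(R)                        # eigenvalues ascending
En = eigvec[:, : m - k]                                   # noise subspace

scan = np.arange(-90.0, 90.0, 0.1)
a = steering(scan, m)
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)

# Take the k strongest local maxima of the pseudospectrum as DoA estimates.
is_peak = (pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:])
cand = np.where(is_peak)[0] + 1
est = np.sort(scan[cand[np.argsort(pseudo[cand])[-k:]]])
print(est)                                                # close to [-20, 15]
```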


YUHAO YANG

Protecting Attributes and Contents in Online Social Networks

When & Where:


250 Nichols Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Luke Huan
Prasad Kulkarni
Alfred Ho

Abstract

With the fast development of computer and information technologies, online social networks have grown dramatically. As huge amounts of information are distributed rapidly through online social networking sites, privacy concerns arise.
In this dissertation, we first study the vulnerabilities of user attributes and contents, in particular the identifiability of users when the adversary learns a small piece of information about the target. We further employ an information theory based approach to quantitatively evaluate the threats of attribute-based re-identification. We show that large portions of users with an online presence are highly identifiable.
The notion of privacy as control and information boundary has been introduced by the user-oriented privacy research community and partly adopted in commercial social networking platforms. However, such functions are not widely accepted by users, mainly because it is tedious and labor-intensive to manually assign friends into such circles. To tackle this problem, we introduce a social circle discovery approach using multi-view clustering. We present our observations on the key features of social circles, including friendship links, content similarity, and social interactions. We treat each feature as one view and propose a one-side co-trained spectral clustering technique, which is tailored for the sparse nature of our data. We evaluate our approach on real-world online social network data and show that the proposed approach significantly outperforms structure-based clustering. Finally, we build a proof-of-concept demonstration of the automatic circle detection and recommendation approaches.
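For context, the following is a minimal sketch of generic single-view spectral clustering on a made-up friendship adjacency matrix; it is a baseline illustration only, not the one-side co-trained multi-view spectral clustering technique proposed in the dissertation.

```python
# Minimal sketch (generic single-view spectral clustering on an assumed toy graph;
# not the co-trained multi-view method described in the dissertation).
import numpy as np

# Toy friendship adjacency matrix: two loose "circles" of three users each,
# connected by one weak cross-link.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

d = A.sum(axis=1)
L = np.diag(d) - A                        # unnormalized graph Laplacian
eigval, eigvec = np.linalg.eigh(L)
fiedler = eigvec[:, 1]                    # second-smallest eigenvector splits the graph

labels = (fiedler > 0).astype(int)        # 2-way cut by sign
print(labels)                             # e.g. [0 0 0 1 1 1]: the two circles
```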


JAMUNA GOPAL

I Know Your Family: A Hybrid Information Retrieval Approach to Extract Family Information

When & Where:


250 Nichols Hall

Committee Members:

Bo Luo, Chair
Jerzy Grzymala-Busse
Prasad Kulkarni


Abstract

The aim of this project is to identify family-related information about a person from their Twitter data. We use their personal details, tweets, and their friends’ details to achieve this. Since we deal with modern short-text data, we use a hybrid information retrieval methodology that takes into account the parts of speech of the data, phrase similarity, and semantic similarity, along with the openly available Twitter data. A future use of this research is to develop a client-side protection tool that helps users check the data they are about to post for potential privacy breaches.


KAIGE YAN

Power and Performance Co-optimization for Emerging Mobile Platforms

When & Where:


250 Nichols Hall

Committee Members:

Xin Fu, Chair
Prasad Kulkarni
Heechul Yun


Abstract

Mobile devices have emerged as the most popular computing platform since 2011. Unlike traditional PCs, mobile devices are more power-constrained and performance-sensitive due to their size. In order to reduce power consumption and improve performance, we focus on the Last Level Cache (LLC), a power-hungry structure that is critical to performance in mobile platforms. In this project, we first integrate the McPAT power model into the Gem5 simulator. We also introduce emerging memory technologies, such as Spin-Transfer Torque RAM (STT-RAM) and embedded DRAM (eDRAM), into the cache design and compare their power and performance effectiveness with the conventional SRAM-based cache. Additionally, we identify that frequent execution switches between kernel and user code are the major reason for the high LLC miss rate in mobile applications, because blocks belonging to kernel and user space interfere severely with each other. We further propose static and dynamic way partition schemes to separate the cache blocks from kernel and user space. The experimental results show promising power reduction and performance improvement with our proposed techniques.
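To make the idea of way partitioning concrete, here is a minimal sketch of a set-associative cache whose ways are statically split between kernel and user blocks; the class name, geometry, partition sizes, and LRU replacement are illustrative assumptions, not the configuration evaluated in the project.

```python
# Minimal sketch (assumed geometry and LRU replacement): a set-associative cache
# whose ways are statically partitioned between kernel and user blocks, so that
# kernel and user accesses cannot evict each other's cache lines.
from collections import OrderedDict

class WayPartitionedCache:                      # hypothetical illustration class
    def __init__(self, num_sets=64, ways=8, kernel_ways=2, line_bytes=64):
        self.num_sets, self.line_bytes = num_sets, line_bytes
        # Per set, one LRU list per domain, each capped at its share of the ways.
        self.capacity = {"kernel": kernel_ways, "user": ways - kernel_ways}
        self.sets = [{"kernel": OrderedDict(), "user": OrderedDict()}
                     for _ in range(num_sets)]

    def access(self, addr, domain):
        """Return True on a hit, False on a miss (fill within the domain's ways)."""
        block = addr // self.line_bytes
        ways = self.sets[block % self.num_sets][domain]
        if block in ways:
            ways.move_to_end(block)             # LRU update
            return True
        if len(ways) >= self.capacity[domain]:
            ways.popitem(last=False)            # evict only within the same domain
        ways[block] = True
        return False

cache = WayPartitionedCache()
hits = sum(cache.access(a, "user") for a in [0, 64, 0, 4096, 0])
print(hits)   # user block 0 keeps hitting regardless of any kernel activity
```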


MICHAEL JANTZ

Exploring Dynamic Compilation and Cross-Layer Object Management Policies for Managed Language Applications

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Xin Fu
Andy Gill
Bo Luo
Karen Nordheden

Abstract

Recent years have witnessed the widespread adoption of managed programming languages that are designed to execute on virtual machines. Virtual machine architectures provide several powerful software engineering advantages over statically compiled binaries, such as portable program representations, additional safety guarantees, and automatic memory and thread management, which have largely driven their success. To support and facilitate the use of these features, virtual machines implement a number of services that adaptively manage and optimize application behavior during execution. Such runtime services often require tradeoffs between efficiency and effectiveness, and different policies can have major implications for the system's performance and energy requirements. 

In this work, we extensively explore policies for the two runtime services that are most important for achieving performance and energy efficiency: dynamic (or Just-In-Time (JIT)) compilation and memory management. First, we examine the properties of single-tier and multi-tier JIT compilation policies in order to find strategies that realize the best program performance for existing and future machines. We perform hundreds of experiments with different compiler aggressiveness and optimization levels to evaluate the performance impact of varying if and when methods are compiled. Next, we investigate the issue of how to optimize program regions to maximize performance in JIT compilation environments. For this study, we conduct a thorough analysis of the behavior of optimization phases in our dynamic compiler, and construct a custom experimental framework to determine the performance limits of phase selection during dynamic compilation. Lastly, we explore innovative memory management strategies to improve energy efficiency in the memory subsystem. We propose and develop a novel cross-layer approach to memory management that integrates information and analysis in the VM with fine-grained management of memory resources in the operating system. Using custom as well as standard benchmark workloads, we perform a detailed evaluation that demonstrates the energy-saving potential of our approach.
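As a simple illustration of the kind of policy knob explored in the JIT compilation study (if and when a method is compiled), here is a minimal sketch of a counter-based tiered compilation policy; the class name, tier names, and thresholds are illustrative assumptions, not the policies evaluated in the dissertation.

```python
# Minimal sketch (assumed thresholds): a counter-based tiered JIT policy that
# promotes a method from interpretation to baseline and then optimizing
# compilation once its invocation count crosses each tier's threshold.
from collections import defaultdict

TIERS = [(0, "interpret"), (100, "baseline-JIT"), (10000, "optimizing-JIT")]  # assumed

class TieredJitPolicy:                    # hypothetical illustration class
    def __init__(self):
        self.counts = defaultdict(int)

    def on_invoke(self, method):
        """Record one invocation and return the execution tier to use."""
        self.counts[method] += 1
        tier = TIERS[0][1]
        for threshold, name in TIERS:
            if self.counts[method] >= threshold:
                tier = name
        return tier

policy = TieredJitPolicy()
levels = [policy.on_invoke("Foo.bar") for _ in range(10001)]
print(levels[0], levels[150], levels[-1])   # interpret  baseline-JIT  optimizing-JIT
```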


JINGWEIJIA TAN

Modeling and Improving the GPGPU Reliability in the Presence of Soft Errors

When & Where:


250 Nichols Hall

Committee Members:

Xin Fu, Chair
Prasad Kulkarni
Heechul Yun


Abstract

GPGPUs (general-purpose computing on graphics processing units) have emerged as a highly attractive platform for HPC (high performance computing) applications due to their strong computing power. Unlike graphics processing applications, HPC applications have rigorous requirements on execution correctness, which is generally ignored in traditional GPU design. Soft errors, which are failures caused by high-energy neutron or alpha particle strikes in integrated circuits, have become a major reliability concern due to the shrinking of feature sizes and growing integration density. In this project, we first build a framework, GPGPU-SODA, to model the soft-error vulnerability of the GPGPU microarchitecture using a publicly available simulator. Based on this framework, we identify that the streaming processors are the reliability hot spot in GPGPUs. We further observe that the streaming processors are not fully utilized during branch divergence and pipeline stalls caused by long-latency operations. We then propose a technique, RISE, to recycle the streaming processors' idle time for soft-error detection in GPGPUs. Experimental results show that RISE obtains good fault coverage with negligible performance degradation.


KARTHIK PODUVAL

HGS Schedulers for Digital Audio Workstation-like Applications

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Victor Frost
Jim Miller


Abstract

Digital Audio Workstation (DAW) applications are real-time applications with special timing constraints. HGS is a real-time scheduling framework that allows developers to implement custom schedulers based on any scheduling algorithm, through a process of direct interaction between client threads and their schedulers. Such scheduling can extend well beyond the common priority model that currently exists and can represent arbitrary application semantics that are well understood and acted upon by the associated scheduler; we term this "need-based scheduling". In this thesis, we first study some DAW implementations and then create several HGS schedulers aimed at helping DAW applications meet their needs.


NEIZA TORRICO PANDO

High Precision Ultrasound Range Measurement System

When & Where:


2001B Eaton Hall

Committee Members:

Chris Allen, Chair
Swapan Chakrabarti
Ron Hui


Abstract

Real-time, precise range measurement between objects is useful for a variety of applications. The slow propagation of acoustic signals in air (330 m/s) makes ultrasound frequencies an ideal choice for accurately measuring the time of flight, which can then be used to calculate the range between two objects. The objective of this project is to achieve a precise range measurement with less than 10 cm of uncertainty and an update rate of 30 ms for distances up to 10 m between unmanned aerial vehicles (UAVs) flying in formation. Both the transmitter and the receiver are synchronized with a 1 pulse-per-second signal from a GPS. The time of flight is calculated using the cross-correlation of the transmitted and received waves. To allow for multiple users, a 40 kHz signal is phase-modulated with Gold or Kasami codes.
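The following is a minimal sketch of the cross-correlation time-of-flight estimate described above, assuming a BPSK-style phase-modulated 40 kHz burst; the code sequence, sample rate, noise level, and range are illustrative assumptions (a real Gold or Kasami sequence would be used in the actual system).

```python
# Minimal sketch (assumed parameters): estimate time of flight by cross-correlating
# a phase-modulated 40 kHz burst with a delayed, noisy copy of itself.
import numpy as np

fs = 1_000_000                       # sample rate (assumed)
fc = 40_000                          # ultrasonic carrier
chip_len = fs // fc * 4              # samples per code chip (assumed: 4 carrier cycles)
rng = np.random.default_rng(1)
code = rng.integers(0, 2, 31) * 2 - 1    # stand-in +/-1 sequence (Gold/Kasami in practice)

t = np.arange(len(code) * chip_len) / fs
tx = np.sin(2 * np.pi * fc * t) * np.repeat(code, chip_len)   # BPSK-modulated burst

c_sound = 330.0                      # speed of sound in air (m/s), as in the abstract
true_range_m = 3.30                  # assumed distance
delay_samples = int(round(true_range_m / c_sound * fs))

rx = np.zeros(len(tx) + delay_samples + 1000)
rx[delay_samples: delay_samples + len(tx)] += tx
rx += 0.2 * rng.standard_normal(len(rx))                      # additive noise

corr = np.correlate(rx, tx, mode="valid")                     # cross-correlation
tof = np.argmax(corr) / fs                                    # time of flight (s)
print(tof * c_sound)                 # estimated range, close to 3.30 m
```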


CAMERON LEWIS

3D Imaging of Ice Sheets

When & Where:


317 Nichols Hall

Committee Members:

Prasad Gogineni, Chair
Chris Allen
Carl Leuschen
Fernando Rodriguez-Morales
Rick Hale

Abstract

Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves affects both the mass balance of the ice sheet and the global climate system. This melting and refreezing influences the development of Antarctic Bottom Water, which helps drive the oceanic thermohaline circulation, a critical component of the global climate system. Basal melt rates can be estimated through traditional glaciological techniques relying on conservation of mass; however, this requires accurate knowledge of the ice movement, surface accumulation and ablation, and firn compression. Boreholes can provide direct measurements of melt rates, but they only provide point estimates and are difficult and expensive to perform. Satellite altimetry measurements have been heavily relied upon for the past few decades. Thickness and melt rate estimates require the same a priori conservation-of-mass knowledge, with the additional assumption that the ice shelf is in hydrostatic equilibrium. Even with newly available, ground-truthed density and geoid estimates, ice shelf thickness and melt rate estimates derived from satellite data suffer from relatively coarse spatial resolution and interpolation-induced error. Nondestructive radio echo sounding (RES) measurements from long-range airborne platforms provide the best solution for fine spatial and temporal resolution over long survey traverses and only require a priori knowledge of firn density and surface accumulation. Previously, basal melt rate experiments derived from RES data have been limited to ground-based surveys with poor coverage and spatial resolution. To improve upon this, an airborne multi-channel wideband radar has been developed for the purpose of imaging shallow ice and ice shelves. A moving platform and a cross-track antenna array will allow for fine-resolution 3D imaging of basal topography. An initial experiment will use a ground-based system to image shallow ice and generate 3D imagery as a proof of concept. This will then be applied to ice shelf data collected by an airborne system.


TRUC ANH NGUYEN

Transfer Control for Resilient End-to-End Transport

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Gary Minden


Abstract

Residing between the network layer and the application layer, the transport layer exchanges application data using the services provided by the network. Given the unreliable nature of the underlying network, reliable data transfer has become one of the key requirements for transport-layer protocols such as TCP. Studying the various mechanisms developed for TCP to increase the correctness of data transmission while fully utilizing the network's bandwidth provides a strong background for the study and development of our own resilient end-to-end transport protocol. Given this motivation, in this thesis we study the different TCP error control and congestion control techniques by simulating them under different network scenarios using ns-3. For error control, we narrow our research to acknowledgement methods such as cumulative ACK (the traditional TCP way of ACKing), SACK, NAK, and SNACK. The congestion control analysis covers several TCP variants, including Tahoe, Reno, NewReno, Vegas, Westwood, Westwood+, and TCP SACK.
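To make the acknowledgement methods being compared concrete, here is a minimal sketch contrasting what a cumulative ACK and a SACK-style report convey for the same pattern of received segments; the segment numbering is an illustrative assumption, unrelated to the ns-3 scenarios simulated in the thesis.

```python
# Minimal sketch (assumed segment numbers): what cumulative ACK vs. SACK report
# when segment 3 of a 1..6 window is lost.
def cumulative_ack(received):
    """Highest in-order segment number: everything up to the first gap."""
    ack = 0
    while ack + 1 in received:
        ack += 1
    return ack

def sack_blocks(received):
    """Contiguous blocks of received segments, gaps made explicit."""
    blocks, run = [], []
    for seg in sorted(received):
        if run and seg == run[-1] + 1:
            run.append(seg)
        else:
            if run:
                blocks.append((run[0], run[-1]))
            run = [seg]
    if run:
        blocks.append((run[0], run[-1]))
    return blocks

received = {1, 2, 4, 5, 6}                  # segment 3 was lost
print(cumulative_ack(received))             # 2  -> sender only learns "everything up to 2"
print(sack_blocks(received))                # [(1, 2), (4, 6)] -> the gap at 3 is explicit
```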