Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing completion of their M.S./Ph.D. research should schedule their final defense through the EECS graduate office at least THREE WEEKS PRIOR to the presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

No upcoming defense notices for now!

Past Defense Notices


SAHANA RAGHUNANDAN

Analysis of Angle of Arrival Estimation Algorithms for Basal Ice Sheet Tomography

When & Where:


317 Nichols Hall

Committee Members:

John Paden, Chair
Shannon Blunt
Carl Leuschen


Abstract

One of the key requirements for deriving more realistic ice sheet models is to obtain a good set of basal measurements that enable accurate estimation of bed roughness and conditions. For this purpose, 3D tomography of the ice bed has been successfully implemented with the help of angle of arrival (AoA) estimation algorithms such as multiple signal classification (MUSIC) and maximum likelihood estimation (MLE) techniques. These methods have enabled fine resolution in the cross-track dimension using synthetic aperture radar (SAR) images obtained from single-pass multichannel data. This project analyzes and compares the results obtained from the spectral MUSIC algorithm, an alternating projection (AP) based MLE technique, and a relatively recent approach called the reiterative superresolution (RISR) algorithm. While the MUSIC algorithm is more attractive computationally than MLE, the performance of the latter is known to be superior in a low signal-to-noise ratio (SNR) regime. The RISR algorithm takes a completely different approach, using a recursive implementation of the minimum mean square error (MMSE) estimation technique instead of the sample covariance matrix (SCM) that is central to subspace-based algorithms. This renders the algorithm more robust in scenarios with very low sample support. The SAR-focused datasets provide a good case study for exploring the performance of the three techniques as applied to ice-sheet bed elevation estimation.
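As context for the first of these techniques, a minimal spectral MUSIC sketch for a uniform linear array is given below. This is a generic textbook illustration rather than code from the thesis; the array geometry, half-wavelength element spacing, and snapshot model are all assumptions.

import numpy as np

def music_spectrum(snapshots, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Spectral MUSIC pseudospectrum for a uniform linear array.

    snapshots : (n_antennas, n_snapshots) complex array data
    n_sources : assumed number of arrivals
    d         : element spacing in wavelengths (assumed half-wavelength)
    """
    n_antennas = snapshots.shape[0]
    # Sample covariance matrix (SCM), central to subspace-based methods
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # Hermitian eigendecomposition; eigenvalues are returned in ascending order
    eigvals, eigvecs = np.linalg.eigh(R)
    # Noise subspace: eigenvectors paired with the smallest eigenvalues
    En = eigvecs[:, : n_antennas - n_sources]
    spectrum = np.empty(angles.size)
    for i, theta in enumerate(np.deg2rad(angles)):
        # Steering vector for a candidate arrival angle
        a = np.exp(2j * np.pi * d * np.arange(n_antennas) * np.sin(theta))
        # Peaks occur where a(theta) is nearly orthogonal to the noise subspace
        spectrum[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, spectrum

The RISR algorithm mentioned above deliberately avoids the SCM and eigendecomposition steps shown here, which is what makes it attractive at low sample support.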


EHSAN HOSSEINI

Synchronization Techniques for Burst-Mode Continuous Phase Modulation

When & Where:


250 Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Lingjia Liu
Dave Petr
Tyrone Duncan

Abstract

Synchronization is a critical operation in digital communication systems: it establishes and maintains an operational link between the transmitter and the receiver. As digital modulation and coding schemes continue to advance, the synchronization task becomes more and more challenging, since new standards require high-throughput functionality at low signal-to-noise ratios (SNRs). In this work, we address feedforward synchronization of continuous phase modulations (CPMs) using data-aided (DA) methods, which are best suited for burst-mode communications. In our transmission model, a known training sequence is appended to the beginning of each burst, which is then affected by additive white Gaussian noise (AWGN) and unknown frequency, phase, and timing offsets.

Based on our transmission model, we derive the optimum training sequence for DA synchronization of CPM signals using the Cramér-Rao bound (CRB), which is a lower bound on the estimation error variance. It is shown that the proposed sequence minimizes the CRB for all three synchronization parameters and can be applied to the entire CPM family. We take advantage of the structure of the optimized training sequence in order to derive a maximum likelihood joint timing and carrier recovery algorithm. Moreover, a frame synchronization algorithm is proposed, and hence a complete synchronization scheme is presented in this work.
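For context, the bound itself has the standard form below; the thesis derives the CPM-specific Fisher information entries, which are not reproduced here. For any unbiased estimator of the parameter vector $\boldsymbol{\theta}$ (frequency, phase, and timing offsets),

\operatorname{var}(\hat{\theta}_i) \;\ge\; \left[\mathbf{F}^{-1}(\boldsymbol{\theta})\right]_{ii},
\qquad
F_{ij} = \mathbb{E}\!\left[\frac{\partial \ln \Lambda(\mathbf{r};\boldsymbol{\theta})}{\partial \theta_i}\,
\frac{\partial \ln \Lambda(\mathbf{r};\boldsymbol{\theta})}{\partial \theta_j}\right],

where $\Lambda(\mathbf{r};\boldsymbol{\theta})$ is the likelihood of the received burst $\mathbf{r}$. Designing the training sequence to minimize the diagonal of $\mathbf{F}^{-1}$ for all three parameters simultaneously is what the optimization described above achieves.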

The proposed training sequence and synchronization algorithm are extended to shaped-offset quadrature phase-shift keying (SOQPSK) modulation, which is being considered for next-generation aeronautical telemetry systems. Here, it is shown that the optimized training sequence outperforms the one defined in the draft telemetry standard as far as estimation error variances are concerned. The overall bit error rate results suggest that a shorter optimized training sequence can be utilized such that the SNR loss is less than 0.5 dB relative to an ideal synchronization scenario.


MARTIN KUEHNHAUSEN

A Framework for Knowledge Derivation Incorporating Trust and Quality of Data

When & Where:


246 Nichols Hall

Committee Members:

Victor Frost, Chair
Luke Huan
Bo Luo
Gary Minden
Tyrone Duncan

Abstract

Today, across all major industries, gaining insight from data is seen as an essential part of business. However, while gathering data is becoming inexpensive and relatively easy, analyzing it and ultimately deriving knowledge from it is increasingly difficult. In many cases there is so much data that important insights are hard to find. The problem is often not a lack of data but whether knowledge derived from it is trustworthy; this means distinguishing “good” from “bad” insights based on factors such as context and reputation. Still, modeling trust and quality of data is complex because of the various conditions and relationships in heterogeneous environments.

The new TrustKnowOne framework and architecture developed in this dissertation addresses these issues by describing an approach to fully incorporate trust and quality of data with all its aspects into the knowledge derivation process. This is based on Berlin, an abstract graph model we developed that can be used to model various approaches to trustworthiness and relationship assessment as well as decision making processes. In particular, processing, assessment, and evaluation approaches are implemented as graph expressions that are evaluated on graph components modeling the data. 
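As a loose illustration of evaluating an assessment expression over graph components, the sketch below models nodes that carry trust and quality scores and combines them along a provenance path. The attribute names and the multiplicative combination rule are invented for illustration; this is not the Berlin model itself.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    trust: float    # e.g., source reputation in [0, 1] (hypothetical attribute)
    quality: float  # e.g., completeness of the data in [0, 1] (hypothetical)

def path_score(path):
    """Combine per-node trust and quality multiplicatively along a path."""
    score = 1.0
    for node in path:
        score *= node.trust * node.quality
    return score

# A two-node provenance chain: a sensor feeding an aggregator
chain = [Node("sensor_a", trust=0.9, quality=0.8),
         Node("aggregator", trust=0.95, quality=1.0)]
print(path_score(chain))  # 0.684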

We have implemented and applied our framework to three complex scenarios using real data from public data repositories. As part of their evaluation, we highlight how our approach exhibits both the formalization and the flexibility necessary to model each of these realistic scenarios. The implementation and evaluation of these scenarios confirm the advantages of the TrustKnowOne framework over current approaches.


YUANLIANG MENG

Building an Intelligent Knowledgebase of Brachiopod Paleontology

When & Where:


246 Nichols Hall

Committee Members:

Luke Huan, Chair
Brian Potetz
Bo Luo


Abstract

Science advances not only because of new discoveries, but also due to revolutionary ideas drawn from accumulated data. The quality of studies in paleontology, in particular, depends on the accessibility of fossil data. This research builds an intelligent system based on brachiopod fossil images and their descriptions published in the Treatise on Invertebrate Paleontology. The project is still in development, and some significant achievements are discussed here.
This thesis has two major parts. The first part describes the digitization, organization, and integration of information extracted from the Treatise. The Treatise is in PDF format, and it is non-trivial to convert large volumes into a structured, easily accessible digital library. Three important topics will be discussed: (1) how to extract data entries from the text and save them in a structured manner; (2) how to crop individual specimen images from figures automatically and associate each image with text entries; (3) how to build a search engine that performs both keyword search and natural language search. The search engine already has a web interface, and many useful tasks can be performed with ease.
Verbal descriptions are second-hand accounts of fossil images and thus have limitations. The second part of the thesis develops an algorithm to compare fossil images directly, without referring to textual information. After similarities between fossil images are calculated, the results can be used in image search, fossil classification, and so on. The algorithm is based on deformable templates and utilizes expectation propagation to find the optimal deformation. Specifically, I superimpose a “warp” on each image. Each node of the warp encapsulates a vector of local texture features, and comparing two images involves two steps: (1) deform the warp to the optimal configuration, so that the energy function is minimized; and (2) based on the optimal configuration, compute the distance between the two images. Experimental results confirm that the method is reasonable and robust.
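A rough sketch of the two-term energy such a method minimizes is given below. The feature representation, neighborhood structure, and weighting are placeholders, and evaluating one candidate configuration by brute force stands in for the expectation-propagation inference used in the thesis.

import numpy as np

def warp_energy(features_a, features_b, offsets, alpha=1.0):
    """Energy of one candidate warp configuration (illustrative only).

    features_a, features_b : (rows, cols, d) local texture features per warp node
    offsets                : (rows, cols, 2) integer displacement of each node
    alpha                  : weight of the smoothness (deformation) penalty
    """
    rows, cols, _ = offsets.shape
    data_term = 0.0
    for r in range(rows):
        for c in range(cols):
            dr, dc = offsets[r, c]
            rb = int(np.clip(r + dr, 0, rows - 1))
            cb = int(np.clip(c + dc, 0, cols - 1))
            # Feature mismatch between node (r, c) and its displaced match
            data_term += np.sum((features_a[r, c] - features_b[rb, cb]) ** 2)
    # Smoothness term: neighboring nodes should deform similarly
    smooth = np.sum((offsets[1:, :] - offsets[:-1, :]) ** 2) + \
             np.sum((offsets[:, 1:] - offsets[:, :-1]) ** 2)
    return data_term + alpha * smooth

Once the minimizing configuration is found, the data term evaluated at that configuration serves as the distance between the two images.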


WILLIAM DINKEL

Instrumentation and Evaluation of Distributed Computations

When & Where:


246 Nichols Hall

Committee Members:

Victor Frost, Chair
Arvin Agah
Prasad Kulkarni


Abstract

Distributed computations are a very important aspect of modern computing, especially given the rise of distributed systems used for applications such as web search, massively multiplayer online games, financial trading, and cloud computing. When running these computations across several physical machines, it becomes much more difficult to determine exactly what is occurring on each system at a specific point in time, because each server has an independent clock, making event timestamps inherently inaccurate across machine boundaries. Another difficulty in evaluating distributed experiments is the coordination required to launch daemons, executables, and logging across all machines, followed by the necessary gathering of all related output data. The goal of this research is to overcome these obstacles and construct a single, global timeline of events from all servers.
We employ high-resolution clock synchronization, via a modified version of the Network Time Protocol implementation, to bring all servers to within microseconds of each other. Kernel and user-level events with wall-clock timestamps are then logged during basic network socket experiments. These data are collected from each server, merged into a single dataset, sorted by timestamp, and plotted on a timeline. The entire experiment, from setup to teardown to data collection, is coordinated from a single server. The timeline visualizations provide a narrative not only of how packets flow between servers, but also of how kernel interrupt handlers and other events shape an experiment's execution.
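A minimal sketch of the final merge step, assuming per-server logs that already carry synchronized microsecond timestamps (the record layout here is hypothetical, not the format used in this work):

import heapq

def merge_timelines(per_server_events):
    """Merge per-server event logs into one global timeline.

    per_server_events : dict mapping server name to a time-sorted list of
                        (timestamp_us, description) tuples; timestamps are
                        assumed comparable thanks to clock synchronization.
    """
    streams = [[(t, srv, desc) for t, desc in events]
               for srv, events in per_server_events.items()]
    # heapq.merge interleaves the already-sorted streams by timestamp
    return list(heapq.merge(*streams))

logs = {
    "web01": [(1000, "send SYN"), (1450, "recv SYN-ACK")],
    "db01":  [(1210, "recv SYN"), (1220, "send SYN-ACK")],
}
for t, srv, desc in merge_timelines(logs):
    print(f"{t:>6} us  {srv:<6} {desc}")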


DANIEL HEIN

Detecting Attack Prone Software Using Architecture and Repository Mined Change Metrics

When & Where:


2001B Eaton Hall

Committee Members:

Hossein Saiedian, Chair
Arvin Agah
Perry Alexander
Prasad Kulkarni
Reza Barati

Abstract

Billions of dollars are lost every year to successful cyber attacks that are fundamentally enabled by software vulnerabilities. Modern cyber attacks increasingly threaten individuals, organizations, and governments, causing service disruption, inconvenience, and costly incident response. Given that such attacks are primarily enabled by software vulnerabilities, this work examines whether existing change metrics, along with architectural modularity and maintainability metrics, can be used to predict modules and files that should be analyzed or tested further to excise vulnerabilities prior to release.
The problem addressed by this research is the residual vulnerability problem: vulnerabilities that evade detection and persist in released software. Many modern software projects are over a million lines of code, composed of reused components of varying maturity. The sheer size of modern software, along with the reuse of existing open source modules, complicates the questions of where to look, and in what order to look, for residual vulnerabilities. Prediction models based on various code and change metrics (e.g., churn) have shown promise as indicators of vulnerabilities at the file level.
This work investigates whether change metrics, along with architectural metrics quantifying modularity and maintainability, can be used to identify attack-prone modules. In addition to identifying or predicting attack-prone files, this work also examines prioritizing and ranking said predictions.
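As a hedged illustration of file-level prediction from change metrics, the sketch below trains a classifier on invented churn-style features and ranks files by predicted risk. The features, labels, and model choice are placeholders, not the metrics or models evaluated in this work.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-file metrics: churn (lines added + deleted), commits, authors
X = np.array([[1200, 40, 6], [90, 3, 1], [560, 22, 4], [30, 2, 1]])
y = np.array([1, 0, 1, 0])  # 1 = a vulnerability was later found in the file

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]   # predicted vulnerability-proneness
ranking = np.argsort(scores)[::-1]      # inspect the highest-risk files first
print(ranking)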


BEN PANZER

Estimating Geophysical Properties of Snow and Sea Ice from Data Collected by an Airborne, Ultra-Wideband Radar

When & Where:


317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Chris Allen
Prasad Gogineni
Fernando Rodriguez-Morales
Richard Hale

Abstract

Large-scale spatial observations of global sea ice thickness and distribution rely on multiple satellite-based altimeters. Laser altimeters, such as the GLAS instrument aboard ICESat-1 and the ATLAS instrument aboard ICESat-2, measure freeboard, which is the snow and ice thickness above mean sea level. Deriving sea-ice thickness from these data requires estimating the snow depth on the sea ice. Current means of estimating snow depth are climatological history, daily precipitation products, and/or data from passive microwave sensors such as AMSR-E. Radar altimeters, such as SIRAL aboard CryoSat-2, do not have sufficient vertical range resolution to resolve both the air-snow and snow-ice interfaces over sea ice. Additionally, there is significant ambiguity in the location of the peak return due to penetration into the snow cover. Regardless of the sensor, any error in snow-depth estimation amplifies sea-ice thickness errors because of the assumption of hydrostatic equilibrium used in deriving sea-ice thickness. There is clearly a need for an airborne sensor that provides spatially large-scale measurements of the snow cover in both polar regions to improve the accuracy of sea-ice thickness estimates and to provide validation for the satellite-based sensors.
The Snow Radar was developed at the Center for Remote Sensing of Ice Sheets and has been deployed as part of NASA Operation IceBridge since 2009 to directly measure snow thickness over sea ice. The radar is an ultra-wideband, frequency-modulated continuous-wave (FMCW) radar now operating over the frequency range of 2 GHz to 8 GHz, resulting in a vertical range resolution of approximately 4 cm after post-processing. The radar has been shown to be capable of detecting snow depths over sea ice from 10 cm to more than 2 m, and results from the radar compare well with multiple in-situ measurements and passive-microwave measurements.
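The quoted resolution is consistent with the standard FMCW range-resolution relation. Assuming a window-broadening factor $k_w \approx 2$ and a dry-snow dielectric constant $\varepsilon_r \approx 1.5$ (both illustrative values, not figures from the thesis),

\Delta R \approx \frac{k_w\, c}{2 B \sqrt{\varepsilon_r}}
= \frac{2 \times 3\times 10^{8}~\mathrm{m/s}}{2 \times 6~\mathrm{GHz} \times \sqrt{1.5}}
\approx 4~\mathrm{cm},

matching the post-processing figure above.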
The focus of the proposed research is the estimation of useful geophysical properties of snow-covered sea ice beyond snow depth, and the subsequent refinement and validation of the snow-depth extraction. Geophysical properties of interest are: snow density and wetness, air-snow and snow-ice surface roughness, and sea-ice temperature and salinity. Through forward modeling of the radar backscatter response and the corresponding inversion, large-scale estimation of these properties may be possible.


GOUTHAM SELVAKUMAR

Constructing an Environment and Providing a Performance Assessment of Android's Dalvik Virtual Machine on x86 and ARM

When & Where:


250 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Victor Frost
Xin Fu


Abstract

Android is one of the most popular operating systems (OSs) for mobile touchscreen devices, including smartphones and tablet computers. Dalvik is a process virtual machine (VM) that provides an abstraction layer over the Android OS and runs the Java-based Android applications. The first goal of this project is to construct a development environment for conveniently investigating the properties of Android's Dalvik VM on contemporary x86 and ARM architectures. The normal development environment restricts the Dalvik VM to running on top of Android, and requires an updated Android image to be built and installed on the target device after any change to the Dalvik code. This update-build-install process unnecessarily slows down any Dalvik VM exploration. We have now discovered an (undisclosed) configuration that enables us to study the Dalvik VM as a stand-alone application on top of the Linux OS.
The second goal of this project is to understand the translation/compilation subsystem in the Dalvik VM, experiment with various modifications to determine the best translation parameters, and compare the Dalvik VM's just-in-time (JIT) compilation characteristics (such as the quality of generated code and compilation time) on x86 and ARM systems with those of a state-of-the-art Java VM. As expected, we find that JIT compilation significantly improves application performance over basic interpretation. Comparing Dalvik's generated code quality with the Java HotSpot VM, we observe that Dalvik's ARM target is much more mature than Dalvik-x86. However, Dalvik's simple trace-based compilation generates code whose quality is much worse than HotSpot's. Finally, our experiments also reveal the most effective JIT compilation parameters for the Dalvik VM, and their effect on benchmark performance and memory usage.


ADAM CRIFASI

Framework of Real-Time Optical Nyquist-WDM Receiver using Matlab and Simulink

When & Where:


2001B Eaton Hall

Committee Members:

Ron Hui, Chair
Shannon Blunt
Erik Perrins


Abstract

I investigate an optical Nyquist-WDM bit error rate (BER) detection system. A transmitter and receiver system is simulated using Matlab and Simulink to form a working algorithm and to study the effects of the different stages of the data chain. The inherent lack of phase information in the N-WDM scheme presents unique challenges and requires a precise phase-recovery system to accurately decode a message. Furthermore, resource constraints are imposed by a cost-effective field-programmable gate array (FPGA). To compensate for the speed, gate, and memory constraints of a budget FPGA, several techniques are employed to design the best possible receiver. I study the resource-intensive operations and vary their resource utilization to discover the effect on the BER. To conclude, a full VHDL design is delineated, including peripheral initialization, input data sorting and storage, timing synchronization, state machine and control signal implementation, N-WDM demodulation, phase recovery, QAM decoding, and BER calculation.
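One generic candidate for the phase-recovery stage of such a receiver is feedforward fourth-power (Viterbi-Viterbi) estimation, sketched below in Python for a QPSK-like constellation. The thesis implements its receiver in VHDL and may use a different estimator; the block size and constellation here are assumptions.

import numpy as np

def fourth_power_phase(symbols, block=64):
    """Blockwise Viterbi-Viterbi carrier-phase estimate for QPSK-like signals."""
    phases = np.empty(symbols.size)
    for start in range(0, symbols.size, block):
        blk = symbols[start:start + block]
        # Raising to the 4th power strips the QPSK modulation, leaving 4x the
        # carrier phase; averaging over the block suppresses noise. Note the
        # inherent pi/2 ambiguity, usually resolved by differential coding.
        phases[start:start + blk.size] = np.angle(np.sum(blk ** 4)) / 4.0
    return phases

# Usage: derotate the received symbols before QAM decoding
# corrected = received * np.exp(-1j * fourth_power_phase(received))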


TIANCHEN LI

Radar Cross-Section Enhancement of a 40 Percent Yak54 Unmanned Aerial Vehicle

When & Where:


2001B Eaton Hall

Committee Members:

Chris Allen, Chair
Ken Demarest
Ron Hui


Abstract

With the increasing civilian use of unmanned aerial vehicles (UAVs), the flight safety of these unmanned devices in populated areas has become one of the most pressing concerns among operators and users. To reduce the rate of collisions, anti-collision systems based on airborne radar and enhanced autopilot programs have been developed. However, because most civilian UAVs are made of non-metallic materials with a considerably low radar cross-section (RCS), these UAVs are very hard or even impossible for radars to detect. This project aims to design a lightweight, UAV-mounted RCS enhancement device that can increase the visibility of the UAV to airborne radars operating in the frequency band near 1.445 GHz. In this project, a 40% YAK54 radio-controlled UAV is used as the subject UAV. The report concentrates on the design of a passive Van Atta array reflector.
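For context, the retrodirective principle of a Van Atta array can be stated in one line; this is the standard textbook statement, not a design detail from the report. In an $N$-element linear array with spacing $d$, element $n$ receives a plane wave arriving from angle $\theta$ with phase $\phi_n^{\mathrm{rx}} = k n d \sin\theta$. Equal-length lines route each received signal to the mirrored element for retransmission, so

\phi_n^{\mathrm{tx}} = \phi_{N-1-n}^{\mathrm{rx}} = k\,(N-1-n)\,d \sin\theta,

a phase progression reversed in $n$, which steers the re-radiated beam back toward the direction of incidence for any $\theta$ within the element pattern, enhancing the RCS over a wide angular range without active components.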