Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and to post the presentation announcement online.

Upcoming Defense Notices

No upcoming defense notices for now!

Past Defense Notices


ADITYA BALASUBRAMANIAN

Study and Performance Analysis of OFDM using GNURadio and USRP

When & Where:


250 Nichols Hall

Committee Members:

Lingjia Liu, Chair
Joe Evans
James Sterbenz


Abstract

Software-defined radios (SDRs) are a rapidly evolving technology that is widely used in industry and academia today. They offer a low-cost and flexible alternative for implementing and testing wireless technologies, since most of the physical-layer functionality is implemented in software instead of hardware. The Universal Software Radio Peripheral (USRP) is one of the most popular products in the SDR family. GNURadio, a software development kit comprising C++ and Python libraries, is widely used with the USRP as a hardware platform to create SDR applications.
In this project a testbed is implemented for performance analysis of an OFDM communication system using GNURadio and USRP. The performance is analyzed and studied in a practical laboratory environment using GNURadio and USRP. The packet error rate versus SNR is measured in different environmental settings, and the effects of interference and obstruction are also taken into account in studying the performance.
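As a rough illustration of the kind of measurement such a testbed produces, the sketch below tabulates packet error rate (PER) against SNR from per-run packet counts. The SNR points and counts are hypothetical placeholders, not the project's data; in the actual setup they would come from the GNURadio OFDM flowgraphs driving the USRP hardware.

```python
# Minimal PER-vs-SNR tabulation sketch (illustrative numbers only).
# (snr_dB, packets_sent, packets_received_correctly) per test run
runs = [
    (5.0, 1000, 612),
    (10.0, 1000, 941),
    (15.0, 1000, 998),
]

for snr_db, sent, ok in runs:
    per = (sent - ok) / sent          # packet error rate for this run
    print(f"SNR = {snr_db:5.1f} dB  ->  PER = {per:.3f}")
```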


LOGAN SMITH

Validation of CReSIS Synthetic Aperture Radar Processor and Optimal Processing Parameters

When & Where:


317 Nichols Hall

Committee Members:

John Paden, Chair
Chris Allen
Carl Leuschen


Abstract

Sounding the ice sheets of Greenland and Antarctica is a vital component in determining the effect of global warming on sea level rise. Of particular importance to measure are the outlet glaciers that transport ice from the interior to the edge of the ice sheet. These outlet glaciers are difficult to sound due to crevassing caused by the relatively fast movement of the ice in the glacial channel and higher signal attenuation caused by warmer ice. The Center for Remote Sensing of Ice Sheets (CReSIS) uses multi-channel airborne radars with methods for achieving better resolution and signal-to-noise ratio (SNR) in the three major dimensions to sound outlet glaciers. Synthetic aperture radar (SAR) techniques are used in the along-track dimension, pulse compression in the range dimension, and an antenna array in the cross-track dimension. 

CReSIS has developed a SAR processor to effectively and efficiently process the data collected by these radars in each dimension. To validate the performance of this processor, a SAR simulator was developed with the functionality to test multiple aspects of the SAR processor. In addition to the implementation of this simulator for validating the processing of the data in the along-track, cross-track, and range dimensions, there are a number of data-dependent processing steps that can affect the quality of the final data product. These include creating matched filters for each dimension of the data, removing phase and amplitude differences between receive channels, and determining the optimal along-track beamwidth to use for processing the data. All of these factors can improve the ability to obtain the maximum amount of information from the collected data. The validation, the optimal processing parameters, and their theory are discussed here. 
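As a self-contained illustration of the pulse-compression (matched-filter) step mentioned above, the sketch below compresses a single noisy echo of a linear-FM chirp and recovers its delay. The waveform parameters are assumed for illustration only and are not those of the CReSIS radars or processor.

```python
import numpy as np

# Simplified single-channel pulse-compression sketch (assumed parameters).
fs = 50e6            # sample rate (Hz)
T = 10e-6            # pulse duration (s)
B = 30e6             # chirp bandwidth (Hz)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)     # baseband linear-FM pulse

# Synthesize a received trace: the chirp delayed by 200 samples plus noise.
rx = np.zeros(2048, dtype=complex)
rx[200:200 + chirp.size] = chirp
rx += 0.1 * (np.random.randn(rx.size) + 1j * np.random.randn(rx.size))

# Matched filtering = convolution with the conjugated, time-reversed pulse.
compressed = np.convolve(rx, np.conj(chirp[::-1]), mode="valid")
print("compressed peak at sample", int(np.argmax(np.abs(compressed))))  # ~200
```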


H. SHANKER RAO

Dominant Attribute and Multiple Scanning Approaches for Discretization of Numerical Attributes

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Perry Alexander
Doina Caragea


Abstract

The rapid development of high-throughput technologies and database management systems has made it possible to produce and store large amounts of data. However, making sense of big data and discovering knowledge from it is a compounding challenge. Generally, data mining techniques search for information in datasets and express the gained knowledge in the form of trends, regularities, patterns, or rules. Rules are frequently identified automatically by a technique called rule induction, which is the most important technique in data mining and machine learning and was developed primarily to handle symbolic data. However, real-life data often contain numerical attributes; therefore, in order to fully utilize the power of rule induction techniques, an essential preprocessing step of converting numeric data into symbolic data, called discretization, is employed in data mining. 
Here we present two entropy-based discretization techniques, known as the dominant attribute approach and the multiple scanning approach, respectively. These approaches were implemented as two explicit algorithms in the Java programming language, and experiments were conducted by applying each algorithm separately to seventeen well-known numerical data sets. The resulting discretized data sets were used for rule induction by the LEM2 (Learning from Examples Module 2) algorithm. For each dataset in the multiple scanning approach, experiments were repeated with incremental scans until the interval counts stabilized. Preliminary results from this study indicate that the multiple scanning approach performed better than the dominant attribute approach in terms of producing comparatively smaller and simpler rule sets. 
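To illustrate the entropy criterion underlying both approaches, the sketch below picks a single binary cut point on a toy numeric attribute by minimizing the conditional class entropy of the resulting two intervals. It is a simplified illustration of the criterion only, not the dominant attribute or multiple scanning algorithm itself, and the data are invented.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_cutpoint(values, labels):
    """Return (conditional entropy, cut point) minimizing entropy after a binary split."""
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                              # no cut between equal values
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [lab for v, lab in pairs[:i]]
        right = [lab for v, lab in pairs[i:]]
        cond = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        best = min(best, (cond, cut))
    return best

# Toy numeric attribute with two classes (hypothetical data).
values = [1.1, 1.8, 2.4, 3.0, 4.2, 4.9, 5.5]
labels = ["a", "a", "a", "b", "b", "b", "b"]
print(best_cutpoint(values, labels))              # cut near 2.7, entropy 0
```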


YI ZHU

Matrix and Tensor-based ESPRIT Algorithm for Joint Angle and Delay Estimation in 2D Active Massive MIMO Systems and Analysis of Direction of Arrival Estimation Algorithms for Basal Ice Sheet Tomography

When & Where:


246 Nichols Hall

Committee Members:

Lingjia Liu, Chair
Shannon Blunt
John Paden
Erik Perrins

Abstract

In this thesis, we apply and analyze three direction of arrival (DoA) algorithms to tackle two distinct problems: one belonging to wireless communication, the other to radar signal processing. Though the essence of both problems is DoA estimation, their formulations, underlying assumptions, and application scenarios are quite different. Hence, we treat them separately, with the ESPRIT algorithm the focus of Part I and MUSIC and MLE detailed in Part II. 

In the wireless communication scenario, mobile data traffic is expected to grow exponentially in the future. In “massive MIMO” systems, a base station will rely on the uplink sounding signals from mobile stations to obtain the spatial information needed to perform MIMO beamforming. Accordingly, multi-dimensional parameter estimation of a ray-based multipath wireless channel becomes crucial for such systems to realize the predicted capacity gains. We study joint angle and delay estimation for such a system, and the results suggest that the dimension of the antenna array at the base station plays an important role in determining the estimation performance. These insights will be useful for designing practical “massive MIMO” systems in future mobile wireless communications. 

For the problem of radar sensing of ice sheet topography, one of the key requirements for deriving more realistic ice sheet models is to obtain a good set of basal measurements that enables accurate estimation of bed roughness and conditions. For this purpose, 3D tomography of the ice bed has been successfully implemented with the help of DoA estimation. The SAR-focused datasets provide a good case study. For the antenna array geometry and sample support used in our tomographic application, MUSIC initially performs better, based on a cross-over analysis in which the estimated topography from crossing flight lines is compared for consistency. However, after several improvements are applied to MLE, it outperforms MUSIC. We observe that spatial bottom smoothing, which removes artifacts introduced by the MLE algorithm, is the most essential step in the post-processing procedure. The 3D tomography we obtained lays a good foundation for further analysis and modeling of ice sheets. 
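As a self-contained illustration of one of the estimators discussed in Part II, the sketch below computes a MUSIC pseudospectrum from simulated array snapshots. The array here is an assumed 8-element half-wavelength uniform linear array with two sources, chosen purely for illustration; it is not the multichannel radar geometry or the processing chain used in the thesis.

```python
import numpy as np

# MUSIC DoA sketch for a hypothetical 8-element half-wavelength ULA.
M, snapshots = 8, 200
true_doas = np.deg2rad([-20.0, 25.0])       # assumed source angles

def steering(theta):
    # Half-wavelength spacing: phase step of pi*sin(theta) per element.
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# Simulate snapshots: two uncorrelated sources plus white noise.
A = np.column_stack([steering(t) for t in true_doas])
S = (np.random.randn(2, snapshots) + 1j * np.random.randn(2, snapshots)) / np.sqrt(2)
N = 0.1 * (np.random.randn(M, snapshots) + 1j * np.random.randn(M, snapshots))
X = A @ S + N

# Sample covariance, eigendecomposition, noise subspace (M - 2 smallest eigenvectors).
R = X @ X.conj().T / snapshots
eigvals, eigvecs = np.linalg.eigh(R)        # eigenvalues in ascending order
En = eigvecs[:, : M - 2]

# Pseudospectrum peaks where steering vectors are orthogonal to the noise subspace.
grid = np.deg2rad(np.linspace(-90, 90, 721))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])

# Report the two highest local maxima of the pseudospectrum.
is_peak = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])
peak_idx = np.where(is_peak)[0] + 1
top2 = peak_idx[np.argsort(p[peak_idx])[-2:]]
print("estimated DoAs (deg):", np.sort(np.rad2deg(grid[top2])))   # ~ -20 and 25
```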


YUHAO YANG

Protecting Attributes and Contents in Online Social Networks

When & Where:


250 Nichols Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Luke Huan
Prasad Kulkarni
Alfred Ho

Abstract

With the rapid development of computer and information technologies, online social networks have grown dramatically. As huge amounts of information are distributed rapidly through online social networking sites, privacy concerns arise. 
In this dissertation, we first study the vulnerabilities of user attributes and contents, in particular the identifiability of users when the adversary learns a small piece of information about the target. We further employ an information-theory-based approach to quantitatively evaluate the threat of attribute-based re-identification. We show that a large portion of users with an online presence are highly identifiable. 
The notion of privacy as control and information boundary has been introduced by the user-oriented privacy research community and partly adopted in commercial social networking platforms. However, such functions are not widely accepted by users, mainly because it is tedious and labor-intensive to manually assign friends to such circles. To tackle this problem, we introduce a social circle discovery approach using multi-view clustering. We present our observations on the key features of social circles, including friendship links, content similarity, and social interactions. We treat each feature as one view, and propose a one-side co-trained spectral clustering technique, which is tailored for the sparse nature of our data. We evaluate our approach on real-world online social network data, and show that the proposed approach significantly outperforms structure-based clustering. Finally, we build a proof-of-concept demonstration of the automatic circle detection and recommendation approaches.
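As a tiny illustration of the information-theoretic view of re-identification, the sketch below measures how many bits of surprisal a combination of attributes carries and how far it narrows a target down within a population. The population size and attribute frequencies are invented for illustration and assume rough independence; they are not statistics from the dissertation's datasets.

```python
import math

# Hypothetical population and attribute frequencies (illustration only).
population = 1_000_000
p_city = 0.02        # fraction of users in the target's city
p_age = 0.05         # fraction in the target's age bracket
p_employer = 0.001   # fraction sharing the target's employer

# Assuming roughly independent attributes, the surprisal of the combination
# is the sum of the individual surprisals, in bits.
bits = -sum(math.log2(p) for p in (p_city, p_age, p_employer))
candidates = population * p_city * p_age * p_employer
print(f"combination carries {bits:.1f} bits; "
      f"~{candidates:.0f} matching users remain")
```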


JAMUNA GOPAL

I Know Your Family: A Hybrid Information Retrieval Approach to Extract Family Information

When & Where:


250 Nichols Hall

Committee Members:

Bo Luo, Chair
Jerzy Grzymala-Busse
Prasad Kulkarni


Abstract

The aim of this project is to identify family-related information about a person from their Twitter data. We use their personal details, tweets, and their friends’ details to achieve this. Since we deal with modern short-text data, we use a hybrid information retrieval methodology that takes into account the parts of speech of the data, phrase similarity, and semantic similarity, along with the openly available Twitter data. A future use of this research is to develop a client-side protection tool that helps users check the data they are about to post for potential privacy breaches.


KAIGE YAN

Power and Performance Co-optimization for Emerging Mobile Platforms

When & Where:


250 Nichols Hall

Committee Members:

Xin Fu, Chair
Prasad Kulkarni
Heechul Yun


Abstract

Mobile devices have emerged as the most popular computing platform since 2011. Unlike traditional PCs, mobile devices are more power-constrained and performance-sensitive due to their size. In order to reduce power consumption and improve performance, we focus on the Last Level Cache (LLC), which is a power-hungry structure that is critical to performance in mobile platforms. In this project, we first integrate the McPAT power model into the Gem5 simulator. We also introduce emerging memory technologies, such as Spin-Transfer Torque RAM (STT-RAM) and embedded DRAM (eDRAM), into the cache design and compare their power and performance effectiveness with the conventional SRAM-based cache. Additionally, we identify that frequent execution switches between kernel and user code are the major reason for the high LLC miss rate in mobile applications, because blocks belonging to kernel and user space interfere with each other severely. We further propose static and dynamic way-partition schemes to separate the cache blocks of kernel and user space. The experimental results show promising power reduction and performance improvement with our proposed techniques.
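As a concrete illustration of the static way-partition idea, the sketch below models a toy set-associative cache in which kernel-space and user-space blocks are restricted to disjoint subsets of ways, so they can no longer evict each other. The geometry (8 ways, 64 sets, 64 B lines) and the 2/6 split are invented for illustration; the project's evaluation is done in Gem5 with a McPAT power model, not in this model.

```python
# Toy model of a statically way-partitioned set-associative cache.
NUM_SETS, NUM_WAYS, LINE = 64, 8, 64
KERNEL_WAYS = range(0, 2)       # ways 0-1 reserved for kernel-space blocks
USER_WAYS = range(2, 8)         # ways 2-7 reserved for user-space blocks

# cache[set][way] holds a tag or None
cache = [[None] * NUM_WAYS for _ in range(NUM_SETS)]

def access(addr, is_kernel):
    """Look up addr; on a miss, fill only within the partition's ways."""
    s = (addr // LINE) % NUM_SETS
    tag = addr // (LINE * NUM_SETS)
    ways = KERNEL_WAYS if is_kernel else USER_WAYS
    if any(cache[s][w] == tag for w in ways):
        return "hit"
    victim = min(ways)                      # trivial replacement choice
    for w in ways:                          # prefer an empty way if available
        if cache[s][w] is None:
            victim = w
            break
    cache[s][victim] = tag
    return "miss"

print(access(0x1000, is_kernel=True))   # miss (cold)
print(access(0x1000, is_kernel=True))   # hit
print(access(0x1000, is_kernel=False))  # miss: user partition has its own ways
```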


MICHAEL JANTZ

Exploring Dynamic Compilation and Cross-Layer Object Management Policies for Managed Language Applications

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Xin Fu
Andy Gill
Bo Luo
Karen Nordheden

Abstract

Recent years have witnessed the widespread adoption of managed programming languages that are designed to execute on virtual machines. Virtual machine architectures provide several powerful software engineering advantages over statically compiled binaries, such as portable program representations, additional safety guarantees, and automatic memory and thread management, which have largely driven their success. To support and facilitate the use of these features, virtual machines implement a number of services that adaptively manage and optimize application behavior during execution. Such runtime services often require tradeoffs between efficiency and effectiveness, and different policies can have major implications on the system's performance and energy requirements. 

In this work, we extensively explore policies for the two runtime services that are most important for achieving performance and energy efficiency: dynamic (or Just-In-Time (JIT)) compilation and memory management. First, we examine the properties of single-tier and multi-tier JIT compilation policies in order to find strategies that realize the best program performance for existing and future machines. We perform hundreds of experiments with different compiler aggressiveness and optimization levels to evaluate the performance impact of varying if and when methods are compiled. Next, we investigate the issue of how to optimize program regions to maximize performance in JIT compilation environments. For this study, we conduct a thorough analysis of the behavior of optimization phases in our dynamic compiler, and construct a custom experimental framework to determine the performance limits of phase selection during dynamic compilation. Lastly, we explore innovative memory management strategies to improve energy efficiency in the memory subsystem. We propose and develop a novel cross-layer approach to memory management that integrates information and analysis in the VM with fine-grained management of memory resources in the operating system. Using custom as well as standard benchmark workloads, we perform detailed evaluation that demonstrates the energy-saving potential of our approach.
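As a loose illustration of the "if and when methods are compiled" question, the sketch below models a multi-tier JIT trigger in which a method is interpreted until its invocation count crosses per-tier thresholds. The threshold values and class names are invented for illustration; they are not the policies or measurements studied in the dissertation.

```python
# Toy model of a multi-tier JIT compilation trigger (illustrative thresholds).
TIER_THRESHOLDS = {1: 100, 2: 10_000}   # tier -> invocation count that triggers it

class MethodProfile:
    def __init__(self, name):
        self.name, self.invocations, self.tier = name, 0, 0  # tier 0 = interpreted

    def invoke(self):
        self.invocations += 1
        for tier, threshold in sorted(TIER_THRESHOLDS.items()):
            if self.tier < tier and self.invocations >= threshold:
                self.tier = tier
                print(f"compile {self.name} at tier {tier} "
                      f"after {self.invocations} invocations")

m = MethodProfile("hot_loop_body")
for _ in range(10_000):
    m.invoke()        # promotes to tier 1 at 100 calls, tier 2 at 10,000 calls
```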


JINGWEIJIA TAN

Modeling and Improving the GPGPU Reliability in the Presence of Soft Errors

When & Where:


250 Nichols Hall

Committee Members:

Xin Fu, Chair
Prasad Kulkarni
Heechul Yun


Abstract

GPGPUs (general-purpose computing on graphics processing units) have emerged as a highly attractive platform for HPC (high-performance computing) applications due to their strong computing power. Unlike graphics processing applications, HPC applications have rigorous requirements on execution correctness, which is generally ignored in traditional GPU design. Soft errors, which are failures caused by high-energy neutron or alpha-particle strikes in integrated circuits, have become a major reliability concern due to the shrinking of feature sizes and growing integration density. In this project, we first build a framework, GPGPU-SODA, to model the soft-error vulnerability of the GPGPU microarchitecture using a publicly available simulator. Based on this framework, we identify that the streaming processors are a reliability hot-spot in GPGPUs. We further observe that the streaming processors are not fully utilized during branch divergence and during pipeline stalls caused by long-latency operations. We then propose a technique, RISE, that recycles the streaming processors' idle time for soft-error detection in GPGPUs. Experimental results show that RISE obtains good fault coverage with negligible performance degradation.
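As a very loose behavioral illustration of the redundancy idea behind recycling idle execution resources for error detection, the sketch below re-executes an operation on an otherwise idle lane and compares the two results to flag a discrepancy. The function names, the error-injection model, and the control flow are invented for illustration; the actual RISE mechanism operates at the GPU microarchitecture level, not in application code.

```python
import random

def alu_op(a, b, flip_bit=None):
    """Toy 32-bit ALU add; optionally flip one result bit to emulate a soft error."""
    r = (a + b) & 0xFFFFFFFF
    if flip_bit is not None:
        r ^= 1 << flip_bit
    return r

def execute_with_redundancy(a, b, idle_lane_available):
    """Run the op; if an idle lane exists, re-execute there and compare results."""
    primary = alu_op(a, b, flip_bit=7 if random.random() < 0.01 else None)
    if not idle_lane_available:
        return primary, "unchecked"
    shadow = alu_op(a, b)                  # redundant execution on the idle lane
    return primary, ("ok" if primary == shadow else "soft error detected")

print(execute_with_redundancy(3, 4, idle_lane_available=True))
```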


KARTHIK PODUVAL

HGS Schedulers for Digital Audio Workstation like Applications

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Victor Frost
Jim Miller


Abstract

Digital Audio Workstation (DAW) applications are real-time applications that have special timing constraints. HGS is a real-time scheduling framework that allows developers to implement custom schedulers based on any scheduling algorithm through a process of direct interaction between client threads and their schedulers. Such scheduling could extend well beyond the common priority model that currently exists and could represent arbitrary application semantics that can be understood and acted upon by the associated scheduler. We term this "need-based scheduling". In this thesis we first study some DAW implementations and then create several HGS schedulers aimed at helping DAW applications meet their timing needs.