Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Jennifer Quirk

Aspects of Doppler-Tolerant Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Patrick McCormick
Charles Mohr
James Stiles
Zsolt Talata

Abstract

The Doppler tolerance of a waveform refers to its behavior when subjected to a fast-time Doppler shift imposed by scattering that involves nonnegligible radial velocity. While previous efforts have established decision-based criteria that lead to a binary judgment of Doppler tolerant or intolerant, it is also useful to establish a measure of the degree of Doppler tolerance. The purpose in doing so is to establish a consistent standard, thereby permitting assessment across different parameterizations, as well as introducing a Doppler “quasi-tolerant” trade-space that can ultimately inform automated/cognitive waveform design in increasingly complex and dynamic radio frequency (RF) environments. 

Separately, the application of slow-time coding (STC) to the Doppler-tolerant linear FM (LFM) waveform has been examined for disambiguation of multiple range ambiguities. However, using STC with non-adaptive Doppler processing often results in high Doppler “cross-ambiguity” side lobes that can hinder range disambiguation despite the degree of separability imparted by STC. To enhance this separability, a gradient-based optimization of STC sequences is developed, and a “multi-range” (MR) modification to the reiterative super-resolution (RISR) approach that accounts for the distinct range interval structures from STC is examined. The efficacy of these approaches is demonstrated using open-air measurements. 

The proposed work to appear in the final dissertation focuses on the connection between Doppler tolerance and STC. The first proposal includes the development of a gradient-based optimization procedure to generate Doppler quasi-tolerant random FM (RFM) waveforms. Other proposals consider limitations of STC, particularly when processed with MR-RISR. The final proposal introduces an “intrapulse” modification of the STC/LFM structure to achieve enhanced suppression of range-folded scattering in certain delay/Doppler regions while retaining a degree of Doppler tolerance.
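For context, a standard textbook sketch (not taken from the dissertation) of why the LFM waveform is regarded as Doppler tolerant: a fast-time Doppler shift leaves the matched-filter response essentially intact, merely displacing it in delay.

\[
s(t) = e^{\,j\pi (B/T)\, t^{2}}, \qquad 0 \le t \le T,
\]
\[
e^{\,j 2\pi f_{\mathrm{D}} t}\, s(t) \;=\; e^{-j\pi f_{\mathrm{D}}^{2} T/B}\; s\!\left(t + \frac{f_{\mathrm{D}} T}{B}\right),
\]

so, ignoring the slight change in time support, a Doppler-shifted LFM of bandwidth $B$ and duration $T$ is (up to a constant phase) a delayed copy of itself. The matched filter therefore still produces a strong peak, shifted in delay by $f_{\mathrm{D}} T/B$ (range-Doppler coupling); decision-based criteria classify this behavior as Doppler tolerant, whereas the measure proposed above grades how closely other waveforms approach it.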


Mary Jeevana Pudota

Assessing Processor Allocation Strategies for Online List Scheduling of Moldable Task Graphs

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni


Abstract

Scheduling a graph of moldable tasks, where each task can be executed by a varying number of processors with execution time depending on the processor allocation, represents a fundamental problem in high-performance computing (HPC). The online version of the scheduling problem introduces an additional constraint: each task is only discovered when all its predecessors have been completed. A key challenge for this online problem lies in making processor allocation decisions without complete knowledge of the future tasks or dependencies. This uncertainty can lead to inefficient resource utilization and increased overall completion time, or makespan. Recent studies have provided theoretical analysis (i.e., derived competitive ratios) for certain processor allocation algorithms. However, the algorithms’ practical performance remains under-explored, and their reliance on fixed parameter settings may not consistently yield optimal performance across varying workloads. In this thesis, we conduct a comprehensive evaluation of three processor allocation strategies by empirically assessing their performance under widely used speedup models and diverse graph structures. These algorithms are integrated into a list scheduling framework that greedily schedules ready tasks based on the current processor availability. We perform systematic tuning of the algorithms’ parameters and report the best observed makespan together with the corresponding parameter settings. Our findings highlight the critical role of parameter tuning in obtaining optimal makespan performance, regardless of the differences in allocation strategies. The insights gained in this study can guide the deployment of these algorithms in practical runtime systems.
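To make the framework concrete, the following is a minimal sketch of a greedy online list scheduler for moldable tasks. The speedup model (an Amdahl-like law with a 10% serial fraction), the allocation cap parameter alpha, and all names are illustrative assumptions, not the thesis implementation or its three evaluated strategies.

import heapq

def exec_time(work, procs, p_max):
    # Assumed Amdahl-like speedup model: 10% of each task's work is serial.
    procs = min(procs, p_max)
    return work * (0.1 + 0.9 / procs)

def online_list_schedule(tasks, preds, total_procs, alpha=0.5):
    """tasks: {name: (work, p_max)}; preds: {name: set of predecessor names}."""
    indeg = {t: len(preds[t]) for t in tasks}
    ready = [t for t in tasks if indeg[t] == 0]    # tasks discovered so far
    running = []                                   # heap of (finish_time, name, procs)
    free, now, finish = total_procs, 0.0, {}
    while ready or running:
        # Greedily start ready tasks while processors remain available.
        progress = True
        while progress:
            progress = False
            for t in list(ready):
                work, p_max = tasks[t]
                # Allocation rule (illustrative): cap at a fraction alpha of the
                # machine, at the task's maximum, and at what is currently free.
                alloc = max(1, min(p_max, int(alpha * total_procs), free))
                if alloc <= free:
                    free -= alloc
                    heapq.heappush(running, (now + exec_time(work, alloc, p_max), t, alloc))
                    ready.remove(t)
                    progress = True
        # Advance time to the next completion and release its successors.
        now, done, procs = heapq.heappop(running)
        free += procs
        finish[done] = now
        for t in tasks:
            if done in preds[t]:
                indeg[t] -= 1
                if indeg[t] == 0:
                    ready.append(t)
    return max(finish.values()) if finish else 0.0   # makespan

In this sketch only the allocation rule changes between strategies; the list scheduling loop itself stays fixed, which mirrors how the thesis isolates the processor allocation decision for comparison.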


Past Defense Notices


Usman Sajid

ZiZoNet: A Zoom-In and Zoom-Out Mechanism for Crowd Counting in Static Images

When & Where:


246 Nichols Hall

Committee Members:

Guanghui Wang, Chair
Bo Luo
Heechul Yun


Abstract

As people gather during different social, political, or musical events, automated crowd analysis can lead to effective and better management of such events to prevent any unwanted scene as well as avoid political manipulation of crowd numbers. Crowd counting remains an integral part of crowd analysis and also an active research area in the field of computer vision. Existing methods fail to perform where crowd density is either too high or too low in an image, thus resulting in either overestimation or underestimation. These methods also mix crowd-like cluttered background regions (e.g., tree leaves or small and continuous patterns) in images with actual crowd, resulting in further crowd overestimation. In this work, we present ZiZoNet, a novel deep convolutional neural network (CNN) based framework for automated crowd counting in static images in very low to very high crowd density scenarios, to address the above issues. ZiZoNet consists of three modules, namely a Crowd Density Classifier (CDC), a Decision Module (DM), and a Count Regressor Module (CRM). The test image, divided into 224x224 patches, passes through the crowd density classifier (CDC), which classifies each patch into a class label (no-crowd (NC), low-crowd (LC), medium-crowd (MC), high-crowd (HC)). Based on the CDC information and using either a heuristic Rule-set Engine (RSE) or a machine-learning Random Forest based Decision Block (RFDB), the DM decides which mode (zoom-in, normal, or zoom-out) this image should use for crowd counting. The CRM then performs a patch-wise crowd estimate for this image as instructed by the DM. Extensive experiments on three diverse and challenging crowd counting benchmarks (UCF-QNRF, ShanghaiTech, AHU-Crowd) show that our method outperforms current state-of-the-art models under most of the evaluation criteria.
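The patch-wise classify-then-count flow described above can be summarized with the schematic sketch below. The module internals are placeholders (the real CDC, DM, and CRM are trained deep networks from the thesis), and border handling is simplified.

PATCH = 224  # patch size used by the density classifier

def split_into_patches(image):
    # image: NumPy array of shape (H, W, channels); edge patches are not padded here.
    h, w = image.shape[:2]
    return [image[r:r + PATCH, c:c + PATCH]
            for r in range(0, h, PATCH) for c in range(0, w, PATCH)]

def zizonet_count(image, cdc, dm, crm):
    patches = split_into_patches(image)
    # 1) Crowd Density Classifier: label each patch NC / LC / MC / HC.
    labels = [cdc(p) for p in patches]
    # 2) Decision Module: pick zoom-in, normal, or zoom-out for the whole image.
    mode = dm(labels)
    # 3) Count Regressor Module: patch-wise counts under the chosen mode,
    #    skipping no-crowd patches to avoid counting cluttered background.
    return sum(crm(p, mode) for p, lab in zip(patches, labels) if lab != "NC")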


Ernesto Alexander Ramos

Tunable Surface Plasmon Dynamics

When & Where:


2001 B Eaton Hall

Committee Members:

Alessandro Salandrino, Chair
Christopher Allen
Rongqing Hui


Abstract

Due to their extreme spatial confinement, surface plasmon resonances show great potential in the design of future devices that would blur the boundaries between electronics and optics. Traditionally, plasmonic interactions are induced with geometries involving noble metals and dielectrics. However, accessing these plasmonic modes requires delicate selection of material parameters, with little margin for error, limited controllability, and little room for signal bandwidth. To rectify this, two novel plasmonic mechanisms with a high degree of control are explored. For the near-infrared region, transparent conductive oxides (TCOs) exhibit tunability not only in "static" plasmon generation (through material doping) but could also allow modulation on a plasmon carrier through external bias induced switching. These effects rely on the electron accumulation layer that is created at the interface between an insulator and a doped oxide. Here, a rigorous study of the electromagnetic characteristics of these electron accumulation layers is presented. As a consequence of their spatially graded permittivity profiles, these systems will be shown to display unique properties. The concept of Accumulation-layer Surface Plasmons (ASP) is introduced, and the conditions for the existence or suppression of surface-wave eigenmodes are analyzed. A second method could allow access to modes of arbitrarily high order. Sub-wavelength plasmonic nanoparticles can support an infinite discrete set of orthogonal localized surface plasmon modes; however, only the lowest-order resonances can be effectively excited by incident light alone. By allowing the background medium to vary in time, novel localized surface plasmon dynamics emerge. In particular, we show that these temporal permittivity variations lift the orthogonality of the localized surface plasmon modes and introduce coupling among different angular momentum states. Exploiting these dynamics, surface plasmon amplification of high-order resonances can be achieved under the action of a spatially uniform optical pump of appropriate frequency.
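For orientation, the textbook quasi-static resonance condition for the ℓ-th localized surface plasmon mode of a sub-wavelength sphere (general background material, not specific to this dissertation) is

\[
\varepsilon_{\mathrm{metal}}(\omega_{\ell}) \;=\; -\,\frac{\ell+1}{\ell}\,\varepsilon_{\mathrm{background}},
\qquad \ell = 1, 2, 3, \dots
\]

For ℓ = 1 (the dipole) this reduces to the familiar condition $\varepsilon_{\mathrm{metal}} = -2\,\varepsilon_{\mathrm{background}}$. Higher-ℓ modes couple only weakly to a spatially uniform incident field, which is why a time-varying background permittivity, as discussed above, is needed to transfer energy into them.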


Nishil Parmar

A Comparison of Quality of Rules Induced using Single Local Probabilistic Approximations vs Concept Probabilistic Approximations

When & Where:


1415A LEEP2

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo


Abstract

This project report presents results of experiments on rule induction from incomplete data using probabilistic approximations. Mining incomplete data using probabilistic approximations is a well-established technique. The main goal of this report is to present research on a comparison carried out between two different approaches to mining incomplete data using probabilistic approximations: the single local probabilistic approximations approach and the concept probabilistic approximations approach. These approaches were implemented in the Python programming language, and experiments were carried out on incomplete data sets with two interpretations of missing attribute values: lost values and do-not-care conditions. Our main objective was to compare concept and single local approximations in terms of the error rate computed using the double hold-out method for validation. For our experiments we used seven incomplete data sets with many missing attribute values. The best results were accomplished by concept probabilistic approximations for five data sets and by single local probabilistic approximations for the remaining two data sets.
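A minimal sketch of the kind of concept probabilistic approximation being compared here, under the definition commonly used in this line of research; the computation of blocks (characteristic sets) for lost and do-not-care values, and the single local variant, are omitted.

def probabilistic_approximation(concept, blocks, alpha):
    """Union of blocks K(x) of cases x in the concept whose conditional
    probability Pr(concept | K(x)) is at least alpha. alpha = 1 yields the
    lower approximation; a very small alpha yields the upper approximation.
    concept: set of case ids; blocks: {case id: characteristic set of case ids}."""
    result = set()
    for x in concept:
        b = blocks[x]
        if len(b & concept) / len(b) >= alpha:
            result |= b
    return result

# Toy example with four cases (illustrative only).
blocks = {1: {1}, 2: {2, 3}, 3: {2, 3, 4}, 4: {4}}
concept = {1, 2, 3}
print(probabilistic_approximation(concept, blocks, 1.0))     # lower: {1, 2, 3}
print(probabilistic_approximation(concept, blocks, 0.0001))  # upper: {1, 2, 3, 4}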


Victor Berger da Silva

Probabilistic graphical techniques for automated ice-bottom tracking and comparison between state-of-the-art solutions

When & Where:


317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
John Paden
Guanghui Wang


Abstract

Multichannel radar depth sounding systems are able to produce two-dimensional and three-dimensional imagery of the internal structure of polar ice sheets. One of the relevant features typically present in this imagery is the ice-bedrock interface, which is the boundary between the bottom of the ice-sheet and the bedrock underneath. Crucial information regarding the current state of the ice sheets, such as the thickness of the ice, can be derived if the location of the ice-bedrock interface is extracted from the imagery. Due to the large amount of data collected by the radar systems employed, we seek to automate the extraction of the ice-bedrock interface and allow for efficient manual corrections when errors occur in the automated method. We present improvements made to previously proposed solutions which pose feature extraction in polar radar imagery as an inference problem on a probabilistic graphical model. The improvements proposed here are in the form of novel image pre-processing steps and empirically-derived cost functions that allow for the integration of further domain-specific knowledge into the models employed. Along with an explanation of our modifications, we demonstrate the results obtained by our proposed models and algorithms, including significantly decreased mean error measurements such as a 47% reduction in average tracking error in the case of three-dimensional imagery. We also present the results obtained by several state-of-the-art ice-interface tracking solutions, and compare all automated results with manually-corrected ground-truth data. Furthermore, we perform a self-assessment of tracking results by analyzing the differences found between the automatically extracted ice-layers in cases where two separate radar measurements have been made at the same location.
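To make the inference framing concrete, here is a much-simplified sketch of layer tracking as MAP inference on a chain: one ice-bottom row per image column, a unary cost from image evidence, and a pairwise cost penalizing large jumps between neighboring columns. The models in this work are richer (three-dimensional imagery, pre-processing steps, and empirically derived cost functions), so this is only an assumed baseline formulation.

import numpy as np

def track_layer(unary, smooth_weight=1.0):
    """unary: (n_cols, n_rows) array; cost of placing the ice bottom at each row."""
    n_cols, n_rows = unary.shape
    rows = np.arange(n_rows)
    pairwise = smooth_weight * np.abs(rows[:, None] - rows[None, :])  # jump penalty
    cost = unary[0].copy()
    back = np.zeros((n_cols, n_rows), dtype=int)
    for c in range(1, n_cols):                 # Viterbi-style forward pass
        total = cost[:, None] + pairwise       # predecessor row -> current row
        back[c] = np.argmin(total, axis=0)     # best predecessor for each row
        cost = total[back[c], rows] + unary[c]
    path = np.empty(n_cols, dtype=int)         # backtrack the minimum-cost boundary
    path[-1] = int(np.argmin(cost))
    for c in range(n_cols - 1, 0, -1):
        path[c - 1] = back[c, path[c]]
    return path                                # ice-bottom row index per column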


Dain Vermaak

Visualizing and Analyzing Student Progress on Learning Maps

When & Where:


1 Eaton Hall, Dean's Conference Room

Committee Members:

James Miller, Chair
Man Kong
Suzanne Shontz
Guanghui Wang
Bruce Frey

Abstract

A learning map is an unweighted directed graph containing relationships between discrete skills and concepts, with edges defining the prerequisite hierarchy. Learning maps arose as a means of connecting student instruction directly to standards and curriculum and are designed to assist teachers in lesson planning and in evaluating student responses. As learning maps gain popularity, there is an increasing need for teachers to quickly evaluate which nodes have been mastered by their students. Psychometrics is a field focused on measuring student performance and includes the development of processes used to link a student's responses to multiple-choice questions directly to their understanding of concepts. This dissertation focuses on developing modeling and visualization capabilities to enable efficient analysis of data pertaining to student understanding generated by psychometric techniques.

Such analysis naturally includes that done by classroom teachers. Visual solutions to this problem clearly indicate the current understanding of a student or classroom in such a way as to make suggestions that can guide future learning. In response to these requirements we present various experimental approaches which augment the original learning map design with targeted visual variables.

As well as looking forward, we also consider ways in which data visualization can be used to evaluate and improve existing teaching methods. We present several graphics based on modeling student progression as information flow. These methods rely on conservation of data to increase edge information, reducing the load carried by the nodes and encouraging path comparison.

In addition to visualization schemes and methods, we present contributions made to the field of Computer Science in the form of algorithms developed over the course of the research project in response to gaps in prior art. These include novel approaches to simulation of student response patterns, ranked layout of weighted directed graphs with variable edge widths, and enclosing certain groups of graph nodes in envelopes.

Finally, we present a design that combines the features of the key experimental approaches into a single visualization tool capable of meeting both predictive and validation requirements, along with the methods used to measure the effectiveness and correctness of this final design.


Priyanka Saha

Complexity of Rule Sets Induced from Incomplete Data with Lost Values and Attribute-Concept Values

When & Where:


2001 B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Taejoon Kim
Cuncong Zhong


Abstract

Data is a very rich source of knowledge and information. However, special techniques need to be implemented in order to extract interesting facts and discover patterns in large data sets. This is achieved using the technique called data mining. Data mining is an interdisciplinary subfield of computer science and statistics with the overall goal of extracting information from a data set and transforming the information into a comprehensible structure for further use. Rule induction is a data mining technique in which formal rules are extracted from a set of observations. The rules induced may represent a full scientific model of the data, or merely represent local patterns in the data.

The data sets, however, are not always complete and might contain missing values. Data mining also provides techniques to handle the missing values in a data set. In this project, we implemented the lost value and attribute-concept value interpretations of incomplete data. Experiments were conducted on 176 data sets using three types of approximations (lower, middle, and upper) of the concept, and the Modified Learning from Examples Module, version 2 (MLEM2) rule induction algorithm was used to induce rule sets.

The goal of the project was to show that the complexity of rule sets induced from data sets with missing attribute values is lower for the attribute-concept value interpretation than for the lost value interpretation. The size of the rule set was always smaller for the attribute-concept value interpretation. Also, as a secondary objective, we explored which type of approximation provides the smallest rule sets.
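For reference, a small sketch of the complexity measures compared in experiments like these, on the assumption that a rule set's complexity is summarized by its number of rules and its total number of conditions; the rule representation shown is illustrative.

def rule_set_complexity(rules):
    """rules: list of (conditions, decision) pairs; conditions is a list of (attribute, value)."""
    n_rules = len(rules)
    n_conditions = sum(len(conds) for conds, _ in rules)
    return n_rules, n_conditions

example = [([("Temperature", "high"), ("Headache", "yes")], ("Flu", "yes")),
           ([("Headache", "no")], ("Flu", "no"))]
print(rule_set_complexity(example))  # (2, 3)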


Mohanad Al-Ibadi

Array Processing Techniques for Estimating and Tracking of an Ice-Sheet Bottom

When & Where:


317 Nichols Hall

Committee Members:

Shannon Blunt, Chair
John Paden
Christopher Allen
Erik Perrins
James Stiles

Abstract

Ice bottom topography layers are an important boundary condition required to model the flow dynamics of an ice sheet. In this work, using low-frequency multichannel radar data, we locate the ice bottom using two types of automatic trackers.

First, we use the multiple signal classification (MUSIC) beamformer to determine the pseudo-spectrum of the targets at each range bin. The result is passed into a sequential tree-reweighted message passing belief-propagation algorithm to track the bottom of the ice in the 3D image. This technique is successfully applied to process data collected over the Canadian Arctic Archipelago ice caps and to produce digital elevation models (DEMs) for 102 data frames. We perform crossover analysis to self-assess the generated DEMs, where flight paths cross over each other and two measurements are made at the same location. Also, the tracked results are compared before and after manual corrections. We found that there is a good match between the overlapping DEMs, where the mean error of the crossover DEMs is 38±7 m, which is small relative to the average ice thickness, while the average absolute mean error of the automatically tracked ice bottom, relative to the manually corrected ice bottom, is 10 range bins.

Second, a direction of arrival (DOA)-based tracker is used to estimate the DOA of the backscatter signals sequentially from range bin to range bin using two methods: a sequential maximum a posteriori probability (S-MAP) estimator and one based on the particle filter (PF). A dynamic flat-earth transition model is used to model the flow of information between range bins. A simulation study is performed to evaluate the performance of these two DOA trackers. The results show that the PF-based tracker can handle low-quality data better than S-MAP, but, unlike S-MAP, it saturates quickly with increasing numbers of snapshots. Also, S-MAP is successfully applied to track the ice bottom of several data frames collected over Russell Glacier, and the results are compared against those generated by the beamformer-based tracker. The results of the DOA-based techniques are the final tracked surfaces, so there is no need for an additional tracking stage as there is with the beamformer technique.
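A minimal NumPy sketch of the MUSIC pseudo-spectrum computation for a single range bin of a uniform linear array, assuming half-wavelength element spacing and a known number of sources; the belief-propagation tracking stage and the DOA trackers discussed above are not shown.

import numpy as np

def music_pseudospectrum(snapshots, n_sources, angles_deg, d_over_lambda=0.5):
    """snapshots: (n_sensors, n_snapshots) complex array for one range bin."""
    n_sensors = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                      # eigenvalues ascending
    En = eigvecs[:, : n_sensors - n_sources]                  # noise subspace
    k = np.arange(n_sensors)
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))  # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)   # peaks indicate likely directions of arrival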


Jason Gevargizian

MSRR: Leveraging dynamic measurement for establishing trust in remote attestation

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Arvin Agah
Perry Alexander
Bo Luo
Kevin Leonard

Abstract

Measurers are critical to a remote attestation (RA) system to verify the integrity of a remote untrusted host. Runtime measurers in a dynamic RA system sample the dynamic program state of the host to form evidence in order to establish trust by a remote system (appraisal system). However, existing runtime measurers are tightly integrated with specific software. Such measurers need to be generated anew for each piece of software, which is a manual process that is both challenging and tedious.

In this paper we present a novel approach to decouple application-specific measurement policies from the measurers tasked with performing the actual runtime measurement. We describe the MSRR (MeaSeReR) Measurement Suite, a system of tools designed with the primary goal of reducing the high degree of manual effort required to produce measurement solutions on a per-application basis.

The MSRR suite prototypes a novel general-purpose measurement system, the MSRR Measurement System, that is agnostic of the target application. Furthermore, we describe a robust high-level measurement policy language, MSRR-PL, that can be used to write per-application policies for the MSRR Measurer. Finally, we provide a tool to automatically generate MSRR-PL policies for target applications by leveraging state-of-the-art static analysis tools.

In this work, we show how the MSRR suite can be used to significantly reduce the time and effort spent on designing measurers anew for each application. We describe MSRR's robust querying language, which allows the appraisal system to accurately specify the what, when, and how to measure. We describe the capabilities and the limitations of our measurement policy generation tool. We evaluate MSRR's overhead and demonstrate its functionality by employing real-world case studies. We show that MSRR has an acceptable overhead on a host of applications with various measurement workloads.


Surya Nimmakayala

Heuristics to predict and eagerly translate code in DBTs

When & Where:


2001 B Eaton Hall

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Fengjun Li
Bo Luo
Shawn Keshmiri

Abstract

Dynamic Binary Translators (DBTs) have a variety of uses, like instrumentation, profiling, security, portability, etc. In order for the desired application to run with these enhanced additional features (not originally part of its design), it has to be run under the control of a Dynamic Binary Translator. The application can be thought of as the guest application, to be run within the controlled environment of the translator, which would be the host application. That way, the intended application execution flow can be enforced by the translator, thereby inducing the desired behavior in the application on the host platform (a combination of operating system and hardware).

However, there will be a run-time/execution-time overhead in the translator when performing the additional tasks to run the guest application in a controlled fashion. This run-time overhead has been limiting the usage of DBTs on a large scale, where response times can be critical. There is often a trade-off between the benefits of using a DBT and the overall application response time. So, there is a need to research and explore ways to achieve faster application execution through DBTs (given their large code base).

With the evolution of multi-core and GPU hardware architectures, multiple concurrent threads can get more work done through parallelization. A proper design of parallel applications, or parallelizing parts of existing serial code, can lead to improved application run times through hardware architecture support.

We explore the possibility of improving the performance of a DBT named DynamoRIO. The basic idea is to improve its performance by speeding up the process of guest code translation, through multiple threads translating multiple pieces of code concurrently. In an ideal case, all the required code blocks for application execution are readily available ahead of time without any stalls. For efficient eager translation, there is also a need for heuristics to better predict the next code block to be executed. That could potentially bring down the less productive code translations at run time. The goal is to get application speed-up through eager translation and block prediction heuristics, with execution time close to a native run.
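To illustrate the idea, here is a simplified Python sketch (not DynamoRIO code; the class, parameters, and cache handling are made up for the example): a frequency-based heuristic predicts likely successor blocks, and a small worker pool translates them eagerly so they are already in the code cache when execution reaches them.

from collections import Counter, defaultdict
from concurrent.futures import ThreadPoolExecutor

class EagerTranslator:
    def __init__(self, translate_fn, workers=4, fanout=2):
        self.translate_fn = translate_fn          # block address -> translated code
        self.cache = {}                           # code cache (real DBTs need locking)
        self.successors = defaultdict(Counter)    # observed control-flow edges
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.fanout = fanout

    def record_edge(self, src, dst):
        # Profile the guest's control flow as it executes.
        self.successors[src][dst] += 1

    def predict(self, block):
        # Heuristic: the most frequently observed successors of this block.
        return [dst for dst, _ in self.successors[block].most_common(self.fanout)]

    def fetch(self, block):
        # Translate on demand if the prediction missed, then prefetch ahead.
        if block not in self.cache:
            self.cache[block] = self.translate_fn(block)
        for nxt in self.predict(block):
            if nxt not in self.cache:
                self.pool.submit(self._translate_into_cache, nxt)
        return self.cache[block]

    def _translate_into_cache(self, block):
        self.cache.setdefault(block, self.translate_fn(block))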


Farhad Mahmood

Modeling and Analysis of Energy Efficiency in Wireless Handset Transceiver Systems

When & Where:


Apollo Room, Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Victor Frost
Lingjia Liu
Bozenna Pasik-Duncan

Abstract

As wireless communication devices are taking a significant part in our daily life, research steps toward making these devices even faster and smarter are accelerating rapidly. The main limiting factors are energy and power consumption. Many techniques are utilized to increase battery capacity (ampere-hours), but that comes at the cost of raising safety concerns. The other way to increase battery life is to decrease the energy consumption of the devices. In this work, we analyze energy-efficient communications for wireless devices based on an advanced energy consumption model that takes into account a broad range of parameters. The developed model captures relationships between transmission power, transceiver distance, modulation order, channel fading, power amplifier (PA) effects, power control, multiple antennas, as well as other circuit components in the radio frequency (RF) transceiver. Based on the developed model, we are able to identify the optimal modulation order in terms of energy efficiency under different situations (e.g., different transceiver distances, different PA classes and efficiencies, different pulse shapes, etc.). Furthermore, we capture the impact of the system level and the network level on the PA energy via the peak-to-average ratio (PAR) and power control. We are also able to identify the impact of multiple antennas at the handset on the energy consumption and the transmitted bit rate for few and many antennas (conventional multiple-input-multiple-output (MIMO) and massive MIMO) at the base station. This work provides an important framework for analyzing energy-efficient communications for different wireless systems, ranging from cellular networks to the wireless Internet of Things.
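For context, a commonly used simplified form of such an energy-per-bit model (a generic textbook-style sketch, not the exact model developed in this dissertation) is

\[
E_b \;=\; \frac{P_{\mathrm{PA}} + P_{\mathrm{circ}}}{R_b},
\qquad
P_{\mathrm{PA}} \;=\; \frac{\xi}{\eta}\, P_{\mathrm{tx}},
\qquad
P_{\mathrm{tx}} \;\propto\; \left(2^{b}-1\right) d^{\,n},
\]

where $R_b = b\,B$ is the bit rate for modulation order $2^{b}$ over bandwidth $B$, $P_{\mathrm{circ}}$ is the transceiver circuit power, $\eta$ is the PA efficiency, $\xi$ is the peak-to-average ratio penalty, $d$ is the transceiver distance, and $n$ is the path-loss exponent. Raising $b$ shortens the transmission (reducing the circuit-energy share) but increases the required transmit power, which is why an energy-optimal modulation order exists and shifts with distance, pulse shape, and PA class, as studied above.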