Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Jennifer Quirk

Aspects of Doppler-Tolerant Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Patrick McCormick
Charles Mohr
James Stiles
Zsolt Talata

Abstract

The Doppler tolerance of a waveform refers to its behavior when subjected to a fast-time Doppler shift imposed by scattering that involves nonnegligible radial velocity. While previous efforts have established decision-based criteria that lead to a binary judgment of Doppler tolerant or intolerant, it is also useful to establish a measure of the degree of Doppler tolerance. The purpose in doing so is to establish a consistent standard, thereby permitting assessment across different parameterizations, as well as introducing a Doppler “quasi-tolerant” trade-space that can ultimately inform automated/cognitive waveform design in increasingly complex and dynamic radio frequency (RF) environments. 
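The idea of a *degree* of Doppler tolerance can be made concrete with a toy matched-filter experiment: apply a fast-time Doppler shift to an LFM pulse and measure how little the matched-filter peak degrades. The sketch below is purely illustrative and not from the dissertation; `lfm` and `peak_loss_db` are hypothetical names, and amplitudes/units are normalized.

```python
import cmath
import math

def lfm(n, tb):
    # unit-amplitude discrete LFM chirp with time-bandwidth product tb
    return [cmath.exp(1j * math.pi * tb * (i / n) ** 2) for i in range(n)]

def peak_loss_db(s, fd_cycles):
    # impose fd_cycles Doppler cycles across the pulse, matched-filter
    # against the original waveform, and report peak loss relative to fd = 0
    n = len(s)
    shifted = [x * cmath.exp(2j * math.pi * fd_cycles * i / n)
               for i, x in enumerate(s)]
    peak = 0.0
    for lag in range(-n + 1, n):  # search all delays for the filter peak
        acc = sum(shifted[i] * s[i - lag].conjugate()
                  for i in range(max(0, lag), min(n, n + lag)))
        peak = max(peak, abs(acc))
    return 20 * math.log10(peak / n)  # 0 dB means no Doppler-induced loss

s = lfm(128, 64)
# An LFM pulse is Doppler tolerant: even several Doppler cycles across the
# pulse cost well under 1 dB of peak, because the peak mostly shifts in
# delay (the range-Doppler coupling of LFM) rather than collapsing.
```

A Doppler-intolerant waveform (e.g. a random phase code) run through the same measurement would show a much larger peak loss, which is the kind of graded comparison a degree-of-tolerance measure formalizes.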

Separately, the application of slow-time coding (STC) to the Doppler-tolerant linear FM (LFM) waveform has been examined for disambiguation of multiple range ambiguities. However, using STC with non-adaptive Doppler processing often results in high Doppler “cross-ambiguity” side lobes that can hinder range disambiguation despite the degree of separability imparted by STC. To enhance this separability, a gradient-based optimization of STC sequences is developed, and a “multi-range” (MR) modification to the reiterative super-resolution (RISR) approach that accounts for the distinct range interval structures from STC is examined. The efficacy of these approaches is demonstrated using open-air measurements. 

The proposed work to appear in the final dissertation focuses on the connection between Doppler tolerance and STC. The first proposal includes the development of a gradient-based optimization procedure to generate Doppler quasi-tolerant random FM (RFM) waveforms. Other proposals consider limitations of STC, particularly when processed with MR-RISR. The final proposal introduces an “intrapulse” modification of the STC/LFM structure to achieve enhanced suppression of range-folded scattering in certain delay/Doppler regions while retaining a degree of Doppler tolerance.


Past Defense Notices

Dates

VADIRAJ HARIBAL

Modelling of ATF-38143 P-HEMT Driven Resistive Mixer for VHF KNG P-150 Portable Radios

When & Where:


250 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Alessandro Salandrino


Abstract

FET resistive mixers play a key role in providing high linearity and low noise figure levels. HEMT technology, with its low threshold voltage, has popularized the mobile phone market and millimeter-wave technologies. This project analyzes the operation of a down-conversion VHF FET resistive mixer model designed using the ultra-low-noise ATF-38143 P-HEMT, which is widely used in KNG-P150 portable mobile radios manufactured by RELM Wireless Corporation. The mixer is designed to function over an RF frequency range of 136 MHz to 174 MHz at an IF frequency of 51.50 MHz. The Statz model has been used to simulate the operation of the P-HEMT under normal conditions. Transfer functions of the matching circuits at each port have been obtained using Simulink modelling. The effect of changes in Q factor at the RF and IF ports has been considered. Analytical modelling of the mixer is performed, and simulated results are compared with experimental data obtained at a constant 5 dBm LO power. The IF transfer function has been modelled to closely match the practical circuit by applying adequate amplitude damping to the response of the LC circuits at the RF port, in order to provide the required IF bandwidth and conversion gain. The effects of stray capacitances and inductances have been neglected during the modelling, and the series resistances of the inductors at the RF and IF ports have been adjusted to match experimental results.
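As a quick sanity check on the band plan above, the LO tuning range implied by a fixed 51.50 MHz IF can be computed directly. This is a back-of-the-envelope sketch; the abstract does not state whether the radio uses low-side or high-side LO injection, so both are shown.

```python
def lo_range_mhz(rf_lo, rf_hi, f_if, low_side=True):
    # fixed-IF down-conversion: f_IF = |f_RF - f_LO|
    # low-side injection:  f_LO = f_RF - f_IF
    # high-side injection: f_LO = f_RF + f_IF
    if low_side:
        return (rf_lo - f_if, rf_hi - f_if)
    return (rf_lo + f_if, rf_hi + f_if)

# RF band and IF from the abstract: 136-174 MHz RF, 51.50 MHz IF
print(lo_range_mhz(136.0, 174.0, 51.5, low_side=True))   # (84.5, 122.5)
print(lo_range_mhz(136.0, 174.0, 51.5, low_side=False))  # (187.5, 225.5)
```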


MOHAMMED ALENAZI

Network Resilience Improvement and Evaluation Using Link Additions

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Lingjia Liu
Bo Luo
Tyrone Duncan

Abstract

Computer networks are increasingly involved in providing services for most of our daily activities related to education, business, health care, social life, and government. Publicly available computer networks are prone to targeted attacks and natural disasters that could disrupt normal operation and services. Building highly resilient networks is therefore an important aspect of their design and implementation. For existing networks, resilience against such challenges can be improved by adding more links. In fact, adding links to form a full mesh yields the most resilient network, but it incurs an infeasibly high cost. In this research, we investigate the resilience improvement of real-world networks via the addition of a cost-efficient set of links. Finding an optimal set of links to add via exhaustive search is impracticable for large networks. Instead, using a greedy algorithm, a feasible solution is obtained by adding a set of links that improves network connectivity by increasing a graph robustness metric such as algebraic connectivity or total path diversity. We use a graph metric called flow robustness as a measure of network resilience. To evaluate the improved networks, we apply three centrality-based attacks and study the networks' resilience under them. The flow robustness results of the attacks show that the improved networks are more resilient than the non-improved networks.
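The greedy improvement loop described above can be sketched in a few lines: compute flow robustness (the fraction of node pairs that remain connected), simulate a centrality-based attack, and repeatedly add the candidate link that best preserves post-attack robustness. This is an illustrative reconstruction, not the dissertation's implementation; the function names and the single degree-based attack are assumptions.

```python
from collections import Counter
from itertools import combinations

def flow_robustness(nodes, edges):
    # fraction of node pairs still connected, via union-find components
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    sizes = Counter(find(v) for v in nodes)
    n = len(nodes)
    return sum(s * (s - 1) // 2 for s in sizes.values()) / (n * (n - 1) // 2)

def degree_attack(nodes, edges, frac=0.2):
    # centrality-based attack: delete the highest-degree fraction of nodes
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    removed = set(sorted(nodes, key=lambda v: -deg[v])[:max(1, int(frac * len(nodes)))])
    kept = [v for v in nodes if v not in removed]
    return kept, [e for e in edges if removed.isdisjoint(e)]

def greedy_add_links(nodes, edges, budget):
    # greedily add the non-edge that maximizes post-attack flow robustness
    edges = list(edges)
    for _ in range(budget):
        have = {frozenset(e) for e in edges}
        cands = [e for e in combinations(nodes, 2) if frozenset(e) not in have]
        if not cands:
            break
        edges.append(max(cands, key=lambda e: flow_robustness(
            *degree_attack(nodes, edges + [e]))))
    return edges

nodes = list(range(6))
path_edges = [(i, i + 1) for i in range(5)]  # a 6-node path: fragile under attack
```

On this toy path graph, one greedy link addition (closing the path into a ring) restores full pairwise connectivity after the degree attack; the full-mesh alternative would need nine extra links for the same effect.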


WENRONG ZENG

Content-Based Access Control

When & Where:


250 Nichols Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Jerzy Grzymala-Busse
Prasad Kulkarni
Alfred Tat-Kei

Abstract

In conventional databases, the most popular access control model specifies policies explicitly, and manually, for each role of every user against each data object. In large-scale content-centric data sharing, such conventional approaches can be impractical due to the explosive growth of data and the sensitivity of data objects. Moreover, conventional database access control policies break down when the semantic content of the data is expected to play a role in access decisions. Users are often over-privileged, and ex post facto auditing is enforced to detect misuse of privileges. Unfortunately, it is usually difficult to reverse the damage, as (large amounts of) data may have been disclosed already. In this dissertation, we first introduce Content-Based Access Control (CBAC), an innovative access control model for content-centric information sharing. As a complement to conventional access control models, the CBAC model makes access control decisions automatically, based on the content similarity between user credentials and data content. In CBAC, each user is allowed by a meta-rule to access "a subset" of the designated data objects of a content-centric database, where the boundary of the subset is dynamically determined by the textual content of the data objects. We then present an enforcement mechanism for CBAC that exploits Oracle's Virtual Private Database (VPD) to implement row-wise access control and to prevent data objects from being abused through unnecessary access admission. To further improve the performance of the proposed approach, we introduce a content-based blocking mechanism that improves the efficiency of CBAC enforcement and reveals a more relevant subset of the data objects than using the user credentials and data content alone. We also utilize several tagging mechanisms for more accurate textual content matching of short text snippets (e.g. short VarChar attributes), extracting topics rather than pure word occurrences to represent the content of the data. With tagging, content similarity is calculated based not purely on word occurrences but on the semantic topics underlying the text content. Experimental results show that CBAC makes accurate access control decisions with a small overhead.
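The core CBAC decision, granting access when a record's text is sufficiently similar to the user's credentials, can be illustrated with a minimal bag-of-words sketch. The actual system uses Oracle VPD enforcement and topic-based tagging; the names and threshold here are illustrative assumptions.

```python
import math
from collections import Counter

def cosine_sim(text_a, text_b):
    # bag-of-words cosine similarity between two text snippets
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(c * c for c in a.values()))
            * math.sqrt(sum(c * c for c in b.values())))
    return dot / norm if norm else 0.0

def cbac_allow(user_credentials, record_text, threshold=0.15):
    # CBAC-style meta-rule: the accessible "subset" of records is determined
    # dynamically by content similarity, not by a per-record policy entry
    return cosine_sim(user_credentials, record_text) >= threshold

creds = "pediatric oncology nurse"
print(cbac_allow(creds, "pediatric oncology treatment notes"))  # True
print(cbac_allow(creds, "quarterly finance report"))            # False
```

Topic-based tagging, as in the dissertation, would replace the raw word counts with topic vectors so that semantically related but lexically different snippets still match.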


RANJITH KRISHNAN

The Xen Hypervisor: Construction of a Test Environment and Validation by Performing Performance Evaluation of Native Linux versus Xen Guests

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Bo Luo
Heechul Yun


Abstract

Modern computers are powerful enough to comfortably run multiple operating systems at the same time. Enabling this is the Xen hypervisor, an open-source tool that is one of the most widely used system virtualization solutions on the market. Xen enables guest virtual machines to run at near-native speeds using a concept called paravirtualization. The primary goal of this project is to construct a development/test environment in which we can investigate the different types of virtualization Xen supports. We start from a base of Fedora, onto which Xen is built and installed. Once Xen is running, we configure both paravirtualized and hardware-virtualized guests.
The second goal of the project is to validate the constructed environment through a performance evaluation. Various performance benchmarks are run on native Linux, the Xen host, and the two important types of Xen guests. As expected, our results show that the performance of the Xen guest machines is close to native Linux. We also show why virtualization-aware paravirtualization performs better than hardware virtualization, which runs without any knowledge of the underlying virtualization infrastructure.


JUSTIN METCALF

Signal Processing for Non-Gaussian Statistics: Clutter Distribution Identification and Adaptive Threshold Estimation

When & Where:


129 Nichols

Committee Members:

Shannon Blunt, Chair
Luke Huan
Lingjia Liu
Jim Stiles
Tyrone Duncan

Abstract

We examine the problem of determining a decision threshold for the binary hypothesis test that naturally arises when a radar system must decide if there is a target present in a range cell under test. Modern radar systems require predictable, low, constant rates of false alarm (i.e., instances where unwanted noise and clutter returns are mistaken for a target). Measured clutter returns have often been fitted to heavy-tailed, non-Gaussian distributions. The heavy tails of these distributions cause an unacceptable rise in the number of false alarms. We use the class of spherically invariant random vectors (SIRVs) to model clutter returns. SIRVs arise from a phenomenological consideration of the radar sensing problem, and include both the Gaussian distribution and most commonly reported non-Gaussian clutter distributions (e.g. K distribution, Weibull distribution).

We propose an extension of a prior technique called the Ozturk algorithm. The Ozturk algorithm generates a graphical library of points corresponding to known SIRV distributions. These points are generated from linked vectors whose magnitudes are derived from the order statistics of the SIRV distributions. Measured data is then compared to the library, and the distribution that best approximates the measured data is chosen. Our extension introduces a framework of weighting functions and examines both a distribution classification technique and a method of determining an adaptive threshold in data that may or may not belong to a known distribution. The extensions are then compared to neural network techniques. Special attention is paid to producing a robust, adaptive estimate of the detection threshold. Finally, divergence measures of SIRVs are examined.
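The flavor of the threshold-estimation problem can be seen in a small simulation: SIRV clutter is generated as Gaussian speckle modulated by a random texture (a Gamma texture gives approximately K-distributed envelopes), and a detection threshold is set from the order statistics of clutter-only samples to hit a desired false-alarm rate. This is a toy empirical-quantile sketch, not the weighted Ozturk-based method developed in the dissertation.

```python
import math
import random

def sirv_envelope(n, shape, rng):
    # SIRV sample: envelope = sqrt(texture) * |complex Gaussian speckle|.
    # A small Gamma shape parameter gives heavy tails (approximately
    # K-distributed clutter); shape -> infinity recovers Gaussian clutter.
    out = []
    for _ in range(n):
        texture = rng.gammavariate(shape, 1.0 / shape)  # unit-mean texture
        i, q = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        out.append(math.sqrt(texture) * math.hypot(i, q))
    return out

def threshold_for_pfa(clutter, pfa):
    # order-statistic threshold: the empirical (1 - pfa) sample quantile
    xs = sorted(clutter)
    return xs[min(len(xs) - 1, int((1.0 - pfa) * len(xs)))]

rng = random.Random(1)
train = sirv_envelope(20000, shape=0.5, rng=rng)
thr = threshold_for_pfa(train, pfa=0.01)
test = sirv_envelope(20000, shape=0.5, rng=rng)
rate = sum(x > thr for x in test) / len(test)
# rate should land near the requested 1% false-alarm probability
```

A Gaussian-derived threshold applied to the same heavy-tailed data would sit far too low and produce the inflated false-alarm rates the abstract describes.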


ALHANOOF ALTHNIAN

Evolutionary Learning of Goal-Oriented Communication Strategies in Multi-Agent Systems

When & Where:


246 Nichols Hall

Committee Members:

Arvin Agah, Chair
Jerzy Grzymala-Busse
Prasad Kulkarni
Bo Luo
Sara Kieweg

Abstract

Multi-agent systems are a common paradigm for building distributed systems in domains such as networking, health care, swarm sensing, robotics, and transportation. Performance goals can vary from one application to another according to the domain's specifications and requirements, and they can also vary over the course of task execution. For example, agents may initially be interested in completing the task as fast as possible, but if their energy hits a specific level while still working on the task, they might then need to switch their goal to minimizing energy consumption. Previous studies in multi-agent systems have observed that varying the type of information that agents communicate, such as goals and beliefs, has a significant impact on the performance of the system with respect to different, usually conflicting, performance metrics, such as speed of solution, communication efficiency, and travel distance/cost. Therefore, when designing a communication strategy for a multi-agent system, it is unlikely that one strategy can perform well with respect to all performance metrics. Yet it is not clear in advance which strategy or communication decisions will be best with respect to each metric. Previous approaches to communication decisions in multi-agent systems either manually design one or more fixed communication strategies, extend agents' capabilities and use heuristics, or learn a strategy with respect to a single predetermined performance goal. To address this issue, this research introduces the goal-oriented communication strategy, where communication decisions are determined based on the desired performance goal. This work proposes an evolutionary approach for learning a goal-oriented communication strategy in multi-agent systems. The approach enables learning an effective communication strategy with respect to simple or complex measurable performance goals. The learned strategy determines what, when, and to whom information should be communicated during the course of task execution.
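A heavily simplified version of the evolutionary mechanism might look as follows: a communication strategy is encoded as a bitstring (bit i meaning "communicate information type i"), and a generic genetic algorithm evolves it against whatever measurable performance goal is supplied as the fitness function. Everything here (the encoding, operators, and parameters) is an illustrative assumption, not the dissertation's actual design.

```python
import random

def evolve(fitness, n_bits, rng, pop_size=30, generations=60):
    # minimal genetic algorithm: binary tournament selection,
    # one-point crossover, occasional bit-flip mutation
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p, q = pick(), pick()
            cut = rng.randrange(1, n_bits)
            child = p[:cut] + q[cut:]          # one-point crossover
            if rng.random() < 0.1:             # mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy performance goal: communicate only goals and positions (bits 0 and 2);
# in the real system, fitness would come from simulating the agents' task
target = [1, 0, 1, 0, 0, 0]
goal_fitness = lambda s: sum(int(a == b) for a, b in zip(s, target))
best = evolve(goal_fitness, len(target), random.Random(0))
```

Swapping in a different fitness function (e.g. one penalizing message count, or rewarding completion speed) steers the learned strategy toward a different performance goal, which is the point of making the strategy goal-oriented.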


JASON GEVARGIZIAN

Executables from Program Slices for Java Programs

When & Where:


250 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Andy Gill


Abstract

Program slicing is a popular program decomposition and analysis technique that extracts only those program statements that are relevant to particular points of interest. Executable slices are program slices that are independently executable and that correctly compute the values in the slicing criteria. Executable slices can be used during debugging and to improve program performance through parallelization of partially overlapping slices.
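For intuition, a backward slice is essentially reachability over a statement-level dependence graph, as in this toy sketch (real slicers such as the one described here work over SSA-form system dependence graphs, not hand-built maps):

```python
def backward_slice(dependences, criterion):
    # dependences: statement -> set of statements it depends on (data/control)
    # returns every statement relevant to the slicing criterion
    sliced, work = set(), [criterion]
    while work:
        s = work.pop()
        if s not in sliced:
            sliced.add(s)
            work.extend(dependences.get(s, ()))
    return sliced

# toy program:                dependences (statement -> what it needs)
#   1: x = 1
#   2: y = 2
#   3: z = x + 1
#   4: print(z)
deps = {3: {1}, 4: {3}}
print(sorted(backward_slice(deps, 4)))  # [1, 3, 4] -- statement 2 is sliced away
```

An *executable* slicer must additionally emit statements 1, 3, and 4 as a compilable program, which is the gap the WALA extension described here fills.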

While program slicing and the construction of executable slicers have been studied in the past, there are few acceptable executable slicers available, even for popular languages such as Java. In this work, we provide an extension to the T. J. Watson Libraries for Analysis (WALA), an open-source Java application static analysis suite, to generate fully executable slices.

We analyze the problem of executable slice generation in the context of the capabilities provided and algorithms used by the WALA library. We then employ this understanding to augment the existing WALA static SSA slicer to efficiently track non-SSA data dependence, and couple this component with our executable slicer backend. We evaluate our slicer extension and find that it produces accurate executable slices for all programs that fall within the limitations of the WALA SSA slicer itself. Our extension to generate executable program slices fulfills one of the requirements of our larger project: an automatic partitioner and parallelizer for Java applications.


DAVID HARVIE

Targeted Scrum: Software Development Inspired by Mission Command

When & Where:


246 Nichols Hall

Committee Members:

Arvin Agah, Chair
Bo Luo
James Miller
Hossein Saiedian
Prajna Dhar

Abstract

Software engineering and mission command are two separate but similar fields, as both are instances of complex problem solving in environments with ever-changing requirements. Both fields have followed similar paths, from using industrial-age decomposition to deal with large problems to striving to be more agile and resilient. Our research hypothesis is that modifications to agile software development inspired by mission command can improve the software engineering process in terms of planning, prioritizing, and communicating software requirements and progress, as well as improving the overall software product. Targeted Scrum is a modification of Traditional Scrum based on three inspirations from Mission Command: End State, Lines of Effort, and Targeting. These inspirations have led to the introduction of the Product Design Meeting and to modifications of some current Scrum meetings and artifacts. We tested our research hypothesis using a semester-long undergraduate software engineering class. Student teams developed two software projects, one using Traditional Scrum and the other using Targeted Scrum. We then assessed how well each methodology assisted the software development teams in planning and developing the software architecture, prioritizing requirements, and communicating progress. We also evaluated the software product produced by each methodology. We determined that Targeted Scrum did better at assisting the software development teams in planning and prioritizing requirements. However, Targeted Scrum had a negligible effect on the teams' external and internal communication. Finally, Targeted Scrum did not have an impact on the product quality of the top-performing and worst-performing teams, though it did assist the product quality of teams in the middle of the performance spectrum.


BRAD TORRENCE

The Life Changing HERMIT: A Case Study of the Worker/Wrapper Transformation

When & Where:


2001B Eaton Hall

Committee Members:

Andy Gill, Chair
Perry Alexander
Prasad Kulkarni


Abstract

In software engineering, altering a program's original implementation disconnects it from the model that produced it. Reconnecting the model and new implementations must be done in a way that does not decrease confidence in the design's correctness and performance. This thesis demonstrates that it is possible, in practice, to connect the model of Conway's Game of Life with new implementations using the worker/wrapper transformation theory. This connection allows development to continue without the sacrifice of a complete re-implementation.

HERMIT is a tool that allows programs implemented in Haskell to be transformed during the compilation process, and it has features capable of performing worker/wrapper transformations. Specifically, in these experiments HERMIT is used to apply syntax transformations that replace Life's linked-list-based implementation with ones that use other data structures, in an effort to explore alternative implementations and improve overall performance.

Previous work has successfully performed the worker/wrapper conversion on an individual function using HERMIT. This thesis presents the first time that a programmer-directed worker/wrapper transformation has been attempted on an entire program. From this experiment, substantial observations have been made. These observations have led to proposed improvements to the HERMIT system, as well as a formal approach to the worker/wrapper transformation process in general.
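The shape of the worker/wrapper transformation can be shown outside Haskell as well. Given conversions `wrap`/`unwrap` between the original representation and a more efficient one (with `wrap . unwrap` behaving as the identity on boards), the original function factors as `wrap . worker . unwrap`, and the optimization effort moves into the worker. Below is a small Python sketch in the spirit of the Life case study; it is not the thesis code, and the set representation is an assumed stand-in for the data structures actually explored.

```python
# worker/wrapper idea: refactor  step = wrap . worker . unwrap,
# then optimize the worker in the more efficient representation.

def unwrap(cells_list):     # model representation (sorted list of live cells)
    return frozenset(cells_list)

def wrap(cells_set):        # back to the model representation
    return sorted(cells_set)

def step_list(cells_list):  # original interface: one Life generation
    return wrap(step_set(unwrap(cells_list)))

def step_set(cells):        # worker: same rule, but O(1) membership tests
    def neighbors(c):
        x, y = c
        return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]
    candidates = set(cells) | {n for c in cells for n in neighbors(c)}
    def alive(c):
        k = sum(1 for n in neighbors(c) if n in cells)
        return k == 3 or (k == 2 and c in cells)
    return frozenset(c for c in candidates if alive(c))

# a vertical blinker oscillates with period 2
blinker = [(0, -1), (0, 0), (0, 1)]
```

The correctness obligation of the transformation is exactly what HERMIT lets the programmer discharge mechanically: the rewritten `step_list` must agree with the original on every board.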


RAMA KRISHNAMOORTHY

Adding Collision Detection to Functional Active Programming

When & Where:


2001B Eaton Hall

Committee Members:

Andy Gill, Chair
Luke Huan
Prasad Kulkarni


Abstract

Active is a Haskell library for creating animations driven by time. The key concept is that every animation has its own starting and ending time, and the motion of each element can be defined as a function of time. This underlying idea is intuitive and simple enough for users to understand that it has created a space for simple animations, called “Functional Active programming”. Although many FRP (functional reactive programming) libraries are available, they are often challenging to use for simple animations.
In this project, we have added some reactive features to the Active library in an attempt to enhance the Active programming space without complicating the underlying principles. These features let Active elements detect collisions, or a mouse click event, and change their behavior accordingly. Having built-in reactive features equips Active programmers with extra tools and significantly reduces the effort needed to code such reactions. The reactive features have been implemented on top of Blank Canvas.
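The Active model and the added collision hook can be mimicked in a few lines: an animated element is a pure function of time over a start/end interval, and collision detection is a predicate sampled at a time of interest. This is a schematic Python analogue; the class and function names are illustrative, not the library's API.

```python
# an "active" value is just a function of time over a start/end interval,
# mirroring the Active library's core idea (names here are illustrative)

class Active:
    def __init__(self, start, end, at):
        self.start, self.end, self.at = start, end, at

def moving_circle(start, end, x0, vx, y, r):
    # the circle's state is a pure function of time: (x, y, radius)
    return Active(start, end, lambda t: (x0 + vx * (t - start), y, r))

def colliding(a, b, t):
    # reactive hook: are two circular elements overlapping at time t?
    (ax, ay, ar), (bx, by, br) = a.at(t), b.at(t)
    return (ax - bx) ** 2 + (ay - by) ** 2 <= (ar + br) ** 2

left = moving_circle(0.0, 10.0, -5.0, 1.0, 0.0, 1.0)   # drifts right
right = moving_circle(0.0, 10.0, 5.0, -1.0, 0.0, 1.0)  # drifts left
# the circles are apart at t = 0 and meet in the middle by t = 4
```

In the actual project, such a predicate would trigger a change in the elements' subsequent behavior, which is the reactive capability grafted onto Active.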