Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

No upcoming defense notices for now!

Past Defense Notices


REID CROWE

Development and Implementation of a VHF High Power Amplifier for the Multi-Channel Coherent Radar Depth Sounder/Imager System

When & Where:


317 Nichols Hall

Committee Members:

Fernando Rodriguez-Morales, Chair
Chris Allen
Carl Leuschen


Abstract

This thesis presents the implementation and characterization of a VHF high power amplifier developed for the Multi-channel Coherent Radar Depth Sounder/Imager (MCoRDS/I) system. MCoRDS/I is used to collect data on the thickness and basal topography of polar ice sheets, ice sheet margins, and fast-flowing glaciers from airborne platforms. Previous surveys have indicated that higher transmit power is needed to improve the performance of the radar, particularly when flying over challenging areas. 
The VHF high power amplifier system presented here consists of a 50-W driver amplifier and a 1-kW output stage operating in Class C. Its performance was characterized and optimized to obtain the best tradeoff between linearity, output power, efficiency, and conducted and radiated noise. A waveform pre-distortion technique to correct for gain variations (dependent on input power and operating frequency) was demonstrated using digital techniques. 
The amplifier system is a modular unit that can be expanded to handle a larger number of transmit channels as needed for future applications. The system can support sequential transmit/receive operations on a single antenna by using a high-power circulator and a duplexer circuit composed of two 90° hybrid couplers and anti-parallel diodes. The duplexer is advantageous over switches based on PIN-diodes due to the moderately high power handling capability and fast switching time. The system presented here is also smaller and lighter than previous implementations with comparable output power levels.


KENNETH DEWAYNE BROWN

A Mobile Wireless Channel State Recognition Algorithm

When & Where:


2001B Eaton Hall

Committee Members:

Glenn Prescott, Chair
Chris Allen
Gary Minden
Erik Perrins
Richard Hale

Abstract

The scope of this research is a blind mobile wireless channel state recognition (CSR) algorithm that detects channel time and frequency dispersion. Hidden Markov Models (HMMs) are utilized to represent the statistical relationship between the hidden channel dispersive state process and an observed received waveform process, and they provide sufficient sensitivity to detect the hidden state process. First-order and second-order statistical features are assumed to be sufficient to discriminate channel state from the received waveform observations. Hard state decisions provide sufficient information and can be combined to increase the reliability of a time-block channel state estimate. To investigate the feasibility of the proposed CSR algorithm, this research effort has architected, designed, and verified a blind statistical feature recognition process capable of detecting whether a mobile wireless channel is coherent, time dispersive, frequency dispersive, or dual dispersive. Channel state waveforms are utilized to compute the transition and output probability parameters for a set of feature recognition HMMs. Time and frequency statistical features are computed from consecutive sample blocks and input into the set of trained HMMs, which compute a state-sequence conditional probability for each feature. The conditional probabilities indicate how well the input waveform statistically agrees with the previous training waveforms. Hard decisions are produced from each feature state probability estimate and combined to produce a single output channel dispersive state estimate for each input time block. To verify the CSR algorithm's performance, combinations of state sequence blocks were input to the process and state recognition accuracy was characterized. Initial results suggest that CSR based on blind waveform statistical feature recognition is feasible.
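As an illustration of the HMM scoring step the abstract describes, the sketch below computes a conditional probability for an observation sequence with the forward algorithm. The two-state model, its parameters, and the binary feature symbols are invented for illustration; they are not the trained models from the thesis.

```python
import math

def forward_log_prob(obs, start, trans, emit):
    """Log-probability of an observation sequence under an HMM.

    obs   : list of observation symbols
    start : dict state -> initial probability
    trans : dict state -> dict state -> transition probability
    emit  : dict state -> dict symbol -> emission probability
    """
    states = list(start)
    # alpha[s] = P(observations so far, current state = s)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return math.log(sum(alpha.values()))

# Toy two-state model: "coherent" vs. "dispersive" channel, observed through
# a binary delay-spread feature ("low" / "high"). All numbers are made up.
start = {"coherent": 0.6, "dispersive": 0.4}
trans = {"coherent": {"coherent": 0.9, "dispersive": 0.1},
         "dispersive": {"coherent": 0.2, "dispersive": 0.8}}
emit = {"coherent": {"low": 0.8, "high": 0.2},
        "dispersive": {"low": 0.3, "high": 0.7}}

# Score how well a block of observed features agrees with the trained model.
score = forward_log_prob(["low", "low", "high", "high"], start, trans, emit)
```

In the abstract's terms, one such HMM would be trained per statistical feature, and the per-feature scores compared or thresholded to produce the hard state decisions that are then combined per time block.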


WENRONG ZENG

Content-Based Access Control

When & Where:


250 Nichols Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Jerzy Grzymala-Busse
Prasad Kulkarni
Alfred Tat-Kei Ho

Abstract

In conventional database access control models, access control policies are manually and explicitly specified for each role against each data object. In large-scale content-centric data sharing, such approaches can be impractical due to the exponential explosion in the number of policies and the sensitivity of data objects. In this proposal, we first introduce Content-Based Access Control (CBAC), an innovative access control model for content-centric information sharing. As a complement to conventional access control models, CBAC makes access control decisions automatically, based on the content similarity between user credentials and data content. In CBAC, a meta-rule allows each user to access “a subset” of the designated data objects of the whole database, where the boundary of the subset is dynamically determined by the textual content of the data objects. We then present an enforcement mechanism for CBAC that exploits Oracle’s Virtual Private Database (VPD). To further improve the performance of the proposed approach, we introduce a content-based blocking mechanism that improves the efficiency of CBAC enforcement by revealing only the data objects most relevant to the user's credentials. We also utilize a tagging mechanism for more accurate textual content matching on short text snippets (e.g., short VarChar attributes), extracting topics rather than pure word occurrences to represent the content of the data. Experimental results show that CBAC makes reasonable access control decisions with a small overhead.
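To illustrate the core CBAC idea, the sketch below grants access when a data object's textual content is sufficiently similar to the user's credentials. The bag-of-words cosine similarity, the threshold, and the sample strings are illustrative assumptions, not the Oracle VPD enforcement mechanism described in the proposal.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def cbac_allows(credentials, record_content, threshold=0.2):
    """Meta-rule sketch: the user may access a record whose content
    is similar enough to the user's credentials (threshold is arbitrary)."""
    return cosine_similarity(credentials, record_content) >= threshold

# Hypothetical credentials and records, for illustration only.
creds = "pediatric oncology nurse st marys hospital"
allowed = cbac_allows(creds, "oncology treatment notes for pediatric patient")
denied = cbac_allows(creds, "quarterly facilities maintenance budget")
```

The boundary of the accessible subset is thus determined dynamically by content, not by a policy enumerated per role and per object.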


MARIANNE JANTZ

Detecting and Understanding Dynamically Dead Instructions for Contemporary Machines

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Xin Fu
Man Kong


Abstract

Instructions executed by the processor are dynamically dead if the values they produce are not used by the program. Researchers have discovered that a surprisingly large fraction of executed instructions are dynamically dead. Dynamically dead instructions (DDI) can potentially slow down program execution and waste power. Unfortunately, although the issue of DDI is well-known, there has not been any comprehensive study to understand and explain the occurrence of DDI, evaluate its performance impact, and resolve the problem, especially for contemporary architectures.
The goals of our research are to quantify and understand the properties of DDI, as well as systematically characterize them for existing state-of-the-art compilers and popular architectures, in order to develop compiler and/or architectural techniques to avoid their execution at runtime. In this thesis, we describe our GCC-based framework to instrument binary programs to generate control-flow and data-flow (registers and memory) traces at runtime. We present the percentage of DDI in our benchmark programs, as well as the characteristics of the DDI. We show that context information can have a significant impact on the probability that an instruction will be dynamically dead, and that a low percentage of static instructions actually contribute to the overall DDI in our benchmark programs. We also describe the outcome of our manual study to analyze and categorize the instances of dead instructions in our x86 benchmarks into seven distinct categories, and we briefly describe our plan to develop compiler- and architecture-based techniques to eliminate each category of DDI in future programs. Finally, we find that x86 and ARM programs compiled with GCC generally contain a significant amount of DDI. However, x86 programs present fewer DDI than the ARM benchmarks, which display percentages of DDI similar to those reported by earlier research for other architectures. Therefore, we suggest that the ARM architecture observes a non-negligible fraction of DDI and should be examined further. Overall, we believe that a close synergy between static code generation and program execution techniques may be the most effective strategy to eliminate DDI.
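The definition above can be made concrete with a minimal trace analysis: an executed instruction is dynamically dead if every value it writes is overwritten before any later read. The sketch below scans a register-level trace backwards, tracking liveness; the trace format and register names are invented for illustration and this is not the thesis's GCC-based framework.

```python
def find_dead_instructions(trace):
    """Return indices of trace entries whose written values are never read.

    trace: list of (reads, writes) tuples; reads/writes are sets of registers.
    Scans backwards, tracking which registers will be read before redefinition.
    """
    live = set()          # registers whose current value will be read later
    dead = []
    for idx in range(len(trace) - 1, -1, -1):
        reads, writes = trace[idx]
        if writes and not (writes & live):
            dead.append(idx)   # every value this instruction produces is unused
        live -= writes         # a definition kills the old value's liveness
        live |= reads          # a use makes the value live
    return sorted(dead)

# Toy trace: the first write to r1 is overwritten before being read,
# and r2 is written but never read, so entries 0 and 2 are dynamically dead.
trace = [
    (set(),  {"r1"}),   # 0: r1 = ...   (overwritten at 1 -> dead)
    (set(),  {"r1"}),   # 1: r1 = ...   (read at 2 -> live)
    ({"r1"}, {"r2"}),   # 2: r2 = f(r1) (r2 never read -> dead)
]
dead = find_dead_instructions(trace)
```

A real framework must also thread memory locations and control flow through the trace, which is what the instrumentation described in the thesis collects.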


YUHAO YANG

Protecting Attributes and Contents in Online Social Networks

When & Where:


2001B Eaton Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Luke Huan
Prasad Kulkarni
Alfred Tat-Kei Ho

Abstract

With the extreme popularity of online social networks, security and privacy issues have become critical. In particular, it is important to protect users' privacy without preventing them from normal socialization. User privacy in the context of data publishing and structural re-identification attacks has been well studied. However, the protection of attributes and data content has mostly been neglected by the research community. While social network data is rarely published, billions of messages are shared in various social networks on a daily basis. Therefore, it is more important to protect attributes and textual content in social networks.

We first study the vulnerabilities of user attributes and contents, in particular, the identifiability of users when the adversary learns a small piece of information about the target. We have presented two attribute re-identification attacks that exploit information retrieval and web search techniques. We have shown that large portions of users with an online presence are highly identifiable, even with a small piece of seed information, and even when the seed information is inaccurate.
To protect user attributes and content, we will adopt the social circle model derived from the concepts of “privacy as user perception” and “information boundary”. Users will have different social circles, and share different information in different circles. We propose to automatically identify social circles based on three observations: (1) friends in the same circle are connected and share many friends in common; (2) friends in the same circle are more likely to interact; (3) friends in the same circle tend to have similar interests and share similar content. We propose to adopt multi-view clustering to model and integrate such observations to identify implicit circles in a user’s personal network. Moreover, we propose an evaluation mechanism that evaluates the quality of the clusters (circles). 
Furthermore, we propose to exploit such circles for cross-site privacy protection for users: new messages (blogs, micro-blogs, updates, etc.) will be evaluated and distributed to the most relevant circle(s). We monitor the information distributed to each circle to protect users against information aggregation attacks, and also enforce circle boundaries to prevent sensitive information leakage.


MICHAEL JANTZ

Automatic Cross-Layer Framework to Improve Memory Power and Efficiency

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Xin Fu
Andy Gill
Bo Luo
Karen Nordheden

Abstract

Recent computing trends include an increased focus on power and energy consumption and the need to support multi-tenant use cases in which physical resources need to be multiplexed efficiently without causing performance interference. Many recent works have focused on how to best allocate CPU, storage and network resources to meet competing service quality objectives and reduce power. At the same time, data-intensive computing is placing larger demands on physical memory systems than ever before. In comparison to other resources, however, it is challenging to obtain precise control over distribution of memory capacity, bandwidth, or power, when virtualizing and multiplexing system memory. That is because these effects intimately depend upon the results of activity across multiple layers of the vertical execution stack, which are often not available in any individual component. 

The goal of our proposed work is to exercise collaboration between the compiler, operating system, and memory controller for a hybrid memory architecture to reduce energy consumption, while balancing performance trade-offs. Analysis, data structure partitioning, and code layout transformations will be conducted by the compiler and two-way communication between the applications and OS will guide memory management. The OS, together with the hardware memory controller, will allocate, map, and migrate pages to minimize energy consumption for a specified performance tolerance.


NIRANJAN SUNDARARAJAN

Study of Balanced and Unbalanced RFID Tags Attached to Charge Pumps

When & Where:


246 Nichols Hall

Committee Members:

Ken Demarest, Chair
Dan Deavours
Jim Stiles


Abstract

Ultra-High-Frequency Radio Frequency Identification (UHF RFID) technology has gained wide prominence in recent years. The main drawback of a UHF RFID tag antenna is that it is sensitive to the environment in which it is placed; that is, the performance of an RFID tag deteriorates when it is placed on conductive or dielectric objects. Most UHF RFID antennas use variations of a balanced folded dipole, such as a T-match antenna. In this project, we answer the question: would an unbalanced version of a T-match antenna (a Gamma-match antenna) be beneficial in an RFID tag, compared to a conventional balanced T-match antenna? To test this, we analyzed the performance of a Gamma-match and a T-match antenna when attached to a charge pump, which generally acts as the load for the antenna in an RFID tag. We also propose a procedure for finding the best impedance to drive a charge pump and outline a simple procedure for designing a balanced T-match antenna for any desired input impedance. We then transform a balanced T-match antenna into an unbalanced Gamma-match antenna and show through testing that the Gamma-match antenna delivers more power and voltage to a charge pump than the T-match antenna. Finally, we validate these results by studying and comparing the Z-parameters of the Gamma-match and T-match antennas.


HARIPRASAD SAMPATHKUMAR

A Framework for Information Retrieval and Knowledge Discovery from Online Healthcare Social Networks

When & Where:


246 Nichols Hall

Committee Members:

Bo Luo, Chair
Xue-Wen Chen
Jerzy Grzymala-Busse
Prasad Kulkarni
Jie Zhang

Abstract

Information used to assist biomedical research has largely comprised data available in published sources such as scientific literature or clinical sources such as patient health records. Information from such sources, though extensive and organized, is often not readily available due to its proprietary and/or privacy-sensitive nature. Collecting such information through clinical and pharmaceutical studies is expensive, and the information is limited by the diversity of the people involved in the study. With the growth of Web 2.0, more and more people openly share their health experiences with other similar patients on healthcare-related social networks. The data available in these networks can act as a new source that provides the unrestricted, high-volume, highly diverse, and up-to-date information needed to assist biomedical and pharmaceutical research. However, this data is often unstructured, noisy, and scattered, making it unsuitable for use in its current form. The goal of this research is to develop an Information Retrieval and Knowledge Discovery framework that is capable of automatically collecting such data from online healthcare networks, extracting useful information, and representing it in a form that facilitates knowledge discovery in biomedical and pharmaceutical research. Information retrieval, text mining, and ontology modeling techniques are employed in building this framework. An Adverse Drug Reaction discovery tool and a patient profiling tool are being developed to demonstrate the utility of this framework.


SRINIVAS PALGHAT VISWANATH

Design and Development of a Social Media Aggregator

When & Where:


2001B Eaton Hall

Committee Members:

Fengjun Li, Chair
Victor Frost
Prasad Kulkarni


Abstract

Many social network aggregators are available on the market, e.g., SocialNetwork.in, FriendFeed, Pluggio, Postano, and Hootsuite. A social network aggregator is a one-stop shop that provides a single point of entry to manage multiple social network accounts and keep track of social media streams. Once a user registers the sites' credentials with the aggregator, it pulls static data such as user profile information and dynamic data such as news feeds and user posts.

This project aims to design a unified interface for the static and dynamic data of a particular user from Facebook, Foursquare, and Twitter. Unlike other social aggregators that display dynamic social media stream data in separate tabs, each corresponding to one social networking site, we merge dynamic data such as the Facebook timeline and sent tweets from Twitter, and display them in a single stream sorted by posting date. Similarly, the Facebook news feed and the Twitter home timeline are merged and shown in a single stream. To simplify cross-social-network management, we support unified operations such as posting: new posts/tweets can easily be posted to both sites at the same time through this application. Users can further specify the access privileges (i.e., seen by public, friends, friends of friends, or only me) of their posts on Facebook, for dynamic privacy protection.

Last but not least, this aggregator supports the integration of user profiles from the three social networks. An edit-distance-based similarity score is calculated to determine the likelihood that profiles from the three social networks belong to the same friend. For those with a perfect score, the matched profiles are combined and displayed in an additional dialog.
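The edit-distance matching step can be sketched as follows: compute the Levenshtein distance between profile attribute strings and normalize it into a similarity score, with 1.0 treated as a perfect match. The normalization choice and the sample names are illustrative assumptions, not necessarily the project's exact scoring rule.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def name_similarity(a, b):
    """1.0 for identical strings, approaching 0.0 as they diverge."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

# Hypothetical profile names from two networks, for illustration only.
same = name_similarity("john a. smith", "john a. smith")    # perfect score
close = name_similarity("john a. smith", "john smith")      # likely a match
```

Profiles whose attribute scores are all perfect would then be merged and shown in the combined dialog described above.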


BRIGID HALLING

Towards a Formal Verification of the Trusted Platform Module

When & Where:


250 Nichols Hall

Committee Members:

Perry Alexander, Chair
Andy Gill
Fengjun Li


Abstract

The Trusted Platform Module (TPM) serves as the root-of-trust in a trusted computing environment, and therefore warrants formal specification and verification. This thesis presents results of an effort to specify and verify an abstract TPM 1.2 model using PVS that is useful for understanding the TPM and verifying protocols that use it. TPM commands are specified as state transformations and sequenced to represent protocols using a state monad. Preconditions, postconditions, and invariants are specified for individual commands and validated. All specifications are written and verified automatically using the PVS decision procedures and rewriting system.