Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Jennifer Quirk

Aspects of Doppler-Tolerant Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Patrick McCormick
Charles Mohr
James Stiles
Zsolt Talata

Abstract

The Doppler tolerance of a waveform refers to its behavior when subjected to a fast-time Doppler shift imposed by scattering that involves nonnegligible radial velocity. While previous efforts have established decision-based criteria that lead to a binary judgment of Doppler tolerant or intolerant, it is also useful to establish a measure of the degree of Doppler tolerance. The purpose in doing so is to establish a consistent standard, thereby permitting assessment across different parameterizations, as well as introducing a Doppler “quasi-tolerant” trade-space that can ultimately inform automated/cognitive waveform design in increasingly complex and dynamic radio frequency (RF) environments. 
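
As a concrete illustration of the property being quantified here, the following minimal sketch (not taken from the dissertation; the waveform parameters are arbitrary assumptions) applies a fast-time Doppler shift to an LFM pulse and compares the matched-filter peak with and without the shift. For an LFM, the peak largely survives the mismatch, shifting in delay rather than collapsing, which is the classical sense in which LFM is called Doppler tolerant.

```python
import numpy as np

# Arbitrary illustrative parameters (assumptions, not from the dissertation)
B, T, fs = 10e6, 50e-6, 40e6          # bandwidth (Hz), pulse width (s), sample rate (Hz)
fd = 30e3                              # fast-time Doppler shift (Hz)

t = np.arange(0, T, 1 / fs)
lfm = np.exp(1j * np.pi * (B / T) * (t - T / 2) ** 2)    # unit-amplitude LFM pulse
shifted = lfm * np.exp(1j * 2 * np.pi * fd * t)          # Doppler-shifted echo

mf = np.conj(lfm[::-1])                                  # matched filter for the LFM
peak0 = np.abs(np.convolve(lfm, mf)).max()
peak1 = np.abs(np.convolve(shifted, mf)).max()

# Peak loss in dB is one crude indicator of the "degree" of Doppler tolerance
print(f"mismatch loss: {20 * np.log10(peak1 / peak0):.2f} dB")
```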

Separately, the application of slow-time coding (STC) to the Doppler-tolerant linear FM (LFM) waveform has been examined for disambiguation of multiple range ambiguities. However, using STC with non-adaptive Doppler processing often results in high Doppler “cross-ambiguity” side lobes that can hinder range disambiguation despite the degree of separability imparted by STC. To enhance this separability, a gradient-based optimization of STC sequences is developed, and a “multi-range” (MR) modification to the reiterative super-resolution (RISR) approach that accounts for the distinct range interval structures from STC is examined. The efficacy of these approaches is demonstrated using open-air measurements. 

The proposed work to appear in the final dissertation focuses on the connection between Doppler tolerance and STC. The first proposal includes the development of a gradient-based optimization procedure to generate Doppler quasi-tolerant random FM (RFM) waveforms. Other proposals consider limitations of STC, particularly when processed with MR-RISR. The final proposal introduces an “intrapulse” modification of the STC/LFM structure to achieve enhanced suppression of range-folded scattering in certain delay/Doppler regions while retaining a degree of Doppler tolerance.


Past Defense Notices

HARSHITH POTU

Android Application for Interactive Teaching

When & Where:


250 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Esam El-Araby
Andy Gill


Abstract

In a world of rapidly growing technologies and applications, most people use smart devices, which creates an opportunity to develop applications that help students learn effectively. In this project, we develop an Android application that provides a digital means of interaction between professors and students. Instead of relying on traditional email for every discussion, the application lets a professor broadcast messages to an entire class with a single click. Students can follow multiple professors and take part in active discussions, and users can also send personal messages to one another. The application provides unique logins for every student and professor, uses mongoDB as the database, and uses the "parse" backend as a service. The main inspiration for this project was an application called Tophat.
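
To illustrate the kind of data model such a broadcast feature implies, here is a minimal sketch using pymongo; the collection and field names are hypothetical, and the actual application stores its data through the "parse" backend rather than querying MongoDB directly.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # hypothetical local instance
db = client["interactive_teaching"]                 # hypothetical database name

def broadcast(professor_id: str, course_id: str, text: str) -> None:
    """Store one announcement that every follower of the course can read."""
    db.messages.insert_one({
        "professor_id": professor_id,   # hypothetical field names
        "course_id": course_id,
        "text": text,
        "type": "broadcast",
    })

def feed(student_id: str):
    """Return broadcasts from every course the student follows."""
    followed = db.follows.distinct("course_id", {"student_id": student_id})
    return list(db.messages.find({"course_id": {"$in": followed}, "type": "broadcast"}))
```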


ABDULMALIK HUMAYED

Security Protection for Smart Cars — A CPS Perspective

When & Where:


246 Nichols Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Prasad Kulkarni
Heechul Yun
Prajna Dhar

Abstract

As passenger vehicles evolve to be “smart”, electronic components for communication, intelligent control, and entertainment are continuously introduced to new models and concept vehicles. The new paradigm introduces new features and benefits, but also brings new security issues, which are often overlooked in industry as well as in the research community. 

Smart cars are considered cyber-physical systems (CPS) because of their integration of cyber and physical components. In recent years, various threats, vulnerabilities, and attacks have been discovered in different models of smart cars. In the worst-case scenario, external attackers may remotely obtain full control of the vehicle by exploiting an existing vulnerability. 

In this research, we investigate smart car security from a CPS perspective and derive a taxonomy of threats, vulnerabilities, attacks, and controls. In addition, we investigate three security solutions that would improve the security posture of automotive networks. First, because automotive networks are highly vulnerable to Denial of Service (DoS) attacks, we investigate a solution, ID-Hopping, that effectively mitigates such attacks. Second, because several attacks have successfully exploited the poor separation between critical and non-critical components in automotive networks, we propose to investigate the effectiveness of firewalls and Intrusion Detection Systems (IDS) in preventing and detecting such exploitations. To evaluate our proposals, we built a testbed composed of five microcontrollers and a communication bus to simulate an automotive network. Simulations and experiments performed with the testbed demonstrate the effectiveness of ID-Hopping against DoS attacks. 
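
To give a flavor of the ID-Hopping idea evaluated on the testbed, the sketch below derives a per-message CAN identifier from a shared secret and a message counter. This is a simplified illustration of the concept rather than the exact scheme in the dissertation, and all names and parameters are assumptions.

```python
import hmac, hashlib

SECRET = b"shared-ecu-secret"      # assumed pre-shared key among legitimate ECUs

def hopped_id(base_id: int, counter: int) -> int:
    """Map a static 11-bit CAN ID to a counter-dependent 11-bit ID."""
    digest = hmac.new(SECRET, f"{base_id}:{counter}".encode(), hashlib.sha256).digest()
    offset = int.from_bytes(digest[:2], "big")
    return (base_id + offset) & 0x7FF   # keep within the standard 11-bit ID space

# Senders and receivers that share SECRET and the counter compute the same ID,
# so an attacker flooding one fixed high-priority ID can no longer pin down the victim.
print([hex(hopped_id(0x120, c)) for c in range(3)])
```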


CAITLIN McCOLLISTER

Predicting Author Traits Through Topic Modeling of Multilingual Social Media Text

When & Where:


246 Nichols Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Luke Huan


Abstract

One source of insight into the motivations of a modern human being is the text they write and post for public consumption online, in forms such as personal status updates, product reviews, or forum discussions. The task of inferring traits about an author based on their writing is often called "author profiling." One challenging aspect of author profiling in today’s world is the increasing diversity of natural languages represented on social media websites. Furthermore, the informal nature of such writing often inspires modifications to standard spelling and grammatical structure which are highly language-specific.

These are some of the dilemmas that inspired a series of so-called "shared task" competitions, in which many participants work to solve a single problem in different ways, in order to compare their methods and results. This thesis describes our submission to one author profiling shared task in which 22 teams implemented software to predict the age, gender, and certain personality traits of Twitter users based on the content of their posts to the website. We will also analyze the performance and implementation of our system compared to those of other teams, all of which were described in open-access reports.

The competition organizers provided a labeled training dataset of tweets in English, Spanish, Dutch, and Italian, and evaluated the submitted software on a similar but hidden dataset. Our approach is based on applying a topic modeling algorithm to an auxiliary, unlabeled but larger collection of tweets we collected in each language, and representing tweets from the competition dataset in terms of a vector of 100 topics. We then trained a random forest classifier based on the labeled training dataset to predict the age, gender and personality traits for authors of tweets in the test set. Our software ranked in the top half of participants in English and Italian, and the top third in Dutch.
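
A minimal sketch of this kind of pipeline (topic vectors feeding a random forest) is shown below using scikit-learn; the exact preprocessing, auxiliary corpus, and topic-modeling implementation used in the thesis differ, so treat this as an assumed approximation rather than the submitted system.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier

def train_profile_model(aux_tweets, train_tweets, train_labels):
    """Fit 100 topics on an auxiliary corpus, then classify labeled tweets."""
    vec = CountVectorizer(max_features=20000)        # assumed vocabulary size
    lda = LatentDirichletAllocation(n_components=100, random_state=0)
    lda.fit(vec.fit_transform(aux_tweets))           # unlabeled auxiliary tweets

    X = lda.transform(vec.transform(train_tweets))   # 100-dimensional topic vectors
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, train_labels)                         # e.g. gender labels
    return vec, lda, clf
```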


ANIRUDH NARASIMMAN

Arcana: Private Tweets on a Public Microblog Platform

When & Where:


250 Nichols Hall

Committee Members:

Bo Luo, Chair
Luke Huan
Prasad Kulkarni


Abstract

As one of the world’s most famous online social networks (OSN), Twitter now has 320 million monthly active users. With such a large user base and an abundance of personal information, users increasingly realize the vulnerability of their tweets and have reservations about showing certain tweets to different follower groups, such as colleagues, friends, and other followers. However, Twitter does not offer adequate privacy protection or access control functions. Users can only set an account as protected, so that only the user’s followers see its tweets; a protected tweet does not appear in the public domain, and third-party sites and search engines cannot access it. However, a protected account cannot distinguish between different follower groups or handle users who maintain multiple accounts. To let users restrict access to each tweet to certain follower groups, we propose a browser plug-in system that utilizes CP-ABE (Ciphertext-Policy Attribute-Based Encryption), allowing the user to select followers based on predefined attributes. Through a simple installation and pre-setting, the user can encrypt and decrypt tweets conveniently and avoid the fear of information leakage.
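
The sketch below illustrates the hybrid pattern such a plug-in would typically use: the tweet is encrypted with a symmetric key, and that key is then wrapped under an attribute policy. The `cpabe_encrypt` helper is a hypothetical placeholder for a real CP-ABE implementation; only the AES-GCM part uses a concrete library, and none of this is the plug-in's actual code.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def cpabe_encrypt(policy: str, data: bytes) -> bytes:
    """Hypothetical placeholder: wrap `data` so only followers whose attributes
    satisfy `policy` (e.g. '(colleague AND team_a)') can unwrap it."""
    raise NotImplementedError

def protect_tweet(tweet: str, policy: str) -> dict:
    key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, tweet.encode(), None)
    return {
        "policy": policy,
        "wrapped_key": cpabe_encrypt(policy, key),  # CP-ABE protects only the key
        "nonce": nonce,
        "ciphertext": ciphertext,                   # this blob is what gets posted
    }
```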


PRATHAP KUMAR VALSAN

Towards Achieving Predictable Memory Performance on Multi-core Based Mixed Criticality Embedded Systems

When & Where:


250 Nichols Hall

Committee Members:

Heechul Yun, Chair
Esam El-Araby
Prasad Kulkarni


Abstract

The shared resources in multi-core systems, mainly the memory subsystem (caches and DRAM), if not managed properly, can affect the predictability of real-time tasks in the presence of co-runners. In this work, we first studied the design of COTS DRAM controllers and their impact on predictability, and proposed a DRAM controller design, called MEDUSA, to provide predictable memory performance in multi-core based real-time systems. In our approach, the OS partially partitions DRAM banks into reserved banks and shared banks. The reserved banks are exclusive to each core to provide predictable timing, while the shared banks are shared by all cores to efficiently utilize the resources. MEDUSA has two separate queues for read and write requests, and it prioritizes reads over writes. In processing read requests, MEDUSA employs a two-level scheduling algorithm that prioritizes the memory requests to the reserved banks in a round-robin fashion to provide strong timing predictability. In processing write requests, MEDUSA largely relies on FR-FCFS for high throughput. We implemented MEDUSA in a cycle-accurate full-system simulator. The results show that MEDUSA achieves up to 91% better worst-case performance for real-time tasks while achieving up to 29% throughput improvement for non-real-time tasks. 
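
A highly simplified sketch of the two-level read-scheduling idea described above is given below; the queue structures and the FR-FCFS tie-breaking applied to the shared-bank queue are illustrative assumptions and omit the real controller's timing details.

```python
from collections import deque

class ReadScheduler:
    """Round-robin over per-core reserved-bank queues, then the shared-bank queue."""

    def __init__(self, num_cores: int):
        self.reserved = [deque() for _ in range(num_cores)]  # one queue per core
        self.shared = deque()                                # requests to shared banks
        self.rr = 0                                          # round-robin pointer
        self.open_rows = {}                                  # bank -> currently open row

    def next_request(self):
        # Level 1: reserved banks, served round-robin for timing predictability.
        for i in range(len(self.reserved)):
            core = (self.rr + i) % len(self.reserved)
            if self.reserved[core]:
                self.rr = (core + 1) % len(self.reserved)
                return self.reserved[core].popleft()
        # Level 2: shared banks, row-hit-first then oldest (simplified FR-FCFS).
        for idx, req in enumerate(self.shared):
            if self.open_rows.get(req["bank"]) == req["row"]:
                del self.shared[idx]
                return req
        return self.shared.popleft() if self.shared else None
```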

Second, we studied the contention at shared caches and its impact on predictability. We demonstrate that the prevailing cache partitioning techniques do not necessarily ensure predictable cache performance in modern COTS multi-core platforms that use non-blocking caches to exploit memory-level parallelism (MLP). Through carefully designed experiments using three real COTS multi-core platforms (four distinct CPU architectures) and a cycle-accurate full-system simulator, we show that special hardware registers in non-blocking caches, known as Miss Status Holding Registers (MSHRs), which track the status of outstanding cache misses, can be a significant source of contention. We propose a hardware and system software (OS) collaborative approach to efficiently eliminate MSHR contention for multi-core real-time systems. We implement the hardware extension in a cycle-accurate full-system simulator and the scheduler modification in the Linux 3.14 kernel. In a case study, we achieve up to 19% WCET reduction (average: 13%) for a set of EEMBC benchmarks compared to a baseline cache partitioning setup. 


LEI SHI

Multichannel Sense-and-Avoid Radar for Small UAVs

When & Where:


2001B Eaton Hall

Committee Members:

Chris Allen, Chair
Glenn Prescott
Jim Stiles
Heechul Yun
Lisa Friis

Abstract

This dissertation investigates the feasibility of creating a multichannel sense-and-avoid radar system for small fixed-wing unmanned aerial vehicles (UAVs, also known as sUAS or drones). These aircraft are projected to have a significant impact on the U.S. economy in both the commercial and government sectors; however, their lack of situational awareness has caused the FAA to strictly limit their use. Through this dissertation, a miniature, multichannel, FMCW radar system was created with a small enough size, weight, and power (SWaP) to be mounted onboard a sUAS and provide in-flight target detection. The primary hazard to avoid is general aviation (GA) aircraft such as a Cessna 172, which was estimated to have a radar cross section (RCS) of approximately 1 square meter. The radar system is capable of locating potential hazards in range, Doppler, and 3-dimensional space using a patent-pending 2-D FFT process and interferometry. The initial prototype system has a detection range of approximately 800 m, with 360-degree azimuth coverage and +/- 15-degree elevation coverage, and draws less than 20 W. From the radar data, target detection, tracking, and extrapolation of target behavior in six degrees of freedom were demonstrated.
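
For context, the standard 2-D FFT step that produces a range-Doppler map from dechirped FMCW data looks like the sketch below. This is a generic textbook formulation, not the patent-pending processing chain described above, and the data dimensions are assumptions.

```python
import numpy as np

def range_doppler_map(dechirped: np.ndarray) -> np.ndarray:
    """dechirped: complex samples with shape (num_sweeps, samples_per_sweep).

    FFT along fast time gives range bins; FFT along slow time gives Doppler bins.
    """
    rng = np.fft.fft(dechirped * np.hanning(dechirped.shape[1]), axis=1)
    dop = np.fft.fftshift(np.fft.fft(rng, axis=0), axes=0)
    return 20 * np.log10(np.abs(dop) + 1e-12)       # magnitude in dB

# Example on noise-only data with assumed dimensions (128 sweeps, 256 samples each):
rd = range_doppler_map(np.random.randn(128, 256) + 1j * np.random.randn(128, 256))
```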


RANJITH SOMPALLI

Implementation of Invertebrate Paleontology Knowledge base using Integration of Textual Ontology & Visual Features

When & Where:


2001B Eaton Hall

Committee Members:

Bo Luo, Chair
Jerzy Grzymala-Busse
Richard Wang


Abstract

The Treatise on Invertebrate Paleontology is the most authoritative compilation of the invertebrate fossil record. The quality of studies in paleontology depends in particular on the accessibility of fossil data. Unfortunately, the PDF version of the Treatise currently available is just a scanned copy of the paper publications, and the content is in no way organized to facilitate search and knowledge discovery. This project builds an information-retrieval-based system to extract the fossil descriptions, images, and other available information from the Treatise. The project is divided into two parts. The first part deals with extracting the text and images from the Treatise, organizing the information in a structured format, storing it in a relational database, and building a search engine to browse the fossil data. Extracting the text requires identifying common textual patterns, so a text parsing algorithm is developed to identify these patterns and organize the information in a structured format. Images are extracted using image processing techniques such as image segmentation and morphological operations, and are then associated with the corresponding textual descriptions. A search engine is built to efficiently browse the extracted information, and the web interface provides options to perform many useful tasks with ease.

The second part of this research focuses on the implementation of a content-based information retrieval system. All images from the Treatise are grayscale fossil images, and identifying matching images based on visual features alone is a very difficult task. Hence, we employ an approach that integrates textual and visual features to identify matching images. Textual features are extracted from the fossil descriptions; using statistical and part-of-speech tagging approaches, an ontology is generated that forms attribute-property pairs describing how each region of a shell looks. Popular image features such as SIFT, GIST, and HOG are extracted from the fossil images. The textual and image features are then integrated to retrieve the information for the fossil that matches a query image.
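
As a small illustration of the visual side of this integration, the sketch below extracts HOG descriptors with scikit-image and concatenates them with a precomputed textual feature vector; the image size, HOG parameters, and fusion-by-concatenation choice are assumptions for illustration, not the project's exact configuration.

```python
import numpy as np
from skimage import io, transform
from skimage.feature import hog

def fossil_features(image_path: str, text_vector: np.ndarray) -> np.ndarray:
    """Combine HOG visual features with textual ontology features by concatenation."""
    img = io.imread(image_path, as_gray=True)            # Treatise plates are grayscale
    img = transform.resize(img, (128, 128))              # assumed common size
    visual = hog(img, orientations=9, pixels_per_cell=(16, 16),
                 cells_per_block=(2, 2))
    return np.concatenate([visual, text_vector])

# Matching a query image can then be a nearest-neighbor search over these vectors.
```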


NAGABHUSHANA GARGESHWARI MAHADEVASWAMY

How Duplicates Affect the Error Rate of Data Sets During Validation

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo


Abstract

In data mining, duplicate data plays a large role in determining the induced rule set. In this project, we analyze the impact of duplicates in the input data set on the rule set. The effect of duplicates is analyzed using the error rate, which is calculated by comparing the induced rule set against the testing part of the input data. The experimental results show that the error rate decreases as the percentage of duplicates in the input data set increases, which demonstrates that duplicate data plays a crucial role in the validation process of machine learning. The LEM2 algorithm and a rule checker application have been implemented as part of this project: the LEM2 algorithm is used to induce the rule set for a given input data set, and the rule checker application is used to calculate the error rate.
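
The core of the rule-checking step can be summarized by the sketch below: rules induced by LEM2 are treated as (condition set, decision) pairs, applied to the testing cases, and the error rate is the fraction of cases that are misclassified or left unclassified. The data structures here are illustrative assumptions, not the project's actual implementation.

```python
def classify(case: dict, rules: list[tuple[dict, str]]):
    """Return the decision of the first rule whose conditions all match the case."""
    for conditions, decision in rules:
        if all(case.get(attr) == val for attr, val in conditions.items()):
            return decision
    return None                       # unclassified case

def error_rate(test_cases: list[dict], rules: list[tuple[dict, str]],
               decision_attr: str = "class") -> float:
    wrong = sum(classify(c, rules) != c[decision_attr] for c in test_cases)
    return wrong / len(test_cases)

# Example: one rule, two test cases, error rate 0.5
rules = [({"color": "red"}, "positive")]
cases = [{"color": "red", "class": "positive"}, {"color": "blue", "class": "positive"}]
print(error_rate(cases, rules))
```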


GOWTHAM GOLLA

Developing Novel Machine Learning Algorithms to Improve Sedentary Assessment for Youth Health Enhancement

When & Where:


2001B Eaton Hall

Committee Members:

Luke Huan, Chair
Jerzy Grzymala-Busse
Jordan Carlson


Abstract

Sedentary behavior of youth is an important determinant of health. However, better measures are needed to improve understanding of this relationship and the mechanisms at play, as well as to evaluate health promotion interventions. Even though wearable devices like accelerometers (e.g., activPAL) are considered the standard for assessing physical activity in research, the machine learning algorithms that we propose will allow us to re-examine existing accelerometer data to better understand the association between sedentary time and health in various populations. To achieve this, we collected two datasets: a laboratory-controlled dataset and a free-living dataset. We trained machine learning classifiers on both datasets and compared their behavior. The classifiers predict five postures: sit, stand, sit-stand, stand-sit, and stand/walk. We also compared a manually constructed Hidden Markov Model (HMM) with an automated HMM from existing software on both datasets to better understand the algorithm and the existing software. When tested on the laboratory-controlled and free-living datasets, the manually constructed HMM achieved a higher macro-averaged F1 score.
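
A manually constructed HMM of the sort compared here boils down to a transition matrix, an emission model, and Viterbi decoding. The sketch below shows that structure over the five posture states with made-up probabilities; the actual emissions in the project come from accelerometer-derived features, so everything numeric here is an assumption.

```python
import numpy as np

STATES = ["sit", "stand", "sit-stand", "stand-sit", "stand/walk"]

def viterbi(log_emissions: np.ndarray, log_trans: np.ndarray, log_start: np.ndarray):
    """log_emissions: (T, N) per-frame log-likelihoods; returns the most likely state path."""
    T, N = log_emissions.shape
    score = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    score[0] = log_start + log_emissions[0]
    for t in range(1, T):
        step = score[t - 1][:, None] + log_trans          # (N, N): previous -> current
        back[t] = np.argmax(step, axis=0)
        score[t] = step[back[t], np.arange(N)] + log_emissions[t]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [STATES[s] for s in reversed(path)]

# Made-up, self-transition-heavy transition matrix (postures change slowly):
A = np.full((5, 5), 0.02) + np.eye(5) * 0.9
A /= A.sum(axis=1, keepdims=True)
log_A, log_pi = np.log(A), np.log(np.full(5, 0.2))

log_E = np.log(np.random.dirichlet(np.ones(5), size=50))   # fake per-frame likelihoods
print(viterbi(log_E, log_A, log_pi)[:10])
```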


RITANKAR GANGULY

Graph Search Algorithms and Their Applications

When & Where:


2001B Eaton Hall

Committee Members:

Man Kong, Chair
Nancy Kinnersley
Jim Miller


Abstract

Depth-First Search (DFS) and Breadth-First Search (BFS) are two of the most extensively used graph traversal algorithms, compiling information about a graph in linear time. These two traversal mechanisms underpin applications that are widely used in network engineering, web analytics, social networking, postal services, and hardware implementations. DFS and BFS differ in the order in which they explore vertices and in the data structures used to store discovered but unprocessed vertices. A BFS implementation usually needs less time but consumes more memory than a DFS implementation. DFS is based on a LIFO discipline and is implemented using a stack, while BFS is based on a FIFO discipline and is realized using a queue. The order in which the vertices are visited by DFS or BFS can be represented as a tree, and the type of graph (directed or undirected) along with the edges of these trees forms the basis of the applications of BFS and DFS. Determining the shortest path between vertices of an unweighted graph can be used in network engineering to transfer data packets. Checking for the presence of a cycle can be critical for minimizing redundancy in telecommunications and is extensively used by social networking websites to analyze how people are connected. Finding bridges in a graph or determining the set of articulation vertices helps minimize vulnerability in network design. Finding the strongly connected components of a graph can be used by model checkers in computer science. Determining an Euler circuit in a graph is useful to the postal service industry, and the algorithm can be implemented with linear running time using enhanced data structures. This survey project defines and explains the basics of DFS and BFS traversal and explores some of the applications that are based on these algorithms.
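
For reference, the stack-versus-queue distinction described above comes down to the following minimal sketch (the adjacency list and node labels are illustrative):

```python
from collections import deque

def bfs(graph: dict, start):
    """FIFO queue: visits vertices in order of increasing distance from start."""
    visited, order, queue = {start}, [], deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph.get(u, []):
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order

def dfs(graph: dict, start):
    """LIFO stack: follows one branch as deep as possible before backtracking."""
    visited, order, stack = set(), [], [start]
    while stack:
        u = stack.pop()
        if u in visited:
            continue
        visited.add(u)
        order.append(u)
        stack.extend(reversed(graph.get(u, [])))   # keep left-to-right visit order
    return order

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(g, "A"), dfs(g, "A"))
```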