Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

No upcoming defense notices for now!

Past Defense Notices


Dinesh Mukharji Dandamudi

Analyzing the Short Squeeze Caused by the Reddit Community Using Machine Learning

When & Where:


Zoom defense, please email jgrisafe@ku.edu for the meeting information

Committee Members:

Matthew Moore, Chair
Drew Davidson
Cuncong Zhong


Abstract

Algorithmic trading (sometimes termed automated trading, black-box trading, or algo-trading) is a computerized trading method in which a program follows a set of specified instructions to place transactions. In theory, this allows traders to make profits at a rate and frequency that a human trader cannot attain.

 

Traders have a tough time keeping track of the many handles and sources that generate data. Natural Language Processing (NLP) can be used to rapidly scan various news sources, identifying opportunities to gain an advantage before other traders do.

 

Based on this background, this project aims to select and implement an NLP and machine learning process that produces an algorithm capable of detecting or predicting future values from scraped data. This algorithm builds the basic structure for an approach to evaluating these documents.
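
As a rough illustration of the kind of pipeline described above (the posts, labels, and ticker mention below are invented placeholders, not the project's data or code), scraped post text can be turned into TF-IDF features and fed to a simple classifier that predicts next-day price direction:

```python
# Minimal sketch (not the project's actual pipeline): score scraped Reddit
# post text with TF-IDF features and a logistic-regression classifier that
# predicts whether the discussed stock moves up the next day.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy posts and labels, purely illustrative.
posts = [
    "GME to the moon, shorts have to cover",
    "massive short interest, squeeze incoming",
    "overvalued, expecting a pullback tomorrow",
    "selling my position, momentum is gone",
]
next_day_up = [1, 1, 0, 0]  # 1 = price rose the next day (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, next_day_up)

# Probability that a new scraped post signals an upward move.
print(model.predict_proba(["short sellers trapped, volume exploding"])[0][1])
```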


Lyndon Meadow

Remote Lensing

When & Where:


2001B Eaton Hall

Committee Members:

Matthew Moore, Chair
Perry Alexander
Prasad Kulkarni


Abstract

The problem of manipulating remote data is typically solved using complex methods to guarantee consistency. This is an instance of the remote bidirectional transformation problem. Inspired by the fact that several versions of this problem have been addressed using lenses, we extend the technique of lenses to the Remote Procedure Call setting and provide a few simple example implementations.

Taking the host side to be a strongly-typed language with lensing properties, and the client side to be a weakly-typed language with minimal lensing properties, this work contributes to the existing body of research that has brought lenses from the realm of mathematics to the space of computer science. It gives a formal treatment of type-safe remote editing of data with Remote Monads and their local variants.
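
As a rough illustration of the lens idea referenced above (the Record type and score_lens below are hypothetical examples, not the thesis implementation), a lens pairs a get that projects a view out of a host-side structure with a put that merges an edited view back in, which is the consistency mechanism lenses bring to remote editing:

```python
# Minimal sketch (illustrative only): a lens is a get/put pair focused on part
# of a larger structure. Applied to remote data, the client works on the small
# view while put pushes its edit back into the host-side copy.
from dataclasses import dataclass, replace
from typing import Callable, Generic, TypeVar

S = TypeVar("S")  # whole (host-side) structure
A = TypeVar("A")  # focused view handed to the client

@dataclass(frozen=True)
class Lens(Generic[S, A]):
    get: Callable[[S], A]     # project the view out of the whole
    put: Callable[[S, A], S]  # merge an edited view back into the whole

@dataclass(frozen=True)
class Record:
    name: str
    score: int

# Lens focusing on the 'score' field of a Record.
score_lens = Lens(get=lambda r: r.score,
                  put=lambda r, s: replace(r, score=s))

host_copy = Record("sample", 41)
view = score_lens.get(host_copy)                  # client receives only the view
host_copy = score_lens.put(host_copy, view + 1)   # client's edit merged back
print(host_copy)  # Record(name='sample', score=42)
```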


Chanaka Samarathungage

NextG Wireless Networks: Applications of the Millimeter Wave Networking and Integration of UAVs with Cellular Systems

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Morteza Hashemi, Chair
Taejoon Kim
Erik Perrins


Abstract

Considering the growth of wireless and cellular devices and applications, the spectrum-rich millimeter wave (mmWave) frequencies have the potential to alleviate the spectrum crunch that wireless and cellular operators are already experiencing. However, there are several challenges to overcome when using mmWave bands. Since mmWave frequencies have small wavelengths compared to sub-6 GHz bands, most objects, such as the human body, cause significant additional path loss, which can entirely break the link. Highly directional mmWave links are susceptible to frequent link failures in such environments. Limited communication range is another challenge in mmWave communications. In this research, we demonstrate the benefits of multi-hop routing in mitigating blockage and extending communication range in the mmWave band. We develop a hop-by-hop multi-path routing protocol that finds one primary and one backup next-hop per destination to guarantee reliable and robust communication under extreme stress conditions. We also extend our solution by proposing a proactive route refinement scheme for the AODV and Backpressure routing protocols under dynamic scenarios.
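
As a rough sketch of the primary/backup idea described above (the node names and link-quality numbers are invented, and this is not the proposed protocol itself), each node could rank its neighbors per destination and keep the two best as primary and backup next-hops, so a blocked mmWave link can fail over immediately:

```python
# Minimal sketch (hypothetical data): choose one primary and one backup
# next-hop per destination from per-neighbor link-quality estimates.
def select_next_hops(link_quality):
    """link_quality: {destination: {neighbor: estimated link quality}}."""
    table = {}
    for dest, neighbors in link_quality.items():
        ranked = sorted(neighbors, key=neighbors.get, reverse=True)
        primary = ranked[0]
        backup = ranked[1] if len(ranked) > 1 else None
        table[dest] = (primary, backup)
    return table

# Toy measurements: higher value = better (less blocked) link.
measurements = {"gateway": {"nodeA": 0.9, "nodeB": 0.6, "nodeC": 0.4}}
print(select_next_hops(measurements))  # {'gateway': ('nodeA', 'nodeB')}
```
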
In the second part, the integration of Unmanned Aerial Vehicles (UAVs) into NextG cellular systems is considered for various applications such as commercial package delivery, public health and safety, surveying, and inspection, to name a few. We present network simulation results based on 4G and 5G technologies using ray-tracing software. Based on the results, we propose several network adjustments to optimize 5G network operation for both ground users and UAV users.


Wenchi Ma

Object Detection and Classification based on Hierarchical Semantic Features and Deep Neural Networks

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Bo Luo, Chair
Taejoon Kim
Prasad Kulkarni
Cuncong Zhong
Guanghui Wang

Abstract

The abilities of feature learning, semantic understanding, cognitive reasoning, and model generalization are the consistent pursuit of current deep learning-based computer vision tasks. A variety of network structures and algorithms have been proposed to learn effective features, extract contextual and semantic information, deduce the relationships between objects and scenes, and achieve robust and generalized models. Nevertheless, these challenges are still not well addressed. One issue lies in inefficient feature learning and propagation and static, single-dimension semantic memorization, which make it difficult to handle challenging situations such as small objects, occlusion, and illumination changes. The other issue is robustness and generalization, especially when the data source has a diversified feature distribution.

The study aims to explore classification and detection models based on hierarchical semantic features ("transverse semantic" and "longitudinal semantic"), network architectures, and regularization algorithms, so that the above issues can be improved or solved. (1) A detector model is proposed to make full use of the "transverse semantic", the semantic information within a spatial scene, which emphasizes the effectiveness of deep features produced in high-level layers for better detection of small and occluded objects. (2) We also explore the anchor-based detector algorithm and propose location-aware reasoning (LAAR), in which both the location and classification confidences are considered in the bounding-box quality criterion, so that the best-qualified boxes can be picked up in Non-Maximum Suppression (NMS). (3) A semantic clustering-based deduction learning is proposed, which explores the "longitudinal semantic" and realizes high-level clustering in the semantic space, enabling the model to deduce the relations among various classes so that better classification performance can be expected. (4) We propose near-orthogonality regularization by introducing an implicit self-regularization that pushes the mean and variance of filter angles in a network towards 90° and 0° simultaneously, and show that it helps stabilize the training process, speed up convergence, and improve robustness. (5) Inspired by research showing that self-attention networks possess a strong inductive bias which leads to a loss of feature expression power, a transformer architecture with a mitigatory attention mechanism is proposed and applied to state-of-the-art detectors, verifying its ability to enhance detection.
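
As a simplified illustration of the box-quality idea in (2) (this is not the dissertation's LAAR implementation; the boxes and confidences below are toy values), NMS can rank candidate boxes by the product of classification and localization confidences rather than by classification confidence alone:

```python
# Minimal sketch: quality-aware NMS that ranks boxes by a combined score.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = np.maximum(a[:2], b[:2])
    x2, y2 = np.minimum(a[2:], b[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def quality_aware_nms(boxes, cls_conf, loc_conf, iou_thr=0.5):
    quality = cls_conf * loc_conf          # combined box-quality criterion
    order = np.argsort(-quality)           # best-qualified boxes first
    keep = []
    while order.size:
        best, order = order[0], order[1:]
        keep.append(best)
        order = np.array([i for i in order if iou(boxes[best], boxes[i]) < iou_thr])
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
print(quality_aware_nms(boxes, np.array([0.9, 0.85, 0.7]), np.array([0.6, 0.95, 0.8])))
```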


Sai Krishna Teja Damaraju

Strabospot 2

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Drew Davidson, Chair
Prasad Kulkarni
Douglas Walker


Abstract

Geology is a data-intensive field, but much of its current tooling is inefficient, labor-intensive, and tedious. While software is a natural solution to these issues, careful consideration of domain-specific needs is required to make such a solution useful. Geology involves field work, collaboration, and the management of a complex hierarchical data structure to organize the data being captured.

 

Strabospot was designed to address the above considerations. Strabospot is an application that helps earth scientists capture data, digitize it, and make it available over the World Wide Web for further research and development. Strabospot is a highly portable, effective, and efficient solution which can transform the field of geology, affecting not only how the data is captured but also how that data can be further processed and analyzed. The initial implementation of Strabospot, while an important step forward in the field, has several limitations that necessitate a complete rewrite in the form of a second version, Strabospot 2.

 

Strabospot 2 is a major software undertaking being developed at the University of Kansas through a collaboration between the Department of Geology and the Department of Electrical Engineering and Computer Science. This project elaborates on how Strabospot 2 helps geologists in the field, what challenges geologists had with the original Strabospot, and how Strabospot 2 fills in the deficits of Strabospot 1. Strabospot 2 is a large, multi-developer project; this project report focuses on the features implemented by the report author.


Patrick McNamee

Machine Learning for Aerospace Applications using the Blackbird Dataset

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Michael Branicky, Chair
Prasad Kulkarni
Ronald Barrett


Abstract

There is currently much interest in using machine learning (ML) models for vision-based object detection and navigation tasks in autonomous vehicles. For unmanned aerial vehicles (UAVs), and particularly small multi-rotor vehicles such as quadcopters, these models are trained either on unpublished data or within simulated environments, which leads to two issues: the inability to reliably reproduce results, and behavioral discrepancies on physical deployments resulting from unmodeled dynamics in the simulation environment. To overcome these issues, this project uses the Blackbird Dataset to explore the integration of ML models for UAVs. The Blackbird Dataset is overviewed to illustrate its features and issues before investigating possible ML applications. Unsupervised learning models are used to determine flight-test partitions for training supervised deep neural network (DNN) models for nonlinear dynamic inversion. The DNN models are used to determine appropriate model choices over several network parameters, including network layer depth, activation functions, training epochs, and neural network regularization.
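
As a rough sketch of the unsupervised partitioning step (the per-flight features below are invented placeholders, not the project's actual Blackbird Dataset pipeline), per-flight summary statistics could be clustered and the clusters used as candidate train/validation partitions for the supervised DNN models:

```python
# Minimal sketch (hypothetical features): cluster flights with K-means and
# group flight indices by cluster to form candidate data partitions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Toy per-flight features, e.g. [mean speed, max yaw rate] for 12 flights.
flight_features = rng.normal(size=(12, 2))

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(flight_features)
partitions = {c: np.where(clusters == c)[0] for c in range(3)}
print(partitions)  # flight indices grouped into three candidate partitions
```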


Charles Mohr

Design and Evaluation of Stochastic Processes as Physical Radar Waveforms

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Carl Leuschen
James Stiles
Zsolt Talata

Abstract

Recent advances in waveform generation and computational power have enabled the design and implementation of new, complex radar waveforms. Still, despite these advances, in a waveform-agile mode, where the radar transmits unique waveforms for every pulse or a nonrepeating signal continuously, effective operation can be difficult due to the waveform design requirements. In general, for radar waveforms to be both useful and physically robust, they must achieve good autocorrelation sidelobes, be spectrally contained, and possess a constant-amplitude envelope for high-power operation. Meeting these design goals represents a tremendous computational overhead that can easily impede real-time operation and the overall effectiveness of the radar. This work addresses this concern in the context of random FM (RFM) waveforms, which have been demonstrated in recent years, in both simulation and experiments, to achieve low autocorrelation sidelobes through the high dimensionality of coherent integration when operating in a waveform-agile mode. However, while they are effective, the approaches used to design these waveforms require optimization of each individual waveform, making them subject to costly computational requirements.

 

This dissertation takes a different approach. Since RFM waveforms are meant to be noise-like in the first place, the waveforms here are instantiated as sample functions of an underlying stochastic process called a waveform generating function (WGF). This approach enables the convenient generation of spectrally contained RFM waveforms for little more computational cost than pulling numbers from a random number generator (RNG). To do so, this work translates the traditional mathematical treatment of random variables and random processes to a more radar-centric perspective, such that WGFs can be analytically evaluated as a function of the usefulness of the radar waveforms they produce, via metrics such as the expected matched filter response and the expected power spectral density (PSD). Further, two WGF models, denoted pulsed stochastic waveform generation (Pulsed StoWGe) and continuous-wave stochastic waveform generation (CW-StoWGe), are devised as means to optimize WGFs to produce RFM waveforms with good spectral containment and with design flexibility between the degree of spectral containment and autocorrelation sidelobe levels, for both pulsed and CW modes. This goal is achieved by leveraging gradient-descent optimization methods to reduce the expected frequency template error (EFTE) cost function. The EFTE optimization is shown, analytically using the metrics above as well as others defined in this work, and through simulation, to produce WGFs whose sample functions achieve these goals and thus yield useful random FM waveforms. To complete the theory-modeling-experimentation design life cycle, the resultant StoWGe waveforms are implemented in a loop-back configuration and are shown to be amenable to physical implementation.
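
As a toy illustration of drawing waveforms as sample functions of a stochastic process (this is not StoWGe itself; the filter and parameters below are arbitrary assumptions), one can low-pass filter Gaussian noise into an instantaneous frequency, integrate it into a phase, and exponentiate to obtain a constant-amplitude random FM waveform:

```python
# Minimal sketch: a random FM waveform as a sample function of a noise-driven
# phase process, costing little more than drawing numbers from an RNG.
import numpy as np

rng = np.random.default_rng(0)
n_samples, bw_fraction = 1024, 0.1               # assumed toy parameters

freq = rng.standard_normal(n_samples)            # white Gaussian draw
kernel = np.ones(32) / 32                        # crude low-pass shaping filter
freq = np.convolve(freq, kernel, mode="same")
freq *= bw_fraction / max(np.abs(freq).max(), 1e-12)  # normalized frequency

phase = 2 * np.pi * np.cumsum(freq)              # integrate frequency into phase
waveform = np.exp(1j * phase)                    # constant-amplitude FM waveform
print(np.allclose(np.abs(waveform), 1.0))        # True: unit envelope
```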

 


David Menager

Event Memory for Intelligent Agents

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Arvin Agah, Chair
Michael Branicky
Prasad Kulkarni
Andrew Williams
Sarah Robins

Abstract

This dissertation presents a novel theory of event memory, along with an associated computational model that embodies the claims of this view and is integrated within a cognitive architecture. Event memory is a general-purpose store for personal past experience. The literature on event memory reveals that people can remember events both by successfully retrieving specific representations from memory and by reconstructing events via schematic representations. Prominent philosophical theories of event memory, i.e., causal and simulationist theories, fail to capture both capabilities because of their reliance on a single representational format. Consequently, they also struggle to account for the full range of human event memory phenomena. In response, we propose a novel view that remedies these issues by unifying the representational commitments of the causal and simulationist theories, thus making it a hybrid theory. We also describe an associated computational implementation of the proposed theory and conduct experiments showing the remembering capabilities of our system and its coverage of event memory phenomena. Lastly, we discuss our initial efforts to integrate our implemented event memory system into a cognitive architecture, and situate a tool-building agent with this extended architecture in the Minecraft domain in preparation for future event memory research.


Yiju Yang

Image Classification Based on Unsupervised Domain Adaptation Methods

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Taejoon Kim, Chair
Andrew Williams
Cuncong Zhong


Abstract

Convolutional Neural Networks (CNNs) have achieved great success in a broad range of computer vision tasks. However, due to the lack of labeled data, many available CNN models cannot be widely used in real scenarios or suffer from a significant performance drop. To address the lack of correctly labeled data, we explored the capability of existing unsupervised domain adaptation (UDA) methods for image classification and proposed two new methods to improve performance.

1. An Unsupervised Domain Adaptation Model based on Dual-module Adversarial Training: we proposed a dual-module network architecture that employs a domain-discriminative feature module to encourage the domain-invariant feature module to learn more domain-invariant features. The proposed architecture can be applied to any model that utilizes domain-invariant features for UDA to improve its ability to extract such features. Through adversarial training, by maximizing the loss between their feature distributions and minimizing the discrepancy between their prediction results, the two modules are encouraged to learn more domain-discriminative and domain-invariant features, respectively. Extensive comparative evaluations are conducted, and the proposed approach significantly outperforms the baseline method in all image classification tasks.

2. Exploiting maximum classifier discrepancy on multiple classifiers for unsupervised domain adaptation: adversarial training based on the maximum discrepancy between two classifier structures has been applied to the unsupervised domain adaptation task of image classification. This method is straightforward and has achieved very good results. However, based on our observation, we think the two-classifier structure, though simple, may not explore the full power of the algorithm. Thus, we propose to add more classifiers to the model. In the proposed method, we construct a discrepancy loss function for multiple classifiers, following the principle that the classifiers should be different from each other. By constructing this loss function, we can add any number of classifiers to the original framework. Extensive experiments show that the proposed method achieves significant improvements over the baseline method.
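
As a minimal sketch of a multi-classifier discrepancy term (illustrative only; the exact loss used in the proposed method may differ), the two-classifier maximum classifier discrepancy (MCD) objective can be generalized to the average pairwise disagreement among N classifiers' predictions on target samples:

```python
# Minimal sketch: average pairwise L1 discrepancy over N classifiers.
# Each slice of `probs` is one classifier's softmax output for the same batch.
import numpy as np
from itertools import combinations

def multi_classifier_discrepancy(probs):
    """probs: array of shape (num_classifiers, batch, num_classes)."""
    pairs = combinations(range(probs.shape[0]), 2)
    # Mean absolute difference between every pair of classifiers' predictions.
    return np.mean([np.abs(probs[i] - probs[j]).mean() for i, j in pairs])

# Toy predictions from three classifiers on a batch of two target samples.
probs = np.array([
    [[0.7, 0.3], [0.2, 0.8]],
    [[0.6, 0.4], [0.3, 0.7]],
    [[0.9, 0.1], [0.1, 0.9]],
])
print(multi_classifier_discrepancy(probs))  # larger = classifiers disagree more
```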


Idhaya Elango

Detection of COVID-19 cases from chest X-ray images using COVID-Net, a deep convolutional neural network

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
Bo Luo
Heechul Yun


Abstract

COVID-19 is caused by the contagious SARS-CoV-2 virus. It has a devastating effect on human health, leading to high morbidity and mortality worldwide. Infected patients should be screened effectively to fight against the virus. Chest X-ray (CXR) imaging is one of the important adjuncts in the detection of visual responses related to SARS-CoV-2 infection, and abnormalities in chest X-ray images can be identified for COVID-19 patients. COVID-Net, a deep convolutional neural network, is used here to detect COVID-19 cases from chest X-ray images. The COVIDx dataset used in this project is generated from five different open-access data repositories. COVID-Net makes predictions using an explainability method to gain insight into critical factors related to COVID cases. We also perform quantitative and qualitative analyses to understand the decision-making behavior.