Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and to post the presentation announcement online.

Upcoming Defense Notices

Jennifer Quirk

Aspects of Doppler-Tolerant Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Patrick McCormick
Charles Mohr
James Stiles
Zsolt Talata

Abstract

The Doppler tolerance of a waveform refers to its behavior when subjected to a fast-time Doppler shift imposed by scattering that involves nonnegligible radial velocity. While previous efforts have established decision-based criteria that lead to a binary judgment of Doppler tolerant or intolerant, it is also useful to establish a measure of the degree of Doppler tolerance. The purpose in doing so is to establish a consistent standard, thereby permitting assessment across different parameterizations, as well as introducing a Doppler “quasi-tolerant” trade-space that can ultimately inform automated/cognitive waveform design in increasingly complex and dynamic radio frequency (RF) environments. 

Separately, the application of slow-time coding (STC) to the Doppler-tolerant linear FM (LFM) waveform has been examined for disambiguation of multiple range ambiguities. However, using STC with non-adaptive Doppler processing often results in high Doppler “cross-ambiguity” sidelobes that can hinder range disambiguation despite the degree of separability imparted by STC. To enhance this separability, a gradient-based optimization of STC sequences is developed, and a “multi-range” (MR) modification to the reiterative super-resolution (RISR) approach that accounts for the distinct range interval structures from STC is examined. The efficacy of these approaches is demonstrated using open-air measurements.

The proposed work to appear in the final dissertation focuses on the connection between Doppler tolerance and STC. The first proposal includes the development of a gradient-based optimization procedure to generate Doppler quasi-tolerant random FM (RFM) waveforms. Other proposals consider limitations of STC, particularly when processed with MR-RISR. The final proposal introduces an “intrapulse” modification of the STC/LFM structure to achieve enhanced suppression of range-folded scattering in certain delay/Doppler regions while retaining a degree of Doppler tolerance.
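As a rough numerical illustration of the Doppler tolerance concept discussed above (a sketch for intuition only, not material from the dissertation; the waveform parameters are arbitrary), the following pure-Python snippet applies a fast-time Doppler shift to an LFM pulse and checks that the matched filter peak survives nearly intact, at the cost of a small range shift:

```python
import cmath
import math

def lfm(n, bt):
    """Unit-amplitude linear FM pulse with time-bandwidth product bt."""
    return [cmath.exp(1j * math.pi * bt * (i / n - 0.5) ** 2) for i in range(n)]

def doppler_shift(s, f_cycles):
    """Apply a fast-time Doppler shift of f_cycles cycles across the pulse."""
    n = len(s)
    return [x * cmath.exp(2j * math.pi * f_cycles * i / n) for i, x in enumerate(s)]

def matched_filter_peak(rx, ref, max_lag):
    """Peak normalized matched-filter magnitude over lags in [-max_lag, max_lag]."""
    n = len(ref)
    peak = 0.0
    for lag in range(-max_lag, max_lag + 1):
        acc = 0j
        for i in range(n):
            if 0 <= i + lag < n:
                acc += rx[i + lag] * ref[i].conjugate()
        peak = max(peak, abs(acc) / n)
    return peak

s = lfm(512, bt=200)
unshifted = matched_filter_peak(s, s, 0)                     # exactly 1.0
shifted = matched_filter_peak(doppler_shift(s, 4.0), s, 32)  # > 0.9: peak largely preserved
```

A Doppler shift of 2% of the swept bandwidth costs only a few percent of the peak here; a measure of the degree of Doppler tolerance, as proposed above, would quantify how quickly this peak degrades as the shift grows for waveforms beyond the LFM.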


Mary Jeevana Pudota

Assessing Processor Allocation Strategies for Online List Scheduling of Moldable Task Graphs

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni


Abstract

Scheduling a graph of moldable tasks, where each task can be executed by a varying number of processors with execution time depending on the processor allocation, represents a fundamental problem in high-performance computing (HPC). The online version of the scheduling problem introduces an additional constraint: each task is only discovered when all its predecessors have been completed. A key challenge for this online problem lies in making processor allocation decisions without complete knowledge of the future tasks or dependencies. This uncertainty can lead to inefficient resource utilization and increased overall completion time, or makespan. Recent studies have provided theoretical analysis (i.e., derived competitive ratios) for certain processor allocation algorithms. However, the algorithms’ practical performance remains under-explored, and their reliance on fixed parameter settings may not consistently yield optimal performance across varying workloads. In this thesis, we conduct a comprehensive evaluation of three processor allocation strategies by empirically assessing their performance under widely used speedup models and diverse graph structures. These algorithms are integrated into a list scheduling framework that greedily schedules ready tasks based on the current processor availability. We perform systematic tuning of the algorithms’ parameters and report the best observed makespan together with the corresponding parameter settings. Our findings highlight the critical role of parameter tuning in obtaining optimal makespan performance, regardless of the differences in allocation strategies. The insights gained in this study can guide the deployment of these algorithms in practical runtime systems.
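The greedy list-scheduling loop described in the abstract can be sketched as follows (a minimal illustration under a linear speedup model; the function names and the `alloc` strategy interface are hypothetical stand-ins, not the thesis's implementation):

```python
import heapq

def list_schedule(succ, pred_count, work, alloc, total_procs):
    """Greedy online list scheduling of moldable tasks.

    succ[t] lists the successors of task t in the DAG; pred_count[t] is its
    in-degree; work[t] is its sequential work.  alloc(t) is the processor
    allocation strategy under evaluation.  Execution time follows a linear
    speedup model, work[t] / procs.  Returns the makespan.
    """
    free = total_procs
    ready = [t for t in range(len(work)) if pred_count[t] == 0]
    running = []  # min-heap of (finish_time, task, procs)
    now = 0.0
    while ready or running:
        # greedily start every ready task that fits on the free processors
        for t in list(ready):
            procs = min(alloc(t), total_procs)
            if procs <= free:
                ready.remove(t)
                free -= procs
                heapq.heappush(running, (now + work[t] / procs, t, procs))
        # advance time to the next completion; a task is discovered (becomes
        # ready) only once all of its predecessors have completed
        now, done, procs = heapq.heappop(running)
        free += procs
        for nxt in succ[done]:
            pred_count[nxt] -= 1
            if pred_count[nxt] == 0:
                ready.append(nxt)
    return now

# tiny fork-join DAG: 0 -> {1, 2} -> 3, with a fixed two-processor allocation
succ = [[1, 2], [3], [3], []]
pred = [0, 1, 1, 2]
work = [4.0, 4.0, 4.0, 4.0]
makespan = list_schedule(succ, pred, work, alloc=lambda t: 2, total_procs=4)
```

Swapping the `alloc` callable is all that distinguishes the allocation strategies in this framing, which is what makes an empirical comparison of strategies and their parameter settings straightforward.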


Past Defense Notices


Likitha Vemulapalli

Identification of Foliar Diseases in Plants using Deep Learning Techniques

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Suzanne Shontz


Abstract

Artificial intelligence has been gathering tremendous support lately by bridging the gap between humans and machines, and discoveries in numerous fields are paving the way for state-of-the-art technologies. Deep learning has shown immense progress in computer vision in recent years, with neural networks repeatedly pushing the frontier of visual recognition. Recent years have also witnessed an exponential increase in the use of mobile and embedded devices, and with the great success of deep learning there is an emerging trend to deploy deep learning models on them. This is not a simple task: the limited resources of mobile and embedded devices make it challenging to meet the intensive computation and storage demands of deep learning models, and state-of-the-art convolutional neural networks (CNNs) require billions of floating-point operations (FLOPs), which inhibits their use on such devices. Mobile convolutional neural networks therefore use depthwise and group convolutions rather than standard convolutions. In this project we apply mobile convolutional models to identify diseases in plants. Plant diseases are responsible for serious economic losses every year: crops are affected by climate conditions, various kinds of disease, heavy use of pesticides, and many other factors. Because of the rise in pesticide use, farmers are experiencing irreplaceable losses, and more judicious use of pesticides can improve crop production. Using mobile CNNs we can identify plant diseases from leaf images, so that pesticides can be applied according to the type of disease detected. The main goal is an efficient model that can assist farmers in recognizing leaf symptoms and provide targeted information for the rational use of pesticides.
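The parameter savings behind the mobile architectures mentioned above can be seen with a quick count (an illustrative calculation for one assumed layer shape, not tied to any specific model in the project):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution mapping c_in -> c_out channels."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in a depthwise k x k convolution plus a 1x1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

standard = standard_conv_params(3, 256, 256)         # 589,824 weights
separable = depthwise_separable_params(3, 256, 256)  # 67,840 weights
```

For this layer the depthwise-separable factorization uses roughly 8.7x fewer weights (and proportionally fewer FLOPs), which is what makes such models practical on mobile and embedded devices.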


Truc Anh Ngoc Nguyen

ResTP: A Configurable and Adaptable Multipath Transport Protocol for Future Internet Resilience

When & Where:


2001B Eaton Hall

Committee Members:

Victor Frost, Chair
Morteza Hashemi
Taejoon Kim
Bo Luo
Hyunjin Seo

Abstract

Motivated by the shortcomings of common transport protocols (e.g., TCP, UDP, and MPTCP) in modern networking, and by the belief that a general-purpose transport-layer protocol is needed, one that can operate efficiently over diverse network environments while providing the desired services for various application types, we design a new transport protocol, ResTP. The rapid advancement of networking technology and usage paradigms is continually enabling new applications. The configurable, adaptable, and multipath-capable ResTP is distinguished from the standard protocols not only by its flexibility in satisfying the requirements of different traffic classes given the characteristics of the underlying networks, but also by its emphasis on providing resilience, an essential property that is unfortunately missing in the current Internet. In this dissertation, we present the design of ResTP, including the services it supports and the set of algorithms that implement each service. We also discuss our modular implementation of ResTP in the open-source network simulator ns-3. Finally, the protocol is simulated under various network scenarios, and the results are analyzed in comparison with conventional protocols such as TCP, UDP, and MPTCP to demonstrate that ResTP is a promising new transport-layer protocol for providing resilience in the Future Internet (FI).


Dinesh Mukharji Dandamudi

Analyzing the Short Squeeze Caused by the Reddit Community Using Machine Learning

When & Where:


Zoom defense, please email jgrisafe@ku.edu for the meeting information

Committee Members:

Matthew Moore, Chair
Drew Davidson
Cuncong Zhong


Abstract

Algorithmic trading (sometimes termed automated trading, black-box trading, or algo-trading) is a computerized trading system in which a computer program follows a set of specified instructions to place a trade. In principle, such a system can generate profits at a speed and frequency that a human trader cannot attain.

 

Traders have a tough time keeping track of the many sources, such as social media handles and news feeds, that originate data. Natural Language Processing (NLP) can be used to rapidly scan these sources, identifying opportunities to gain an advantage before other traders do.

 

Based on this background, this project selects and implements an NLP and machine learning pipeline that produces an algorithm for detecting or predicting future values from scraped data. This algorithm builds the basic structure for an approach to evaluating such documents.


Lyndon Meadow

Remote Lensing

When & Where:


2001B Eaton Hall

Committee Members:

Matthew Moore, Chair
Perry Alexander
Prasad Kulkarni


Abstract

The problem of manipulating remote data is typically solved using complex methods to guarantee consistency. This is an instance of the remote bidirectional transformation problem. Inspired by the observation that several versions of this problem have been addressed using lenses, we extend the technique of lenses to the Remote Procedure Call setting and provide a few simple example implementations.

    Taking the host side to be a strongly-typed language with lensing properties, and the client side to be a weakly-typed language with minimal lensing properties, this work contributes to the existing body of research that has brought lenses from the realm of mathematics to the space of computer science. It gives a formal account of type-safe remote editing of data with Remote Monads and their local variants.
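For readers unfamiliar with lenses, the core idea fits in a few lines (a generic illustration of the get/put interface and its round-trip laws; this is not code from the work above, which concerns the remote and typed setting):

```python
class Lens:
    """A lens: a (get, put) pair focusing a view inside a larger source."""
    def __init__(self, get, put):
        self.get = get  # source -> view
        self.put = put  # (source, new view) -> updated source

# a lens focusing the (hypothetical) "host" field of a small record
host = Lens(get=lambda s: s["host"],
            put=lambda s, v: {**s, "host": v})

cfg = {"host": "localhost", "port": 80}

# GetPut law: putting back what you got changes nothing
assert host.put(cfg, host.get(cfg)) == cfg
# PutGet law: getting after a put returns exactly what was put
assert host.get(host.put(cfg, "example.org")) == "example.org"
```

The bidirectional-transformation problem is keeping such get/put round trips lawful when the source lives on a remote host and the view on a client.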


Chanaka Samarathungage

NextG Wireless Networks: Applications of the Millimeter Wave Networking and Integration of UAVs with Cellular Systems

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Morteza Hashemi, Chair
Taejoon Kim
Erik Perrins


Abstract

Considering the growth of wireless and cellular devices and applications, the spectrum-rich millimeter-wave (mmWave) frequencies have the potential to alleviate the spectrum crunch that wireless and cellular operators are already experiencing. However, there are several challenges to overcome when using mmWave bands. Since mmWave frequencies have small wavelengths compared to sub-6 GHz bands, most objects, such as the human body, cause significant additional path loss, which can entirely break the link, and highly directional mmWave links are thus susceptible to frequent failures in such environments. Limited communication range is another challenge in mmWave communications. In this research, we demonstrate the benefits of multi-hop routing in mitigating blockage and extending communication range in the mmWave band. We develop a hop-by-hop multi-path routing protocol that finds one primary and one backup next hop per destination to guarantee reliable and robust communication under extreme stress conditions. We also extend our solution by proposing a proactive route refinement scheme for the AODV and Backpressure routing protocols under dynamic scenarios.
In the second part, the integration of unmanned aerial vehicles (UAVs) into NextG cellular systems is considered for various applications, such as commercial package delivery, public health and safety, surveying, and inspection, to name a few. We present network simulation results based on 4G and 5G technologies using ray-tracing software. Based on the results, we propose several network adjustments to optimize 5G network operation for ground users as well as UAV users.


Wenchi Ma

Object Detection and Classification based on Hierarchical Semantic Features and Deep Neural Networks

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Bo Luo, Chair
Taejoon Kim
Prasad Kulkarni
Cuncong Zhong
Guanghui Wang

Abstract

Feature learning, semantic understanding, cognitive reasoning, and model generalization are the consistent pursuits of current deep learning-based computer vision. A variety of network structures and algorithms have been proposed to learn effective features, extract contextual and semantic information, deduce the relationships between objects and scenes, and achieve robust, generalized models. Nevertheless, these challenges are still not well addressed. One issue lies in inefficient feature learning and propagation and static, single-dimension semantic memorization, which lead to difficulty in handling challenging situations such as small objects, occlusion, and illumination changes. The other issue is robustness and generalization, especially when the data source has a diversified feature distribution.

This study explores classification and detection models based on hierarchical semantic features ("transverse semantic" and "longitudinal semantic"), network architectures, and regularization algorithms, so that the above issues can be mitigated or solved. (1) A detector model is proposed to make full use of the "transverse semantic", the semantic information in the spatial scene, which emphasizes the effectiveness of deep features produced in high-level layers for better detection of small and occluded objects. (2) We also explore anchor-based detector algorithms and propose location-aware reasoning (LAAR), in which both the localization and classification confidences are considered in the bounding-box quality criterion, so that the best-qualified boxes can be picked up in non-maximum suppression (NMS). (3) A semantic clustering-based deduction learning is proposed, which explores the "longitudinal semantic" by realizing high-level clustering in the semantic space, enabling the model to deduce the relations among various classes so that better classification performance can be expected. (4) We propose near-orthogonality regularization, introducing an implicit self-regularization that pushes the mean and variance of filter angles in a network towards 90° and 0°, respectively, which helps stabilize the training process, speed up convergence, and improve robustness. (5) Inspired by research showing that self-attention networks possess a strong inductive bias that leads to a loss of feature expression power, a transformer architecture with a mitigatory attention mechanism is proposed and applied to state-of-the-art detectors, demonstrating improved detection performance.
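The ranking idea in item (2), scoring candidate boxes by the product of localization and classification confidence before suppression, can be sketched as follows (a generic greedy NMS for illustration, not the dissertation's LAAR implementation; the box format and threshold are arbitrary):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, cls_conf, loc_conf, thresh=0.5):
    """Greedy NMS ranking boxes by combined classification x localization score."""
    order = sorted(range(len(boxes)),
                   key=lambda i: cls_conf[i] * loc_conf[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[k]) < thresh for k in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
cls_conf = [0.9, 0.8, 0.7]  # classification-only ranking would prefer box 0
loc_conf = [0.5, 0.9, 0.9]  # but box 1 is better localized
kept = nms(boxes, cls_conf, loc_conf)  # [1, 2]: the poorly localized box 0 is suppressed
```

Ranking by classification confidence alone would have kept box 0 and suppressed its better-localized overlap, which is the failure mode a location-aware quality criterion is meant to avoid.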


Sai Krishna Teja Damaraju

Strabospot 2

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Drew Davidson, Chair
Prasad Kulkarni
Douglas Walker


Abstract

Geology is a data-intensive field, but much of its current tooling is inefficient, labor-intensive, and tedious. While software is a natural solution to these issues, careful consideration of domain-specific needs is required to make such a solution useful. Geology involves field work, collaboration, and management of a complex hierarchical data structure to organize the data being captured.

 

    Strabospot was designed to address the above considerations. Strabospot is an application that helps earth scientists capture data, digitize it, and make it available over the World Wide Web for further research and development. Strabospot is a highly portable, effective, and efficient solution that can transform the field of geology, affecting not only how data is captured but also how that data can be further processed and analyzed. The initial implementation of Strabospot, while an important step forward in the field, has several limitations that necessitate a complete rewrite in the form of a second version, Strabospot 2.

 

    Strabospot 2 is a major software undertaking being developed at the University of Kansas through a collaboration between the Department of Geology and the Department of Electrical Engineering and Computer Science. This report elaborates on how Strabospot 2 helps geologists in the field, what challenges geologists had with Strabospot 1, and how Strabospot 2 fills in its deficits. Strabospot 2 is a large, multi-developer project; this project report focuses on the features implemented by the report author.


Patrick McNamee

Machine Learning for Aerospace Applications using the Blackbird Dataset

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Michael Branicky, Chair
Prasad Kulkarni
Ronald Barrett


Abstract

There is currently much interest in using machine learning (ML) models for vision-based object detection and navigation in autonomous vehicles. For unmanned aerial vehicles (UAVs), and particularly small multi-rotor vehicles such as quadcopters, these models are trained either on unpublished data or within simulated environments, which leads to two issues: the inability to reliably reproduce results, and behavioral discrepancies on physical deployment resulting from dynamics unmodeled in the simulation environment. To overcome these issues, this project uses the Blackbird Dataset to explore the integration of ML models for UAVs. The Blackbird Dataset is first overviewed to illustrate its features and issues before investigating possible ML applications. Unsupervised learning models are used to determine flight-test partitions for training supervised deep neural network (DNN) models for nonlinear dynamic inversion. The DNN models are then used to determine appropriate model choices over several network parameters, including layer depth, activation functions, training epochs, and regularization.


Charles Mohr

Design and Evaluation of Stochastic Processes as Physical Radar Waveforms

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Carl Leuschen
James Stiles
Zsolt Talata

Abstract

Recent advances in waveform generation and computational power have enabled the design and implementation of new, complex radar waveforms. Still, despite these advances, effective operation in a waveform-agile mode, where the radar transmits unique waveforms for every pulse or a nonrepeating signal continuously, can be difficult due to the waveform design requirements. In general, for radar waveforms to be both useful and physically robust, they must achieve low autocorrelation sidelobes, be spectrally contained, and possess a constant-amplitude envelope for high-power operation. Meeting these design goals represents a tremendous computational overhead that can easily impede real-time operation and the overall effectiveness of the radar. This work addresses this concern in the context of random FM (RFM) waveforms, which have been demonstrated in recent years, in both simulation and experiment, to achieve low autocorrelation sidelobes through the high dimensionality of coherent integration when operating in a waveform-agile mode. While effective, however, existing approaches require the optimization of each individual waveform, imposing costly computational requirements.

 

This dissertation takes a different approach. Since RFM waveforms are meant to be noise-like in the first place, the waveforms here are instantiated as sample functions of an underlying stochastic process called a waveform generating function (WGF). This approach enables the convenient generation of spectrally contained RFM waveforms for little more computational cost than pulling numbers from a random number generator (RNG). To do so, this work translates the traditional mathematical treatment of random variables and random processes into a more radar-centric perspective, such that WGFs can be analytically evaluated as a function of the usefulness of the radar waveforms they produce, via metrics such as the expected matched filter response and the expected power spectral density (PSD). Further, two WGF models, denoted pulsed stochastic waveform generation (Pulsed StoWGe) and continuous-wave stochastic waveform generation (CW-StoWGe), are devised to optimize WGFs that produce RFM waveforms with good spectral containment and design flexibility between the degree of spectral containment and the autocorrelation sidelobe level, for both pulsed and CW modes. This goal is achieved by leveraging gradient-descent optimization to reduce the expected frequency template error (EFTE) cost function. The EFTE optimization is shown, analytically using the metrics above as well as others defined in this work, and through simulation, to produce WGFs whose sample functions achieve these goals and thus constitute useful random FM waveforms. To complete the theory-modeling-experimentation design life cycle, the resultant StoWGe waveforms are implemented in a loop-back configuration and shown to be amenable to physical implementation.
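The "sample functions from a generating process" idea can be caricatured in a few lines (a toy constant-modulus random FM generator under assumed parameters; this is not the Pulsed/CW-StoWGe models themselves, whose generating processes are optimized):

```python
import cmath
import math
import random

def random_fm_pulse(n, bw, rng, alpha=0.9):
    """Draw one sample function of a toy random FM process: smooth the RNG's
    Gaussian frequency samples with a one-pole filter, integrate them to a
    phase trajectory, and exponentiate, yielding a constant-amplitude pulse."""
    freq, f = [], 0.0
    for _ in range(n):
        f = alpha * f + (1.0 - alpha) * rng.gauss(0.0, bw / 2.0)
        freq.append(f)
    phase, pulse = 0.0, []
    for fi in freq:
        phase += 2.0 * math.pi * fi / n
        pulse.append(cmath.exp(1j * phase))
    return pulse

pulse = random_fm_pulse(1024, bw=100.0, rng=random.Random(0))
# every sample lies on the unit circle, as required for high-power operation
```

Each new pulse costs little more than n draws from the RNG; the contribution described above is optimizing the generating process itself, so that such draws are spectrally contained with good expected sidelobe behavior.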

 


David Menager

Event Memory for Intelligent Agents

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Arvin Agah, Chair
Michael Branicky
Prasad Kulkarni
Andrew Williams
Sarah Robins

Abstract

This dissertation presents a novel theory of event memory, along with an associated computational model that embodies the claims of this view and is integrated within a cognitive architecture. Event memory is a general-purpose store for personal past experience. The literature on event memory reveals that people can remember events both by successfully retrieving specific representations from memory and by reconstructing events via schematic representations. Prominent philosophical theories of event memory, i.e., causal and simulationist theories, fail to capture both capabilities because of their reliance on a single representational format; consequently, they also struggle to account for the full range of human event memory phenomena. In response, we propose a novel view that remedies these issues by unifying the representational commitments of the causal and simulationist theories, making it a hybrid theory. We also describe an associated computational implementation of the proposed theory and conduct experiments showing the remembering capabilities of our system and its coverage of event memory phenomena. Lastly, we discuss our initial efforts to integrate our implemented event memory system into a cognitive architecture, and we situate a tool-building agent with this extended architecture in the Minecraft domain in preparation for future event memory research.