All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing completion of their M.S./Ph.D. research should schedule their final defense through the EECS graduate office at least THREE WEEKS PRIOR to the presentation date so that there is time to complete the degree-requirements check and post the presentation announcement online.

UPCOMING DEFENSE NOTICES

Gordon Ariho - MULTIPASS SAR PROCESSING FOR ICE SHEET VERTICAL VELOCITY AND TOMOGRAPHY MEASUREMENTS
PhD Comprehensive Defense (EE)

When & Where:

November 19, 2021 - 1:00 PM
Nichols Hall, Room 317

Committee Members:

James Stiles, Chair
John Paden
Christopher Allen
Shannon Blunt
Carl Leuschen

Abstract

Ice dynamics are a major factor in ice sheet mass balance and play a major role in sea-level rise and in projections of future sea-level rise. Ice velocity measures the direction and rate at which ice is redistributed from the accumulation to the ablation regions of glaciers and ice sheets. We propose to apply multipass differential interferometric synthetic aperture radar (DInSAR) techniques to data from the Multichannel Coherent Radar Depth Sounder (MCoRDS) to measure the vertical displacement of englacial layers within an ice sheet. DInSAR typically achieves accuracy on the order of a small fraction of the wavelength (millimeter to centimeter precision is common) when monitoring ground displacement along the radar line of sight (LOS). Unlike ground-based Autonomous phase-sensitive Radio-Echo Sounder (ApRES) units, which can be precisely positioned and used to produce vertical velocity fields, airborne systems suffer from unknown baseline errors. In the case of ice sheet internal layers, vertical displacement is estimated by compensating for the spatial baseline using precise trajectory information and estimates of the cross-track layer slope from direction-of-arrival analysis. The current DInSAR algorithm is applied to radar depth sounder data using the CReSIS toolbox to produce results for Summit camp in central Greenland and a high-accumulation region near Camp Century in northwest Greenland. A drawback of this approach is that the baseline error due to GPS is estimated after direction-of-arrival (DOA) estimation, even though DOA estimation depends on an accurate baseline. We propose to extend this work by implementing a maximum likelihood estimator that jointly estimates the vertical velocity, the cross-track internal layer slope, and the unknown baseline error due to GPS and INS (inertial navigation system) errors. The multipass algorithm will be applied to additional flights from the decade-long NASA Operation IceBridge airborne mission, which flew MCoRDS on many repeated flight tracks. We also propose to improve the accuracy of tomographic swaths produced from multipass measurements and to investigate the use of focusing matrices to improve wideband tomographic processing.
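
As a rough numerical illustration of the standard repeat-pass DInSAR relation mentioned above (not the thesis's processing chain), the sketch below maps an interferometric phase difference to line-of-sight displacement; the center frequency and phase values are assumed for illustration only.

```python
# Illustrative sketch (not the CReSIS toolbox): the standard repeat-pass DInSAR
# relation maps an interferometric phase difference to line-of-sight displacement.
import numpy as np

c = 3e8                    # speed of light (m/s)
fc = 195e6                 # assumed MCoRDS-like center frequency (Hz); illustrative only
wavelength = c / fc        # ~1.5 m in free space (refraction within the ice ignored here)

# Hypothetical interferometric phase differences (radians) for a few englacial-layer pixels
dphi = np.array([0.05, -0.12, 0.30])

# Repeat-pass DInSAR: displacement along the radar line of sight between passes
d_los = -wavelength / (4 * np.pi) * dphi
print(d_los)               # meters; millimeter-level values for small phase changes
```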

 


 

Likitha Vemulapalli - Identification of Foliar Diseases in Plants using Deep Learning Techniques
MS Project Defense (CS)

When & Where:

November 8, 2021 - 9:30 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Suzanne Shontz

Abstract

Artificial intelligence has been gathering tremendous support lately by bridging the gap between humans and machines, and discoveries in numerous fields are paving the way for state-of-the-art technologies. Deep learning has shown immense progress in computer vision in recent years, with neural networks repeatedly pushing the frontier of visual recognition. Recent years have also witnessed an exponential increase in the use of mobile and embedded devices, and with the success of deep learning there is an emerging trend to deploy deep learning models on these devices. This is not a simple task: the limited resources of mobile and embedded devices make it challenging to meet the intensive computation and storage demands of deep learning models, and state-of-the-art convolutional neural networks (CNNs) require billions of floating-point operations (FLOPs), which inhibits their use on such devices. Mobile convolutional neural networks therefore use depthwise and group convolutions rather than standard "fully-connected" convolutions. In this project we apply mobile convolutional models to identify diseases in plants. Plant diseases are responsible for serious economic losses every year; crops are affected by climate conditions, various kinds of diseases, heavy usage of pesticides, and many other factors. The rise in pesticide use has caused farmers irreparable losses, and reducing pesticide use can help improve crop production. Using these mobile CNNs we can identify plant diseases from leaf images, and the appropriate pesticide can then be applied according to the type of disease. The main goal is an efficient model that can assist farmers in recognizing leaf symptoms and provide targeted information for the rational use of pesticides.
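
As a hedged illustration of the depthwise and group convolutions mentioned above, the following PyTorch sketch builds one depthwise-separable block of the kind used by mobile CNNs; the layer sizes and input shape are assumptions, not the project's model.

```python
# Minimal sketch of a depthwise-separable convolution block of the kind used by
# mobile CNNs (e.g., MobileNet-style models); layer sizes here are illustrative.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A 224x224 RGB leaf image passes through the block
x = torch.randn(1, 3, 224, 224)
block = DepthwiseSeparableConv(3, 32)
print(block(x).shape)   # torch.Size([1, 32, 224, 224])
```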

 


 

PAST DEFENSE NOTICES


Truc Anh Ngoc Nguyen - ResTP: A Configurable and Adaptable Multipath Transport Protocol for Future Internet Resilience

When & Where:

September 3, 2021 - 3:00 PM
2001B Eaton Hall

Committee Members:

Victor Frost, Chair
Morteza Hashemi
Taejoon Kim
Bo Luo
Hyunjin Seo

Abstract

Motivated by the shortcomings of common transport protocols (e.g., TCP, UDP, and MPTCP) in modern networking, and by the belief that a general-purpose transport-layer protocol is needed that can operate efficiently over diverse network environments while providing the desired services for various application types, we design a new transport protocol, ResTP. The rapid advancement of networking technology and use paradigms is continually supporting new applications. The configurable and adaptable multipath-capable ResTP is distinct from the standard protocols not only in its flexibility in satisfying the requirements of different traffic classes given the characteristics of the underlying networks, but also in its emphasis on providing resilience. Resilience is an essential property that is unfortunately missing in the current Internet. In this dissertation, we present the design of ResTP, including the services that it supports and the set of algorithms that implement each service. We also discuss our modular implementation of ResTP in the open-source network simulator ns-3. Finally, the protocol is simulated under various network scenarios, and the results are analyzed in comparison with conventional protocols such as TCP, UDP, and MPTCP to demonstrate that ResTP is a promising new transport-layer protocol for providing resilience in the Future Internet (FI).


Dinesh Mukharji Dandamudi - Analyzing the Short Squeeze Caused by the Reddit Community Using Machine Learning

When & Where:

September 2, 2021 - 10:00 AM
Zoom defense, please email jgrisafe@ku.edu for the meeting information

Committee Members:

Matthew Moore, Chair
Drew Davidson
Cuncong Zhong

Abstract

Algorithmic trading (sometimes termed automated trading, black-box trading, or algo-trading) is a computerized trading system in which a computer program follows a set of specified instructions to make a transaction. In theory, such transactions can generate profits at a speed and frequency that a human trader cannot attain.

 

Traders have a tough time keeping track of the many handles and sources that generate relevant data. Natural language processing (NLP) can be used to rapidly scan various news sources, identifying opportunities to gain an advantage before other traders do.

 

Based on this background, this project selects and implements an NLP and machine learning process that produces an algorithm which can detect or predict future value from scraped data. This algorithm forms the basic structure for an approach to evaluating these documents.
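
As one hedged illustration of the kind of NLP step described above, the sketch below scores scraped post titles with NLTK's VADER sentiment analyzer to produce a candidate feature for a downstream ML model; the example titles and the choice of VADER are assumptions, not the project's actual pipeline.

```python
# Hedged sketch: scoring scraped post titles with NLTK's VADER sentiment analyzer
# to build one candidate feature for a downstream ML price-movement model.
# The example titles are made up; the project's actual scraping and model differ.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

titles = [
    "GME to the moon, diamond hands!",
    "Hedge funds double down on short positions",
]
for title in titles:
    scores = sia.polarity_scores(title)
    print(title, "->", scores["compound"])  # compound score in [-1, 1]
```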


Lyndon Meadow - Remote Lensing

When & Where:

August 25, 2021 - 12:00 PM
2001B Eaton Hall

Committee Members:

Matthew Moore, Chair
Perry Alexander
Prasad Kulkarni

Abstract

The problem of manipulating remote data is typically solved using complex methods to guarantee consistency. This is an instance of the remote bidirectional transformation problem. Inspired by the fact that several versions of this problem have been addressed using lenses, we extend the technique of lenses to the Remote Procedure Call setting and provide a few simple example implementations.
Taking the host side to be a strongly-typed language with lensing properties, and the client side to be a weakly-typed language with minimal lensing properties, this work contributes to the existing body of research that has brought lenses from the realm of mathematics to the space of computer science. It gives a formal account of type-safe remote editing of data with Remote Monads and their local variants.
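
For readers unfamiliar with lenses, the following minimal Python sketch shows a lens as a (get, put) pair satisfying the usual round-trip laws; the Record type and field names are hypothetical, and this is not the project's Remote Monad implementation (which targets a strongly-typed host).

```python
# Minimal lens sketch: a lens is a (get, put) pair satisfying the usual laws
# get(put(s, v)) == v and put(s, get(s)) == s. Field names are hypothetical;
# this is not the project's Remote Monad implementation.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Lens:
    get: callable
    put: callable   # put(source, new_view) -> new_source

@dataclass(frozen=True)
class Record:
    name: str
    value: int

value_lens = Lens(
    get=lambda r: r.value,
    put=lambda r, v: replace(r, value=v),
)

r = Record("sensor", 41)
assert value_lens.get(value_lens.put(r, 42)) == 42      # PutGet law
assert value_lens.put(r, value_lens.get(r)) == r        # GetPut law
```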

Chanaka Samarathungage - NextG Wireless Networks: Applications of the Millimeter Wave Networking and Integration of UAVs with Cellular Systems

When & Where:

July 29, 2021 - 10:00 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Morteza Hashemi, Chair
Taejoon Kim
Erik Perrins

Abstract

Considering the growth of wireless and cellular devices and applications, the spectrum-rich millimeter wave (mmWave) frequencies have the potential to alleviate the spectrum crunch that wireless and cellular operators are already experiencing. However, there are several challenges to overcome when using mmWave bands. Since mmWave frequencies have small wavelengths compared to sub-6 GHz bands, most objects, such as the human body, cause significant additional path loss, which can entirely break the link. Highly directional mmWave links are therefore susceptible to frequent link failures in such environments. Limited communication range is another challenge in mmWave communications. In this research, we demonstrate the benefits of multi-hop routing in mitigating blockage and extending communication range in the mmWave band. We develop a hop-by-hop multi-path routing protocol that finds one primary and one backup next-hop per destination to guarantee reliable and robust communication under extreme stress conditions. We also extend our solution by proposing a proactive route refinement scheme for the AODV and Backpressure routing protocols under dynamic scenarios.
In the second part, the integration of unmanned aerial vehicles (UAVs) into NextG cellular systems is considered for various applications such as commercial package delivery, public health and safety, surveying, and inspection, to name a few. We present network simulation results based on 4G and 5G technologies using ray-tracing software. Based on the results, we propose several network adjustments to optimize 5G network operation for ground users as well as UAV users.
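
As a hedged illustration of the primary/backup next-hop idea in the multi-hop routing protocol described above, the sketch below computes a primary shortest path and then a backup path with the primary first-hop link removed, using networkx; the topology and the link-removal heuristic are assumptions, not the protocol itself.

```python
# Sketch of picking one primary and one backup next-hop per destination: compute the
# shortest path, then recompute with the primary next-hop link removed. The topology
# and the link-removal heuristic are illustrative assumptions, not the protocol.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "D", 1), ("A", "C", 2), ("C", "D", 2), ("B", "C", 1),
])

src, dst = "A", "D"
primary_path = nx.shortest_path(G, src, dst, weight="weight")
primary_next = primary_path[1]

H = G.copy()
H.remove_edge(src, primary_next)                  # force a different first hop
backup_path = nx.shortest_path(H, src, dst, weight="weight")
print("primary next-hop:", primary_next, primary_path)
print("backup next-hop:", backup_path[1], backup_path)
```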


Wenchi Ma - Object Detection and Classification based on Hierarchical Semantic Features and Deep Neural Networks

When & Where:

July 27, 2021 - 1:00 PM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Bo Luo, Chair
Taejoon Kim
Prasad Kulkarni
Cuncong Zhong
Guanghui Wang

Abstract

The abilities of feature learning, semantic understanding, cognitive reasoning, and model generalization are consistently pursued in current deep learning-based computer vision tasks. A variety of network structures and algorithms have been proposed to learn effective features, extract contextual and semantic information, deduce the relationships between objects and scenes, and achieve robust and generalizable models. Nevertheless, these challenges are still not well addressed. One issue lies in inefficient feature learning and propagation and static, single-dimension semantic memorization, which lead to difficulty in handling challenging situations such as small objects, occlusion, and illumination changes. The other issue is robustness and generalization, especially when the data sources have diversified feature distributions.

The study aims to explore classification and detection models based on hierarchical semantic features ("transverse semantic" and "longitudinal semantic"), network architectures, and regularization algorithms, so that the above issues can be improved or solved. (1) A detector model is proposed to make full use of the "transverse semantic", the semantic information in the spatial scene, which emphasizes the effectiveness of deep features produced in high-level layers for better detection of small and occluded objects. (2) We also explore anchor-based detector algorithms and propose location-aware reasoning (LAAR), in which both the location and classification confidences are considered in the bounding-box quality criterion, so that the best-qualified boxes are retained in non-maximum suppression (NMS). (3) A semantic clustering-based deduction learning is proposed, which exploits the "longitudinal semantic" and realizes high-level clustering in the semantic space, enabling the model to deduce the relations among various classes so that better classification performance can be expected. (4) We propose near-orthogonality regularization, introducing an implicit self-regularization that pushes the mean and variance of filter angles in a network towards 90° and 0° simultaneously, and show that it helps stabilize the training process, speed up convergence, and improve robustness. (5) Inspired by research showing that self-attention networks possess a strong inductive bias that leads to a loss of feature expression power, a transformer architecture with a mitigatory attention mechanism is proposed and applied with state-of-the-art detectors, verifying that it enhances detection.
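
As a hedged illustration of point (2) above, the sketch below runs NMS with a box-quality score that combines classification confidence with a predicted localization score; the multiplicative weighting and the toy boxes are assumptions, not the thesis's LAAR formulation.

```python
# Sketch of NMS where the ranking score combines classification confidence with a
# predicted localization-quality score, in the spirit of the location-aware
# reasoning described above. The weighting is an assumption, not the thesis method.
import numpy as np

def iou(box, boxes):
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def location_aware_nms(boxes, cls_scores, loc_scores, iou_thr=0.5):
    quality = cls_scores * loc_scores          # combined box-quality criterion
    order = np.argsort(-quality)
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thr]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
print(location_aware_nms(boxes, np.array([0.9, 0.8, 0.7]), np.array([0.6, 0.9, 0.8])))
```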


Sai Krishna Teja Damaraju - Strabospot 2

When & Where:

July 26, 2021 - 12:00 PM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Drew Davidson, Chair
Prasad Kulkarni
Douglas Walker

Abstract

Geology is a data-intensive field, but much of its current tooling is inefficient, labor-intensive, and tedious. While software is a natural answer to these issues, careful consideration of domain-specific needs is required to make such a solution useful. Geology involves field work, collaboration, and management of a complex hierarchical data structure to organize the data being captured.

Strabospot was designed to address these considerations. Strabospot is an application that helps earth scientists capture data, digitize it, and make it available over the web for further research and development. It is a highly portable, effective, and efficient solution that can transform the field of geology, affecting not only how data is captured but also how that data can be further processed and analyzed. The initial implementation of Strabospot, while an important step forward for the field, has several limitations that necessitate a complete rewrite in the form of a second version, Strabospot 2.

Strabospot 2 is a major software undertaking being developed at the University of Kansas through a collaboration between the Department of Geology and the Department of Electrical Engineering and Computer Science. This project report elaborates on how Strabospot 2 helps geologists in the field, what challenges geologists had with the original Strabospot, and how Strabospot 2 fills in its deficits. Strabospot 2 is a large, multi-developer project; this report focuses on the features implemented by the report author.

Patrick McNamee - Machine Learning for Aerospace Applications using the Blackbird Dataset

When & Where:

July 9, 2021 - 10:00 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Michael Branicky, Chair
Prasad Kulkarni
Ronald Barrett

Abstract

There is currently much interest in using machine learning (ML) models for vision-based object detection and navigation tasks in autonomous vehicles. For unmanned aerial vehicles (UAVs), and particularly small multi-rotor vehicles such as quadcopters, these models are trained either on unpublished data or within simulated environments, which leads to two issues: the inability to reliably reproduce results, and behavioral discrepancies on physical deployments resulting from unmodeled dynamics in the simulation environment. To overcome these issues, this project uses the Blackbird Dataset to explore the integration of ML models for UAVs. The Blackbird Dataset is first overviewed to illustrate its features and issues before investigating possible ML applications. Unsupervised learning models are used to determine flight-test partitions for training supervised deep neural network (DNN) models for nonlinear dynamic inversion. The DNN models are then used to compare appropriate model choices over several network parameters, including network layer depth, activation functions, training epochs, and neural network regularization.
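
As a hedged illustration of using unsupervised learning to form flight-test partitions, as described above, the sketch below clusters per-flight summary features with k-means and groups flights by cluster; the feature columns and cluster count are placeholders, not the project's actual setup.

```python
# Sketch: cluster per-flight summary features with k-means and use the clusters to
# form train/test partitions. The feature columns (e.g., mean speed, max yaw rate,
# trajectory id) are hypothetical placeholders filled with random data here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
flight_features = rng.normal(size=(30, 3))     # 30 flights x 3 summary features

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(flight_features)
for c in range(4):
    flights = np.where(km.labels_ == c)[0]
    print(f"cluster {c}: flights {flights.tolist()}")
# e.g., hold out one cluster's flights as the test partition for the DNN models
```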


Charles Mohr - Design and Evaluation of Stochastic Processes as Physical Radar Waveforms

When & Where:

June 30, 2021 - 9:30 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Carl Leuschen
James Stiles
Zsolt Talata

Abstract

Recent advances in waveform generation and in computational power have enabled the design and implementation of new complex radar waveforms. Despite these advances, effective operation in a waveform-agile mode, where the radar transmits unique waveforms for every pulse or a nonrepeating signal continuously, can be difficult due to the waveform design requirements. In general, for radar waveforms to be both useful and physically robust, they must achieve good autocorrelation sidelobes, be spectrally contained, and possess a constant amplitude envelope for high-power operation. Meeting these design goals represents a tremendous computational overhead that can easily impede real-time operation and the overall effectiveness of the radar. This work addresses this concern in the context of random FM (RFM) waveforms, which have been demonstrated in recent years, in both simulation and experiments, to achieve low autocorrelation sidelobes through the high dimensionality of coherent integration when operating in a waveform-agile mode. However, while effective, the existing approaches to designing these waveforms require optimization of each individual waveform, making them subject to costly computational requirements.

 

This dissertation takes a different approach. Since RFM waveforms are meant to be noise-like in the first place, the waveforms here are instantiated as the sample functions of an underlying stochastic process called a waveform generating function (WGF). This approach enables the convenient generation of spectrally contained RFM waveforms for little more computational cost than pulling numbers from a random number generator (RNG). To do so, this work translates the traditional mathematical treatment of random variables and random processes to a more radar-centric perspective, so that WGFs can be analytically evaluated in terms of the usefulness of the radar waveforms they produce, via metrics such as the expected matched filter response and the expected power spectral density (PSD). Further, two WGF models, denoted pulsed stochastic waveform generation (Pulsed StoWGe) and continuous-wave stochastic waveform generation (CW-StoWGe), are devised to optimize WGFs that produce RFM waveforms with good spectral containment and design flexibility between the degree of spectral containment and autocorrelation sidelobe levels, for both pulsed and CW modes. This goal is achieved by leveraging gradient descent optimization to reduce the expected frequency template error (EFTE) cost function. The EFTE optimization is shown, analytically using the metrics above and others defined in this work, as well as through simulation, to produce WGFs whose sample functions achieve these goals and thus yield useful random FM waveforms. To complete the theory-modeling-experimentation design life cycle, the resulting StoWGe waveforms are implemented in a loop-back configuration and shown to be amenable to physical implementation.
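
As a loose, toy illustration of treating a random FM waveform as a sample function of an underlying stochastic process, the sketch below draws Gaussian frequency samples, smooths and integrates them into phase, forms a constant-envelope waveform, and inspects its single-pulse autocorrelation sidelobes; the shaping filter and parameters are arbitrary, and this is not the Pulsed StoWGe or CW-StoWGe model.

```python
# Toy sketch of drawing one random-FM pulse as a sample function of a simple
# stochastic generating process: Gaussian frequency samples are smoothed, integrated
# to phase, and exponentiated to a constant-amplitude waveform. This only illustrates
# the "waveform as sample function" idea, not the StoWGe models themselves.
import numpy as np

rng = np.random.default_rng(1)
N = 1024
freq = rng.standard_normal(N)
freq = np.convolve(freq, np.ones(32) / 32, mode="same")   # crude spectral shaping
phase = 2 * np.pi * np.cumsum(freq) / 8                    # integrate frequency to phase
s = np.exp(1j * phase)                                      # constant envelope (|s| = 1)

acf = np.abs(np.correlate(s, s, mode="full")) / N           # single-pulse autocorrelation
lags = np.arange(-N + 1, N)
mainlobe = np.abs(lags) < 16                                 # crude mainlobe exclusion
psl_db = 20 * np.log10(acf[~mainlobe].max() / acf.max())
print(f"rough peak sidelobe level for this single pulse: {psl_db:.1f} dB")
```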

 


David Menager - Event Memory for Intelligent Agents

When & Where:

June 1, 2021 - 11:00 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Arvin Agah, Chair
Michael Branicky
Prasad Kulkarni
Andrew Williams
Sarah Robins

Abstract

This dissertation presents a novel theory of event memory, along with an associated computational model that embodies the claims of this view and is integrated within a cognitive architecture. Event memory is a general-purpose store for personal past experience. The literature on event memory reveals that people can remember events both through the successful retrieval of specific representations from memory and through the reconstruction of events via schematic representations. Prominent philosophical theories of event memory, i.e., causal and simulationist theories, fail to capture both capabilities because of their reliance on a single representational format. Consequently, they also struggle to account for the full range of human event memory phenomena. In response, we propose a novel view that remedies these issues by unifying the representational commitments of the causal and simulation theories, making it a hybrid theory. We also describe an associated computational implementation of the proposed theory and conduct experiments showing the remembering capabilities of our system and its coverage of event memory phenomena. Lastly, we discuss our initial efforts to integrate our implemented event memory system into a cognitive architecture, and situate a tool-building agent with this extended architecture in the Minecraft domain in preparation for future event memory research.


Yiju Yang - Image Classification Based on Unsupervised Domain Adaptation Methods

When & Where:

May 24, 2021 - 10:00 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Taejoon Kim, Chair
Andrew Williams
Cuncong Zhong

Abstract

Convolutional Neural Networks (CNNs) have achieved great success in broad computer vision tasks. However, due to the lack of labeled data, many available CNN models cannot be widely used in real scenarios or suffer from a significant performance drop. To address the lack of correctly labeled data, we explore the capability of existing unsupervised domain adaptation (UDA) methods on image classification and propose two new methods to improve their performance.

1. An unsupervised domain adaptation model based on dual-module adversarial training: we propose a dual-module network architecture that employs a domain-discriminative feature module to encourage the domain-invariant feature module to learn more domain-invariant features. The proposed architecture can be applied to any model that utilizes domain-invariant features for UDA, improving its ability to extract such features. Through adversarial training, by maximizing the discrepancy between the modules' feature distributions while minimizing the discrepancy of their prediction results, the two modules are encouraged to learn more domain-discriminative and domain-invariant features, respectively. Extensive comparative evaluations show that the proposed approach significantly outperforms the baseline method in all image classification tasks.

2. Exploiting maximum classifier discrepancy over multiple classifiers for unsupervised domain adaptation: the adversarial training method based on the maximum classifier discrepancy between two classifier structures has been applied to the unsupervised domain adaptation task of image classification. This method is straightforward and has achieved very good results. However, based on our observations, we believe that the structure of two classifiers, though simple, may not exploit the full power of the algorithm. Thus, we propose adding more classifiers to the model. In the proposed method, we construct a discrepancy loss function for multiple classifiers following the principle that the classifiers should differ from each other. By constructing this loss function, we can add any number of classifiers to the original framework. Extensive experiments show that the proposed method achieves significant improvements over the baseline method.
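
As a hedged sketch of a discrepancy loss over more than two classifier heads, the code below uses the mean pairwise L1 distance between softmax outputs; the exact loss form used in the thesis may differ, and the tensors here are random placeholders.

```python
# Sketch of a discrepancy loss over an arbitrary number of classifier heads:
# the mean pairwise L1 distance between softmax outputs, generalizing the
# two-classifier MCD discrepancy. The exact form used in the thesis may differ.
import itertools
import torch
import torch.nn.functional as F

def multi_classifier_discrepancy(logits_list):
    probs = [F.softmax(z, dim=1) for z in logits_list]
    pairs = list(itertools.combinations(range(len(probs)), 2))
    loss = sum(torch.mean(torch.abs(probs[i] - probs[j])) for i, j in pairs)
    return loss / len(pairs)

# Three hypothetical classifier heads on a batch of 8 target-domain samples, 10 classes
heads = [torch.randn(8, 10, requires_grad=True) for _ in range(3)]
print(multi_classifier_discrepancy(heads))
```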


Idhaya Elango - Detection of COVID-19 cases from chest X-ray images using COVID-NET, a deep convolutional neural network

When & Where:

May 13, 2021 - 11:00 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
Bo Luo
Heechul Yun

Abstract

COVID-19 is caused by the contagious SARS-CoV-2 virus. It has a devastating effect on human health, leading to high morbidity and mortality worldwide. Infected patients should be screened effectively to fight the virus. Chest X-ray (CXR) imaging is one of the important adjuncts for detecting visual responses related to SARS-CoV-2 infection, and abnormalities in chest X-ray images can be identified for COVID-19 patients. COVID-Net, a deep convolutional neural network, is used here to detect COVID-19 cases from chest X-ray images. The COVIDx dataset used in this project is generated from five different open-access data repositories. COVID-Net makes predictions using an explainability method to gain insight into critical factors related to COVID cases. We also perform quantitative and qualitative analyses to understand its decision-making behavior.


Blake Bryant - A Secure and Reliable Network Latency Reduction Protocol for Real-Time Multimedia Applications

When & Where:

May 13, 2021 - 1:00 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Hossein Saiedian, Chair
Arvin Agah
Perry Alexander
Bo Luo
Reza Barati

Abstract

Multimedia networking is the area of study associated with the delivery of heterogeneous data including, but not limited to, imagery, video, audio, and interactive content. Multimedia and communication network researchers have continually struggled to devise solutions for addressing the three core challenges in multimedia delivery: security, reliability, and performance. Solutions to these challenges typically exist in a spectrum of compromises achieving gains in one aspect at the cost of one or more of the others. Networked videogames represent the pinnacle of multimedia presented in a real-time interactive format. Continual improvements to multimedia delivery have led to tools such as buffering, redundant coupling of low-resolution alternative data streams, congestion avoidance, and forced in-order delivery of best-effort service; however, videogames cannot afford to pay the latency tax of these solutions in their current state.

This dissertation aims to address these challenges through the application of a novel networking protocol, leveraging emerging technology such as block-chain enabled smart contracts, to provide security, reliability, and performance gains to distributed network applications. This work provides a comprehensive overview of contemporary networking approaches used in delivering videogame multimedia content and their associated shortcomings. Additionally, key elements of block-chain technology are identified as focal points for prospective solution development, notably through the application of distributed ledger technology, consensus mechanisms and smart contracts. Preliminary results from empirical evaluation of contemporary videogame networking applications have confirmed security and performance flaws existing in well-funded AAA videogame titles. Ultimately, this work aims to solve challenges that the videogame industry has struggled with for over a decade.

The broader impact of this research is to improve the real-time delivery of interactive multimedia content. Positive results in the area will have wide reaching effects in the future of content delivery, entertainment streaming, virtual conferencing, and videogame performance.

Alaa Daffalla - Security & Privacy Practices and Threat Models of Activists during a Political Revolution

When & Where:

May 7, 2021 - 11:00 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Alexandru Bardas, Chair
Fengjun Li
Bo Luo

Abstract

Activism is a universal concept that has often played a major role in putting an end to injustices and human rights abuses globally. Political activism in particular is a modern-day term for a form of activism in which a group of people comes into collision with a far more powerful adversary, national or international governments, which often have purview and control over the very telecommunications infrastructure that activists need in order to organize and operate. As technology and social media use have become vital to the success of activism movements in the twenty-first century, our study focuses on surfacing the technical challenges and the defensive strategies that activists employ during a political revolution. We find that security and privacy behavior and app adoption are influenced by the specific societal and political context in which activists operate. In addition, a social media blockade or an internet blackout can trigger a series of anti-censorship approaches at scale and cripple activists' technology use. To a large extent, the combination of low-tech defensive strategies employed by activists was sufficient against the threats of surveillance, arrests, and device confiscation. Throughout our results we surface a number of design principles, but also some design tensions that can occur between the security and usability needs of different populations. We thus present a set of observations that can help guide technology designers and policy makers.


Chiranjeevi Pippalla - Autonomous Driving Using Deep Learning Techniques

When & Where:

May 7, 2021 - 9:30 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Prasad Kulkarni, Chair
David Johnson, Co-Chair
Suzanne Shontz

Abstract

Recent advances in machine learning (ML), known as deep neural networks (DNNs) or deep learning, have greatly improved the state of the art for many ML tasks, such as image classification (He, Zhang, Ren, & Sun, 2016; Krizhevsky, Sutskever, & Hinton, 2012; LeCun, Bottou, Bengio, & Haffner, 1998; Szegedy et al., 2015; Zeiler & Fergus, 2014), speech recognition (Graves, Mohamed, & Hinton, 2013; Hannun et al., 2014; Hinton et al., 2012), complex games and learning from simple reward signals (Goodfellow et al., 2014; Mnih et al., 2015; Silver et al., 2016), and many other areas. Neural network and ML methods have been applied to the task of autonomously controlling a vehicle, with only a camera image as input, to successfully navigate on roads (Bojarski et al., 2016). However, advances in deep learning have not yet been applied systematically to this task. In this work I used a simulated environment to implement and compare several methods for controlling autonomous navigation behavior using a standard camera as the input device for sensing environmental state. The simulator contained a simulated car with a camera mounted on top to gather visual data while being operated by a human controller in a virtual driving environment. The gathered data was used to perform supervised training of an autonomous controller that drives the same vehicle remotely over a local connection. I reproduced past results that used simple neural networks and other ML techniques to guide similar test vehicles using a camera, and compared these results with more complex deep neural network controllers to see whether they improve navigation performance over past methods on measures of speed, distance, and other performance metrics on unseen simulated road driving tasks.
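
As a hedged illustration of the supervised behavioral-cloning setup described above, the sketch below regresses a steering command from a camera frame with a small CNN trained on (frame, human steering) pairs; the architecture, image size, and data are illustrative placeholders, not the work's actual controllers.

```python
# Minimal behavioral-cloning sketch: a small CNN regresses a steering command from a
# camera frame and is trained with MSE on (frame, human steering) pairs. The
# architecture and tensor shapes are illustrative only.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)        # steering angle

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SteeringNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.randn(4, 3, 66, 200)          # simulated camera frames
steering = torch.randn(4, 1)                 # recorded human steering commands
loss = nn.functional.mse_loss(model(frames), steering)
opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```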


Anna Fritz - Type Dependent Policy Language

When & Where:

May 6, 2021 - 10:00 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Perry Alexander, Chair
Alex Bardas
Andy Gill

Abstract

Remote attestation is the act of making trust decisions about a communicating party. During this process, an appraiser asks a target to execute an attestation protocol that generates and returns evidence. The appraiser can then make claims about the target by evaluating the evidence. Copland is a formally specified, executable language for representing attestation protocols. We introduce Copland-centered negotiation as a prerequisite to attestation, to find a protocol that meets the target's needs for constrained disclosure and the appraiser's desire for comprehensive information. Negotiation begins when the appraiser sends a request, a Copland phrase, to the target. The target gathers all protocols that satisfy the request and then, using its privacy policy, filters out the phrases that expose sensitive information. The target sends these phrases to the appraiser as a proposal. The appraiser then chooses the best phrase for attestation, based on situational requirements embodied in a selection function. Our focus is statically ensuring that the target does not share sensitive information through terms in the proposal, meeting its need for constrained disclosure. To accomplish this, we realize two independent implementations of the privacy and selection policies, using indexed types and subset types. With indexed types, the policy check is accomplished by indexing the term grammar with the type of evidence the term produces. This statically ensures that terms written in the language satisfy the privacy policy criteria. With subset types, we statically limit the collection of terms to those that satisfy the privacy policy. This type abides by the rules of set comprehension, building a set such that all elements satisfy the privacy policy. Combining our ideas for a dependently typed privacy policy and negotiation, we give the target the chance to suggest a term or terms for attestation that fit the appraiser's needs while not disclosing sensitive information.


Sahithi Reddy Paspuleti - Real-Time Mask Recognition

When & Where:

May 4, 2021 - 2:00 PM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson, Co-Chair
Andrew Gill

Abstract

COVID-19 is a disease that spreads from human to human, and its spread can be limited if people strictly maintain social distancing and make proper use of a facial mask. Unfortunately, many people do not follow these rules, which accelerates the spread of the virus. Detecting people who are not obeying the rules and informing the corresponding authorities can help reduce the spread of the coronavirus. The proposed method detects the face in an image and then identifies whether it has a mask on it or not. As a surveillance task performer, it can also detect a face along with a mask in motion. Such detection has numerous applications, including autonomous driving, education, and surveillance.


Mugdha Bajjuri - Driver Drowsiness Monitoring System

When & Where:

May 3, 2021 - 1:00 PM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson, Co-Chair
Andrew Gill

Abstract

Fatigue and microsleep at the wheel are often the cause of serious accidents and deaths. Fatigue, in general, is difficult to measure or observe, unlike alcohol and drugs, which have clear key indicators and readily available tests. Hence, detecting a driver's fatigue and indicating it is an active research area. I believe drowsiness can also negatively impact people in working and classroom environments; drowsiness in the workplace, especially while working with heavy machinery, may result in serious injuries similar to those that occur while driving drowsily. The proposed system for detecting driver drowsiness uses a webcam that records video of the driver, and the driver's face is detected in each frame using image processing techniques. Facial landmarks on the detected face are located, and subsequently the eye aspect ratio, mouth opening ratio, and nose length ratio are computed; depending on their values, drowsiness is detected. If drowsiness is detected, a warning or alarm is sent to the driver from the warning system.
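
As a hedged illustration of one of the cues mentioned above, the sketch below computes the classic eye aspect ratio (EAR) from six eye landmarks; the landmark coordinates and threshold are illustrative, and the system additionally uses mouth-opening and nose-length ratios.

```python
# Sketch of the classic eye-aspect-ratio (EAR) computation from six eye landmarks;
# the landmark values and threshold here are illustrative only.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: 6x2 array of landmarks p1..p6 around one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])     # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])      # horizontal distance
    return (v1 + v2) / (2.0 * h)

open_eye = np.array([[0, 0], [1, 0.8], [2, 0.8], [3, 0], [2, -0.8], [1, -0.8]], float)
print(eye_aspect_ratio(open_eye))            # larger value: eye open
# A sustained EAR below a tuned threshold (e.g., ~0.25) across frames flags drowsiness.
```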


Kamala Gajurel - A Fine-Grained Visual Attention Approach for Fingerspelling Recognition in the Wild

When & Where:

May 3, 2021 - 10:00 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Cuncong Zhong, Chair
Guanghui Wang
Taejoon Kim
Fengjun Li

Abstract

Fingerspelling in sign language has been the means of communicating technical terms and proper nouns when they do not have dedicated sign language gestures. The automatic recognition of fingerspelling can help resolve communication barriers when interacting with deaf people. The main challenges prevalent in automatic recognition tasks are the ambiguity in the gestures and strong articulation of the hands. The automatic recognition model should address high inter-class visual similarity and high intra-class variation in the gestures. Most of the existing research in fingerspelling recognition has focused on datasets collected in a controlled environment. The recent collection of a large-scale annotated fingerspelling dataset in the wild, from social media and online platforms, captures the challenges in a real-world scenario. This study focuses on implementing a fine-grained visual attention approach using Transformer models to address the challenges in two fingerspelling recognition tasks: multiclass classification of static gestures and sequence-to-sequence prediction of continuous gestures. For a dataset with a single gesture in a controlled environment (multiclass classification), the Transformer decoder employs the textual description of gestures along with image features to achieve fine-grained attention. For the sequence-to-sequence prediction task in the wild dataset, fine-grained attention is attained by utilizing the change in motion of the video frames (optical flow) in sequential context-based attention along with a Transformer encoder model. The unsegmented continuous video dataset is jointly trained by balancing the Connectionist Temporal Classification (CTC) loss and a maximum-entropy loss. The proposed methodologies outperform the state of the art on both datasets. In comparison to the previous work for static gestures in fingerspelling recognition, the proposed approach employs multimodal fine-grained visual categorization. The state-of-the-art model in sequence-to-sequence prediction employs an iterative zooming mechanism for fine-grained attention, whereas the proposed method is able to capture better fine-grained attention in a single iteration.
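
As a hedged sketch of jointly balancing a CTC loss with an entropy term over frame-wise posteriors, as described above, the code below combines PyTorch's CTC loss with a mean per-frame entropy; the form of the maximum-entropy term and the balancing weight are assumptions for illustration only.

```python
# Sketch of balancing a CTC loss with an entropy term over frame-wise posteriors.
# The exact form of the maximum-entropy loss and the weight beta are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

T, N, C = 50, 4, 28                     # frames, batch, classes (letters + blank)
logits = torch.randn(T, N, C, requires_grad=True)
log_probs = F.log_softmax(logits, dim=-1)

targets = torch.randint(1, C, (N, 10))  # blank index 0 excluded from targets
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
entropy = -(log_probs.exp() * log_probs).sum(-1).mean()   # mean per-frame entropy
beta = 0.1                                                 # assumed balancing weight
loss = ctc - beta * entropy             # encourage less peaky per-frame posteriors
loss.backward()
print(ctc.item(), entropy.item())
```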


Chuan Sun - Reconfigurability in Wireless Networks: Applications of Machine Learning for User Localization and Intelligent Environment

When & Where:

April 30, 2021 - 1:00 PM
Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Morteza Hashemi, Chair
David Johnson
Taejoon Kim

Abstract

With the rapid development of machine learning (ML) and deep learning (DL) methodologies, DL methods can be leveraged for wireless network reconfigurability and channel modeling. While deep learning-based methods have been applied in a few wireless network use cases, there is still much to be explored. In this project, we focus on the application of deep learning methods to two scenarios. In the first scenario, a user transmitter moves randomly within a campus area and, at certain spots, sends wireless signals that are received by multiple antennas. We construct an active deep learning architecture to predict user locations from the received signals after dimensionality reduction, and analyze four traditional query strategies for active learning to improve the efficiency of utilizing labeled data. We propose a new location-based query strategy that considers both spatial density and model uncertainty when selecting samples to label, and we show that it outperforms all the existing strategies. In the second scenario, a reconfigurable intelligent surface (RIS) containing 4096 tunable cells reflects signals from a transmitter to users in an office for better performance. We use the training data of one user's received signals under different RIS configurations to learn the impact of the RIS on the wireless channel. Based on the context and experience from the first scenario, we build a DL neural network that maps RIS configurations to received-signal estimates. In a second phase, the loss function is customized towards our final evaluation formula to obtain the optimal configuration array for a user. We propose and build a customized DL pipeline that automatically learns the behavior of the RIS on received signals and generates the optimal RIS configuration array for each of the 50 test users.
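
As a hedged illustration of a query strategy that weighs both spatial density and model uncertainty, as described above, the sketch below scores unlabeled candidates by uncertainty multiplied by a local-density weight; the scoring formula, radius, and data are assumptions, not the proposed strategy's exact definition.

```python
# Sketch of a query score combining model uncertainty with local spatial density,
# reflecting the location-based query strategy described above. The specific formula
# (uncertainty times a density weight) is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
locations = rng.uniform(0, 100, size=(200, 2))      # candidate transmitter positions
uncertainty = rng.uniform(0, 1, size=200)           # e.g., predictive variance per sample

# Local density: how many other unlabeled candidates fall within a radius
dists = np.linalg.norm(locations[:, None, :] - locations[None, :, :], axis=-1)
density = (dists < 10.0).sum(axis=1) - 1

score = uncertainty * (1.0 + density / density.max())   # favor uncertain, dense regions
query_order = np.argsort(-score)
print("next samples to label:", query_order[:5])
```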


Kailani Jones - Deploying Android Security Updates: an Extensive Study Involving Manufacturers, Carriers, and End Users

When & Where:

March 8, 2021 - 10:30 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Alex Bardas, Chair
Fengjun Li
Bo Luo

Abstract

Android's fragmented ecosystem makes the delivery of security updates and OS upgrades cumbersome and complex. While Google has initiated various projects such as Android One, Project Treble, and Project Mainline to address this problem, and other involved entities (e.g., chipset vendors, manufacturers, carriers) continuously strive to improve their processes, it is still unclear how effective these efforts are at delivering updates to supported end-user devices. In this paper, we perform an extensive quantitative study (August 2015 to December 2019) to measure the Android security update and OS upgrade rollout process. Our study leverages multiple data sources: the Android Open Source Project (AOSP), device manufacturers, and the top four U.S. carriers (AT&T, Verizon, T-Mobile, and Sprint). Furthermore, we analyze an end-user dataset captured in 2019 (152M anonymized HTTP requests associated with 9.1M unique user identifiers) from a U.S.-based social network. Our findings include unique measurements that, due to the fragmented and inconsistent ecosystem, were previously challenging to perform. For example, manufacturers and carriers introduce a median latency of 24 days before rolling out security updates, with an additional median delay of 11 days before end devices update. We show that these values vary per carrier-manufacturer relationship, yet do not vary greatly based on a model's age. Our results also delve into the effectiveness of current Android projects. For instance, security updates for Treble devices are available on average 7 days faster than for non-Treble devices. While this constitutes an improvement, the security update delay for Treble devices still averages 19 days.

 


Ali Alshawish - A New Fault-Tolerant Topology and Operation Scheme for the High Voltage Stage in a Three-Phase Solid-State Transformer

When & Where:

February 8, 2021 - 10:00 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Prasad Kulkarni, Chair
Morteza Hashemi
Taejoon Kim
Alessandro Salandrino
Elaina Sutley

Abstract

Solid-state transformers (SSTs) are comprised of several cascaded power stages with different voltage levels. This leads to more challenges for operation and maintenance of SSTs, not only under critical conditions but also during normal operation. One of the most important reliability concerns for SSTs relates to high-voltage-side switch and grid faults. High voltage stress on the switches, together with the fact that most modern SST topologies incorporate a large number of power switches on the high voltage side, contributes to a higher probability of a switch fault occurring. The power electronic switches in the high voltage stage are under voltage stress significantly higher than in other SST stages, so the probability of switch failures is most substantial in this stage. In this research, a new technique is proposed to improve the overall reliability of SSTs by enhancing the reliability of the high voltage stage.
 
The proposed method restores normal operation of the SST from the point of view of the load even when the input stage voltages are unbalanced due to switch faults. High voltage grid faults that result in unbalanced operating conditions in the SST can also lead to dire consequences with regard to safety and reliability; the proposed method can likewise restore faulty operation to pre-fault conditions in the case of grid faults. The proposed method integrates the quasi-Z-source inverter topology into the SST topology to rebalance the transformer voltages. This work therefore develops a new SST topology in conjunction with a fault-tolerant operation strategy that can fully restore operation of the proposed SST under the two fault scenarios. The proposed fault-tolerant operation strategy rebalances the line-to-line voltages after a fault occurrence by modifying the phase angles between the phase voltages generated by the high voltage stage of the proposed SST. The boosting property of the quasi-Z-source inverter circuitry is then used to increase the amplitude of the rebalanced line-to-line voltages to their pre-fault values. A modified modulation technique is proposed for modifying the phase angles and controlling the shoot-through duty ratio of the quasi-Z-source inverter.

Usman Sajid - Effective uni-modal to multi-modal crowd estimation

When & Where:

January 29, 2021 - 10:30 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Taejoon Kim, Chair
Bo Luo
Fengjun Li
Cuncong Zhong
Guanghui Wang

Abstract

Crowd estimation is an integral part of crowd analysis. It plays an important role in event management for huge gatherings such as Hajj, sporting and musical events, and political rallies. Automated crowd counting can lead to better and more effective management of such events and help prevent unwanted incidents. Crowd estimation is an active research problem due to challenges pertaining to large perspective, huge variance in scale and image resolution, severe occlusions, and dense crowd-like cluttered background regions. Current approaches cannot handle this huge crowd diversity well and thus perform poorly in cases ranging from extremely low to high crowd density, leading to crowd underestimation or overestimation. Manual crowd counting, likewise, yields very slow and inaccurate results due to the complex issues mentioned above. To address the major issues and challenges in the crowd counting domain, we separately investigate two different types of input data: uni-modal (image) and multi-modal (image and audio).

 

In the uni-modal setting, we propose and analyze four novel end-to-end crowd counting networks, ranging from multi-scale fusion-based models to uni-scale one-pass and two-pass multi-task models. The multi-scale networks also employ an attention mechanism to enhance model efficacy. The uni-scale models, on the other hand, are equipped with a novel and simple-yet-effective patch re-scaling module (PRM) that functions similarly to, but is more lightweight than, the multi-scale approaches. Experimental evaluation demonstrates that the proposed networks outperform the state-of-the-art methods in the majority of cases on four different benchmark datasets, with up to 12.6% improvement in terms of the RMSE evaluation metric. Better cross-dataset performance also validates the better generalization ability of our schemes. For multimodal input, effective feature extraction (FE) and strong information fusion between the two modalities remain a big challenge; the aim in the multimodal environment is thus to investigate different fusion techniques with an improved FE mechanism for better crowd estimation. The multi-scale uni-modal attention networks also prove effective in other deep learning domains, as applied successfully on seven different scene-text recognition datasets with better performance.


Sana Awan - Privacy-preserving Federated Learning

When & Where:

January 29, 2021 - 2:00 AM
Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Fengjun Li, Chair
Alex Bardas
Bo Luo
Cuncong Zhong
Mei Liu

Abstract

Machine learning (ML) is transforming a wide range of applications, promising to bring immense economic and social benefits. However, it also raises substantial security and privacy challenges.  In this dissertation we describe a framework for efficient, collaborative and secure ML training using a federation of client devices that jointly train a ML model using their private datasets in a process called Federated Learning (FL). First, we present the design of a blockchain-enabled Privacy-preserving Federated Transfer Learning (PPFTL) framework for resource-constrained IoT applications. PPFTL addresses the privacy challenges of FL and improves efficiency and effectiveness through model personalization. The framework overcomes the computational limitation of on-device training and the communication cost of transmitting high-dimensional data or feature vectors to a server for training. Instead, the resource-constrained devices jointly learn a global model by sharing their local model updates. To prevent information leakage about the privately-held data from the shared model parameters, the individual client updates are homomorphically encrypted and aggregated in a privacy-preserving manner so that the server only learns the aggregated update to refine the global model. The blockchain provides provenance of the model updates during the training process, makes contribution-based incentive mechanisms deployable, and supports traceability, accountability and verification of the transactions so that malformed or malicious updates can be identified and traced to the offending source. The framework implements model personalization approaches (e.g. fine-tuning) to adapt the global model more closely to the individual client's data distribution.

In the second part of the dissertation, we turn our attention to the limitations of existing FL algorithms in the presence of adversarial clients who may carry out poisoning attacks against the FL model. We propose a privacy-preserving defense, named CONTRA, to mitigate data poisoning attacks and provide a guaranteed level of accuracy under attack.  The defense strategy identifies malicious participants based on the cosine similarity of their encrypted gradient contributions and removes them from FL training. We report the effectiveness of the proposed scheme for IID and non-IID data distributions. To protect data privacy, the clients' updates are combined using secure multi-party computation (MPC)-based aggregation so that the server only learns the aggregated model update without violating the privacy of users' contributions.
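
As a hedged illustration of flagging suspicious contributions by gradient similarity, in the spirit of the CONTRA-style defense described above, the sketch below computes pairwise cosine similarities of plaintext client updates and filters the most mutually similar ones; the threshold and flagging rule are assumptions, and the actual scheme operates on encrypted updates via MPC-based aggregation.

```python
# Sketch of flagging suspicious client updates by pairwise cosine similarity of their
# gradient contributions. The threshold and the "flag the most mutually similar
# clients" rule are assumptions; the real scheme aggregates encrypted updates via MPC.
import numpy as np

rng = np.random.default_rng(0)
honest = rng.normal(0, 1, size=(8, 100))                      # 8 honest client updates
poisoned = np.tile(rng.normal(0, 1, size=(1, 100)), (3, 1))   # 3 colluding copies
updates = np.vstack([honest, poisoned])

unit = updates / np.linalg.norm(updates, axis=1, keepdims=True)
cos = unit @ unit.T
np.fill_diagonal(cos, -1.0)
max_sim = cos.max(axis=1)                            # each client's closest neighbor

suspected = np.where(max_sim > 0.9)[0]               # colluding clones stand out
print("suspected malicious clients:", suspected)     # expected: indices 8, 9, 10
aggregate = updates[max_sim <= 0.9].mean(axis=0)     # aggregate over remaining clients
```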


Dustin Hauptman - Communication Solutions for Scaling Number of Collaborative Agents in Swarm of Unmanned Aerial Systems Using Frequency Based Hierarchy

When & Where:

January 28, 2021 - 1:00 PM
Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Prasad Kulkarni, Chair
Shawn Keshmiri, Co-Chair
Alex Bardas
Morteza Hashemi

Abstract

The use of swarms of unmanned aerial systems (UASs) is becoming more prevalent in the world. Many private companies and government agencies are actively developing analytical and technological solutions for multi-agent cooperative swarms of UASs. However, the majority of existing research focuses on developing guidance, navigation, and control (GNC) algorithms for swarms of UASs and on proofs of stability and robustness of those algorithms. In addition to the profound challenges in controlling a swarm of UASs, reliable and fast intercommunication between UASs is one of the vital conditions for the success of any swarm. Many modern UASs have high inertia and fly at high speeds, which means that if latency is too high or throughput too low in a swarm, there is a higher risk of catastrophic failure due to collisions within the swarm. This work presents solutions for scaling the number of collaborative agents in a swarm of UASs using a frequency-based hierarchy. It discusses traditional swarm communication systems, which rely on a single frequency to handle distribution of information to all or some parts of a swarm, and identifies their shortcomings. These systems typically use an ad-hoc network to transfer data locally, on the single frequency, between agents without the need for existing communication infrastructure. While this allows agents flexibility of movement without concern for disconnecting from the network, and requires managing only neighboring communications, it does not necessarily scale to larger swarms. In large swarms, for example, information from the outer agents is routed to the inner agents, causing the inner agents, which are critical to the stability of the swarm, to spend more time routing information than transmitting their own state information. This leads to instability, since the inner agents' states are not known to the rest of the swarm. Even if an ad-hoc network is not used (e.g., an everyone-to-everyone network), the frequency itself has an upper limit to the amount of data it can send reliably before bandwidth constraints or general interference cause information to arrive too late or not at all.

We propose that by using two frequencies and creating a hierarchy in which each layer is a separate frequency, we can group large swarms into manageable local swarms. The intra-swarm communication (inside a local swarm) is handled on one frequency, while the inter-swarm communication has its own. A normal mesh network was tested in both hardware-in-the-loop (HitL) scenarios and a collision-avoidance flight test scenario, and those results were compared against dual-frequency HitL simulations. The dual-frequency simulations showed overall improvement in latency and throughput compared to both the simulated and flight-tested mesh network.

