Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.
Upcoming Defense Notices
Andrew Riachi
An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun
Abstract
Web browsers are notorious for consuming large amounts of memory. Yet, they have become the dominant framework for writing GUIs because the web languages are ergonomic for programmers and have a cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which have to run on resource-constrained devices, are running a web browser under the hood. Therefore, it is important to keep the memory consumption of web browsers as low as practicable.
In this thesis, we investigate the memory consumption of web browsers, in particular compared to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments which suggest that most of the extra memory usage compared to native applications could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
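For readers unfamiliar with the Linux smaps interface that such profiling builds on, the sketch below sums the proportional set size (Pss) entries in /proc/<pid>/smaps; it is only a minimal illustration and is not the smaps-profiler tool described above.

```python
# Minimal illustration of reading Linux smaps data: sum the Pss (proportional set
# size) entries for a process. This is not the smaps-profiler tool described above.
import sys

def total_pss_kb(pid: str = "self") -> int:
    total_kb = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith("Pss:"):           # e.g. "Pss:        123 kB"
                total_kb += int(line.split()[1])
    return total_kb

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "self"
    print(f"Total Pss for {target}: {total_pss_kb(target)} kB")
```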
Elizabeth Wyss
A New Frontier for Software Security: Diving Deep into npm
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker
Abstract
Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest such repository--alone hosts millions of unique packages and serves billions of package downloads each week.
However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.
This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains.
Alfred Fontes
Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen
Abstract
Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform radar and communications functions simultaneously. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements between the sensing and information-bearing signals.
A novel waveform-based DFRC approach is phase attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the composite DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the radar performance degradation caused by spectral growth due to the communications signal.
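As a rough illustration of the constant-envelope phase-attached construction described above, the sketch below adds a simple communications phase to an LFM radar phase and estimates the resulting PSD. The pulse parameters and the BPSK-style phase component are assumptions for illustration, not the optimized PARC design of this work.

```python
# Illustrative sketch of a constant-envelope phase-attached pulse: an LFM radar
# phase with a communications phase added on top, and an estimate of its PSD.
# All parameters and the simple comm phase are assumptions, not the PARC design.
import numpy as np
from scipy.signal import welch

fs = 200e6                    # sample rate (Hz), illustrative
T = 10e-6                     # pulse width (s)
B = 50e6                      # LFM swept bandwidth (Hz)
t = np.arange(int(T * fs)) / fs

phi_radar = np.pi * (B / T) * t**2                      # LFM phase
sym_rate = 5e6                                          # comm symbol rate (Hz), illustrative
spsym = int(fs / sym_rate)
nsym = len(t) // spsym + 1
symbols = np.random.choice([-1.0, 1.0], nsym)           # BPSK-like phase increments
phi_comm = np.pi / 2 * np.cumsum(np.repeat(symbols, spsym))[:len(t)] / spsym

s = np.exp(1j * (phi_radar + phi_comm))                 # constant-envelope composite pulse
f, psd = welch(s, fs=fs, nperseg=256, return_onesided=False)
print("Peak-to-median PSD spread (dB):", 10 * np.log10(psd.max() / np.median(psd)))
```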
The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.
Qua Nguyen
Hybrid Array and Privacy-Preserving Signaling Optimization for NextG Wireless Communications
When & Where:
Zoom Defense, please email jgrisafe@ku.edu for link.
Committee Members:
Erik Perrins, Chair
Morteza Hashemi
Zijun Yao
Taejoon Kim
KC Kong
Abstract
This PhD research tackles two critical challenges in NextG wireless networks: hybrid precoder design for wideband sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and privacy-preserving federated learning (FL) over wireless networks.
In the first part, we propose a novel hybrid precoding framework that integrates true-time delay (TTD) devices and phase shifters (PS) to counteract the beam squint effect, a significant challenge in wideband sub-THz massive MIMO systems that leads to considerable loss in array gain. Unlike previous methods that design only the TTD values while fixing the PS values and assuming unbounded time delays, our approach jointly optimizes the TTD and PS values under realistic time-delay constraints. We determine the minimum number of TTD devices required to achieve a target array gain using our proposed approach. Then, we extend the framework to multi-user wideband systems and formulate a hybrid array optimization problem aiming to maximize the minimum data rate across users. This problem is decomposed into two sub-problems: fair subarray allocation, solved via continuous domain relaxation, and subarray gain maximization, addressed via a phase-domain transformation.
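The beam squint effect motivating the TTD devices can be illustrated numerically: frequency-flat phase shifters set at the center frequency lose array gain toward the band edges, whereas ideal true-time delays preserve it. The array size, band, and steering angle below are illustrative assumptions, and the sketch does not reflect the joint TTD/PS optimization proposed in the thesis.

```python
# Minimal sketch of beam squint in a wideband uniform linear array.
import numpy as np

c = 3e8
fc = 150e9                       # center frequency (sub-THz, illustrative)
bw = 30e9                        # bandwidth (illustrative)
N = 64                           # number of antennas
d = c / (2 * fc)                 # half-wavelength spacing at fc
theta0 = np.deg2rad(45)          # steering angle (illustrative)
n = np.arange(N)

f = np.linspace(fc - bw / 2, fc + bw / 2, 201)
ideal = 2 * np.pi * np.outer(f, n) * d * np.sin(theta0) / c   # phase required at each frequency

ps_applied = 2 * np.pi * fc * n * d * np.sin(theta0) / c      # phase shifters: fixed at fc
ttd_applied = ideal                                           # ideal TTD: exact at every frequency

gain_ps = np.abs(np.exp(1j * (ps_applied - ideal)).sum(axis=1)) / N
gain_ttd = np.abs(np.exp(1j * (ttd_applied - ideal)).sum(axis=1)) / N
print("Normalized gain at band edge -- PS only: %.2f, with TTD: %.2f" % (gain_ps[0], gain_ttd[0]))
```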
The second part focuses on preserving privacy in FL over wireless networks. First, we design a differentially-private FL algorithm that applies time-varying noise variance perturbation. Taking advantage of existing wireless channel noise, we jointly design the differential privacy (DP) noise variances and the users' transmit power to resolve the tradeoffs between privacy and learning utility. Next, we tackle two critical challenges within FL networks: (i) privacy risks arising from model updates and (ii) reduced learning utility due to quantization heterogeneity. Prior work typically addresses only one of these challenges, because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. We aim to improve the learning utility of a privacy-preserving FL framework that allows clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) that ensures a DP guarantee and minimal quantization distortion. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Lastly, inspired by the information-theoretic rate-distortion framework, a privacy-distortion tradeoff problem is formulated to minimize privacy loss under a given maximum allowable quantization distortion. The optimal solution to this problem is identified, revealing that the privacy loss decreases as the maximum allowable quantization distortion increases, and vice versa.
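For context, the sketch below shows the generic Gaussian-mechanism style of perturbation that underlies differentially-private FL, with a round-dependent noise scale. The clipping norm and the noise schedule are assumptions; this is not the joint noise/power design or the stochastic quantizer proposed in the thesis.

```python
# Generic sketch: clip a client's model update and add Gaussian noise whose scale
# varies across rounds. Illustrative only; not the algorithm proposed in this work.
import numpy as np

def privatize_update(update: np.ndarray, clip_norm: float, sigma: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Clip the update to L2 norm <= clip_norm, then add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

rng = np.random.default_rng(0)
update = rng.normal(size=1000)                 # a client's local model delta (illustrative)
for t in range(5):                             # assumed time-varying noise schedule
    sigma_t = 1.0 / np.sqrt(t + 1)
    noisy = privatize_update(update, clip_norm=1.0, sigma=sigma_t, rng=rng)
    print(f"round {t}: noise scale = {sigma_t:.2f}, noisy update norm = {np.linalg.norm(noisy):.2f}")
```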
This research advances hybrid array optimization for wideband sub-THz massive MIMO and introduces novel algorithms for privacy-preserving quantized FL with diverse precision. These contributions enable high-throughput wideband MIMO communication systems and privacy-preserving AI-native designs, aligning with the performance and privacy protection demands of NextG networks.
Arin Dutta
Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao
Abstract
As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.
Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path. It offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient at improving OSNR because the optical noise is generated near the beginning of the fiber span and is attenuated along the fiber. A dual-order FW pumping scheme helps to reduce the nonlinear effects on the optical signal and improves OSNR by distributing the Raman gain more uniformly along the transmission span.
The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Another concern of FW DRA is the rise in signal optical power near the start of the fiber span, which increases the nonlinear phase shift of the signal. These factors, including RIN transfer-induced noise and nonlinear noise, degrade system performance at the receiver in FW DRA systems.
As the performance of DRA with backward pumping is well understood, with a relatively low impact of RIN transfer, our research focuses on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of the 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. The performance of the dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs of system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump.
Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase in linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in fiber-optic communication. The system performance analysis also reveals that higher baud rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.
Audrey Mockenhaupt
Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Patrick McCormick, Chair
Shannon Blunt
Jon Owen
Abstract
As machine learning (ML), artificial intelligence (AI), and deep learning continue to advance, their applications become more diverse; one such application is synthetic aperture radar (SAR) automatic target recognition (ATR). These SAR ATR networks use different forms of deep learning, such as convolutional neural networks (CNN), to classify targets in SAR imagery. An emerging research area of SAR is dual function radar communication (DFRC), which performs both radar and communications functions using a single co-designed modulation. The utilization of DFRC emissions for SAR imaging impacts image quality, thereby influencing SAR ATR network training. Here, using the Civilian Vehicle Data Dome dataset from the AFRL, SAR ATR networks are trained and evaluated with simulated data generated using Gaussian Minimum Shift Keying (GMSK) and Linear Frequency Modulation (LFM) waveforms. The networks are used to compare how the target classification accuracy of the ATR network differs between DFRC (i.e., GMSK) and baseline (i.e., LFM) emissions. Furthermore, as is common in pulse-agile transmission structures, an effect known as 'range sidelobe modulation' is examined, along with its impact on SAR ATR. Finally, it is shown that a SAR ATR network can be trained for GMSK emissions using existing LFM datasets via two types of data augmentation.
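Since the comparison above hinges on GMSK versus LFM emissions, a minimal baseband GMSK modulator is sketched below; the BT product, oversampling factor, and bit pattern are illustrative assumptions rather than the parameters used in the study.

```python
# Minimal baseband GMSK modulator sketch: Gaussian-filtered NRZ bits drive a
# continuous-phase modulator with modulation index 0.5. Parameters are illustrative.
import numpy as np

def gmsk_baseband(bits, sps=8, bt=0.3):
    """Return complex baseband GMSK samples for +/-1 bits, sps samples per symbol."""
    # Gaussian frequency-shaping pulse truncated to 3 symbol periods
    t = (np.arange(3 * sps) - (3 * sps - 1) / 2) / sps
    g = np.exp(-2 * (np.pi * bt * t) ** 2 / np.log(2))
    g /= g.sum()                                   # unit area: each bit shifts phase by pi/2
    nrz = np.repeat(np.asarray(bits, dtype=float), sps)
    freq = np.convolve(nrz, g, mode="same")        # smoothed instantaneous frequency
    phase = np.pi / 2 * np.cumsum(freq) / sps      # integrate frequency into phase
    return np.exp(1j * phase)

s = gmsk_baseband(np.random.choice([-1, 1], 128))
print("constant envelope:", np.allclose(np.abs(s), 1.0))
```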
Past Defense Notices
Brian Quiroz
Mobile Edge Computing for Unmanned Vehicles
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Morteza Hashemi, Chair
Taejoon Kim
Prasad Kulkarni
Abstract
Unmanned aerial vehicles (UAVs) and autonomous vehicles are becoming more ubiquitous than ever before. From medical to delivery drones, to space exploration rovers and self-driving taxi services, these vehicles are starting to play a prominent role in society as well as in our day to day lives.
Efficient computation and communication strategies are paramount to the effective functioning of these vehicles. Mobile Edge Computing (MEC) is an innovative network technology that enables resource-constrained devices, such as UAVs and autonomous vehicles, to offload computationally intensive tasks to a nearby MEC server. Moreover, vehicles such as self-driving cars must reliably and securely relay and receive latency-sensitive information to improve traffic safety. Extensive research on vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) communication indicates that both will be further enhanced by the widespread usage of 5G technology.
We consider two relevant problems in mobile edge computing for unmanned vehicles. The first problem is to satisfy resource-constrained UAVs' need for a resource-efficient offloading policy. To that end, we implemented both a computation and an energy consumption model and trained a deep Q-network (DQN) agent that seeks to maximize task completion and minimize energy consumption. The second problem is establishing communication between two autonomous vehicles and between an autonomous vehicle and an MEC server. To accomplish this goal, we experimented by leveraging an autonomous vehicle's server to send and receive custom messages in real time. These experiments serve as a stepping stone towards enabling mobile edge computing and device-to-device communication and computation.
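As a sketch of the kind of objective such an agent might optimize, the snippet below combines task completion and energy cost into a single reward and chooses between local execution and offloading with an epsilon-greedy rule. The weights, Q-values, and action set are hypothetical, not the trained DQN from this work.

```python
# Illustrative reward and action-selection sketch for an offloading agent.
import numpy as np

def reward(completed: bool, energy_joules: float, energy_weight: float = 0.1) -> float:
    """Reward completed tasks and penalize energy consumption (weights assumed)."""
    return (1.0 if completed else 0.0) - energy_weight * energy_joules

def epsilon_greedy(q_values: np.ndarray, epsilon: float, rng: np.random.Generator) -> int:
    """Pick an action index: 0 = compute locally, 1 = offload to the MEC server."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(1)
q = np.array([0.4, 0.7])                       # hypothetical Q-values for the current state
a = epsilon_greedy(q, epsilon=0.1, rng=rng)
print("action:", "offload" if a == 1 else "local",
      "| reward if completed using 2 J:", reward(True, 2.0))
```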
Ruturaj Vaidya
Explore Effectiveness and Performance of Security Checks on Software Binaries
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Prasad Kulkarni, Chair
Alex Bardas
Drew Davidson
Esam El-Araby
Michael Vitevitch
Abstract
Binary analysis is difficult, as most of the semantic and syntactic information available at the source level is lost during the compilation process. If the binary is stripped and/or optimized, the efficacy of binary analysis frameworks degrades further. Moreover, handwritten assembly, obfuscation, excessive indirect calls or jumps, etc. further degrade their accuracy. Thus, it is important to investigate and assess these challenges in order to improve binary analysis. One way of doing so is to study security techniques implemented at the binary level.
In this dissertation, we propose to implement existing compiler-level techniques for binary executables and thereby evaluate how the loss of information at the binary level affects these techniques in terms of both efficiency and effectiveness.
Michael Bechtel
Shared Resource Denial-of-Service Attacks on Multicore Platforms
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Heechul Yun, Chair
Mohammad Alian
Drew Davidson
Prasad Kulkarni
Shawn Keshmiri
Abstract
With the increased adoption of machine learning algorithms across many different fields, powerful computing platforms have become necessary to meet their computational needs. Multicore platforms are a popular choice due to their ability to provide greater computing capability while still meeting size, weight, and power (SWaP) constraints. As a result, multicore systems are being employed at an increasing rate. However, contention for hardware resources between the multiple cores is a significant challenge, as it can lead to interference and unpredictable timing behaviors. Furthermore, this contention can be intentionally induced by malicious actors with the specific goals of inhibiting system performance and increasing the execution time of safety-critical tasks. This is done by performing Denial-of-Service (DoS) attacks that target shared resources in order to prevent other cores from accessing them. When done properly, these DoS attacks can have significant impacts on performance and can threaten system safety. For example, we find that DoS attacks can cause >300X slowdown on the popular Raspberry Pi 3 embedded platform. Due to the inherent risks, it is vital that we discover and understand the mechanisms through which shared resource contention can occur and develop solutions that mitigate or prevent the potential impacts.
In this work, we investigate and evaluate shared resource contention on multicore platforms and the impacts it can have on the performance of real-time tasks. Leveraging this contention, we propose various Denial-of-Service attacks that each target different shared resources in the memory hierarchy with the goal of causing as much slowdown as possible. We show that each attack can inflict significant temporal slowdowns on victim tasks on target platforms by exploiting different hardware and software mechanisms. We then develop and analyze techniques for providing shared resource isolation and temporal performance guarantees for safety-critical tasks running on multicore platforms. In particular, we find that bandwidth throttling mechanisms are effective solutions against many DoS attacks and can protect the performance of real-time victim tasks.
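To give a flavor of how a shared-resource co-runner can generate contention, the sketch below repeatedly streams over a buffer larger than a typical last-level cache, which produces sustained DRAM traffic. It is only illustrative: the attacks studied in this work control access patterns (reads versus writes, strides, and cache/DRAM mapping) at a much lower level than Python allows.

```python
# Illustrative memory-intensive co-runner: sweep a buffer far larger than the LLC.
import numpy as np
import time

buf = np.zeros(64 * 1024 * 1024 // 8)      # ~64 MiB of doubles, beyond a typical embedded LLC
start = time.time()
iterations = 0
while time.time() - start < 5.0:           # run as a co-runner for 5 seconds
    buf += 1.0                             # read-modify-write sweep over the whole buffer
    iterations += 1
print(f"swept {iterations} x {buf.nbytes / 2**20:.0f} MiB in 5 s")
```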
Anushka Bhattacharya
Predicting In-Season Soil Mineral Nitrogen in Corn Production Using Deep Learning Model
When & Where:
Nichols Hall, Room 246
Committee Members:
Taejoon Kim, Chair
Morteza Hashemi
Dorivar Ruiz Diaz
Abstract
One of the biggest challenges in nutrient management in corn (Zea mays) production is determining the amount of plant-available nitrogen (N) that will be supplied to the crop by the soil. Measuring a soil's N-supplying power is quite difficult, and approximations are often used in lieu of intensive soil testing. This can lead to under- or over-fertilization of crops and, in turn, an increased risk of crop N deficiencies or environmental degradation. In this work, we propose a deep learning algorithm to predict the inorganic-N content of the soil on a given day of the growing season. Since historical data for inorganic nitrogen (IN) are scarce, deep learning has not yet been applied to predicting fertilizer content. To overcome this hurdle, a Generative Adversarial Network (GAN), trained using offline simulation data from the Decision Support System for Agrotechnology Transfer (DSSAT), is used to produce synthetic IN data. Additionally, the time-series prediction problem is solved using long short-term memory (LSTM) neural networks. This model proves to be economical, as it gives an estimate without the need for comprehensive soil testing, overcomes the issue of limited available data, and achieves an accuracy that makes it reliable for use.
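A minimal sketch of the LSTM time-series component is shown below; the number of input features, sequence length, and network sizes are hypothetical, and the GAN used to synthesize IN training data is not shown.

```python
# Minimal PyTorch sketch of an LSTM regressor for a daily soil/weather time series.
# Feature count, sequence length, and sizes are hypothetical, not those of the study.
import torch
import torch.nn as nn

class INPredictor(nn.Module):
    def __init__(self, n_features: int = 6, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # predicted inorganic-N for the target day

    def forward(self, x):                         # x: (batch, days, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])           # regress from the last day's hidden state

model = INPredictor()
x = torch.randn(8, 120, 6)                        # 8 fields, 120 days, 6 assumed features
print(model(x).shape)                             # torch.Size([8, 1])
```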
Krushi Patel
Image Classification & Segmentation based on Enhanced CNN and Transformer Networks
When & Where:
Nichols Hall, Room 250 - Gemini Room
Committee Members:
Fengjun Li, Chair
Prasad Kulkarni
Bo Luo
Cuncong Zhong
Guanghui Wang
Abstract
Convolutional Neural Networks (CNNs) have significantly improved the performance of various computer vision tasks, such as image recognition and segmentation, based on their rich representation power. To enhance the performance of CNNs, a self-attention module can be embedded after each layer in the network. Recently proposed Transformer-based models achieve outstanding performance by employing a multi-head self-attention module as the main building block. However, several challenges still need to be addressed, such as (1) focusing only on a limited set of class-specific channels in CNNs; (2) the limited receptive field of local transformers; and (3) the addition of redundant features and the lack of multi-scale features in U-Net-style segmentation architectures.
In our work, we propose new strategies to address these issues. First, we propose a novel channel-based self-attention module to diversify the focus toward the discriminative and significant channels; the module can be embedded at the end of any backbone network for image classification. Second, to limit the noise added by the shallow layers of an encoder in a U-Net-style architecture, we replace the skip connections with an Adaptive Global Context Module (AGCM). In addition, we introduce a Semantic Feature Enhancement Module (SFEM) for multi-scale feature enhancement in polyp segmentation. Third, we propose a Multi-scaled Overlapped Attention (MOA) mechanism in a local transformer-based network for image classification to establish long-range dependencies and initiate communication between neighboring windows.
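For illustration, the sketch below shows a generic channel self-attention block of the kind that can be appended to a backbone's feature map: attention weights are computed between channels and used to reweight them. This is a standard construction under assumed shapes, not the exact module proposed in this work.

```python
# Generic channel self-attention sketch (not the dissertation's specific module):
# channel-by-channel attention reweights a feature map, added back as a residual.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelSelfAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))       # learnable residual weight

    def forward(self, x):                               # x: (B, C, H, W)
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                      # (B, C, N)
        attn = torch.bmm(flat, flat.transpose(1, 2)) / (h * w) ** 0.5   # (B, C, C)
        attn = F.softmax(attn, dim=-1)
        out = torch.bmm(attn, flat).view(b, c, h, w)
        return self.gamma * out + x                     # reweighted channels plus input

feat = torch.randn(2, 256, 14, 14)                      # e.g. backbone output (assumed shape)
print(ChannelSelfAttention()(feat).shape)               # torch.Size([2, 256, 14, 14])
```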
Justinas Lialys
Parametrically resonant surface plasmon polaritons
When & Where:
2001B Eaton Hall
Committee Members:
Alessandro Salandrino, Chair
Kenneth Demarest
Shima Fardad
Rongqing Hui
Xinmai Yang
Abstract
The surface electromagnetic waves that propagate along a metal-dielectric or a metal-air interface are called surface plasmon polaritons (SPPs). These SPPs are advantageous in a broad range of applications, including optical waveguides to increase the transmission rates of carrier waves, near-field optics to enhance the resolution beyond the diffraction limit, and Raman spectroscopy to amplify the Raman signal. However, they have an inherent limitation: the tangential wavevector component of an SPP is larger than that permitted for a homogeneous plane wave in the dielectric medium, which poses a phase-matching issue. In other words, the available spatial wavevector in the dielectric at a given frequency is smaller than what is required to excite the SPP. The best-known techniques to bypass this problem are the Otto and Kretschmann configurations, in which a glass prism is used to increase the available spatial wavevector in the dielectric/air. Other methods include evanescent-field directional coupling, optical gratings, localized scatterers, and coupling via highly focused beams. However, even with all these methods at our disposal, it is still challenging to couple to SPPs that have a large propagation constant.
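For reference, the mismatch described above follows from the textbook SPP dispersion relation (a standard result, not a contribution of this work): for a metal of permittivity \(\varepsilon_m(\omega)\) and a dielectric of permittivity \(\varepsilon_d\),

\[
\beta_{\mathrm{SPP}} \;=\; \frac{\omega}{c}\,\sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}} \;>\; \frac{\omega}{c}\,\sqrt{\varepsilon_d},
\]

since \(\varepsilon_m\) is negative with \(|\varepsilon_m| > \varepsilon_d\) below the surface plasmon resonance. The right-hand side is the largest tangential wavevector a homogeneous plane wave in the dielectric can supply, which is why direct illumination cannot phase-match to the SPP and why schemes such as the temporal modulation described below are needed.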
As SPPs apply to a wide range of purposes, it is vitally important to overcome the SPP excitation dilemma. Presented here is a novel way to efficiently inject power into SPPs via temporal modulation of the dielectric adhered to the metal. In this configuration, the dielectric constant is modulated in time using an incident pump field. As a result of the induced changes in the dielectric constant, we show that efficient phase-matched coupling can be achieved even by a perpendicularly incident uniform plane wave. This novel method of exciting SPPs paves the way for further understanding and implementation of SPPs in a plethora of applications. For example, optical waveguides can be investigated under such excitation. Hence, this technique opens new possibilities in conventional plasmonics, as well as in the emerging field of nonlinear plasmonics.
Andrei Elliott
Promise Land: Proving Correctness with Strongly Typed Javascript-Style Promises
When & Where:
Nichols Hall, Room 250, Gemini Room
Committee Members:
Matt Moore, Chair
Perry Alexander
Drew Davidson
Abstract
Code that can run asynchronously is important in a wide variety of situations, from user interfaces to communication over networks, to the use of concurrency for performance gains. One widely used method of specifying asynchronous control flow is the Promise model as used in Javascript. Promises are powerful, but they can be confusing and hard to debug. This problem is exacerbated by Javascript's permissive type system, where erroneous code is likely to fail silently, with values being implicitly coerced into unexpected types at runtime.
The present work implements Javascript-style Promises in Haskell, translating the model to a strongly typed framework where we can use the type system to rule out some classes of bugs.
Common errors – such as failure to call one of the callbacks of an executor, which would, in Javascript, leave the Promise in an eternally-pending deadlock state – can be detected for free by the type system at compile time and corrected without even needing to run the code.
We also demonstrate that Promises form a monad, providing a monad instance that allows code using Promises to be written using Haskell’s do notation.
Hoang Trong Mai
Design and Development of Multi-band and Ultra-wideband Antennas and Circuits for Ice and Snow Radar Measurements
When & Where:
Nichols Hall, Room 317
Committee Members:
Carl Leuschen, Chair
Fernando Rodriguez-Morales, Co-Chair
Christopher Allen
Abstract
Remote sensing based on radar technology has been successfully used for several decades as an effective tool of scientific discovery. A particular application of radar remote sensing instruments is the systematic monitoring of ice and snow masses in both hemispheres of the Earth. The operating requirements of these instruments are driven by factors such as science requirements and platform constraints, often necessitating the development of custom electronic components to enable the desired radar functionality.
This work focuses on component development and trade studies for two multichannel radar systems. First, this thesis presents the design and implementation of two dual-polarized ultra-wideband antennas for a ground-based dual-band ice penetrating radar. The first antenna operates at UHF (600–900 MHz), while the second antenna operates at VHF (140–215 MHz). Each antenna element is composed of two orthogonal octagon-shaped dipoles, two inter-locked printed circuit baluns, and an impedance matching network for each polarization. A prototype of each band shows a VSWR of less than 2:1 at both polarizations over a fractional bandwidth exceeding 40%. The antennas developed offer cross-polarization isolation larger than 30 dB, an E-plane 3-dB beamwidth of 69 degrees, and a gain of at least 4 dBi with a variation of ±1 dB across the bandwidth. The design, developed with high power handling in mind, also allows for straightforward adjustment of the antenna dimensions to meet other bandwidth constraints. It is being used as the basis for an airborne system.
Next, this work documents design details and measured performance of an improved and integrated x16 frequency multiplier system for an airborne snow-probing radar. This sub-system produces a 40–56 GHz linear frequency sweep from a 2.5–3.5 GHz chirp and mixes it down to the 2–18 GHz range. The resulting chirp is used for transmission and for analog de-chirping of the received signal. The initial prototype developed through this work provides a higher level of integration and a wider fractional bandwidth (>135%) compared to earlier versions implemented with the same frequency plan, along with a path to guide future realizations.
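The frequency-plan arithmetic implied above can be checked directly; note that the 38 GHz local-oscillator value below is inferred from the stated input and output ranges and is not a figure quoted in the thesis.

```python
# Quick arithmetic check of the stated frequency plan (LO value is an inference).
mult = 16
chirp_in = (2.5e9, 3.5e9)                              # input chirp (Hz)
after_mult = tuple(mult * f for f in chirp_in)         # x16 -> 40-56 GHz
lo = 38e9                                              # assumed LO so 40-56 GHz maps to 2-18 GHz
after_mix = tuple(f - lo for f in after_mult)
print("after x16:", [f / 1e9 for f in after_mult], "GHz")
print("after mixing:", [f / 1e9 for f in after_mix], "GHz,",
      "bandwidth:", (after_mix[1] - after_mix[0]) / 1e9, "GHz")
```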
Lastly, this work documents a series of trade studies on antenna array configurations for both radar systems using electromagnetic simulation tools and measurements.
Xi Mo
Convolutional Neural Network in Pattern Recognition
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link.
Committee Members:
Cuncong Zhong, Chair
Taejoon Kim
Fengjun Li
Bo Luo
Hauzhen Fang
Abstract
Since the convolutional neural network (CNN) was first implemented by Yann LeCun et al. in 1989, CNNs and their variants have been widely applied to numerous pattern recognition problems and have come to be considered among the most crucial techniques in artificial intelligence and computer vision. This dissertation not only demonstrates the implementation aspects of CNNs, but also lays emphasis on the methodology of neural network (NN) based classifiers.
A general NN-based classification pipeline can be recognized as three stages: pre-processing, model inference, and post-processing. To demonstrate the importance of pre-processing techniques, this dissertation presents how to model actual problems in medical pattern recognition and image processing by introducing conceptual abstraction and fuzzification. In particular, a transformer based on the self-attention mechanism, namely the beat-rhythm transformer, greatly benefits from correct R-peak detection results and conceptual fuzzification.
The recently proposed self-attention mechanism has proven to be a top performer in the fields of computer vision and natural language processing. Despite the accuracy and precision it achieves, it usually consumes substantial computational resources to perform self-attention. Therefore, a real-time global attention network is proposed to make a better trade-off between efficiency and performance for the task of image segmentation. To illustrate the inference stage further, we also propose models to detect polyps via Faster R-CNN, one of the most popular CNN-based 2D detectors, as well as a CNN-powered 3D object detection pipeline that regresses 3D bounding boxes from LiDAR points and stereo image pairs.
The goal of the post-processing stage is to refine the artifacts inferred by the models. For the semantic segmentation task, the dilated continuous random field is proposed as a better fit for CNN-based models than the widely implemented fully-connected continuous random field. The proposed approaches can be further integrated into a reinforcement learning architecture for robotics.
Sirisha Thippabhotla
An Integrated Approach for de novo Gene Prediction, Assembly and Biosynthetic Gene Cluster Discovery of Metagenomic Sequencing Data
When & Where:
Eaton Hall, Room 1
Committee Members:
Cuncong Zhong, Chair
Prasad Kulkarni
Fengjun Li
Zijun Yao
Liang Xu
Abstract
Metagenomics is the study of the genomic content present in given microbial communities. Metagenomic functional analysis aims to quantify protein families and reconstruct metabolic pathways from the metagenome. It plays a central role in understanding the interaction between the microbial community and its host or environment. De novo functional analysis, which allows the discovery of novel protein families, remains challenging for high-complexity communities. There are currently three main approaches for recovering novel genes or proteins: de novo nucleotide assembly, gene calling, and peptide assembly. Unfortunately, these have been formulated as independent problems, and their informational dependencies have been overlooked.
In this work, we propose a novel de novo analysis pipeline that leverages these informational dependencies to improve functional analysis of metagenomics data. Specifically, the pipeline contains four novel modules: an assembly graph module, a graph-based gene calling module, a peptide assembly module, and a biosynthetic gene cluster (BGC) discovery module. The assembly graph module is computationally and memory efficient and is based on a combination of de Bruijn and string graphs. The assembly graphs contain important sequencing information, which can be further exploited to improve functional annotation. De novo gene calling enables us to predict novel genes and protein sequences that have not been previously characterized. We hypothesize that de novo gene calling can benefit from assembly graph structures, as they contain important start/stop codon information that provides stronger open reading frame (ORF) signals. The assembly graph framework is designed for both nucleotide and protein sequences. The resulting protein sequences from gene calling can be further assembled into longer protein contigs using our assembly framework. For the novel BGC module, the gene members of a BGC are marked in the assembly graph; finding a BGC can then be achieved by identifying a path connecting its gene members in the assembly graph. Experimental results show that our proposed pipeline improves existing gene calling sensitivity on unassembled reads, achieving a 10-15% improvement in sensitivity over state-of-the-art methods at a high specificity (>90%). Our pipeline further allows for more sensitive and accurate peptide assembly, recovering more reference proteins and delivering more hypothetical protein sequences.
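To make the start/stop-codon signals concrete, the sketch below performs naive ORF detection on a single forward-strand sequence. The pipeline's graph-based gene caller operates on assembly graphs and considers both strands, so this is only an illustration of the underlying signal, not the proposed method.

```python
# Naive ORF detection on one forward-strand sequence: scan each reading frame for
# an ATG start followed in-frame by a stop codon. Illustration only.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq: str, min_len: int = 30):
    """Yield (start, end) of ORFs (0-based, end-exclusive) on the forward strand."""
    seq = seq.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i
            elif start is not None and codon in STOPS:
                if i + 3 - start >= min_len:
                    yield (start, i + 3)
                start = None

example = "CCATGAAATTTGGGTAACCATGCCCGGGAAATTTCCCTAGGG"   # toy sequence
print(list(find_orfs(example, min_len=12)))              # [(19, 40), (2, 17)]
```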