Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Andrew Riachi

An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun

Abstract

Web browsers are notorious for consuming large amounts of memory. Yet, they have become the dominant framework for writing GUIs because web languages are ergonomic for programmers and offer cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which have to run on resource-constrained devices, run a web browser under the hood. Therefore, it is important to keep the memory consumption of web browsers as low as practicable.

In this thesis, we investigate the memory consumption of web browsers, in particular compared to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments which suggest that most of the extra memory usage compared to native applications could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
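
To illustrate the kind of measurement an smaps-based profiler performs, the following minimal Python sketch (an illustration built on the /proc/<pid>/smaps interface, not the actual smaps-profiler tool) sums the proportional set size (Pss) the kernel reports for each mapping of a process:

```python
import re
import sys

# Mapping header lines look like: "7f1c...-7f1d... r-xp 00000000 08:01 1234 /usr/lib/libc.so.6"
HEADER = re.compile(r"^[0-9a-f]+-[0-9a-f]+\s")

def pss_by_mapping(pid):
    """Sum proportional set size (Pss, in kB) per mapping from /proc/<pid>/smaps."""
    totals = {}
    name = "[anon]"
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if HEADER.match(line):
                parts = line.split(maxsplit=5)
                name = parts[5].strip() if len(parts) == 6 else "[anon]"
            elif line.startswith("Pss:"):
                totals[name] = totals.get(name, 0) + int(line.split()[1])
    return totals

if __name__ == "__main__":
    usage = sorted(pss_by_mapping(sys.argv[1]).items(), key=lambda kv: -kv[1])
    for name, kb in usage[:10]:
        print(f"{kb:10d} kB  {name}")
```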


Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
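
As a small illustration of facet (ii), npm automatically runs lifecycle scripts declared in a package's manifest at install time, so a first-pass check can simply flag them. The sketch below is a hedged example (the package path is hypothetical), not the tooling developed in this research:

```python
import json

# Lifecycle hooks that npm runs automatically when a package is installed.
INSTALL_HOOKS = ("preinstall", "install", "postinstall")

def install_scripts(package_json_path):
    """Return any install-time lifecycle scripts declared in a package.json."""
    with open(package_json_path) as f:
        manifest = json.load(f)
    return {hook: cmd
            for hook, cmd in manifest.get("scripts", {}).items()
            if hook in INSTALL_HOOKS}

# Hypothetical path; point this at any installed package's manifest.
for hook, cmd in install_scripts("node_modules/some-package/package.json").items():
    print(f"{hook}: {cmd}")
```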


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform radar and communications functions simultaneously. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements of the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. Here, the PARC framework underpins a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the combined DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the degradation of PARC radar performance caused by spectral growth due to the communications signal.
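
For readers unfamiliar with PARC, the constant-envelope signal model can be sketched as follows (a notional form implied by the description above, not necessarily the exact notation of this work):

```latex
% Notional PARC signal: the communications phase \phi_c(t) is attached
% to the radar phase \phi_r(t), preserving a constant envelope.
s(t) = \exp\bigl(j\,[\phi_r(t) + \phi_c(t)]\bigr)
```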

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.


Qua Nguyen

Hybrid Array and Privacy-Preserving Signaling Optimization for NextG Wireless Communications

When & Where:


Zoom defense; please email jgrisafe@ku.edu for the link.

Committee Members:

Erik Perrins, Chair
Morteza Hashemi
Zijun Yao
Taejoon Kim
KC Kong

Abstract

This PhD research tackles two critical challenges in NextG wireless networks: hybrid precoder design for wideband sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and privacy-preserving federated learning (FL) over wireless networks.

In the first part, we propose a novel hybrid precoding framework that integrates true-time delay (TTD) devices and phase shifters (PS) to counteract the beam squint effect - a significant challenge in wideband sub-THz massive MIMO systems that leads to considerable loss in array gain. Unlike previous methods that designed only the TTD values while keeping the PS values fixed and assuming unbounded time delays, our approach jointly optimizes the TTD and PS values under realistic time-delay constraints. We determine the minimum number of TTD devices required to achieve a target array gain using our proposed approach. Then, we extend the framework to multi-user wideband systems and formulate a hybrid array optimization problem aiming to maximize the minimum data rate across users. This problem is decomposed into two sub-problems: fair subarray allocation, solved via continuous domain relaxation, and subarray gain maximization, addressed via a phase-domain transformation.
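
To see why TTDs help where phase shifters alone cannot, recall the standard per-element model (shown here as an assumed sketch): a phase shifter contributes a frequency-flat phase, while a true-time delay contributes a phase that varies linearly with frequency, allowing the array to steer consistently across a wide band:

```latex
% Per-element response with phase shift \theta_n and true-time delay \tau_n:
% the \tau_n term scales with frequency f, compensating beam squint.
h_n(f) = \exp\bigl(-j\,[\,2\pi f \tau_n + \theta_n\,]\bigr)
```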

The second part focuses on preserving privacy in FL over wireless networks. First, we design a differentially-private FL algorithm that applies time-varying noise variance perturbation. Taking advantage of existing wireless channel noise, we jointly design the differential privacy (DP) noise variances and the users' transmit power to resolve the tradeoffs between privacy and learning utility. Next, we tackle two critical challenges within FL networks: (i) privacy risks arising from model updates and (ii) reduced learning utility due to quantization heterogeneity. Prior work typically addresses only one of these challenges because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. We aim to improve the learning utility of privacy-preserving FL that allows clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) that ensures a DP guarantee and minimal quantization distortion. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Lastly, inspired by the information-theoretic rate-distortion framework, a privacy-distortion tradeoff problem is formulated to minimize privacy loss under a given maximum allowable quantization distortion. The optimal solution to this problem is identified, revealing that the privacy loss decreases as the maximum allowable quantization distortion increases, and vice versa.
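
The differential-privacy ingredient above can be illustrated with the classic Gaussian mechanism on clipped model updates. This is a hedged sketch with illustrative clipping norm and noise scale, not the dissertation's actual algorithm or parameters:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, seed=None):
    """Clip a model update to a bounded L2 norm, then add Gaussian noise
    (the classic Gaussian mechanism used in differentially-private FL)."""
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

# Example: perturb a toy gradient before it leaves the device.
print(privatize_update(np.array([0.5, -1.2, 3.0]), seed=0))
```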

This research advances hybrid array optimization for wideband sub-THz massive MIMO and introduces novel algorithms for privacy-preserving quantized FL with diverse precision. These contributions enable high-throughput wideband MIMO communication systems and privacy-preserving AI-native designs, aligning with the performance and privacy protection demands of NextG networks.


Arin Dutta

Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao

Abstract

As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.

Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path, and it offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the Erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can further improve DRA performance, as it is more efficient at improving OSNR because the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps to reduce the nonlinear effect on the optical signal and improves OSNR by distributing the Raman gain more uniformly along the transmission span.

The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Another concern with FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the nonlinear phase shift of the signal. These factors, including RIN transfer-induced noise and nonlinear noise, contribute to the degradation of system performance in FW DRA systems at the receiver.

As the performance of DRA with backward pumping is well understood, with a relatively low impact of RIN transfer, our research focuses on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of the 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand trade-offs among system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase in linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in efficient fiber-optic communication. The system performance analysis also reveals that higher-baud-rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

As machine learning (ML), artificial intelligence (AI), and deep learning continue to advance, their applications become more diverse – one such application is synthetic aperture radar (SAR) automatic target recognition (ATR). These SAR ATR networks use different forms of deep learning, such as convolutional neural networks (CNN), to classify targets in SAR imagery. An emerging research area of SAR is dual function radar communication (DFRC), which performs both radar and communications functions using a single co-designed modulation. The utilization of DFRC emissions for SAR imaging impacts image quality, thereby influencing SAR ATR network training. Here, using the Civilian Vehicle Data Dome dataset from the AFRL, SAR ATR networks are trained and evaluated with simulated data generated using Gaussian Minimum Shift Keying (GMSK) and Linear Frequency Modulation (LFM) waveforms. The networks are used to compare how the target classification accuracy of the ATR network differs between DFRC (i.e., GMSK) and baseline (i.e., LFM) emissions. Furthermore, as is common in pulse-agile transmission structures, an effect known as 'range sidelobe modulation' is examined, along with its impact on SAR ATR. Finally, it is shown that a SAR ATR network can be trained for GMSK emissions using existing LFM datasets via two types of data augmentation.


Past Defense Notices

Jace Kline

A Framework for Assessing Decompiler Inference Accuracy of Source-Level Program Constructs

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Bo Luo


Abstract

Decompilation is the process of reverse engineering a binary program into an equivalent source code representation, with the objective of recovering high-level program constructs such as functions, variables, data types, and control flow mechanisms. Decompilation is applicable in many contexts, particularly for security analysts attempting to decipher the construction and behavior of malware samples. However, due to the loss of information during compilation, this process is naturally speculative and thus prone to inaccuracy. This inherent speculation motivates the idea of an evaluation framework for decompilers.

In this work, we present a novel framework to quantitatively evaluate the inference accuracy of decompilers, regarding functions, variables, and data types. Within our framework, we develop a domain-specific language (DSL) for representing such program information from any "ground truth" or decompiler source. Using our DSL, we implement a strategy for comparing ground truth and decompiler representations of the same program. Subsequently, we extract and present insightful metrics illustrating the accuracy of decompiler inference regarding functions, variables, and data types, over a given set of benchmark programs. We leverage our framework to assess the correctness of the Ghidra decompiler when compared to ground truth information scraped from DWARF debugging information. We perform this assessment over a subset of the GNU Core Utilities (Coreutils) programs and discuss our findings.
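
To make the comparison strategy concrete, here is a toy sketch of matching a decompiler's recovered functions against ground truth. The record type and field names are hypothetical and far simpler than the DSL developed in the thesis:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Function:
    name: str
    return_type: str
    param_types: tuple

def compare(ground_truth, recovered):
    """Report which functions the decompiler matched, missed, or typed incorrectly."""
    gt = {f.name: f for f in ground_truth}
    rec = {f.name: f for f in recovered}
    found = set(gt) & set(rec)
    return {
        "missed": set(gt) - set(rec),
        "mistyped": {n for n in found if gt[n] != rec[n]},
        "matched": {n for n in found if gt[n] == rec[n]},
    }
```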


Jaypal Singh

EvalIt: Skill Evaluation using blockchain

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
David Johnson
Hongyang Sun


Abstract

Skills validation is a key issue when hiring workers. Companies and universities often face difficulties in determining an applicant's skills because certification of the skills claimed by an applicant is usually not readily verifiable, and verification is costly. Also, from the applicant's perspective, skill evaluation by an industry expert is more valuable than taking a generalized course with certification. Most certification programs are easy and have proved not very fruitful for learning the required work skills. Blockchain has been proposed in the literature for functional verification and tamper-proof information storage in a decentralized way. "EvalIt" is a blockchain-based Dapp that addresses the above issues and guarantees some desirable properties. The Dapp facilitates skill evaluation efforts through token payments collected from users of the platform.


Soma Pal

Properties of Profile-guided Compiler Optimization with GCC and LLVM

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Mohammad Alian
Tamzidul Hoque


Abstract

Profile-guided optimizations (PGO) are a class of sophisticated compiler transformations that employ information regarding the profile or execution time behavior of a program to improve program performance, typically speed. PGOs for popular language platforms, like C, C++, and Java, are generally regarded as a mature and mainstream technology and are supported by most standard compilers. Consequently, properties and characteristics of PGOs are assumed to be established and known but have rarely been systematically studied with multiple mainstream compilers.

The goal of this work is to explore and report some important properties of PGOs in mainstream compilers, specifically GCC and LLVM. We study the performance delivered by PGOs at the program and function level, the impact of different execution profiles on PGO performance, and the relative PGO benefit delivered by different mainstream compilers. We also built an experimental framework to conduct this research. We expect that our work will help focus future research and assist in building frameworks to field PGOs in actual systems.
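
For context, the standard PGO workflow that such experiments revolve around has three steps: build with instrumentation, run a training input, and rebuild using the recorded profile. The GCC flags below are real; the file names and the use of a Python driver are illustrative only, not the paper's actual framework:

```python
import subprocess

# 1. Build with profile instrumentation.
subprocess.run(["gcc", "-O2", "-fprofile-generate", "bench.c", "-o", "bench"], check=True)
# 2. Run a training workload; the instrumented binary writes *.gcda profile files.
subprocess.run(["./bench", "train-input.txt"], check=True)
# 3. Rebuild, letting the compiler consume the recorded profile.
subprocess.run(["gcc", "-O2", "-fprofile-use", "bench.c", "-o", "bench-pgo"], check=True)
```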


Samyak Jain

Monkeypox Detection Using Computer Vision

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
David Johnson (Co-Chair)
Hongyang Sun


Abstract

As the world recovers from the damage caused by the spread of COVID-19, the monkeypox virus poses a new threat of becoming a global pandemic. The monkeypox virus itself is not as deadly or contagious as COVID-19, but many countries report new patient cases every day, so it would not be surprising if the world faces another pandemic due to a lack of proper precautions. Recently, deep learning has shown great potential in image-based diagnostics, such as cancer detection, tumor cell identification, and COVID-19 patient detection. Since monkeypox manifests on human skin, images of affected skin can be captured and a similar approach can be employed for disease diagnosis. This project presents a deep learning approach for detecting monkeypox disease from skin lesion images. Several pre-trained deep learning models, such as ResNet50 and MobileNet, are deployed on the dataset to classify monkeypox and other diseases.
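
A hedged sketch of the transfer-learning setup this kind of project typically uses (a torchvision model with a two-class head; the project's actual training code, dataset layout, and class count may differ):

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet50 and replace its classification head
# so it discriminates monkeypox lesions from other skin conditions.
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)
```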


Grace Young

Quantum Algorithms & the Hidden Subgroup Problem

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Matthew Moore, Chair
Perry Alexander
Esam El-Araby
Cuncong Zhong
KC Kong

Abstract

In the last century, we have seen incredible growth in the field of quantum computing. Quantum computing offers us the opportunity to find efficient solutions to certain computational problems which are intractable on classical computers. One class of problems that seems to benefit from quantum computing is the Hidden Subgroup Problem (HSP). In the following proposal, we will examine the basics of quantum computing as well as the current research surrounding the HSP. We will also discuss the importance of the HSP and its relation to other popular problems such as the Integer Factoring, Discrete Logarithm, Shortest Vector, and Subset Sum problems.
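
For reference, the HSP has a standard statement:

```latex
% Hidden Subgroup Problem: given a finite group G and a function f : G \to S
% that is constant on cosets of some subgroup H \le G and distinct across cosets,
f(x) = f(y) \iff xH = yH,
% the task is to find (a generating set for) the hidden subgroup H.
```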

The proposed research aims to develop a quantum algorithmic polynomial-time reduction for special cases of the HSP where the parameterizing group is the Dihedral group. This problem is known as the Dihedral HSP (DHSP). The usual approach to the HSP relies on harmonic analysis in the domain of the problem, and the best-known algorithm using this approach is sub-exponential, but still super-polynomial. The algorithm we have designed instead focuses on the structure encoded in the codomain, using this structure to direct a “walk” down the subgroup lattice that terminates at the hidden subgroup.



Victor Alberto Lopez Nikolskiy

Maximum Power Point Tracking For Solar Harvesting Using Industry Implementation Of Perturb And Observe with Integrated Circuits

When & Where:


Eaton Hall, Room 2001B

Committee Members:

James Stiles, Chair
Christopher Allen
Patrick McCormick


Abstract

This project is not a new idea or an innovative method; it consists of implementing techniques already used in the consumer industry.

The purpose of this project is to implement a compact, low-weight Maximum Power Point Tracking (MPPT) solar harvesting device intended for a small fixed-wing unmanned aircraft. For the selected aircraft, the MPPT device and installed solar cells could supply up to 25% of the load.

The MPPT device was designed around the Texas Instruments SM72445 Integrated Circuit and its technical documentation. The prototype was evaluated using a Photovoltaic Profile Emulator Power Supply and a LiPo battery.

The device performed MPPT on one of the tested current-voltage (IV) profiles, reaching the Maximum Power Point (MPP), but it did not maintain the MPP. Under an additional external DC load or other IV profiles, the emulator operated in prohibited operating conditions. The probable cause of this failure is instability in the emulator's output. The inputs to the controller and the response behavior of the H-bridge circuit were as expected and designed.
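
For reference, the perturb-and-observe logic that devices like the SM72445 implement can be sketched in a few lines. This is a textbook model of the algorithm, not the IC's actual implementation:

```python
def perturb_and_observe(power, voltage, prev_power, prev_voltage, step=0.1):
    """One textbook P&O iteration: keep perturbing the voltage setpoint in the
    direction that increased extracted power; otherwise reverse direction."""
    if power > prev_power:
        direction = 1 if voltage > prev_voltage else -1
    else:
        direction = -1 if voltage > prev_voltage else 1
    return voltage + direction * step
```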


Koyel Pramanick

Detection of measures devised by the compiler to improve security of the generated code

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Drew Davidson
Fengjun Li
Bo Luo
John Symons

Abstract

The main aim of this thesis is to identify provisions employed by the compiler to ensure the security of any arbitrary binary. These provisions are security techniques applied automatically by the compiler during the system build process. Compilers provide a number of security checks that can be applied statically, at compile time, to protect the software from attacks that target code vulnerabilities. Most compilers emit warnings to indicate potential code bugs and insert run-time security checks, which add instrumentation code to the binary to detect problems during execution. Our first work develops a language-agnostic and compiler-agnostic experimental framework that determines the presence of targeted compiler-based run-time security checks in any binary. Our next work explores whether unresolved compiler-generated warnings can be detected in the binary when the source code is not available.
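
As a small illustration of the kind of binary-level evidence such a framework can look for (a hedged example, far narrower than the thesis framework): when GCC or Clang insert stack-canary checks (-fstack-protector), the binary references glibc's __stack_chk_fail, which shows up in the dynamic symbol table:

```python
import subprocess

def has_stack_protector(binary_path):
    """Heuristic: an undefined reference to __stack_chk_fail indicates
    compiler-inserted stack-smashing protection."""
    out = subprocess.run(["nm", "-D", "--undefined-only", binary_path],
                         capture_output=True, text=True, check=True).stdout
    return "__stack_chk_fail" in out

print(has_stack_protector("/bin/ls"))
```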


Ben Liu

Computational Microbiome Analysis: Method Development, Integration and Clinical Applications

When & Where:


Eaton Hall, Room 1

Committee Members:

Cuncong Zhong, Chair
Esam El-Araby
Bo Luo
Zijun Yao
Mizuki Azuma

Abstract

Metagenomics is the study of microbial genomes from a common environment. In most cases, metagenomic data refer to whole-genome shotgun sequencing data of the microbiota, which are fragmented DNA sequences from all regions of the microbial genomes. Because the data are generated without laboratory culture, they provide a less biased insight into, and uniquely enriched information about, the microbial community. Many researchers are currently interested in metagenomic data, and a sea of software exists for various purposes at different stages of analysis. Most researchers build their own analysis pipelines based on their expertise, and the pipelines built by two researchers for the same purpose might be disparate, which can affect the conclusions of an experiment.

My research involves the following: (1) We first developed an assembly graph-based ncRNA search tool, named DRAGoM, to improve search quality in metagenomic data. (2) We proposed an automatic metagenomic data analysis pipeline generation system to extract, organize, and exploit the enormous amount of knowledge available in the literature. The system consists of two work procedures: construction and generation. In the construction procedure, the system takes a corpus of raw textual data and updates the constructed pipeline network, whereas in the generation stage, the system recommends an analysis pipeline based on the user input. (3) We performed a meta-analysis on the taxonomic and functional features of the gut microbiome from non-small cell lung cancer patients treated with immunotherapy, to establish a model to predict whether a patient will benefit from immunotherapy. We systematically studied the taxonomical characteristics of the dataset and used both random forest and multilayer perceptron neural network models to predict the patients with progression-free survival above 6 months versus those below 3 months.
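
A minimal sketch of the kind of classifier used in item (3). The feature matrix and labels below are random placeholders standing in for per-patient microbiome profiles; the study's actual features and evaluation protocol differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: per-patient taxon abundances; y: 1 = PFS > 6 months, 0 = PFS < 3 months.
rng = np.random.default_rng(0)
X = rng.random((40, 200))                 # placeholder data, illustration only
y = rng.integers(0, 2, size=40)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```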


Matthew Showers

Software-based Runtime Protection of Secret Assets in Untrusted Hardware under Zero Trust

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Tamzidul Hoque, Chair
Alex Bardas
Drew Davidson


Abstract

The complexity of the design and fabrication of electronic devices is advancing along with their ability to provide wide-ranging functionalities, including data processing, sensing, communication, artificial intelligence, and security. Due to these complexities in the design and manufacturing process, and the associated time and cost, system developers often prefer to procure off-the-shelf components directly from the market instead of developing custom Integrated Circuits (ICs) from scratch. Procurement of Commercial-Off-The-Shelf (COTS) components reduces system development time and cost significantly, enables easy integration of new technologies, and facilitates smaller production runs. Moreover, since various companies use the same COTS IC, such components are generally available in the market for a long period and are easy to replace.

Although utilizing COTS parts can provide many benefits, it also introduces serious security concerns. None of the entities in the COTS IC supply chain are trusted from a consumer's perspective, leading to a "Zero Trust" supply chain threat model. Any of these entities could introduce hidden malicious circuits or hardware Trojans within the component that could help an attacker in the field extract secret information (e.g., cryptographic keys) or cause a functional failure. Existing solutions to counter hardware Trojans are inapplicable in a zero trust scenario as they assume either the design house or the foundry to be trusted. Moreover, many solutions require access to the design for analysis or modification to enable the countermeasure.

In this work, we have proposed a software-oriented countermeasure to ensure the confidentiality of secret assets against hardware Trojan attacks in untrusted COTS microprocessors. The proposed solution does not require any supply chain entity to be trusted and does not require analysis or modification of the IC design.  

To protect secret assets in an untrusted microprocessor, the proposed method leverages the concept of residue number coding to transform the software functions operating on the asset into homomorphic equivalents. We present a detailed security analysis to evaluate the confidentiality of a secret asset under Trojan attacks, using the secret key of the Advanced Encryption Standard (AES) program as a case study. Finally, to help streamline the application of this protection scheme, we have developed a plugin for the LLVM compiler toolchain that integrates the solution without requiring extensive source code alterations.
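
Residue number coding itself is easy to illustrate: with pairwise-coprime moduli, addition and multiplication act componentwise on residues, so a computation can run without ever materializing the underlying value. The toy sketch below uses small moduli for clarity; the thesis applies the idea to real assets such as AES keys:

```python
from math import prod

MODULI = (7, 11, 13)   # pairwise coprime; dynamic range M = 7*11*13 = 1001

def encode(x):
    return tuple(x % m for m in MODULI)

def add(a, b):
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def mul(a, b):
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def decode(residues):
    """Chinese Remainder Theorem reconstruction of the encoded value."""
    M = prod(MODULI)
    x = sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, MODULI))
    return x % M

# Arithmetic on residues matches arithmetic on the underlying integers.
assert decode(mul(encode(12), encode(34))) == (12 * 34) % 1001
```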


Madhuvanthi Mohan Vijayamala

Camouflaged Object Detection in Images using a Search-Identification based framework

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
David Johnson (Co-Chair)
Zijun Yao


Abstract

While identifying an object in an image is almost an instantaneous task for the human visual perception system, it takes more effort and time to process and identify a camouflaged object - an entity that flawlessly blends with the background in the image. This explains why it is much more challenging to enable a machine learning model to do the same, in comparison to generic object detection or salient object detection.

This project implements a framework called the Search Identification Network, which simulates the search-and-identification pattern adopted by predators hunting their prey and applies it to detect camouflaged objects. The efficiency of this framework in detecting polyps in medical image datasets is also measured.