Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.
Upcoming Defense Notices
David Felton
Optimization and Evaluation of Physical Complementary Radar Waveforms
When & Where:
Nichols Hall, Room 129 (Apollo Auditorium)
Committee Members:
Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata
Abstract
The RF spectrum is a precious, finite resource with ever-increasing demand. Consequently, the mandate to be a "good spectral neighbor" is in direct conflict with the requirements for high-performance sensing where correlation error is fundamentally limited. As such, matched-filter radar performance is often sidelobe-limited with estimation error being constrained by the time-bandwidth (TB) of the collective emission. The methods developed here seek to bridge this gap between idealized radar performance and practical utility via waveform design.
Estimation error becomes more complex when employing pulse agility: range-sidelobe modulation (RSM) spreads energy across Doppler, rendering traditional methods ineffective. To address this, the gradient-based complementary-FM framework was developed to produce complementary sidelobe cancellation (CSC) after coherently combining subsets within a pulse-agile emission. In contrast to the majority of complementary signals, which have been explored via phase coding, these Comp-FM waveform subsets achieve CSC while preserving hardware compatibility since they are FM (though design distortion is never completely avoided). Although Comp-FM addressed practicality via hardware amenability, CSC was localized to zero Doppler. This work expands the Comp-FM notion to a Doppler-generalized (DG) framework, extending the cancellation condition to an arbitrary Doppler span. The same framework can likewise be employed to jointly optimize an entire coherent processing interval (CPI) to minimize RSM within the radar point-spread function (PSF), thereby generalizing the notion of complementarity and introducing the potential for cognitive operation if sufficient scattering knowledge is available a priori.
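The complementary-cancellation property at the heart of this framework can be illustrated with a classical Golay pair. The sketch below is our illustration, not the dissertation's Comp-FM waveforms (which are FM and obtained by gradient-based optimization); it only shows the autocorrelation sidelobes of two complementary sequences cancelling exactly when summed.

```python
# Minimal, illustrative sketch of complementary sidelobe cancellation using a
# binary Golay pair: the two autocorrelations have opposite sidelobes, so their
# sum is an ideal delta. Comp-FM subsets are optimized to exhibit a comparable
# cancellation after coherent combination of FM (not binary) waveforms.
import numpy as np

def golay_pair(order):
    """Recursively build a Golay complementary pair of length 2**order."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(order):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(3)                        # length-8 pair
acf_a = np.correlate(a, a, mode="full")     # individual autocorrelations
acf_b = np.correlate(b, b, mode="full")
combined = acf_a + acf_b                    # sidelobes cancel exactly

print("ACF(a):        ", acf_a.astype(int))
print("ACF(b):        ", acf_b.astype(int))
print("ACF(a)+ACF(b): ", combined.astype(int))   # 2N at zero lag, 0 elsewhere
```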
Sensing with a single emitter is limited by self-inflicted error alone (e.g., clutter, sidelobes), while MIMO systems must additionally contend with the cross-responses from emitters operating concurrently (e.g., simultaneously, spatially proximate, in a shared spectrum), further degrading radar sensitivity. In that case, total correlation error is dictated by the overlapping TB (i.e., how coincident the signals are) and the number of operating emitters, compounding estimation difficulty if left unaddressed. As such, the determination of "orthogonal waveforms" comprises a large portion of the MIMO literature, though it remains a phenomenological misnomer for pulsed emissions. Here, the notion of complementary-FM is applied to a multi-emitter context in which transmitter-amenable quasi-orthogonal subsets, occupying the same spectral band, are produced via a similar gradient-based approach. To make these MIMO-Comp-FM waveform subsets more practical, the same "DG" approach described above, which addresses the otherwise-default Doppler-induced degradation of complementary signals, is applied. In doing so, Doppler-independent separability and complementarity greatly improve estimation sensitivity for multi-emitter systems.
This MIMO-Comp-FM framework is developed for standard matched filter processing. Coupling this framework with a "DG" form of the previously explored MIMO-MiCRFt is also investigated, illustrating the added benefit of pairing optimized subsets with similarly calibrated processing.
Each of these methods is developed to address unique and increasingly complex sources of estimation error. All approaches are initially developed and evaluated via simulated analysis where ground-truth is known. Then, despite hardware-induced distortion being unavoidable, the MIMO-Comp-FM framework is confirmed via loopback measurements to preserve the majority of CSC that was observed in simulation. Finally, open-air demonstration of each approach validates practical utility on a radar system.
Hao Xuan
Toward an Integrated Computational Framework for Metagenomics: From Sequence Alignment to Automated Knowledge Discovery
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu
Abstract
Metagenomic sequencing has become a central paradigm for studying complex microbial communities and their interactions with the host, with emerging applications in clinical prediction and disease modeling. In this work, we first investigate two representative application scenarios: predicting immune checkpoint inhibitor response in non-small cell lung cancer using gut microbial signatures, and characterizing host–microbiome interactions in neonatal systems. The proposed reference-free neural network captures both compositional and functional signals without reliance on reference genomes, while the neonatal study demonstrates how environmental and genetic factors reshape microbial communities and how probiotic intervention can mitigate pathogen-induced immune activation.
These studies highlight both the promise and the inherent difficulty of metagenomic analysis: transforming raw sequencing data into clinically actionable insights remains an algorithmically fragmented and computationally intensive process. This challenge arises from two key limitations: the lack of a unified algorithmic foundation for sequence alignment and the absence of systematic approaches for selecting and organizing analytical tools. Motivated by these challenges, we present a unified computational framework for metagenomic analysis that integrates complementary algorithmic and systems-level solutions.
First, to resolve fragmentation at the alignment level, we develop the Versatile Alignment Toolkit (VAT), a unified algorithmic system for biological sequence alignment across diverse applications. VAT introduces an asymmetric multi-view k-mer indexing scheme that integrates multiple seeding strategies within a single architecture and enables dynamic seed-length adjustment via longest common prefix (LCP)–based inference without re-indexing. A flexible seed-chaining mechanism further supports diverse alignment scenarios, including collinear, rearranged, and split alignments. Combined with a hardware-efficient in-register bitonic sorting algorithm and dynamic index-loading strategy, VAT achieves high efficiency and broad applicability across read mapping, homology search, and whole-genome alignment. Second, to address the challenge of tool selection and pipeline construction, we develop SNAIL, a natural language processing system for automated recognition of bioinformatics tools from large-scale and rapidly growing scientific literature. By integrating XGBoost and Transformer-based models such as SciBERT, SNAIL enables structured extraction of analytical tools and supports automated, reproducible pipeline construction.
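As a point of reference for the seeding terminology above, the toy sketch below builds a plain single-view k-mer index and reports exact seed hits for a read. VAT's asymmetric multi-view index, LCP-based seed-length adjustment, and chaining stages are substantially more sophisticated; this only makes the basic seed-and-lookup step concrete, and the sequences shown are placeholders.

```python
# A toy k-mer seed index: map every k-mer of the reference to its positions,
# then anchor a read by looking up its k-mers. Real aligners add chaining and
# extension on top of seeds like these.
from collections import defaultdict

def build_kmer_index(reference, k):
    """Map every k-mer in the reference to its list of start positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def seed_hits(read, index, k):
    """Return (read_offset, reference_position) pairs for exact k-mer seeds."""
    hits = []
    for i in range(len(read) - k + 1):
        for pos in index.get(read[i:i + k], []):
            hits.append((i, pos))
    return hits

ref = "ACGTACGTTAGCACGTACGG"
idx = build_kmer_index(ref, k=5)
print(seed_hits("TAGCACGTA", idx, k=5))   # seeds anchoring a candidate alignment
```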
Together, this work establishes a unified framework that is grounded in real-world applications and addresses key bottlenecks in metagenomic analysis, enabling more efficient, scalable, and clinically actionable workflows.
Pramil Paudel
Learning Without Seeing: Privacy-Preserving and Adversarial Perspectives in Lensless Imaging
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Fengjun Li, Chair
Alex Bardas
Bo Luo
Cuncong Zhong
Haiyang Chao
Abstract
Conventional computer vision relies on spatially resolved, human-interpretable images, which inherently expose sensitive information and raise privacy concerns. In this study, we explore an alternative paradigm based on lensless imaging, where scenes are captured as diffraction patterns governed by the point spread function (PSF). Although unintelligible to humans, these measurements encode structured, distributed information that remains useful for computational inference.
We propose a unified framework for privacy-preserving vision that operates directly on lensless sensor measurements by leveraging their frequency-domain and phase-encoded properties. The framework is developed along two complementary directions. First, we enable reconstruction-free inference by exploiting the intrinsic obfuscation of lensless data. We show that semantic tasks such as classification can be performed directly on diffraction patterns using models tailored to non-local, phase-scrambled representations. We further design lensless-aware architectures and integrate them into practical pipelines, including a Swin Transformer-based steganographic framework (DiffHide) for secure and imperceptible information embedding. To assess robustness, we formalize adversarial threat models and develop defenses against learning-based reconstruction attacks, particularly GAN-driven inversion. Second, we investigate the limits of privacy by studying the reconstructability of lensless measurements without explicit knowledge of the forward model. We develop learning-based reconstruction methods that approximate the inverse mapping and analyze conditions under which sensitive information can be recovered. Our results demonstrate that lensless measurements enable effective vision tasks without reconstruction, while providing a principled framework to evaluate and mitigate privacy risks.
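For readers unfamiliar with the measurement model, the short sketch below simulates the standard lensless forward model assumed here, where the sensor records the scene convolved with the system PSF. The random PSF and toy scene are our placeholders, not the optics used in this work.

```python
# Simulate a lensless measurement: y = PSF (*) scene + noise. The result is
# unintelligible to a human viewer, yet reconstruction-free inference feeds
# this measurement (or its spectrum) directly to a classifier.
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(0)
scene = np.zeros((64, 64)); scene[20:44, 28:36] = 1.0   # toy "object"
psf = rng.random((64, 64)); psf /= psf.sum()            # diffuser-like PSF

measurement = np.real(ifft2(fft2(scene) * fft2(psf)))   # circular convolution
measurement += 0.01 * rng.standard_normal(measurement.shape)

print(measurement.shape, float(measurement.min()), float(measurement.max()))
```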
Sharmila Raisa
Digital Coherent Optical System: Investigation and Monitoring
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Morteza Hashemi
Erik Perrins
Alessandro Salandrino
Jie Han
Abstract
Coherent wavelength-division multiplexed (WDM) optical fiber systems have become the primary transmission technology for high-capacity data networks, driven by the explosive bandwidth demand of cloud computing, streaming services, and large-scale artificial intelligence training infrastructure. This dissertation investigates two fundamental aspects of digital coherent fiber optic systems under the unifying theme of source and monitoring: the design of multi-wavelength optical sources compatible with high-order coherent detection, and the leveraging of fiber Kerr-effect nonlinearity at the coherent receiver to perform physical-layer link health monitoring and to assess inherent security vulnerabilities — both achieved through digital signal processing of the received complex optical field without dedicated hardware.
We begin by addressing the multi-wavelength transmitter challenge in WDM coherent systems. Existing quantum-dot, quantum-dash, and quantum-well based optical frequency comb (OFC) sources share a common limitation: individual comb line linewidths in the tens of MHz range caused by low output power levels of 1–20 mW, making them incompatible with high-order coherent detection. We demonstrate coherent system application of a single-section InGaAsP QW Fabry-Perot laser diode with greater than 120 mW optical power at the fiber pigtail and 36.14 GHz mode spacing. The high optical power per mode produces Lorentzian equivalent linewidths below 100 kHz — compatible with 16-QAM carrier phase recovery without optical phase locking. Experimental results obtained using a commercial Ciena WaveLogic-Ai coherent transceiver demonstrate 20-channel WDM transmission over 78.3 km of standard single-mode fiber with all channels below the HD-FEC threshold of 3.8 × 10⁻³ at 30 GBaud differential-coded 16-QAM, corresponding to an aggregate capacity of 2.15 Tb/s from a single laser device.
After investigating the QW Fabry-Perot laser as a multi-wavelength source for coherent WDM transmission, we leverage the coherent receiver DSP to exploit fiber Kerr-effect nonlinearity for longitudinal power profile estimation, enabling reconstruction of the signal power distribution P(z) along the full multi-span link without dedicated hardware or traffic interruption. We propose a modified enhanced regular perturbation (ERP) method that corrects two independent physical error sources of the standard RP1 least-squares baseline: the accumulated nonlinear phase rotation, and the dispersion-mediated phase-to-intensity conversion — a second bias source not addressed by prior methods. The RP1 method produces mean absolute error (MAE) that scales quadratically with span count, growing to 1.656 dB at 10 spans and 3 dBm. The modified ERP reduces this to 0.608 dB — an improvement that grows consistently with link length, confirming increasing advantage in the long-haul regime. Extension to WDM through an XPM-aware per-channel formulation achieves MAE of 0.113–0.419 dB across 150–500 km link lengths.
In addition to its role in enabling DSP-based longitudinal power profile estimation, the fiber Kerr-effect nonlinearity is shown to give rise to an inherent physical-layer security vulnerability in coherent WDM systems. We show that an eavesdropper co-tenanting a shared fiber — transmitting a continuous-wave probe at a wavelength adjacent to the legitimate signal — can capture the XPM-induced waveform at the fiber output and apply a bidirectional gated recurrent unit neural network, trained on split-step Fourier method simulation data, to reconstruct the transmitted symbol sequence without physical fiber access and without perturbing the legitimate signal. This eavesdropping mechanism is experimentally validated using a commercial Ciena WaveLogic-Ai coherent transceiver for ASK, BPSK, QPSK, and 16-QAM modulation formats at 4.26 GBaud and 8.56 GBaud over one- and two-span 75 km fiber systems, achieving zero symbol errors under high-OSNR conditions. Noise-aware training over OSNR from 20 to 60 dB maintains symbol error rate below 10⁻² for OSNR above 25–30 dB.
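The sketch below outlines the kind of bidirectional-GRU decoder described, mapping a sampled XPM-distorted probe trace to per-symbol logits. The layer sizes, samples-per-symbol, and random inputs are illustrative assumptions, not the network actually trained on split-step simulation data.

```python
# Schematic bidirectional-GRU symbol decoder for an XPM-distorted probe trace.
# All dimensions are placeholders; the point is the sequence-to-symbol mapping.
import torch
import torch.nn as nn

class XpmSymbolDecoder(nn.Module):
    def __init__(self, samples_per_symbol=8, hidden=64, n_classes=16):
        super().__init__()
        self.gru = nn.GRU(input_size=samples_per_symbol, hidden_size=hidden,
                          num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)   # one logit set per symbol

    def forward(self, x):            # x: (batch, n_symbols, samples_per_symbol)
        h, _ = self.gru(x)
        return self.head(h)          # (batch, n_symbols, n_classes)

model = XpmSymbolDecoder()
probe = torch.randn(4, 128, 8)       # 4 captured traces, 128 symbols each
symbols = model(probe).argmax(dim=-1)  # reconstructed 16-QAM symbol indices
print(symbols.shape)                   # torch.Size([4, 128])
```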
Together, these three contributions demonstrate that the coherent fiber optic system is a versatile physical instrument extending well beyond its role as a data transmission medium. The coherent receiver infrastructure — deployed for high-order modulation and data recovery — simultaneously enables the high-power OFC laser to serve as a practical multi-wavelength transmitter source, and provides the complex field measurement capability through which fiber Kerr-effect nonlinearity can be exploited constructively for distributed link monitoring and, as a direct consequence, reveals an inherent physical-layer security exposure in shared fiber infrastructure. This unified perspective on the coherent system as both a transmission platform and a general-purpose measurement instrument has direct relevance to the design of spectrally efficient, self-monitoring, and physically secure optical interconnects for next-generation AI computing networks.
Arman Ghasemi
Task-Oriented Data Communication and Compression for Timely Forecasting and Control in Smart Grids
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Morteza Hashemi, Chair
Alexandru Bardas
Prasad Kulkarni
Taejoon Kim
Zsolt Talata
Abstract
Advances in sensing, communication, and intelligent control have transformed power systems into data-driven smart grids, where forecasting and intelligent decision-making are essential components. Modern smart grids include distributed energy resources (DERs), renewable generation, battery energy storage systems, and large numbers of grid-edge devices that continuously generate time-series data. At the same time, increasing renewable penetration introduces substantial uncertainty in generation, net load, and market operations, while communication networks impose bandwidth, latency, and reliability constraints on timely data delivery. This dissertation addresses how time-series forecasting, data compression, and task-oriented wireless communication can be jointly designed for smart grid applications.
First, we study weather-aware distributed energy management in prosumer-centric microgrids and show that incorporating day-ahead weather information into decision-making improves battery dispatch and reduces the impact of renewable uncertainty. Second, we introduce forecasting-aware energy management in both wholesale and retail electricity markets, highlighting how renewable generation forecasting affects pricing, scheduling, and uncertainty mitigation. Third, we develop and evaluate deep learning methods for renewable generation forecasting, showing that Transformer-based models outperform recurrent baselines such as RNN and LSTM for wind and solar prediction tasks.
Building on this forecasting foundation, we develop a communication-efficient forecasting framework in which high-dimensional smart grid measurements are compressed into low-dimensional latent representations before transmission. This framework is extended into a task-oriented communication system that jointly optimizes data relevance and information timeliness, so that the receiver obtains compressed updates that remain useful for downstream forecasting tasks. Finally, we extend this framework to a distributed multi-node uplink setting, where multiple grid sensors share a bandwidth-limited channel, and develop a scheduling policy that improves both the timeliness and task-relevance of received updates.
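As a concrete illustration of coupling timeliness with task relevance, the sketch below implements a simple one-sensor-per-slot rule that weights an (assumed) per-sensor relevance score by the age of its last delivered update. It is our toy stand-in, not the scheduling policy developed in the dissertation.

```python
# Toy single-channel uplink scheduler: each slot, serve the sensor whose update
# currently matters most, measured as task relevance times age of its last
# delivered measurement. Relevance values here are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_slots = 5, 20
age = np.ones(n_sensors)                      # slots since last delivered update
relevance = rng.uniform(0.2, 1.0, n_sensors)  # assumed per-sensor task relevance

for t in range(n_slots):
    priority = relevance * age                # timeliness x task relevance
    chosen = int(np.argmax(priority))         # one sensor per slot (shared channel)
    age += 1                                  # every sensor ages...
    age[chosen] = 1                           # ...except the one just served
    print(f"slot {t:2d}: sensor {chosen}, ages {age.astype(int)}")
```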
Pardaz Banu Mohammad
Towards Early Detection of Alzheimer’s Disease based on Speech using Reinforcement Learning Feature Selection
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Arvin Agah, Chair
David Johnson
Sumaiya Shomaji
Dongjie Wang
Sara Wilson
Abstract
Alzheimer’s Disease (AD) is a progressive, irreversible neurodegenerative disorder and the leading cause of dementia worldwide, affecting an estimated 55 million people globally. The window of opportunity for intervention is demonstrably narrow, making reliable early-stage detection a clinical and scientific imperative. While current diagnostic techniques such as neuroimaging and cerebrospinal fluid (CSF) biomarkers carry well-defined limitations in scalability, cost, and access equity, speech has emerged as a compelling non-invasive proxy for cognitive function evaluation.
This work presents a novel approach that frames acoustic feature selection as a decision-making problem and implements it using deep reinforcement learning. Specifically, we use a Deep Q-Network (DQN) agent to navigate a high-dimensional space of over 6,000 acoustic features extracted using the openSMILE toolkit, dynamically constructing maximally discriminative and non-redundant feature subsets. To capture the latent structural dependencies among acoustic features, which classifier and wrapper methods have difficulty modeling, we introduce a Graph Convolutional Network (GCN)-based correlation-aware feature representation layer that operates as an auxiliary input to the DQN state encoder. Post-selection interpretability is reinforced through TF-IDF weighting and K-means clustering, which together yield both feature-level and cluster-level explanations that are clinically actionable. The framework is evaluated across five classifiers, namely support vector machines (SVM), logistic regression, XGBoost, random forest, and a feedforward neural network. We use 10-fold stratified cross-validation on established benchmark datasets, including the DementiaBank Pitt Corpus, Ivanova, and ADReSS challenge data. The proposed approach is benchmarked against state-of-the-art feature selection methods such as LASSO, recursive feature selection, and mutual information selectors. This research contributes three primary intellectual advances: (1) a graph-augmented state representation that encodes inter-feature relational structure within a reinforcement learning agent, (2) a clinically interpretable pipeline that bridges the gap between algorithmic performance and translational utility, and (3) a multilingual data approach for the reinforcement learning agent framework. This study has direct implications for equitable, low-cost, and scalable AD screening in both clinical and community settings.
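To make the selection loop concrete, the sketch below is a heavily simplified, bandit-style stand-in for the DQN agent: each action adds one feature, the reward is the cross-validated accuracy gain, and a per-feature value estimate is updated incrementally. The GCN state encoder, replay buffer, and clinical datasets are omitted; the synthetic data and logistic-regression scorer are placeholders.

```python
# Simplified reinforcement-style feature selection: epsilon-greedy choice over
# remaining features, reward = CV-accuracy improvement from adding the feature.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=40, n_informative=6,
                           random_state=0)
rng = np.random.default_rng(0)
q = np.zeros(X.shape[1])                     # value estimate per feature
selected, score = [], 0.0

def cv_acc(cols):
    return cross_val_score(LogisticRegression(max_iter=500),
                           X[:, cols], y, cv=5).mean()

for step in range(8):                        # grow a small subset
    candidates = [f for f in range(X.shape[1]) if f not in selected]
    if rng.random() < 0.2:                   # explore
        action = int(rng.choice(candidates))
    else:                                    # exploit current value estimates
        action = max(candidates, key=lambda f: q[f])
    new_score = cv_acc(selected + [action])
    reward = new_score - score               # accuracy gain from adding it
    q[action] += 0.5 * (reward - q[action])  # incremental value update
    selected.append(action)
    score = new_score
    print(f"step {step}: added feature {action}, CV accuracy {score:.3f}")
```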
Zhou Ni
Bridging Federated Learning and Wireless Networks: From Adaptive Learning to FL-driven System Optimization
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Morteza Hashemi, Chair
Fengjun Li
Van Ly Nguyen
Han Wang
Shawn Keshmiri
Abstract
Federated learning (FL) has emerged as a promising distributed machine learning framework that enables multiple devices to collaboratively train models without sharing raw data, thereby preserving privacy and reducing the need for centralized data collection. However, deploying FL in practical wireless environments introduces two major challenges. First, the data generated across distributed devices are often heterogeneous and non-IID, which makes a single global model insufficient for many users. Second, learning performance in wireless systems is strongly affected by communication constraints such as interference, unreliable channels, and dynamic resource availability. This PhD research aims to address these challenges by bridging FL methods and wireless networks.
In the first thrust, we develop personalized and adaptive FL methods given the underlying wireless link conditions. To this end, we propose channel-aware neighbor selection and similarity-aware aggregation in wireless device-to-device (D2D) learning environments. We further investigate the impacts of partial model update reception on FL performance. The overarching goal of the first thrust is to enhance FL performance under wireless constraints.
Next, we investigate the opposite direction and raise the question: How can FL-based distributed optimization be used for the design of next-generation wireless systems? To this end, we investigate communication-aware participation optimization in vehicular networks, where wireless resource allocation affects the number of clients that can successfully contribute to FL. We further extend this direction to integrated sensing and communication (ISAC) systems, where personalized FL (PFL) is used to support distributed beamforming optimization with joint sensing and communication objectives.
Overall, this research establishes a unified framework for bridging FL and wireless networks. As a future direction, this work will be extended to more realistic ISAC settings with dynamic spectrum access, where communication, sensing, scheduling, and learning performance must be considered jointly.
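The similarity-aware aggregation idea from the first thrust can be sketched in a few lines: a device weights each neighbor's model update by its cosine similarity to the device's own update, so dissimilar updates contribute less. The sketch below is a minimal illustration with toy vectors, not the exact aggregation rule developed in this research.

```python
# Similarity-aware aggregation sketch for a D2D setting: neighbor updates are
# weighted by (non-negative) cosine similarity to the device's own update.
import numpy as np

def similarity_aware_aggregate(own_update, neighbor_updates):
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    weights = np.array([max(cos(own_update, u), 0.0) for u in neighbor_updates])
    weights = np.append(weights, 1.0)                  # device trusts itself fully
    updates = neighbor_updates + [own_update]
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

rng = np.random.default_rng(0)
own = rng.standard_normal(10)
neighbors = [own + 0.1 * rng.standard_normal(10),      # similar neighbor
             -own + 0.1 * rng.standard_normal(10)]     # dissimilar neighbor
print(similarity_aware_aggregate(own, neighbors))
```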
Past Defense Notices
Charles Mohr
Design and Evaluation of Stochastic Processes as Physical Radar Waveforms
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Shannon Blunt, Chair
Christopher Allen
Carl Leuschen
James Stiles
Zsolt Talata
Abstract
Recent advances in waveform generation and in computational power have enabled the design and implementation of new, complex radar waveforms. Still, even with these advances, in a pulse-agile mode, where the radar transmits a unique waveform on every pulse, designing physically robust waveforms that achieve good autocorrelation sidelobes, are spectrally contained, and have a constant-amplitude envelope for high-power operation can require expensive computing equipment and can impede real-time operation. This work addresses this concern in the context of FM noise waveforms, which have been demonstrated in recent years, in both simulation and experiment, to achieve low autocorrelation sidelobes through the high dimensionality of coherent integration when operating in a pulse-agile mode. However, while they are effective, the existing approaches to design these waveforms require the optimization of each individual waveform, making them subject to the concern above.
This dissertation takes a different approach. Since these FM noise waveforms are meant to be noise-like in the first place, the waveforms here are instantiated as sample functions of a stochastic process that has been specially designed to produce spectrally contained, constant-amplitude waveforms with noise-like cancellation of sidelobes. This makes the waveform creation process little more computationally expensive than pulling numbers from a random number generator (RNG), since the optimization designs the waveform generating function (WGF) itself rather than each individual waveform. This goal is achieved by leveraging gradient descent optimization to reduce the expected frequency template error (EFTE) cost function for both the pulsed stochastic waveform generation (StoWGe) waveform model and a new CW version of StoWGe denoted CW-StoWGe. The effectiveness of these approaches and their ability to generate useful radar waveforms are analyzed using several stochastic waveform generation metrics developed here. The EFTE optimization is shown through simulation to produce WGFs that generate FM noise waveforms, in both pulsed and CW modes, achieving good spectral containment and autocorrelation sidelobes. The resulting waveforms are demonstrated in both loopback and open-air experiments to be robust to physical implementation.
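The sketch below illustrates the underlying idea, though not the optimized StoWGe generating function itself: each pulse is a constant-amplitude FM waveform whose phase is driven by freshly drawn, spectrally shaped noise, and the relative sidelobes fall as more unique pulses are coherently combined. The smoothing filter and scaling here are arbitrary placeholders.

```python
# Draw noise-like, constant-envelope FM pulses from an RNG and coherently sum
# their autocorrelations: the mainlobes add in phase while the random sidelobes
# average down, mimicking the pulse-agile combining effect described above.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_pulses = 256, 200

def fm_noise_pulse():
    freq = rng.standard_normal(n_samples)
    freq = np.convolve(freq, np.ones(8) / 8, mode="same")  # crude spectral shaping
    phase = 2 * np.pi * np.cumsum(freq) / 4                # integrate frequency
    return np.exp(1j * phase)                              # constant envelope

acf_coh = np.zeros(2 * n_samples - 1, dtype=complex)
for _ in range(n_pulses):
    s = fm_noise_pulse()
    acf_coh += np.correlate(s, s, mode="full")             # coherent sum of ACFs

rsl = 20 * np.log10(np.abs(acf_coh) / np.abs(acf_coh).max())
lags = np.arange(acf_coh.size) - (n_samples - 1)
sidelobes = rsl[np.abs(lags) > 16]                         # exclude the mainlobe
print(f"peak sidelobe after combining {n_pulses} pulses: {sidelobes.max():.1f} dB")
```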
Michael Stees
Optimization-based Methods in High-Order Mesh Generation and Untangling
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Suzanne Shontz, Chair
Perry Alexander
Prasad Kulkarni
Jim Miller
Weizhang Huang
Abstract
High-order numerical methods for solving PDEs have the potential to deliver higher solution accuracy at a lower cost than their low-order counterparts. To fully leverage these high-order computational methods, they must be paired with a discretization of the domain that accurately captures key geometric features. In the presence of curved boundaries, this requires a high-order curvilinear mesh. Consequently, there is a lot of interest in high-order mesh generation methods. The majority of such methods warp a high-order straight-sided mesh through the following three step process. First, they add additional nodes to a low-order mesh to create a high-order straight-sided mesh. Second, they move the newly added boundary nodes onto the curved domain (i.e., apply a boundary deformation). Finally, they compute the new locations of the interior nodes based on the boundary deformation. We have developed a mesh warping framework based on optimal weighted combinations of nodal positions. Within our framework, we develop methods for optimal affine and convex combinations of nodal positions, respectively. We demonstrate the effectiveness of the methods within our framework on a variety of high-order mesh generation examples in two and three dimensions. As with many other methods in this area, the methods within our framework do not guarantee the generation of a valid mesh. To address this issue, we have also developed two high-order mesh untangling methods. These optimization-based untangling methods formulate unconstrained optimization problems for which the objective functions are based on the unsigned and signed angles of the curvilinear elements. We demonstrate the results of our untangling methods on a variety of two-dimensional triangular meshes.
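A minimal stand-in for the warping step is shown below: boundary nodes are fixed at their deformed positions and each interior node is placed as a convex combination of its neighbors, here with uniform weights on a toy one-dimensional mesh, whereas the dissertation's framework optimizes the weights for general high-order meshes.

```python
# Toy 1D "mesh" warping: nodes 0 and 5 are boundary nodes (node 5 has been moved
# by the boundary deformation); interior nodes are constrained to equal the
# uniform-weight convex combination of their neighbors, giving a linear system.
import numpy as np

n = 6
neighbors = {i: [i - 1, i + 1] for i in range(1, n - 1)}
boundary_pos = {0: 0.0, 5: 1.3}              # boundary node 5 moved from 1.0 to 1.3

A = np.zeros((n, n)); b = np.zeros(n)
for i in range(n):
    if i in boundary_pos:                    # boundary: position is prescribed
        A[i, i], b[i] = 1.0, boundary_pos[i]
    else:                                    # interior: x_i = mean of neighbors
        A[i, i] = 1.0
        for j in neighbors[i]:
            A[i, j] = -1.0 / len(neighbors[i])

print(np.linalg.solve(A, b))                 # interior nodes follow the deformation
```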
Farzad Farshchi
Deterministic Memory Systems for Real-time Multicore Processors
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Heechul Yun, Chair
Esam Eldin Mohamed Aly
Prasad Kulkarni
Rodolfo Pellizzoni
Shawn Keshmiri
Abstract
With the emergence of autonomous systems such as self-driving cars and drones, the need for high-performance real-time embedded systems is increasing. On the other hand, the physics of autonomous systems constrains the size, weight, and power consumption (known as SWaP constraints) of the embedded systems. A solution that satisfies the need for high performance while meeting the SWaP constraints is to incorporate multicore processors in real-time embedded systems. However, unlike in unicore processors, in multicore processors the memory system is shared between the cores. As a result, memory system performance varies widely due to inter-core memory interference. This can lead to over-estimating the worst-case execution time (WCET) of the real-time tasks running on these processors and, therefore, under-utilizing the computation resources. In fact, recent studies have shown that real-time tasks can be slowed down by more than 300 times due to inter-core memory interference.
In this work, we propose novel software and hardware extensions to multicore processors to bound the inter-core memory interference in order to reduce the pessimism of WCET and to improve time predictability. We introduce a novel memory abstraction, which we call Deterministic Memory, that cuts across various layers of the system: the application, OS, and hardware. The key characteristic of Deterministic Memory is that the platform—the OS and hardware—guarantees small and tightly bounded worst-case memory access timing. Additionally, we propose a drop-in hardware IP that enables bounding the memory interference by per-core regulation of the memory access bandwidth at fine-grained time intervals. This new IP, which we call the Bandwidth Regulation Unit (BRU), does not require significant changes to the processor microarchitecture and can be seamlessly integrated with the existing microprocessors. Moreover, BRU has the ability to regulate the memory access bandwidth of multiple cores collectively to improve bandwidth utilization. As for future work, we plan to further improve bandwidth utilization by extending BRU to recognize memory requests accessing different levels of the memory hierarchy (e.g. LLC and DRAM). We propose to fully evaluate these extensions on open-source software and hardware and measure their effectiveness with realistic case studies.
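The regulation policy can be modeled in software as shown below; this is our simplified model of per-period budgeting, not the BRU hardware IP itself: each core receives a memory-access budget per short regulation period, and a core that exhausts its budget is stalled until the period rolls over, bounding the interference it can inflict.

```python
# Per-core, per-period memory bandwidth regulation, modeled in software.
BUDGETS = {0: 400, 1: 100}        # allowed memory accesses per regulation period

def simulate(demands, n_periods):
    """demands[core] = accesses the core tries to issue in each period."""
    served = {core: 0 for core in demands}
    for _ in range(n_periods):
        for core, want in demands.items():
            served[core] += min(want, BUDGETS[core])   # excess stalls to next period
    return served

# Core 1 is a memory hog; regulation caps it so core 0 keeps its bandwidth share.
print(simulate({0: 300, 1: 1000}, n_periods=10))       # {0: 3000, 1: 1000}
```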
Waqar Ali
Deterministic Scheduling of Real-Time Tasks on Heterogeneous Multicore Platforms
When & Where:
https://zoom.us/j/484640842?pwd=TDAyekxtRDVaTHF0K1NlbU5wNFVtUT09 - The password for the meeting is 005158.
Committee Members:
Heechul Yun, Chair
Esam Eldin Mohamed Aly
Drew Davidson
Prasad Kulkarni
Shawn Keshmiri
Abstract
Scheduling of real-time tasks involves analytically determining whether each task in a group of periodic tasks can finish before its deadline. This problem is well understood for unicore platforms, and there are exact schedulability tests that can be used for this purpose. However, on multicore platforms, sharing of hardware resources between simultaneously executing real-time tasks creates non-deterministic coupling between them based on their use of the shared hardware resource(s), which significantly complicates the schedulability analysis. The standard practice is to over-estimate the worst-case execution time (WCET) of the real-time tasks by a constant factor (e.g., 2x) when determining schedulability on these platforms. Although widely used, this practice has two serious flaws. First, it can make the schedulability analysis overly pessimistic because tasks do not all interfere with each other equally. Second, recent findings have shown that tasks affected by shared-resource interference can experience extreme (e.g., >300x) WCET increases on commercial off-the-shelf (COTS) multicore platforms, in which case a schedulability analysis incorporating a blanket interference factor of 2x for every task cannot give accurate results. Apart from the problem of WCET estimation, the established schedulability analyses for multicore platforms are inherently pessimistic due to the effect of carry-in jobs from high-priority tasks. Finally, the increasing integration of hardware accelerators (e.g., GPUs) on SoCs complicates the problem further because scheduling on these devices differs from traditional CPU scheduling.
We propose a novel approach to scheduling real-time tasks on heterogeneous multicore platforms, with the aim of increased determinism and utilization in the online execution of real-time tasks and decreased pessimism in the offline schedulability analysis. Under this framework, we statically group different real-time tasks into a single scheduling entity called a virtual-gang. Once formed, these virtual-gangs are executed one at a time, with strict regulation of interference from other sources using state-of-the-art techniques for performance isolation on multicore platforms. Using this idea, we achieve three goals. First, we limit shared-resource interference, which can now exist only between tasks that are part of the same virtual-gang. Second, due to the one-gang-at-a-time policy, we transform the complex problem of scheduling real-time tasks on multicore platforms into the simple and well-understood problem of scheduling these tasks on unicore platforms. Third, we demonstrate that scheduling on integrated GPUs is easy to incorporate into our framework while preserving the determinism of the overall system. We show that the virtual-gang formation problem can be modeled as an optimization problem and present algorithms for solving it with different trade-offs. We propose to fully implement this framework in the open-source Linux kernel and evaluate it both analytically, using generated tasksets, and empirically, with realistic case studies.
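One payoff of the one-gang-at-a-time policy is that each virtual-gang can be checked with the classical unicore response-time analysis, sketched below; the (WCET, period) pairs are illustrative, and the gang-formation optimization itself is not shown.

```python
# Classical fixed-priority response-time analysis, treating each virtual-gang
# as a single task on a unicore: R = C + sum over higher-priority gangs of
# ceil(R / T_j) * C_j, iterated to a fixed point.
import math

def response_time(task_idx, tasks):
    """tasks: list of (wcet, period) in priority order (index 0 = highest)."""
    C, T = tasks[task_idx]
    R = C
    while True:
        interference = sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:task_idx])
        R_next = C + interference
        if R_next == R:
            return R                   # converged
        if R_next > T:
            return None                # deadline (= period) missed
        R = R_next

gangs = [(2, 10), (3, 15), (5, 40)]    # each virtual-gang as one unicore task
for i, (C, T) in enumerate(gangs):
    print(f"gang {i}: response time {response_time(i, gangs)} (period {T})")
```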
Amir Modarresi
Network Resilience Architecture and Analysis for Smart Homes
When & Where:
https://kansas.zoom.us/j/228154773
Committee Members:
Victor Frost, Chair
Morteza Hashemi
Fengjun Li
Bo Luo
John Symons
Abstract
The Internet of Things (IoT) is rapidly extending into every aspect of human life, including healthcare, homes, cities, and driverless vehicles, making humans more dependent on the Internet and related infrastructure. While many researchers have studied the structure of the Internet, which is resilient as a whole, new studies are required to investigate the resilience of the edge networks in which people and "things" connect to the Internet. Since the range of service requirements varies at the edge of the network, a wide variety of technologies with different topologies are involved. Although the heterogeneity of technologies in edge networks can improve robustness through the diversity of mechanisms, other issues, such as connectivity among the utilized technologies and cascading failures, do not behave as they would in a simple network. Therefore, regardless of the size of networks at the edge, the structure of these networks is complicated and requires appropriate study.
In this dissertation, we propose an abstract model for smart homes, as part of one of the fast-growing networks at the edge, to illustrate the heterogeneity and complexity of the network structure. As the next step, we make two instances of the abstract smart home model and perform a graph-theoretic analysis to recognize the fundamental behavior of the network to improve its robustness. During the process, we introduce a formal multilayer graph model to highlight the structures, topologies, and connectivity of various technologies at the edge networks and their connections to the Internet core. Furthermore, we propose another graph model, technology interdependence graph, to represent the connectivity of technologies. This representation shows the degree of connectivity among technologies and illustrates which technologies are more vulnerable to link and node failures.
Moreover, the dominant topologies at the edge change the node and link vulnerability, which can be used to apply worst-case scenario attacks. Restructuring of the network by adding new links associated with various protocols to maximize the robustness of a given network can have distinctive outcomes for different robustness metrics. However, typical centrality metrics usually fail to identify important nodes in multi-technology networks such as smart homes. We propose four new centrality metrics to improve the process of identifying important nodes in multi-technology networks and recognize vulnerable nodes. Finally, we study over 1000 different smart home topologies to examine the resilience of the networks with typical and the proposed centrality metrics.
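The style of analysis is illustrated below on a toy smart-home topology using standard centrality from networkx; the four centrality metrics proposed in the dissertation additionally account for the technologies to which nodes and links belong, and are not reproduced here.

```python
# Toy smart-home graph: WiFi devices hang off the router, Zigbee devices reach
# it via a hub. Standard betweenness centrality ranks the gateway-like nodes,
# but it ignores which protocol each link uses -- the gap the proposed metrics
# are designed to close.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("internet", "router"),
                  ("router", "laptop"), ("router", "phone"),      # WiFi
                  ("router", "zigbee_hub"),
                  ("zigbee_hub", "bulb"), ("zigbee_hub", "lock"),  # Zigbee
                  ("zigbee_hub", "thermostat")])

between = nx.betweenness_centrality(G)
for node, score in sorted(between.items(), key=lambda kv: -kv[1]):
    print(f"{node:12s} betweenness = {score:.2f}")
```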
Qiaozhi Wang
Towards the Understanding of Private Content -- Content-based Privacy Assessment and Protection in Social Networks
When & Where:
246 Nichols Hall
Committee Members:
Bo Luo, Chair
Fengjun Li
Guanghui Wang
Heechul Yun
Prajna Dhar
Abstract
In the wake of the Facebook data breach scandal, users have begun to realize how vulnerable their personal data is and how blindly they trust the online social networks (OSNs), giving them an inordinate amount of private data that touches on unlimited areas of their lives. In particular, studies show that users sometimes reveal too much information or unintentionally release regretful messages, especially when they are careless, emotional, or unaware of privacy risks. Additionally, friends on social media platforms are also found to be adversarial and may leak one’s private information. Threats from within users’ friend networks – insider threats by humans or bots – may be more concerning because they are much less likely to be mitigated through existing solutions, e.g., the use of privacy settings. Therefore, we argue that the key component of privacy protection in social networks is protecting sensitive/private content, i.e., privacy as having the ability to control the dissemination of information. A mechanism to automatically identify potentially sensitive/private posts and alert users before they are posted is urgently needed.
In this dissertation, we propose a context-aware, text-based quantitative model for private information assessment, namely PrivScore, which is expected to serve as the foundation of a privacy leakage alerting mechanism. We first solicit diverse opinions on the sensitiveness of private information from crowdsourcing workers, and examine the responses to discover a perceptual model behind the consensuses and disagreements. We then develop a computational scheme using deep neural networks to compute a context-free PrivScore (i.e., the “consensus” privacy score among average users). Finally, we integrate tweet histories, topic preferences and social contexts to generate a personalized context-aware PrivScore. This privacy scoring mechanism could be employed to identify potentially private messages and alert users to think again before posting them to OSNs. Such a mechanism could also benefit non-human users such as social media chatbots.
Mohammad Saad Adnan
Corvus: Integrating Blockchain with Internet of Things Towards a Privacy Preserving, Collaborative and Accountable, Surveillance System in a Smart Community
When & Where:
246 Nichols Hall
Committee Members:
Bo Luo, Chair
Alex Bardas
Fengjun Li
Abstract
The Internet of Things is a rapidly growing field that offers improved data collection, analysis, and automation as solutions for everyday problems. A smart city is one major example where these solutions can be applied to the problems of urbanization. While these solutions can help improve citizens' quality of life, there are always security and privacy risks. Data collected in a smart city can infringe upon the privacy of users and reveal potentially harmful information. One example is a surveillance system in a smart city. Research shows that people are less likely to commit crimes if they are being watched, and video footage can also be used by law enforcement to track and stop criminals. But the same footage can be harmful if it is accessible to untrusted users: a malicious user who gains access to a surveillance system can potentially use that information to harm others. Existing methods can encrypt the video feed, but then it is only accessible to the system owner. Polls show that public opinion of surveillance systems is declining, even when they provide increased security, because of the lack of transparency in these systems. Therefore, it is vital for the system to serve its intended purpose while also preserving privacy and holding malicious users accountable.
To help resolve these issues with privacy & accountability and to allow for collaboration, we propose Corvus, an IoT surveillance system that targets smart communities. Corvus is a collaborative, blockchain-based surveillance system that uses context-based image captioning to anonymously describe events & people detected. These anonymous captions are stored on the immutable blockchain and are accessible by other users. If they find the description from another camera relevant to their own, they can request the raw video footage if necessary. This system supports collaboration between cameras from different networks, such as between two neighbors with their own private camera networks. This work explores the design of this system and how it can be used as a privacy-preserving, but translucent & accountable approach to smart-city surveillance. Our contributions include exploring a novel approach to anonymizing detected events and designing the surveillance system to be privacy-preserving and collaborative.
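The role of the ledger can be sketched with a bare-bones append-only hash chain, as below; Corvus's actual blockchain, consensus mechanism, and captioning model are far richer, so this is only meant to show how anonymized captions become tamper-evident once chained.

```python
# Minimal append-only hash chain of anonymized captions: each block hashes its
# body together with the previous block's hash, so altering any stored caption
# invalidates every later block during verification.
import hashlib
import json
import time

def add_block(chain, caption):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"caption": caption, "time": time.time(), "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("caption", "time", "prev")}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, "person in red jacket enters frame from the east")
add_block(chain, "unattended package left near doorway")
print("chain valid:", verify(chain))   # tampering with any caption breaks this
```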
Sandip Dey
Analysis of Performance Overheads in DynamoRIO Binary Translator
When & Where:
2001 B Eaton Hall
Committee Members:
Prasad Kulkarni, Chair
Jerzy Grzymala-Busse
Esam Eldin Mohamed Aly
Abstract
Dynamic binary translation is the process of translating instruction code from one architecture to another while it executes, i.e., dynamically. As modern applications become larger, more complex, and more dynamic, the tools to manipulate these programs are also becoming increasingly complex. DynamoRIO is one such dynamic binary translation tool that targets the most common IA-32 (a.k.a. x86) architecture on the most popular operating systems, Windows and Linux. DynamoRIO supports applications ranging from program analysis and understanding to profiling, instrumentation, optimization, improving software security, and more. However, even with all of these optimization techniques, DynamoRIO still has limitations in performance and memory usage, which restrict deployment scalability. The goal of my thesis is to break down the various aspects that contribute to the overhead burden and evaluate which factors contribute to it directly. This thesis will discuss all of these factors in further detail. If the process can be streamlined, this tool will become more viable for widespread adoption in a variety of areas. We have used the industry-standard MiBench benchmarks to evaluate in detail the amount and distribution of the overhead in DynamoRIO. Our statistics from the experiments show that DynamoRIO executes a large number of additional instructions when compared to native execution of the application. Furthermore, these additional instructions are involved in building basic blocks, linking, trace creation, and resolution of indirect branches, all of which in turn contribute to frequent exits from the code cache. We will discuss all of these overheads in detail, show instruction statistics for each overhead, and finally present the observations and analysis in this defense.
Eric Schweisberger
Optical Limiting via Plasmonic Parametric Absorbers
When & Where:
2001 B Eaton Hall
Committee Members:
Alessandro Salandrino, Chair
Kenneth Demarest
Rongqing Hui
Abstract
Optical sensors are increasingly prevalent devices whose costs tend to increase with their sensitivity. A gain in sensitivity is typically associated with fragility, rendering expensive devices vulnerable to threats of high-intensity illumination. These potential costs and even security risks have generated interest in devices that maintain linear transparency under tolerable levels of illumination but can quickly become opaque when a threshold is exceeded. Such a device is deemed an optical limiter. A great deal of research has been performed over the last few decades on optical nonlinearities and their efficacy in limiting. This work provides an overview of the existing literature and evaluates the applicability of known limiting materials to threats that vary in both temporal and spectral width. Additionally, we introduce the concept of plasmonic parametric resonance (PPR) and its potential for devising a new limiting material, the plasmonic parametric absorber (PPA). We show that this novel material exhibits reverse saturable absorption behavior and promises to be an effective addition to the optical limiter design toolkit.