Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.
Upcoming Defense Notices
Andrew Riachi
An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun
Abstract
Web browsers are notorious for consuming large amounts of memory. Yet, they have become the dominant framework for writing GUIs because the web languages are ergonomic for programmers and have a cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which have to run on resource-constrained devices, are running a web browser under the hood. Therefore, it is important to keep the memory consumption of web browsers as low as practicable.
In this thesis, we investigate the memory consumption of web browsers, in particular compared to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments which suggest that most of the extra memory usage compared to native applications could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
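For illustration, the kind of data smaps-profiler aggregates can be sampled with a minimal script that sums the proportional set size (PSS) reported in /proc/<pid>/smaps on Linux; this sketch is only an assumption about one possible usage of that interface, not the thesis tool itself.

# Minimal sketch: sum proportional set size (PSS) for one Linux process.
# Not the smaps-profiler tool from the thesis, just an illustration of the
# /proc/<pid>/smaps interface it builds on.
import sys

def total_pss_kib(pid):
    total = 0
    with open(f"/proc/{pid}/smaps") as f:      # one entry per memory mapping
        for line in f:
            if line.startswith("Pss:"):        # e.g. "Pss:        1234 kB"
                total += int(line.split()[1])
    return total

if __name__ == "__main__":
    pid = sys.argv[1] if len(sys.argv) > 1 else "self"
    print(f"Total PSS for {pid}: {total_pss_kib(pid)} KiB")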
Elizabeth Wyss
A New Frontier for Software Security: Diving Deep into npm
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker
Abstract
Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week.
However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.
This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains.
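As a toy illustration of two of the phenomena above, the following sketch flags install-time lifecycle scripts and hard-coded URLs in an unpacked npm package directory; it is a sketch under assumed inputs, not the tooling developed in this research.

# Illustrative sketch: flag install-time scripts and hard-coded URLs in an
# unpacked npm package directory (not the dissertation's actual tooling).
import json, os, re, sys

URL_RE = re.compile(r"https?://[^\s\"']+")

def scan_package(pkg_dir):
    findings = []
    with open(os.path.join(pkg_dir, "package.json")) as f:
        scripts = json.load(f).get("scripts", {})
    for hook in ("preinstall", "install", "postinstall"):
        if hook in scripts:                    # runs automatically at install time
            findings.append((hook, scripts[hook]))
    for root, _, files in os.walk(pkg_dir):
        for name in files:
            if name.endswith(".js"):
                with open(os.path.join(root, name), errors="ignore") as f:
                    for url in URL_RE.findall(f.read()):
                        findings.append(("hard-coded URL", url))
    return findings

if __name__ == "__main__":
    for kind, detail in scan_package(sys.argv[1]):
        print(f"{kind}: {detail}")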
Alfred Fontes
Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen
Abstract
Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform simultaneous radar and communications service. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements between the sensing and information-bearing signals.
A novel waveform-based DFRC approach is phase attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that designs the radar component of a PARC signal so that the expected power spectral density (PSD) of the PARC DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the degradation of PARC radar performance caused by spectral growth due to the communications signal.
The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.
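For readers unfamiliar with the PARC construction, the following toy sketch attaches a random communication phase sequence to a chirp and compares a crude measure of spectral extent against the LFM-only pulse; all parameter values are arbitrary, and the optimization and trade-space methodology above is not implied.

# Rough illustration of phase-attached radar communications (PARC): a
# communication phase sequence is added to a chirp's phase. Parameters are
# arbitrary; this is not the optimization framework of the thesis.
import numpy as np

fs, T, B = 200e6, 10e-6, 50e6             # sample rate, pulse width, chirp bandwidth
t = np.arange(int(fs * T)) / fs
radar_phase = np.pi * (B / T) * t**2      # linear FM radar component

rng = np.random.default_rng(0)
sym_rate = 5e6
symbols = rng.choice([0, np.pi/2, np.pi, 3*np.pi/2],
                     size=int(np.ceil(T * sym_rate)))   # QPSK-like phase values
comm_phase = symbols[(t * sym_rate).astype(int)]        # hold each symbol

parc = np.exp(1j * (radar_phase + comm_phase))          # constant-envelope PARC pulse
lfm = np.exp(1j * radar_phase)

for name, sig in [("LFM only", lfm), ("PARC", parc)]:
    psd = np.abs(np.fft.fftshift(np.fft.fft(sig)))**2
    occupied = np.mean(psd > 0.01 * psd.max())          # crude spectral-extent metric
    print(f"{name}: fraction of bins above -20 dB = {occupied:.2f}")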
Qua Nguyen
Hybrid Array and Privacy-Preserving Signaling Optimization for NextG Wireless Communications
When & Where:
Zoom Defense, please email jgrisafe@ku.edu for link.
Committee Members:
Erik Perrins, Chair
Morteza Hashemi
Zijun Yao
Taejoon Kim
KC Kong
Abstract
This PhD research tackles two critical challenges in NextG wireless networks: hybrid precoder design for wideband sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and privacy-preserving federated learning (FL) over wireless networks.
In the first part, we propose a novel hybrid precoding framework that integrates true-time delay (TTD) devices and phase shifters (PS) to counteract the beam squint effect, a significant challenge in wideband sub-THz massive MIMO systems that leads to considerable loss in array gain. Unlike previous methods that design only the TTD values while fixing the PS values and assuming unbounded time delays, our approach jointly optimizes the TTD and PS values under realistic time delay constraints. We determine the minimum number of TTD devices required to achieve a target array gain using our proposed approach. Then, we extend the framework to multi-user wideband systems and formulate a hybrid array optimization problem aiming to maximize the minimum data rate across users. This problem is decomposed into two sub-problems: fair subarray allocation, solved via continuous domain relaxation, and subarray gain maximization, addressed via a phase-domain transformation.
The second part focuses on preserving privacy in FL over wireless networks. First, we design a differentially private FL algorithm that applies time-varying noise variance perturbation. Taking advantage of existing wireless channel noise, we jointly design the differential privacy (DP) noise variances and the users' transmit power to resolve the tradeoffs between privacy and learning utility. Next, we tackle two critical challenges within FL networks: (i) privacy risks arising from model updates and (ii) reduced learning utility due to quantization heterogeneity. Prior work typically addresses only one of these challenges because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. We aim to improve the learning utility of a privacy-preserving FL framework that allows clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) that ensures a DP guarantee and minimal quantization distortion. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Lastly, inspired by the information-theoretic rate-distortion framework, a privacy-distortion tradeoff problem is formulated to minimize privacy loss under a given maximum allowable quantization distortion. The optimal solution to this problem is identified, revealing that the privacy loss decreases as the maximum allowable quantization distortion increases, and vice versa.
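As a simplified illustration of the privacy mechanism underlying this line of work, the sketch below clips and perturbs a single client update with the standard Gaussian mechanism; the time-varying noise design and stochastic quantizer proposed here are not shown, and the clip norm and noise multiplier are arbitrary example values.

# Simplified illustration of privatizing one FL client update with the
# standard Gaussian mechanism (clip, then add noise). The time-varying noise
# design and stochastic quantizer of the dissertation are not shown.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    local_update = rng.normal(size=1000)        # stand-in for a model-weight delta
    private_update = privatize_update(local_update, rng=rng)
    print("pre-clip norm :", np.linalg.norm(local_update))
    print("post-DP norm  :", np.linalg.norm(private_update))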
This research advances hybrid array optimization for wideband sub-THz massive MIMO and introduces novel algorithms for privacy-preserving quantized FL with diverse precision. These contributions enable high-throughput wideband MIMO communication systems and privacy-preserving AI-native designs, aligning with the performance and privacy protection demands of NextG networks.
Arin Dutta
Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao
Abstract
As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.
Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path, and it offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. Also, DRA provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient in OSNR improvement because the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps to reduce the nonlinear effect on the optical signal and improves OSNR by more uniformly distributing the Raman gain along the transmission span.
The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Another concern of FW DRA is the rise in signal optical power near the start of the fiber span, which leads to an increase in the nonlinear phase shift of the signal. These factors, including RIN transfer-induced noise and nonlinear noise, contribute to the degradation of system performance at the receiver in FW DRA systems.
As the performance of DRA with backward pumping is well understood, with relatively low impact of RIN transfer, our research focuses on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of the 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. Then the performance of dual-order FW Raman configurations is compared with that of single-order Raman pumping to understand trade-offs among system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase of linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, there should be a more stringent requirement on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in efficient fiber-optic communication. Also, the system performance analysis reveals that higher baud rate systems, like those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.
Audrey Mockenhaupt
Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Patrick McCormick, Chair
Shannon Blunt
Jon Owen
Abstract
As machine learning (ML), artificial intelligence (AI), and deep learning continue to advance, their applications become more diverse – one such application is synthetic aperture radar (SAR) automatic target recognition (ATR). These SAR ATR networks use different forms of deep learning, such as convolutional neural networks (CNN), to classify targets in SAR imagery. An emerging research area of SAR is dual function radar communication (DFRC), which performs both radar and communications functions using a single co-designed modulation. The utilization of DFRC emissions for SAR imaging impacts image quality, thereby influencing SAR ATR network training. Here, using the Civilian Vehicle Data Dome dataset from the AFRL, SAR ATR networks are trained and evaluated with simulated data generated using Gaussian Minimum Shift Keying (GMSK) and Linear Frequency Modulation (LFM) waveforms. The networks are used to compare how the target classification accuracy of the ATR network differs between DFRC (i.e., GMSK) and baseline (i.e., LFM) emissions. Furthermore, as is common in pulse-agile transmission structures, an effect known as 'range sidelobe modulation' is examined, along with its impact on SAR ATR. Finally, it is shown that a SAR ATR network can be trained for GMSK emissions using existing LFM datasets via two types of data augmentation.
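For context on the two waveform classes compared above, the following toy generator produces a GMSK (Gaussian-filtered MSK) baseband waveform; the parameters are arbitrary, and this is not the simulation pipeline used in the thesis.

# Toy GMSK generator (Gaussian-filtered MSK) for context on the DFRC waveform
# discussed above. Parameters are arbitrary example values.
import numpy as np

def gmsk(bits, sps=8, bt=0.3):
    nrz = np.repeat(2 * np.asarray(bits) - 1, sps).astype(float)
    t = np.arange(-2 * sps, 2 * sps + 1) / sps         # Gaussian filter span: 4 symbols
    g = np.exp(-2 * (np.pi * bt * t) ** 2 / np.log(2))
    g /= g.sum()                                       # unit area -> +/- pi/2 per symbol
    freq = np.convolve(nrz, g, mode="same")
    phase = (np.pi / 2) * np.cumsum(freq) / sps
    return np.exp(1j * phase)                          # constant-envelope waveform

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s = gmsk(rng.integers(0, 2, 256))
    print("samples:", s.size,
          "| envelope deviation:", float(np.max(np.abs(np.abs(s) - 1.0))))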
Past Defense Notices
Waqar Ali
Deterministic Scheduling of Real-Time Tasks on Heterogeneous Multicore Platforms
When & Where:
https://zoom.us/j/484640842?pwd=TDAyekxtRDVaTHF0K1NlbU5wNFVtUT09 - The password for the meeting is 005158.
Committee Members:
Heechul Yun, Chair
Esam Eldin Mohamed Aly
Drew Davidson
Prasad Kulkarni
Shawn Keshmiri
Abstract
Scheduling of real-time tasks involves analytically determining whether each task in a group of periodic tasks can finish before its deadline. This problem is well understood for unicore platforms, and there are exact schedulability tests which can be used for this purpose. However, in multicore platforms, sharing of hardware resources between simultaneously executing real-time tasks creates non-deterministic coupling between them, based on their requirements for the shared hardware resource(s), which significantly complicates the schedulability analysis. The standard practice is to over-estimate the worst-case execution time (WCET) of the real-time tasks by a constant factor (e.g., 2x) when determining schedulability on these platforms. Although widely used, this practice has two serious flaws. Firstly, it can make the schedulability analysis overly pessimistic because all tasks do not interfere with each other equally. Secondly, recent findings have shown that tasks that are affected by shared resource interference can experience extreme (e.g., >300x) WCET increases on commercial-off-the-shelf (COTS) multicore platforms, in which case a schedulability analysis incorporating a blanket interference factor of 2x for every task cannot give accurate results. Apart from the problem of WCET estimation, the established schedulability analyses for multicore platforms are inherently pessimistic due to the effect of carry-in jobs from high-priority tasks. Finally, the increasing integration of hardware accelerators (e.g., GPUs) on SoCs complicates the problem further because of the nuances of scheduling on these devices, which is different from traditional CPU scheduling.
We propose a novel approach towards scheduling of real-time tasks on heterogeneous multicore platforms with the aim of increased determinism and utilization in the online execution of real-time tasks and decreased pessimism in the offline schedulability analysis. Under this framework, we propose to statically group different real-time tasks into a single scheduling entity called a virtual-gang. Once formed, these virtual-gangs are executed one at a time with strict regulation on interference from other sources, with the help of state-of-the-art techniques for performance isolation on multicore platforms. Using this idea, we can achieve three goals. Firstly, we can limit the effect of shared resource interference, which can then exist only between tasks that are part of the same virtual-gang. Secondly, due to the one-gang-at-a-time policy, we can transform the complex problem of scheduling real-time tasks on multicore platforms into the simple and well-understood problem of scheduling these tasks on unicore platforms. Thirdly, we can demonstrate that it is easy to incorporate scheduling on integrated GPUs into our framework while preserving the determinism of the overall system. We show that the virtual-gang formation problem can be modeled as an optimization problem and present algorithms for solving it with different trade-offs. We propose to fully implement this framework in the open-source Linux kernel and evaluate it both analytically using generated tasksets and empirically with realistic case studies.
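To make the reduction to unicore analysis concrete, a simplified sketch under assumed parameters (not the proposed framework or its gang-formation algorithms) can treat each virtual-gang as a single task with an aggregate WCET and run standard response-time analysis:

# Simplified sketch: treat each virtual-gang as one task (aggregate WCET) and
# apply standard unicore response-time analysis under rate-monotonic
# priorities. Not the proposed framework or its gang-formation optimization.

def gang_schedulable(gangs):
    """gangs: list of (aggregate_wcet, period); deadlines implicit (= period)."""
    gangs = sorted(gangs, key=lambda g: g[1])          # rate-monotonic priority order
    for i, (c_i, t_i) in enumerate(gangs):
        r = c_i
        while r <= t_i:                                # fixed-point response-time iteration
            interference = sum(-(-r // t_j) * c_j      # -(-r // t_j) == ceil(r / t_j)
                               for c_j, t_j in gangs[:i])
            r_next = c_i + interference
            if r_next == r:
                break
            r = r_next
        if r > t_i:
            return False                               # this gang misses its deadline
    return True

if __name__ == "__main__":
    # (aggregate WCET, period), e.g. in milliseconds -- example values only
    print("schedulable:", gang_schedulable([(2, 10), (3, 20), (10, 50)]))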
Amir Modarresi
Network Resilience Architecture and Analysis for Smart Homes
When & Where:
https://kansas.zoom.us/j/228154773
Committee Members:
Victor Frost, Chair
Morteza Hashemi
Fengjun Li
Bo Luo
John Symons
Abstract
The Internet of Things (IoT) is evolving rapidly into every aspect of human life, including healthcare, homes, cities, and driverless vehicles, making humans more dependent on the Internet and related infrastructure. While many researchers have studied the structure of the Internet, which is resilient as a whole, new studies are required to investigate the resilience of the edge networks in which people and "things" connect to the Internet. Since the range of service requirements varies at the edge of the network, a wide variety of technologies with different topologies are involved. Though the heterogeneity of the technologies at the edge networks can improve robustness through the diversity of mechanisms, other issues, such as connectivity among the utilized technologies and cascades of failures, do not have the same effect as in a simple network. Therefore, regardless of the size of networks at the edge, the structure of these networks is complicated and requires appropriate study.
In this dissertation, we propose an abstract model for smart homes, as part of one of the fast-growing networks at the edge, to illustrate the heterogeneity and complexity of the network structure. As the next step, we construct two instances of the abstract smart home model and perform a graph-theoretic analysis to recognize the fundamental behavior of the network and improve its robustness. During the process, we introduce a formal multilayer graph model to highlight the structures, topologies, and connectivity of various technologies at the edge networks and their connections to the Internet core. Furthermore, we propose another graph model, the technology-interdependence graph, to represent the connectivity of technologies. This representation shows the degree of connectivity among technologies and illustrates which technologies are more vulnerable to link and node failures.
Moreover, the dominant topologies at the edge change node and link vulnerability, which can be used to construct worst-case attack scenarios. Restructuring the network by adding new links associated with various protocols to maximize the robustness of a given network can have distinct outcomes for different robustness metrics. However, typical centrality metrics usually fail to identify important nodes in multi-technology networks such as smart homes. We propose four new centrality metrics to improve the process of identifying important nodes in multi-technology networks and to recognize vulnerable nodes. Finally, we study over 1000 different smart home topologies to examine the resilience of the networks with both the typical and the proposed centrality metrics.
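As a minimal illustration of this style of analysis, the sketch below builds an invented smart-home-like topology, projects it to a technology-interdependence graph, and reports standard centralities; the multilayer model and the four new metrics proposed in the dissertation are not shown.

# Minimal illustration of graph-based resilience analysis for a smart-home-like
# topology. The topology is invented and the metrics are standard NetworkX
# centralities, not the multilayer model or new metrics of the dissertation.
import networkx as nx

G = nx.Graph()
edges = [("gateway", "wifi_ap", "wifi"), ("wifi_ap", "camera", "wifi"),
         ("wifi_ap", "laptop", "wifi"), ("gateway", "zigbee_hub", "ethernet"),
         ("zigbee_hub", "bulb", "zigbee"), ("zigbee_hub", "sensor", "zigbee"),
         ("gateway", "isp_router", "ethernet")]
for u, v, tech in edges:
    G.add_edge(u, v, technology=tech)

# Technology-interdependence graph: technologies become nodes; two technologies
# are connected if some device carries edges of both.
T = nx.Graph()
for node in G.nodes:
    techs = {G.edges[node, nbr]["technology"] for nbr in G.neighbors(node)}
    for a in techs:
        for b in techs:
            if a < b:
                T.add_edge(a, b)

print("betweenness:", nx.betweenness_centrality(G))
print("articulation points:", list(nx.articulation_points(G)))
print("technology interdependence edges:", list(T.edges))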
Qiaozhi Wang
Towards the Understanding of Private Content -- Content-based Privacy Assessment and Protection in Social Networks
When & Where:
246 Nichols Hall
Committee Members:
Bo Luo, Chair
Fengjun Li
Guanghui Wang
Heechul Yun
Prajna Dhar
Abstract
In the wake of the Facebook data breach scandal, users begin to realize how vulnerable their personal data is and how blindly they trust the online social networks (OSNs) by giving them an inordinate amount of private data that touches on unlimited areas of their lives. In particular, studies show that users sometimes reveal too much information or unintentionally release regretful messages, especially when they are careless, emotional, or unaware of privacy risks. Additionally, friends on social media platforms are also found to be adversarial and may leak one's private information. Threats from within users' friend networks – insider threats by humans or bots – may be more concerning because they are much less likely to be mitigated through existing solutions, e.g., the use of privacy settings. Therefore, we argue that the key component of privacy protection in social networks is protecting sensitive/private content, i.e., privacy as having the ability to control the dissemination of information. A mechanism to automatically identify potentially sensitive/private posts and alert users before they are posted is urgently needed.
In this dissertation, we propose a context-aware, text-based quantitative model for private information assessment, namely PrivScore, which is expected to serve as the foundation of a privacy leakage alerting mechanism. We first solicit diverse opinions on the sensitivity of private information from crowdsourcing workers, and examine the responses to discover a perceptual model behind the consensuses and disagreements. We then develop a computational scheme using deep neural networks to compute a context-free PrivScore (i.e., the “consensus” privacy score among average users). Finally, we integrate tweet histories, topic preferences and social contexts to generate a personalized context-aware PrivScore. This privacy scoring mechanism could be employed to identify potentially-private messages and alert users to think again before posting them to OSNs. Such a mechanism could also benefit non-human users such as social media chatbots.
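As a toy illustration of content-based privacy scoring, far simpler than the deep-network PrivScore model described above, a bag-of-words logistic regression can map a post to a scalar score; the posts, vocabulary, and labels below are invented for the example.

# Toy content-based privacy scorer: bag-of-words + logistic regression.
# Far simpler than the PrivScore model; posts, vocabulary, and labels are invented.
import numpy as np

posts = ["my ssn is on this form", "great game last night",
         "new phone number starting tomorrow", "coffee with friends",
         "doctor says my test results came back", "watching a movie"]
labels = np.array([1, 0, 1, 0, 1, 0])           # 1 = raters judged the post private

vocab = sorted({w for p in posts for w in p.split()})
X = np.array([[p.split().count(w) for w in vocab] for p in posts], float)

w, b, lr = np.zeros(len(vocab)), 0.0, 0.5
for _ in range(500):                             # plain gradient descent
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = pred - labels
    w -= lr * X.T @ grad / len(posts)
    b -= lr * grad.mean()

def priv_score(text):
    x = np.array([text.split().count(v) for v in vocab], float)
    return float(1.0 / (1.0 + np.exp(-(x @ w + b))))

print(priv_score("my test results and phone number"))   # higher = more sensitive
print(priv_score("watching the game tonight"))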
Mohammad Saad Adnan
Corvus: Integrating Blockchain with Internet of Things Towards a Privacy Preserving, Collaborative and Accountable, Surveillance System in a Smart Community
When & Where:
246 Nichols Hall
Committee Members:
Bo Luo, Chair
Alex Bardas
Fengjun Li
Abstract
The Internet of Things is a rapidly growing field that offers improved data collection, analysis and automation as solutions for everyday problems. A smart-city is one major example where these solutions can be applied to issues with urbanization. While these solutions can help improve the quality of life of the citizens, there are always security & privacy risks. Data collected in a smart-city can infringe upon the privacy of users and reveal potentially harmful information. One example is a surveillance system in a smart city. Research shows that people are less likely to commit crimes if they are being watched. Video footage can also be used by law enforcement to track and stop criminals. But it can also be harmful if accessible to untrusted users. A malicious user who gains access to a surveillance system can potentially use that information to harm others. There are researched methods that can be used to encrypt the video feed, but then it is only accessible to the system owner. Polls show that public opinion of surveillance systems is declining, even if they provide increased security, because of the lack of transparency in the system. Therefore, it is vital for the system to serve its intended purpose while also preserving privacy and holding malicious users accountable.
To help resolve these issues with privacy & accountability and to allow for collaboration, we propose Corvus, an IoT surveillance system that targets smart communities. Corvus is a collaborative blockchain based surveillance system that uses context-based image captioning to anonymously describe events & people detected. These anonymous captions are stored on the immutable blockchain and are accessible by other users. If they find the description from another camera relevant to their own, they can request the raw video footage if necessary. This system supports collaboration between cameras from different networks, such as between two neighbors with their own private camera networks. This paper will explore the design of this system and how it can be used as a privacy-preserving, but translucent & accountable approach to smart-city surveillance. Our contributions include exploring a novel approach to anonymizing detected events and designing the surveillance system to be privacy-preserving and collaborative.
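The record-keeping idea can be sketched as a hash-chained log of anonymized captions; the field names are invented, and this is not the Corvus implementation, which targets an actual blockchain platform and context-based image captioning.

# Sketch of the record-keeping idea: a hash-chained log of anonymized event
# captions. Field names are invented; this is not the Corvus implementation.
import hashlib, json, time

class CaptionChain:
    def __init__(self):
        self.blocks = [{"index": 0, "prev": "0" * 64, "caption": "genesis",
                        "camera": None, "ts": time.time()}]

    def add_caption(self, camera_id, caption):
        prev_hash = self._hash(self.blocks[-1])
        block = {"index": len(self.blocks), "prev": prev_hash,
                 "caption": caption, "camera": camera_id, "ts": time.time()}
        self.blocks.append(block)
        return block

    @staticmethod
    def _hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def verify(self):
        return all(self.blocks[i]["prev"] == self._hash(self.blocks[i - 1])
                   for i in range(1, len(self.blocks)))

chain = CaptionChain()
chain.add_caption("cam-42", "person in red jacket enters driveway, 18:03")
chain.add_caption("cam-17", "unattended package at front door, 18:10")
print("chain valid:", chain.verify())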
Sandip Dey
Analysis of Performance Overheads in DynamoRIO Binary Translator
When & Where:
2001 B Eaton Hall
Committee Members:
Prasad Kulkarni, Chair
Jerzy Grzymala-Busse
Esam Eldin Mohamed Aly
Abstract
Dynamic binary translation is the process of translating instruction code from one architecture to another while it executes, i.e., dynamically. As modern applications become larger, more complex, and more dynamic, the tools to manipulate these programs are also becoming increasingly complex. DynamoRIO is one such dynamic binary translation tool that targets the most common IA-32 (a.k.a. x86) architecture on the most popular operating systems, Windows and Linux. DynamoRIO supports applications ranging from program analysis and understanding to profiling, instrumentation, optimization, improving software security, and more. However, even considering all of these optimization techniques, DynamoRIO still has limitations in performance and memory usage, which restrict deployment scalability. The goal of my thesis is to break down the various aspects which contribute to the overhead burden and evaluate which factors contribute to it most directly. This thesis will discuss all of these factors in further detail. If the process can be streamlined, this application will become more viable for widespread adoption in a variety of areas. We have used industry standard Mi benchmarks in order to evaluate in detail the amount and distribution of the overhead in DynamoRIO. Our statistics from the experiments show that DynamoRIO executes a large number of additional instructions when compared to the native execution of the application. Furthermore, these additional instructions are involved in building the basic blocks, linking, trace creation, and resolution of indirect branches, all of which in turn contribute to frequent exits from the code cache. We will discuss all of these overheads in detail, show statistics of instructions for each overhead, and finally present the observations and analysis in this defense.
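As a small illustration of the kind of measurement behind these statistics, retired instruction counts for a native run and the same program under DynamoRIO's drrun launcher can be compared with Linux perf; the command lines below are examples under the assumption that both tools are installed, not the exact instrumentation used in the thesis.

# Illustration of measuring translation overhead: compare retired instruction
# counts for a native run vs. the same program under DynamoRIO's drrun.
# Assumes Linux perf and drrun are installed; example command lines only.
import subprocess, sys

def retired_instructions(cmd):
    # "-x," makes perf emit CSV on stderr: value,unit,event,...
    out = subprocess.run(["perf", "stat", "-x", ",", "-e", "instructions", "--"] + cmd,
                         capture_output=True, text=True)
    for line in out.stderr.splitlines():
        fields = line.split(",")
        if len(fields) > 2 and fields[2].startswith("instructions"):
            return int(fields[0])
    raise RuntimeError("could not parse perf output")

if __name__ == "__main__":
    app = sys.argv[1:] or ["/bin/ls"]
    native = retired_instructions(app)
    translated = retired_instructions(["drrun", "--"] + app)
    print(f"native: {native}  under DynamoRIO: {translated}  "
          f"overhead: {translated / native:.2f}x")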
Eric Schweisberger
Optical Limiting via Plasmonic Parametric Absorbers
When & Where:
2001 B Eaton Hall
Committee Members:
Alessandro Salandrino, Chair
Kenneth Demarest
Rongqing Hui
Abstract
Optical sensors are increasingly prevalent devices whose costs tend to increase with their sensitivity. A hike in sensitivity is typically associated with fragility, rendering expensive devices vulnerable to threats of high intensity illumination. These potential costs and even security risks have generated interest in devices that maintain linear transparency under tolerable levels of illumination, but can quickly convert to opaque when a threshold is exceeded. Such a device is deemed an optical limiter. Copious amounts of research have been performed over the last few decades on optical nonlinearities and their efficacy in limiting. This work provides an overview of the existing literature and evaluates the applicability of known limiting materials to threats that vary in both temporal and spectral width. Additionally, we introduce the concept of plasmonic parametric resonance (PPR) and its potential for devising a new limiting material, the plasmonic parametric absorber (PPA). We show that this novel material exhibits a reverse saturable absorption behavior and promises to be an effective tool in the kit of optical limiter design.
Lumumba Harnett
Reduced Dimension Optimal and Adaptive Mismatch Processing for Interference Cancellation
When & Where:
246 Nichols Hall
Committee Members:
Shannon Blunt, Chair
Christopher Allen
Erik Perrins
James Stiles
Richard Hale
Abstract
Interference has been a subject of interest to the radar community for generations due to its ability to degrade performance. Commercial radars can experience radio frequency (RF) interference from a different RF service (such as radio broadcasting, television broadcasting, communications, satellites, etc.) if it operates simultaneously in the same spectrum. The RF spectrum is a finite asset that is regulated to mitigate interference and maximize resources. Recently, shared spectrum has been proposed to accommodate the growing commercial demand of communication systems. Airborne radars performing ground moving target indication (GMTI) encounter interference from clutter scattering that may mask slow-moving, low-power targets. Recent advancements in least-squares (LS) optimal and re-iterative minimum mean-square error (RMMSE) adaptive mismatch processing are proposed for GMTI and shared-spectrum applications. Each estimation technique reduces sidelobes, incurs less signal-to-noise loss, and causes less resolution degradation than windowing. For GMTI, LS and RMMSE filters are considered with angle-Doppler filters and pre-existing interference cancellation techniques for better detection performance. Application-specific reduced-rank versions of the algorithms are also introduced for real-time operation. RMMSE is further considered to separate radar and mobile communication systems operating in the same RF band to mitigate interference and information loss.
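As a minimal illustration of the mismatch-processing idea (standard least-squares only, with arbitrary example parameters; the reduced-dimension and RMMSE formulations of the dissertation are not shown), a mismatched filter can be designed to push range sidelobes below those of the matched filter:

# Illustration of a least-squares (LS) mismatched filter that trades a small
# SNR loss for lower range sidelobes than the matched filter. Waveform and
# filter length are arbitrary example values.
import numpy as np

N, M = 64, 192                                    # waveform length, filter length
n = np.arange(N)
s = np.exp(1j * np.pi * n ** 2 / N)               # simple polyphase chirp pulse

# Convolution matrix: (A @ w) is the filter output across all delay lags.
A = np.zeros((N + M - 1, M), complex)
for k in range(M):
    A[k:k + N, k] = s

d = np.zeros(N + M - 1, complex)
d[(N + M - 1) // 2] = 1.0                         # desired response: impulse at center lag

# Regularized least-squares mismatched filter.
w = np.linalg.solve(A.conj().T @ A + 1e-6 * np.eye(M), A.conj().T @ d)

def peak_sidelobe_db(resp):
    mag = np.abs(resp)
    k = int(mag.argmax())
    peak = mag[k]
    mag[max(0, k - 2):k + 3] = 0.0                # excise the mainlobe region
    return 20 * np.log10(mag.max() / peak)

matched = np.convolve(s, np.conj(s[::-1]))        # matched-filter (autocorrelation) response
print("matched filter  peak sidelobe: %.1f dB" % peak_sidelobe_db(matched))
print("LS mismatched   peak sidelobe: %.1f dB" % peak_sidelobe_db(A @ w))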
April Wade
Exploring Properties, Impact, and Deployment Mechanisms of Profile-Guided Optimizations in Static and Dynamic Compilers
When & Where:
2001 B Eaton Hall
Committee Members:
Prasad Kulkarni, Chair
Perry Alexander
Garrett Morris
Heechul Yun
Kyle Camarda
Abstract
Managed language virtual machines (VM) rely on dynamic or just-in-time (JIT) compilation to generate optimized native code at run-time to deliver high execution performance. Many VMs and JIT compilers collect profile data at run-time to enable profile-guided optimizations (PGO) that customize the generated native code to different program inputs. PGOs are generally considered integral for VMs to produce high-quality and performant native code. Likewise, many static, ahead-of-time (AOT) compilers employ PGOs to achieve peak performance, though they are less commonly employed in practice.
We propose a study that analyzes and quantifies the performance benefits of PGOs in both AOT and JIT environments, examines the importance of profiling data quantity and quality/accuracy in effectively guiding PGOs, and assesses the impact of individual PGOs on performance. Additionally, we propose an extension of the PGOs found in AOT compilers based on specialization and seek to perform a feasibility study to determine its viability.
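On the AOT side, the standard GCC profile-guided workflow that such a study would exercise looks roughly as follows; the paths and program names are placeholders, and the specific compilers and benchmarks of the proposed study are not implied.

# Sketch of the standard GCC profile-guided optimization (PGO) workflow that
# an AOT-side study would exercise. Paths and program names are placeholders.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

SRC, EXE = "app.c", "./app"

# 1. Instrumented build: the binary records execution profiles when run.
run(["gcc", "-O2", "-fprofile-generate", SRC, "-o", EXE])
# 2. Training run(s) on representative inputs produce .gcda profile files.
run([EXE, "training-input.txt"])
# 3. Optimized rebuild: the compiler consumes the profiles to guide
#    inlining, code layout, branch decisions, etc.
run(["gcc", "-O2", "-fprofile-use", SRC, "-o", EXE])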
Luyao Shang
Memory Based LT Encoders for Delay Sensitive Communications
When & Where:
246 Nichols Hall
Committee Members:
Erik Perrins, Chair
Shannon Blunt
Taejoon Kim
David Petr
Tyrone Duncan
Abstract
As the upcoming fifth-generation (5G) and future wireless networks are envisioned for areas such as augmented and virtual reality, industrial control, automated driving or flying, robotics, etc., the requirement of supporting ultra-reliable low-latency communications (URLLC) is more urgent than ever. From the channel coding perspective, URLLC requires codewords to be transmitted in finite block-lengths. In this regard, we propose novel encoding algorithms and analyze their performance behaviors for finite-length Luby transform (LT) codes.
Luby transform (LT) codes, the first practical realization and the fundamental core of fountain codes, play a key role in the fountain codes family. Recently, researchers have shown that the performance of LT codes for finite block-lengths can be improved by adding memory into the encoder. However, that work only utilizes one memory element, leaving whether and how to exploit more memory an open problem. To explore this unknown, in this work we propose an entire family of memory-based LT encoders and thoroughly analyze their performance behaviors over binary erasure channels and AWGN channels.
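For reference, a standard memoryless LT encoder with the robust soliton degree distribution can be sketched as follows; the memory-based encoder variants proposed in this work are not shown, and the parameters c and delta are arbitrary example values.

# Sketch of a standard (memoryless) LT encoder with the robust soliton degree
# distribution. The memory-based variants of this thesis are not shown;
# c and delta are arbitrary example values.
import numpy as np

def robust_soliton(k, c=0.1, delta=0.5):
    s = c * np.log(k / delta) * np.sqrt(k)
    rho = np.array([1.0 / k] + [1.0 / (i * (i - 1)) for i in range(2, k + 1)])
    tau = np.zeros(k)
    for i in range(1, int(k / s)):
        tau[i - 1] = s / (i * k)
    tau[int(k / s) - 1] = s * np.log(s / delta) / k
    dist = rho + tau
    return dist / dist.sum()

def lt_encode(source_symbols, n_out, rng=None):
    rng = rng or np.random.default_rng()
    k = len(source_symbols)
    degrees = rng.choice(np.arange(1, k + 1), size=n_out, p=robust_soliton(k))
    encoded = []
    for d in degrees:
        idx = rng.choice(k, size=d, replace=False)    # neighbors of this output symbol
        val = 0
        for i in idx:
            val ^= source_symbols[i]                  # XOR of the chosen source symbols
        encoded.append((sorted(idx.tolist()), val))
    return encoded

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.integers(0, 256, size=32).tolist()      # 32 one-byte source symbols
    out = lt_encode(src, n_out=40, rng=rng)
    print("first encoded symbol (neighbors, value):", out[0])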