Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check, and post the presentation announcement online.
Upcoming Defense Notices
Andrew Riachi
An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun
Abstract
Web browsers are notorious for consuming large amounts of memory. Yet, they have become the dominant framework for writing GUIs because the web languages are ergonomic for programmers and have a cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which have to run on resource-constrained devices, are running a web browser under the hood. Therefore, it is important to keep the memory consumption of web browsers as low as practicable.
In this thesis, we investigate the memory consumption of web browsers, in particular compared to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments which suggest that most of the extra memory usage compared to native applications could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
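The thesis's smaps-profiler itself is not shown here, but the core idea of aggregating per-mapping memory from Linux /proc/[pid]/smaps data can be sketched as follows. This is a hypothetical illustration: the function name and the sample dump are invented, not taken from the tool.

```python
# Sketch: summing the proportional set size (Pss) across all mappings
# in Linux smaps-format text. Pss apportions shared pages among the
# processes that share them, so the per-process totals add up sensibly.
import re

def total_pss_kb(smaps_text):
    """Sum the 'Pss:' fields (in kB) across every mapping in an smaps dump."""
    return sum(int(kb) for kb in re.findall(r"^Pss:\s+(\d+) kB", smaps_text, re.M))

# Invented two-mapping excerpt in the /proc/<pid>/smaps format:
sample = """\
00400000-00452000 r-xp 00000000 08:02 173521 /usr/bin/example
Size:                328 kB
Pss:                 300 kB
7f0000000000-7f0000021000 rw-p 00000000 00:00 0
Pss:                  64 kB
"""
print(total_pss_kb(sample))  # 364
```

On a live Linux system the same function could be fed `open(f"/proc/{pid}/smaps").read()`; summing Rss instead of Pss would double-count shared libraries across processes.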
Elizabeth Wyss
A New Frontier for Software Security: Diving Deep into npm
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker
Abstract
Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week.
However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.
This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains.
Alfred Fontes
Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen
Abstract
Dual-function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform radar and communications services simultaneously. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements of the sensing and information-bearing signals.
A novel waveform-based DFRC approach is phase-attached radar communications (PARC), in which a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the composite DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, mitigating the radar performance degradation caused by spectral growth due to the communications signal.
The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.
Qua Nguyen
Hybrid Array and Privacy-Preserving Signaling Optimization for NextG Wireless Communications
When & Where:
Zoom Defense, please email jgrisafe@ku.edu for link.
Committee Members:
Erik Perrins, Chair
Morteza Hashemi
Zijun Yao
Taejoon Kim
KC Kong
Abstract
This PhD research tackles two critical challenges in NextG wireless networks: hybrid precoder design for wideband sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and privacy-preserving federated learning (FL) over wireless networks.
In the first part, we propose a novel hybrid precoding framework that integrates true-time delay (TTD) devices and phase shifters (PS) to counteract the beam squint effect, a significant challenge in wideband sub-THz massive MIMO systems that leads to considerable loss in array gain. Unlike previous methods that design only the TTD values while keeping the PS values fixed and assuming unbounded time delays, our approach jointly optimizes the TTD and PS values under realistic time-delay constraints. We determine the minimum number of TTD devices required to achieve a target array gain using our proposed approach. Then, we extend the framework to multi-user wideband systems and formulate a hybrid array optimization problem aiming to maximize the minimum data rate across users. This problem is decomposed into two sub-problems: fair subarray allocation, solved via continuous domain relaxation, and subarray gain maximization, addressed via a phase-domain transformation.
The second part focuses on preserving privacy in FL over wireless networks. First, we design a differentially private FL algorithm that applies time-varying noise variance perturbation. Taking advantage of existing wireless channel noise, we jointly design the differential privacy (DP) noise variances and the users' transmit powers to resolve the tradeoff between privacy and learning utility. Next, we tackle two critical challenges within FL networks: (i) privacy risks arising from model updates and (ii) reduced learning utility due to quantization heterogeneity. Prior work typically addresses only one of these challenges because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. We improve the learning utility of a privacy-preserving FL scheme that allows clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) that ensures a DP guarantee and minimal quantization distortion. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Lastly, inspired by the information-theoretic rate-distortion framework, a privacy-distortion tradeoff problem is formulated to minimize privacy loss under a given maximum allowable quantization distortion. The optimal solution to this problem is identified, revealing that the privacy loss decreases as the maximum allowable quantization distortion increases, and vice versa.
This research advances hybrid array optimization for wideband sub-THz massive MIMO and introduces novel algorithms for privacy-preserving quantized FL with diverse precision. These contributions enable high-throughput wideband MIMO communication systems and privacy-preserving AI-native designs, aligning with the performance and privacy protection demands of NextG networks.
Arin Dutta
Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao
Abstract
As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.
Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path. It offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the Erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient in OSNR improvement: the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps to reduce the nonlinear effects on the optical signal and improves OSNR by distributing the Raman gain more uniformly along the transmission span.
The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Additionally, another concern of FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the non-linear phase shift of the signal. These factors, including RIN transfer-induced noise and non-linear noise, contribute to the degradation of system performance in FW DRA systems at the receiver.
As the performance of DRA with backward pumping is well understood, with relatively low impact of RIN transfer, our research focuses on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. The performance of the dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs among system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase in linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in efficient fiber-optic communication.
The system performance analysis also reveals that higher-baud-rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.
Audrey Mockenhaupt
Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Patrick McCormick, Chair
Shannon Blunt
Jon Owen
Abstract
Pending.
Rich Simeon
Delay-Doppler Channel Estimation for High-Speed Aeronautical Mobile Telemetry Applications
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Erik Perrins, Chair
Shannon Blunt
Morteza Hashemi
Jim Stiles
Craig McLaughlin
Abstract
The next generation of digital communications systems aims to operate in high-Doppler environments such as high-speed trains and non-terrestrial networks that utilize satellites in low-Earth orbit. Current-generation systems use Orthogonal Frequency Division Multiplexing (OFDM) modulation, which is known to suffer from inter-carrier interference (ICI) when different channel paths have dissimilar Doppler shifts.
A new Orthogonal Time Frequency Space (OTFS) modulation (also known as Delay-Doppler modulation) is proposed as a candidate modulation for 6G networks that is resilient to ICI. To date, OTFS demodulation designs have focused on the use cases of popular urban terrestrial channel models where path delay spread is a fraction of the OTFS symbol duration. However, wireless wide-area networks that operate in the aeronautical mobile telemetry (AMT) space can have large path delay spreads due to reflections from distant geographic features. This presents problems for existing channel estimation techniques which assume a small maximum expected channel delay, since data transmission is paused to sound the channel by an amount equal to twice the maximum channel delay. The dropout in data contributes to a reduction in spectral efficiency.
Our research addresses OTFS limitations in the AMT use case. We start with an exemplary OTFS framework with parameters optimized for AMT. Following system design, we focus on two distinct areas to improve OTFS performance in the AMT environment. First, we propose a new channel estimation technique using a pilot signal superimposed over data that can measure large delay spread channels with no penalty in spectral efficiency. A successive interference cancellation algorithm is used to iteratively improve channel estimates and jointly decode data. A second aspect of our research aims to equalize in delay-Doppler space. In the delay-Doppler paradigm, the rapid channel variations seen in the time-frequency domain are transformed into a sparse, quasi-stationary channel in the delay-Doppler domain. We propose to use machine learning, specifically Gaussian Process Regression, to take advantage of the sparse and stationary channel and learn the channel parameters, compensating for the effects of fractional Doppler, which simpler channel estimation techniques cannot mitigate. Both areas of research can advance the robustness of OTFS across all communications systems.
Mohammad Ful Hossain Seikh
AAFIYA: Antenna Analysis in Frequency-domain for Impedance and Yield Assessment
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Jim Stiles, Chair
Rachel Jarvis
Alessandro Salandrino
Abstract
This project presents AAFIYA (Antenna Analysis in Frequency-domain for Impedance and Yield Assessment), a modular Python toolkit developed to automate and streamline the characterization and analysis of radiofrequency (RF) antennas using both measurement and simulation data. Motivated by the need for reproducible, flexible, and publication-ready workflows in modern antenna research, AAFIYA provides comprehensive support for all major antenna metrics, including S-parameters, impedance, gain and beam patterns, polarization purity, and calibration-based yield estimation. The toolkit features robust data ingestion from standard formats (such as Touchstone files and beam pattern text files), vectorized computation of RF metrics, and high-quality plotting utilities suitable for scientific publication.
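As a flavor of the vectorized RF metric computation described above, the following hypothetical snippet (not AAFIYA code; the function names and sample values are invented) converts complex S11 samples, as read from a Touchstone file, into return loss and VSWR with NumPy:

```python
# Sketch: two standard impedance-matching metrics derived from the
# complex reflection coefficient S11 at each frequency point.
import numpy as np

def return_loss_db(s11):
    """Return loss in dB: -20*log10(|S11|); larger means better matching."""
    return -20.0 * np.log10(np.abs(s11))

def vswr(s11):
    """Voltage standing wave ratio: (1+|S11|)/(1-|S11|); 1.0 is a perfect match."""
    mag = np.abs(s11)
    return (1.0 + mag) / (1.0 - mag)

s11 = np.array([0.1 + 0.0j, 0.0 + 0.5j])  # synthetic reflection coefficients
print(return_loss_db(s11))                 # [20.          6.02059991]
print(vswr(s11))                           # [1.22222222  3.        ]
```

Because the functions are vectorized over the frequency axis, a full sweep (e.g., 50–1500 MHz) is processed in one call rather than a per-point loop.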
Validation was carried out using measurements from industry-standard electromagnetic anechoic chamber setups involving both Log Periodic Dipole Array (LPDA) reference antennas and Askaryan Radio Array (ARA) Bottom Vertically Polarized (BVPol) antennas, covering a frequency range of 50–1500 MHz. Key performance metrics, such as broadband impedance matching, S11 and S21 related calculations, 3D realized gain patterns, vector effective lengths, and cross-polarization ratio, were extracted and compared against full-wave electromagnetic simulations (using HFSS and WIPL-D). The results demonstrate close agreement between measurement and simulation, confirming the reliability of the workflow and calibration methodology.
AAFIYA’s open-source, extensible design enables rapid adaptation to new experiments and provides a foundation for future integration with machine learning and evolutionary optimization algorithms. This work not only delivers a validated toolkit for antenna research and pedagogy but also sets the stage for next-generation approaches in automated antenna design, optimization, and performance analysis.
Past Defense Notices
JINGWEIJIA TAN
Modeling and Improving the GPGPU Reliability in the Presence of Soft Errors
When & Where:
250 Nichols Hall
Committee Members:
Xin Fu, Chair
Prasad Kulkarni
Heechul Yun
Abstract
GPGPUs (general-purpose computing on graphics processing units) have emerged as a highly attractive platform for HPC (high-performance computing) applications due to their strong computing power. Unlike graphics processing applications, HPC applications have rigorous requirements on execution correctness, which is generally ignored in traditional GPU design. Soft errors, which are failures caused by high-energy neutron or alpha particle strikes in integrated circuits, have become a major reliability concern due to shrinking feature sizes and growing integration density. In this project, we first build a framework, GPGPU-SODA, to model the soft-error vulnerability of the GPGPU microarchitecture using a publicly available simulator. Based on the framework, we identify the streaming processors as the reliability hot spot in GPGPUs. We further observe that the streaming processors are not fully utilized during branch divergence and pipeline stalls caused by long-latency operations. We then propose a technique, RISE, to recycle the streaming processors' idle time for soft-error detection in GPGPUs. Experimental results show that RISE obtains good fault coverage with negligible performance degradation.
KARTHIK PODUVAL
HGS Schedulers for Digital Audio Workstation-like Applications
When & Where:
246 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Victor Frost
Jim Miller
Abstract
Digital Audio Workstation (DAW) applications are real-time applications that have special timing constraints. HGS is a real-time scheduling framework that allows developers to implement custom schedulers based on any scheduling algorithm through a process of direct interaction between client threads and their schedulers. Such scheduling could extend well beyond the common priority model that currently exists and could represent arbitrary application semantics that can be well understood and acted upon by the associated scheduler. We term this "need-based scheduling". In this thesis we first study some DAW implementations and then create several different HGS schedulers aimed at helping DAW applications meet their needs.
NEIZA TORRICO PANDO
High Precision Ultrasound Range Measurement System
When & Where:
2001B Eaton Hall
Committee Members:
Chris Allen, Chair
Swapan Chakrabarti
Ron Hui
Abstract
Real-time, precise range measurement between objects is useful for a variety of applications. The slow propagation of acoustic signals (330 m/s) in air makes the use of ultrasound frequencies an ideal approach for measuring an accurate time of flight. The time of flight can then be used to calculate the range between two objects. The objective of this project is to achieve a precise range measurement within 10 cm uncertainty and an update rate of 30 ms for distances up to 10 m between unmanned aerial vehicles (UAVs) flying in formation. Both transmitter and receiver are synchronized with a 1 pulse-per-second signal from a GPS receiver. The time of flight is calculated using the cross-correlation of the transmitted and received waves. To support multiple users, a 40 kHz signal is phase modulated with Gold or Kasami codes.
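The cross-correlation time-of-flight idea can be illustrated with a small NumPy sketch. This is a hypothetical illustration, not the project's implementation: the sample rate is assumed, a random ±1 sequence stands in for the Gold/Kasami code, and noise is omitted for clarity.

```python
# Sketch: estimate time of flight by cross-correlating the known
# transmitted code against the received signal; the correlation peak
# marks the delay in samples.
import numpy as np

fs = 400_000                    # sample rate in Hz (assumed)
c = 343.0                       # speed of sound in air, m/s
tx = np.random.default_rng(0).choice([-1.0, 1.0], size=127)  # stand-in code
true_delay = 250                # true delay in samples

rx = np.zeros(2048)
rx[true_delay:true_delay + tx.size] = tx   # delayed copy at the receiver

lag = int(np.argmax(np.correlate(rx, tx, mode="valid")))  # peak lag
tof = lag / fs                                            # seconds
print(lag, tof * c)             # lag 250 -> range of roughly 0.21 m
```

With a real received waveform, additive noise lowers but does not move the correlation peak as long as the code's autocorrelation sidelobes stay below it, which is exactly why low-cross-correlation code families like Gold and Kasami are chosen for multi-user operation.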
CAMERON LEWIS
3D Imaging of Ice Sheets
When & Where:
317 Nichols Hall
Committee Members:
Prasad Gogineni, Chair
Chris Allen
Carl Leuschen
Fernando Rodriguez-Morales
Rick Hale
Abstract
Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves affects both the mass balance of the ice sheet and the global climate system. This melting and refreezing influences the development of Antarctic Bottom Water, which helps drive the oceanic thermohaline circulation, a critical component of the global climate system. Basal melt rates can be estimated through traditional glaciological techniques relying on conservation of mass. However, this requires accurate knowledge of ice movement, surface accumulation and ablation, and firn compression. Boreholes can provide direct measurements of melt rates, but only give point estimates and are difficult and expensive to perform. Satellite altimetry measurements have been heavily relied upon for the past few decades. Thickness and melt rate estimates require the same conservation-of-mass a priori knowledge, with the additional assumption that the ice shelf is in hydrostatic equilibrium. Even with newly available, ground-truthed density and geoid estimates, satellite-derived ice shelf thickness and melt rate estimates suffer from relatively coarse spatial resolution and interpolation-induced error. Non-destructive radio echo sounding (RES) measurements from long-range airborne platforms provide the best solution for fine spatial and temporal resolution over long survey traverses and require a priori knowledge only of firn density and surface accumulation. Previously, RES-derived basal melt rate experiments have been limited to ground-based systems with poor coverage and spatial resolution. To improve upon this, an airborne multi-channel wideband radar has been developed for the purpose of imaging shallow ice and ice shelves. A moving platform and a cross-track antenna array allow for fine-resolution 3-D imaging of basal topography.
An initial experiment will use a ground-based system to image shallow ice and generate 3-D imagery as a proof of concept. This will then be applied to ice shelf data collected by an airborne system.
TRUC ANH NGUYEN
Transfer Control for Resilient End-to-End Transport
When & Where:
246 Nichols Hall
Committee Members:
James Sterbenz, Chair
Victor Frost
Gary Minden
Abstract
Residing between the network layer and the application layer, the transport layer exchanges application data using the services provided by the network. Given the unreliable nature of the underlying network, reliable data transfer has become one of the key requirements for transport-layer protocols such as TCP. Studying the various mechanisms developed for TCP to increase the correctness of data transmission while fully utilizing the network's bandwidth provides a strong background for our study and development of our own resilient end-to-end transport protocol. Given this motivation, in this thesis we study the different TCP error control and congestion control techniques by simulating them under different network scenarios using ns-3. For error control, we narrow our research to acknowledgement methods such as cumulative ACK (TCP's traditional way of ACKing), SACK, NAK, and SNACK. The congestion control analysis covers several TCP variants, including Tahoe, Reno, NewReno, Vegas, Westwood, Westwood+, and TCP SACK.
CENK SAHIN
On Fundamental Performance Limits of Delay-Sensitive Wireless Communications
When & Where:
246 Nichols Hall
Committee Members:
Erik Perrins, Chair
Shannon Blunt
Victor Frost
Lingjia Liu
Zsolt Talata
Abstract
Mobile traffic is expected to grow at a compound annual rate of 66% over the next 3 years, and among the data types that account for this growth, mobile video has the highest growth rate. Since most video applications are delay-sensitive, delay-sensitive traffic will be the dominant traffic over future wireless communications. Consequently, future mobile wireless systems will face the dual challenge of supporting large traffic volumes while providing reliable service for various kinds of delay-sensitive applications (e.g., real-time video, online gaming, and voice-over-IP (VoIP)). Past work on delay-sensitive communications has generally overlooked physical-layer considerations such as the modulation and coding scheme (MCS), probability of decoding error, and coding delay by employing oversimplified models for the physical layer. With the proposed research we aim to bridge information theory, communication theory, and queueing theory by jointly considering the delay-violation probability and the probability of decoding error to identify the fundamental trade-offs among wireless system parameters such as channel fading speed, average received signal-to-noise ratio (SNR), MCS, and user-perceived quality of service. We will model the underlying wireless channel by a finite-state Markov chain, use channel dispersion to track the probability of decoding error and the coding delay for a given MCS, and focus on the asymptotic decay rate of buffer occupancy for queueing delay analysis. The proposed work will be used to obtain fundamental bounds on the performance of queued systems over wireless communication channels.
GHAITH SHABSIGH
LPI Performance of an Ad-Hoc Covert System Exploiting Wideband Wireless Mobile Networks
When & Where:
246 Nichols Hall
Committee Members:
Victor Frost, Chair
Chris Allen
Lingjia Liu
Erik Perrins
Tyrone Duncan
Abstract
The high level of functionality and flexibility of modern wideband wireless networks such as LTE and WiMAX has made them the preferred technology for providing mobile internet connectivity. The high performance of these systems comes from adopting several innovative techniques such as Orthogonal Frequency Division Multiplexing (OFDM), Adaptive Modulation and Coding (AMC), and Hybrid Automatic Repeat Request (HARQ). However, this flexibility also opens the door to network exploitation by other ad-hoc networks, such as Device-to-Device technology, or by covert systems. In this work, we provide the theoretical foundation for a new ad-hoc wireless covert system that hides its transmission in the RF spectrum of an OFDM-based wideband network (the target network), such as LTE. The first part of this effort focuses on designing the covert waveform to achieve a low probability of detection (LPD). Next, we compare the performance of several available detection methods in detecting the covert transmission, and propose a detection algorithm that represents a worst-case scenario for the covert system. Finally, we optimize the performance of the covert system in terms of its throughput, transmission power, and interference on/from the target network.
MOHAMMED ALENAZI
Network Resilience Improvement and Evaluation Using Link Additions
When & Where:
246 Nichols Hall
Committee Members:
James Sterbenz, Chair
Victor Frost
Lingjia Liu
Bo Luo
Tyrone Duncan
Abstract
Computer networks are prone to targeted attacks and natural disasters that can disrupt their normal operation and services. Adding links to form a full mesh yields the most resilient network, but it incurs an infeasibly high cost. In this research, we investigate improving the resilience of real-world networks by adding a cost-efficient set of links. Finding the optimal set of links to add via exhaustive search is impractical given the size of communication network graphs. Using a greedy algorithm, a feasible solution is obtained by adding a set of links that improves network connectivity, increasing a graph robustness metric such as algebraic connectivity or total path diversity. We use a graph metric called flow robustness as a measure of network resilience. To evaluate the improved networks, we apply three centrality-based attacks and study their resilience. The flow robustness results of the attacks show that the improved networks are more resilient than the non-improved networks.
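The greedy link-addition idea can be sketched in toy form. This is an illustration only, not the dissertation's implementation or its flow-robustness metric: here the robustness metric is algebraic connectivity (the second-smallest Laplacian eigenvalue), and each greedy step adds the single non-edge that increases it most.

```python
# Toy greedy link addition: repeatedly add the non-edge that most
# increases the graph's algebraic connectivity.
import itertools
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def greedy_add_links(adj, k):
    """Add k links, each chosen greedily to maximize the metric."""
    adj = adj.copy()
    n = adj.shape[0]
    for _ in range(k):
        best, best_val = None, -1.0
        for i, j in itertools.combinations(range(n), 2):
            if adj[i, j]:                 # already an edge
                continue
            adj[i, j] = adj[j, i] = 1     # tentatively add the link
            val = algebraic_connectivity(adj)
            adj[i, j] = adj[j, i] = 0     # undo
            if val > best_val:
                best, best_val = (i, j), val
        adj[best[0], best[1]] = adj[best[1], best[0]] = 1
    return adj

# 5-node path graph; one greedy addition should noticeably strengthen it.
path = np.zeros((5, 5))
for i in range(4):
    path[i, i + 1] = path[i + 1, i] = 1
before = algebraic_connectivity(path)
after = algebraic_connectivity(greedy_add_links(path, 1))
print(before < after)  # True
```

Each greedy step costs one eigendecomposition per candidate non-edge, which is exactly why the abstract contrasts this feasible heuristic with the impractical exhaustive search over all link subsets.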
ASHWINI SHIKARIPUR NADIG
Statistical Approaches to Inferring Object Shape from Single Images
When & Where:
2001B Eaton Hall
Committee Members:
Bo Luo, Chair
Brian Potetz
Luke Huan
Jim Miller
Paul Selden
Abstract
Depth inference is a fundamental problem of computer vision with a broad range of potential applications. Monocular depth inference techniques, particularly shape from shading, date back to as early as the 1940s, when shape from shading was first used to study the shape of the lunar surface. Since then there has been ample research to develop depth inference algorithms using monocular cues. Most of these are based on physical models of image formation and rely on a number of simplifying assumptions that do not hold for real-world and natural imagery. Very few make use of the rich statistical information contained in real-world images and their 3D information. There have been a few notable exceptions, though. The study of the statistics of natural scenes has concentrated on outdoor natural scenes, which are cluttered. The statistics of scenes of single objects have been less studied, but such scenes are an essential part of daily human interaction with the environment. This thesis focuses on studying the statistical properties of single objects and their 3D imagery, uncovering some interesting trends that can benefit shape inference techniques. I acquired two databases: the Single Object Range and HDR (SORH) database and the Eton Myers Database of single objects, including laser-acquired depth, binocular stereo, photometric stereo, and High Dynamic Range (HDR) photography. The fractal structure of natural images was previously well known and thought to be a universal property. However, my research showed that the fractal structure of single objects and surfaces is governed by a wholly different set of rules. The classical computer vision problems of binocular and multi-view stereo, photometric stereo, shape from shading, structure from motion, and others all rely on accurate and complete models of which 3D shapes and textures are plausible in nature, to avoid producing unlikely outputs.
Bayesian approaches are common for these problems, and hopefully the findings on the statistics of the shape of single objects from this work and others will both inform new and more accurate Bayesian priors on shape and enable more efficient probabilistic inference procedures.
STEVE PENNINGTON
Spectrum Coverage Estimation Using Large Scale Measurements
When & Where:
246 Nichols Hall
Committee Members:
Joseph Evans, Chair
Arvin Agah
Victor Frost
Gary Minden
Ronald Aust
Abstract
The work presented in this thesis explores the use of geographic data and geostatistical methods to estimate path loss for cognitive radio networks. Path loss models typically employed in this scenario use a general terrain type (i.e., urban, suburban, or rural) and possibly a digital elevation model to predict excess path loss over the free-space model. Additional descriptive knowledge of the local environment can be used to make more accurate path loss predictions. This research focuses on the use of visible imagery, digital elevation models, and terrain classification systems for predicting localized propagation characteristics. A low-cost data collection platform was created and used to generate a spectrum measurement set large enough for machine learning. A series of path loss models were fitted to the data using linear and nonlinear methods. These models were then used to create a radio environment map depicting estimated signal strength. All of the models created have good cross-validated prediction results when compared to existing path loss models, although some of the more flexible models had a tendency to overfit the data. A number of geostatistical models were fitted to the data as well.
These models have the advantage of not requiring the transmitter location in order to create a model. The geostatistical models performed very well when given a sufficient density of observations but were not able to generalize as well as some of the regression models. An analysis of the geographical data sets indicated that each had a significant measurable effect on path loss estimation, with the medium-resolution imagery and elevation data providing the greatest increase in accuracy. Finally, these models were compared to a number of existing path loss models, demonstrating a gain in usable spectrum for cognitive radio network use.