Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and to post the presentation announcement online.

Upcoming Defense Notices

Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--as building blocks for writing larger applications. Package managers make this process easy. With a simple command-line directive, developers can quickly fetch and install packages from vast open-source repositories. npm--the largest such repository--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
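
As a concrete illustration of the install-time attack surface described above (a generic sketch, not the tooling developed in this work): npm automatically executes lifecycle scripts such as preinstall, install, and postinstall declared in a package's package.json, so a minimal audit pass can simply flag any installed package that declares them.

    # Generic sketch (not the author's tooling): flag npm packages whose
    # package.json declares install-time lifecycle scripts, which npm runs
    # automatically during installation.
    import json
    from pathlib import Path

    INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

    def install_scripts(package_dir: Path) -> dict:
        """Return any install-time lifecycle scripts declared by a package."""
        manifest = json.loads((package_dir / "package.json").read_text())
        return {name: cmd for name, cmd in manifest.get("scripts", {}).items()
                if name in INSTALL_HOOKS}

    # Scan every (non-scoped) package under node_modules of a hypothetical project.
    for manifest_path in Path("node_modules").glob("*/package.json"):
        hooks = install_scripts(manifest_path.parent)
        if hooks:
            print(manifest_path.parent.name, hooks)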


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual-function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform radar and communications functions simultaneously. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements of the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the composite DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the degradation of PARC radar performance caused by spectral growth from the communications signal. 

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.
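
One common way to cast the spectral-template matching described above (our notation, not necessarily the exact formulation used in this work) is as a constant-modulus optimization of the radar phase code:

    \min_{\mathbf{s}\,:\,|s_n|=1}\;\int \Big( \mathbb{E}\big[\,|X(f;\mathbf{s})|^{2}\big] - T(f) \Big)^{2}\, df

where X(f; \mathbf{s}) is the spectrum of the composite PARC waveform, the expectation is taken over the random communications symbols, and T(f) is the desired spectral template.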


Arin Dutta

Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao

Abstract

As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.

Distributed Raman amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path, and it offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient in OSNR improvement because the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps to reduce the nonlinear effect on the optical signal and improves OSNR by more uniformly distributing the Raman gain along the transmission span.

The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Additionally, another concern of FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the non-linear phase shift of the signal. These factors, including RIN transfer-induced noise and non-linear noise, contribute to the degradation of system performance in FW DRA systems at the receiver.

As the performance of DRA with backward pumping is well understood and the impact of RIN transfer is relatively low in that configuration, our research focuses on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs among system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase of linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in fiber-optic communication. The system performance analysis also reveals that higher-baud-rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

Pending.


Rich Simeon

Delay-Doppler Channel Estimation for High-Speed Aeronautical Mobile Telemetry Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Morteza Hashemi
Jim Stiles
Craig McLaughlin

Abstract

The next generation of digital communications systems aims to operate in high-Doppler environments such as high-speed trains and non-terrestrial networks that utilize satellites in low-Earth orbit. Current-generation systems use Orthogonal Frequency Division Multiplexing (OFDM) modulation, which is known to suffer from inter-carrier interference (ICI) when different channel paths have dissimilar Doppler shifts.

A new Orthogonal Time Frequency Space (OTFS) modulation (also known as Delay-Doppler modulation) is proposed as a candidate modulation for 6G networks that is resilient to ICI. To date, OTFS demodulation designs have focused on the use cases of popular urban terrestrial channel models where path delay spread is a fraction of the OTFS symbol duration. However, wireless wide-area networks that operate in the aeronautical mobile telemetry (AMT) space can have large path delay spreads due to reflections from distant geographic features. This presents problems for existing channel estimation techniques, which assume a small maximum expected channel delay, since data transmission is paused to sound the channel for a duration equal to twice the maximum channel delay. The dropout in data contributes to a reduction in spectral efficiency.

Our research addresses OTFS limitations in the AMT use case. We start with an exemplary OTFS framework with parameters optimized for AMT. Following system design, we focus on two distinct areas to improve OTFS performance in the AMT environment. First, we propose a new channel estimation technique using a pilot signal superimposed over data that can measure large-delay-spread channels with no penalty in spectral efficiency. A successive interference cancellation algorithm is used to iteratively improve channel estimates and jointly decode data. A second aspect of our research aims to equalize in delay-Doppler space. In the delay-Doppler paradigm, the rapid channel variations seen in the time-frequency domain are transformed into a sparse, quasi-stationary channel in the delay-Doppler domain. We propose to use machine learning, specifically Gaussian Process Regression, to take advantage of the sparse and stationary channel and learn the channel parameters to compensate for the effects of fractional Doppler, which simpler channel estimation techniques cannot mitigate. Both areas of research can advance the robustness of OTFS across all communications systems.
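
As a toy illustration of the Gaussian Process Regression step mentioned above (illustrative only; the channel profile, kernel choice, and grids below are assumptions, not the dissertation's estimator), one can fit a GP to noisy samples of a channel response on an integer Doppler grid and then query it at fractional Doppler points:

    # Illustrative sketch: GP regression over a sparse Doppler profile.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    doppler_bins = np.arange(-8, 9, dtype=float).reshape(-1, 1)   # integer Doppler taps
    true_gain = np.sinc(doppler_bins / 3.0).ravel()               # hypothetical channel profile
    observed = true_gain + 0.05 * rng.standard_normal(true_gain.shape)

    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(0.01),
                                   normalize_y=True)
    gpr.fit(doppler_bins, observed)

    fractional = np.linspace(-8, 8, 161).reshape(-1, 1)           # fractional Doppler queries
    mean, std = gpr.predict(fractional, return_std=True)
    print(mean[:5], std[:5])    # smooth estimate plus uncertainty at off-grid points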


Mohammad Ful Hossain Seikh

AAFIYA: Antenna Analysis in Frequency-domain for Impedance and Yield Assessment

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Jim Stiles, Chair
Rachel Jarvis
Alessandro Salandrino


Abstract

This project presents AAFIYA (Antenna Analysis in Frequency-domain for Impedance and Yield Assessment), a modular Python toolkit developed to automate and streamline the characterization and analysis of radiofrequency (RF) antennas using both measurement and simulation data. Motivated by the need for reproducible, flexible, and publication-ready workflows in modern antenna research, AAFIYA provides comprehensive support for all major antenna metrics, including S-parameters, impedance, gain and beam patterns, polarization purity, and calibration-based yield estimation. The toolkit features robust data ingestion from standard formats (such as Touchstone files and beam pattern text files), vectorized computation of RF metrics, and high-quality plotting utilities suitable for scientific publication.

Validation was carried out using measurements from industry-standard electromagnetic anechoic chamber setups involving both Log Periodic Dipole Array (LPDA) reference antennas and Askaryan Radio Array (ARA) Bottom Vertically Polarized (BVPol) antennas, covering a frequency range of 50–1500 MHz. Key performance metrics, such as broadband impedance matching, S11- and S21-related calculations, 3D realized gain patterns, vector effective lengths, and cross-polarization ratio, were extracted and compared against full-wave electromagnetic simulations (using HFSS and WIPL-D). The results demonstrate close agreement between measurement and simulation, confirming the reliability of the workflow and calibration methodology.

AAFIYA’s open-source, extensible design enables rapid adaptation to new experiments and provides a foundation for future integration with machine learning and evolutionary optimization algorithms. This work not only delivers a validated toolkit for antenna research and pedagogy but also sets the stage for next-generation approaches in automated antenna design, optimization, and performance analysis.
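
As a small illustration of the kind of vectorized S-parameter post-processing such a toolkit automates (a hypothetical sketch, not AAFIYA's actual API), measured S11 samples can be converted to input impedance and return loss as follows:

    # Hypothetical sketch of S11 post-processing, not AAFIYA's actual API.
    import numpy as np

    def s11_to_impedance(s11: np.ndarray, z0: float = 50.0) -> np.ndarray:
        """Input impedance from the reflection coefficient, Z = Z0 (1 + S11) / (1 - S11)."""
        return z0 * (1 + s11) / (1 - s11)

    def return_loss_db(s11: np.ndarray) -> np.ndarray:
        """Return loss in dB (larger positive values indicate better matching)."""
        return -20.0 * np.log10(np.abs(s11))

    # A few made-up S11 samples standing in for sweep data across the 50-1500 MHz band.
    s11 = np.array([0.60 - 0.20j, 0.30 + 0.10j, 0.10 + 0.05j, 0.20 - 0.10j, 0.40 + 0.30j])
    print(s11_to_impedance(s11))     # complex input impedance in ohms
    print(return_loss_db(s11))       # return loss in dB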


Past Defense Notices


SAHANA RAGHUNANDAN

Analysis of Angle of Arrival Estimation Algorithms for Basal Ice Sheet Tomography

When & Where:


317 Nichols Hall

Committee Members:

John Paden, Chair
Shannon Blunt
Carl Leuschen


Abstract

One of the key requirements for deriving more realistic ice sheet models is to obtain a good set of basal measurements that enable accurate estimation of bed roughness and conditions. For this purpose, 3D tomography of the ice bed has been successfully implemented with the help of angle of arrival (AoA) estimation algorithms such as multiple signal classification (MUSIC) and maximum likelihood estimation (MLE) techniques. These methods have enabled fine resolution in the cross-track dimension using synthetic aperture radar (SAR) images obtained from single-pass multichannel data. This project analyzes and compares the results obtained from the spectral MUSIC algorithm, an alternating projection (AP) based MLE technique, and a relatively recent approach called the reiterative superresolution (RISR) algorithm. While the MUSIC algorithm is computationally more attractive than MLE, the performance of the latter is known to be superior in the low signal-to-noise ratio regime. The RISR algorithm takes a completely different approach by using a recursive implementation of the minimum mean square error (MMSE) estimation technique instead of the sample covariance matrix (SCM) that is central to subspace-based algorithms. This renders the algorithm more robust in scenarios with very low sample support. The SAR-focused datasets provide a good case study for exploring the performance of the three techniques when applied to ice-sheet bed elevation estimation.
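
For reference, a textbook spectral MUSIC implementation for a uniform linear array looks like the following (an illustrative numpy sketch under assumed array geometry, not the cross-track tomography processing evaluated in this work):

    # Textbook spectral MUSIC for a uniform linear array (illustrative only).
    import numpy as np

    def music_spectrum(snapshots, n_sources, scan_deg, d_over_lambda=0.5):
        """snapshots: (n_sensors, n_snapshots) complex array of array data."""
        n_sensors = snapshots.shape[0]
        scm = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance matrix
        _, eigvecs = np.linalg.eigh(scm)                            # ascending eigenvalues
        noise_subspace = eigvecs[:, : n_sensors - n_sources]        # noise eigenvectors
        angles = np.deg2rad(np.asarray(scan_deg))
        k = np.arange(n_sensors)[:, None]
        steering = np.exp(2j * np.pi * d_over_lambda * k * np.sin(angles)[None, :])
        proj = np.linalg.norm(noise_subspace.conj().T @ steering, axis=0) ** 2
        return 1.0 / proj                                           # peaks at source angles

    # Two sources at -20 and +15 degrees, 8 sensors, 200 snapshots of synthetic data.
    rng = np.random.default_rng(1)
    k = np.arange(8)[:, None]
    a = np.exp(2j * np.pi * 0.5 * k * np.sin(np.deg2rad([-20.0, 15.0]))[None, :])
    s = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
    noise = 0.1 * (rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200)))
    scan = np.linspace(-90, 90, 361)
    spectrum = music_spectrum(a @ s + noise, n_sources=2, scan_deg=scan)
    print(scan[np.argmax(spectrum)])   # the global peak lands near one of the true angles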


EHSAN HOSSEINI

Synchronization Techniques for Burst-Mode Continuous Phase Modulation

When & Where:


250 Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Lingjia Liu
Dave Petr
Tyrone Duncan

Abstract

Synchronization is a critical operation in digital communication systems, which establishes and maintains an operational link between the transmitter and the receiver. As digital modulation and coding schemes continue to advance, the synchronization task becomes increasingly challenging since new standards require high-throughput functionality at low signal-to-noise ratios (SNRs). In this work, we address feedforward synchronization of continuous phase modulations (CPMs) using data-aided (DA) methods, which are best suited for burst-mode communications. In our transmission model, a known training sequence is appended to the beginning of each burst, which is then affected by additive white Gaussian noise (AWGN) and unknown frequency, phase, and timing offsets. 
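
In such a burst-mode setting, the received complex baseband signal is commonly written (our notation, as one standard form of the model described above) as

    r(t) = e^{\,j(2\pi \nu t + \theta)}\, s(t - \tau; \boldsymbol{\alpha}) + n(t),

where s(t; \boldsymbol{\alpha}) is the CPM waveform carrying the symbol sequence \boldsymbol{\alpha} (beginning with the known training sequence), \nu, \theta, and \tau are the unknown frequency, phase, and timing offsets to be estimated, and n(t) is complex AWGN.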

Based on our transmission model, we derive the optimum training sequence for DA synchronization of CPM signals using the Cramer-Rao bound (CRB), which is a lower bound on the estimation error variance. It is shown that the proposed sequence minimizes the CRB for all three synchronization parameters, and can be applied to the entire CPM family. We take advantage of the structure of the optimized training sequence in order to derive a maximum likelihood joint timing and carrier recovery algorithm. Moreover, a frame synchronization algorithm is proposed, and hence, a complete synchronization scheme is presented in this work. 

The proposed training sequence and synchronization algorithm are extended to shaped-offset quadrature phase-shift keying (SOQPSK) modulation, which is considered for next-generation aeronautical telemetry systems. Here, it is shown that the optimized training sequence outperforms the one defined in the draft telemetry standard in terms of estimation error variances. The overall bit error rate results suggest that the optimized training sequence with a shorter length can be utilized such that the SNR loss is within 0.5 dB of an ideal synchronization scenario.


MARTIN KUEHNHAUSEN

A Framework for Knowledge Derivation Incorporating Trust and Quality of Data

When & Where:


246 Nichols Hall

Committee Members:

Victor Frost, Chair
Luke Huan
Bo Luo
Gary Minden
Tyrone Duncan

Abstract

Today, across all major industries, gaining insight from data is seen as an essential part of business. However, while data gathering is becoming inexpensive and relatively easy, analysis and ultimately deriving knowledge from it is increasingly difficult. In many cases there is simply too much data, so that important insights are hard to find. The problem is often not lack of data but whether knowledge derived from it is trustworthy. This means distinguishing “good” from “bad” insights based on factors such as context and reputation. Still, modeling trust and quality of data is complex because of the various conditions and relationships in heterogeneous environments. 

The new TrustKnowOne framework and architecture developed in this dissertation addresses these issues by describing an approach to fully incorporate trust and quality of data with all its aspects into the knowledge derivation process. This is based on Berlin, an abstract graph model we developed that can be used to model various approaches to trustworthiness and relationship assessment as well as decision making processes. In particular, processing, assessment, and evaluation approaches are implemented as graph expressions that are evaluated on graph components modeling the data. 

We have implemented and applied our framework to three complex scenarios using real data from public data repositories. As part of their evaluation we highlighted how our approach exhibits both the formalization and flexibility necessary to model each of the realistic scenarios. The implementation and evaluation of these scenarios confirms the advantages of the TrustKnowOne framework over current approaches.


YUANLIANG MENG

Building an Intelligent Knowledgebase of Brachiopod Paleontology

When & Where:


246 Nichols Hall

Committee Members:

Luke Huan, Chair
Brian Potetz
Bo Luo


Abstract

Science advances not only because of new discoveries, but also due to revolutionary ideas drawn from accumulated data. The quality of studies in paleontology, in particular, depends on the accessibility of fossil data. This research builds an intelligent system based on brachiopod fossil images and their descriptions published in the Treatise on Invertebrate Paleontology. The project is still under development, and some significant achievements are discussed here. 
This thesis has two major parts. The first part describes the digitization, organization and integration of information extracted from the Treatise. The Treatise is in PDF format and it is non-trivial to convert large volumes into a structured, easily accessible digital library. Three important topics will be discussed: (1) how to extract data entries from the text, and save them in a structured manner; (2) how to crop individual specimen images from figures automatically, and associate each image with text entries; (3) how to build a search engine to perform both keyword search and natural language search. The search engine already has a web interface and many useful tasks can be done with ease. 
Verbal descriptions are second-hand information about fossil images and thus have limitations. The second part of the thesis develops an algorithm to compare fossil images directly, without referring to textual information. After similarities between fossil images are calculated, the results can be used in image search, fossil classification, and so on. The algorithm is based on deformable templates and utilizes expectation propagation to find the optimal deformation. Specifically, I superimpose a ``warp'' on each image. Each node of the warp encapsulates a vector of local texture features, and comparing two images involves two steps: (1) deform the warp to the optimal configuration, so that the energy function is minimized; and (2) based on the optimal configuration, compute the distance between the two images. Experimental results confirm that the method is reasonable and robust.
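To make the grid-of-features idea concrete, the following drastically simplified sketch compares two grayscale images with a fixed (non-deforming) grid of nodes and raw patch differences; the actual method additionally deforms this warp via expectation propagation to minimize an energy function before measuring distance:

    # Drastically simplified illustration (not the thesis algorithm): fixed-grid
    # patch comparison of two images of identical size.
    import numpy as np

    def grid_patch_distance(img_a: np.ndarray, img_b: np.ndarray,
                            grid: int = 8, patch: int = 5) -> float:
        assert img_a.shape == img_b.shape
        h, w = img_a.shape
        half = patch // 2
        ys = np.linspace(half, h - half - 1, grid).astype(int)   # node rows
        xs = np.linspace(half, w - half - 1, grid).astype(int)   # node columns
        total = 0.0
        for y in ys:
            for x in xs:
                pa = img_a[y - half:y + half + 1, x - half:x + half + 1].astype(float)
                pb = img_b[y - half:y + half + 1, x - half:x + half + 1].astype(float)
                total += float(np.sum((pa - pb) ** 2))            # patch-level distance
        return total

    rng = np.random.default_rng(0)
    a = rng.random((64, 64))
    print(grid_patch_distance(a, a), grid_patch_distance(a, rng.random((64, 64))))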


WILLIAM DINKEL

Instrumentation and Evaluation of Distributed Computations

When & Where:


246 Nichols Hall

Committee Members:

Victor Frost, Chair
Arvin Agah
Prasad Kulkarni


Abstract

Distributed computations are a very important aspect of modern computing, especially given the rise of distributed systems used for applications such as web search, massively multiplayer online games, financial trading, and cloud computing. When running these computations across several physical machines it becomes much more difficult to determine exactly what is occurring on each system at a specific point in time. This is due to each server having an independent clock, thus making event timestamps inherently inaccurate across machine boundaries. Another difficulty with evaluating distributed experiments is the coordination required to launch daemons, executables, and logging across all machines, followed by the necessary gathering of all related output data. The goal of this research is to overcome these obstacles and construct a single, global timeline of events from all servers. 
We employ high-resolution clock synchronization to bring all servers to within microseconds of one another, as measured by a modified version of the Network Time Protocol implementation. Kernel- and user-level events with wall-clock timestamps are then logged during basic network socket experiments. These data are then collected from each server, merged into a single dataset, sorted by timestamp, and plotted on a timeline. The entire experiment, from setup to teardown to data collection, is coordinated from a single server. The timeline visualizations provide a narrative of not only how packets flow between servers, but also how kernel interrupt handlers and other events shape an experiment's execution.
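A minimal sketch of the merge step (hypothetical log format and file layout, not the dissertation's instrumentation): once clocks are synchronized, per-server event logs can be combined into one globally ordered timeline simply by tagging each event with its origin server and sorting by timestamp:

    # Hypothetical sketch: merge per-server logs (rows of "timestamp_us,event")
    # into a single timeline ordered by wall-clock timestamp.
    import csv
    from pathlib import Path

    def load_events(log_path: Path):
        """Yield (timestamp_us, server, event); the server name comes from the filename."""
        with log_path.open() as f:
            for ts, event in csv.reader(f):
                yield int(ts), log_path.stem, event

    def merged_timeline(log_dir: str):
        events = []
        for path in Path(log_dir).glob("*.log"):
            events.extend(load_events(path))
        return sorted(events)          # tuples sort by timestamp first

    if __name__ == "__main__":
        for ts, server, event in merged_timeline("logs"):
            print(f"{ts:>16d}  {server:<12s}  {event}")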


DANIEL HEIN

Detecting Attack Prone Software Using Architecture and Repository Mined Change Metrics

When & Where:


2001B Eaton Hall

Committee Members:

Hossein Saiedian, Chair
Arvin Agah
Perry Alexander
Prasad Kulkarni
Reza Barati

Abstract

Billions of dollars are lost every year to successful cyber attacks that are fundamentally enabled by software vulnerabilities. Modern cyber attacks increasingly threaten individuals, organizations, and governments, causing service disruption, inconvenience, and costly incident response. Given that such attacks are primarily enabled by software vulnerabilities, this work examines whether existing change metrics, along with architectural modularity and maintainability metrics, can be used to predict modules and files that might be analyzed or tested further to excise vulnerabilities prior to release. 
The problem addressed by this research is the residual vulnerability problem, or vulnerabilities that evade detection and persist in released software. Many modern software projects are over a million lines of code, composed of reused components of varying maturity. The sheer size of modern software, along with the reuse of existing open source modules, complicates the questions of where to look, and in what order to look, for residual vulnerabilities. Prediction models based on various code and change metrics (e.g., churn) have shown promise as indicators of vulnerabilities at the file level. 
This work investigates whether change metrics, along with architectural metrics quantifying modularity and maintainability, can be used to identify attack-prone modules. In addition to identifying or predicting attack-prone files, this work also examines prioritizing and ranking said predictions.
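As an illustrative sketch of the kind of file-level prediction model referenced above (synthetic data and hypothetical feature names; not the study's actual model or metrics), one can fit a simple classifier on repository change metrics and rank files by predicted attack-proneness:

    # Illustrative only: rank files by predicted vulnerability-proneness using
    # synthetic change metrics (churn, commit count, distinct authors).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # columns: lines_churned, commits, distinct_authors (synthetic data)
    X = rng.poisson(lam=[200, 12, 3], size=(500, 3)).astype(float)
    # synthetic labels whose probability of "vulnerable" rises with churn
    y = (rng.random(500) < 1 / (1 + np.exp(-(0.004 * X[:, 0] - 1.5)))).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    risk = model.predict_proba(X)[:, 1]            # attack-proneness score per file
    print("files to inspect first:", np.argsort(risk)[::-1][:10])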


BEN PANZER

Estimating Geophysical Properties of Snow and Sea Ice from Data Collected by an Airborne, Ultra-Wideband Radar

When & Where:


317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Chris Allen
Prasad Gogineni
Fernando Rodriguez-Morales
Richard Hale

Abstract

Large-scale spatial observations of the global sea ice thickness and distribution rely on multiple satellite-based altimeters. Laser altimeters, such as the GLAS instrument aboard ICESat-1 and the ATLAS instrument aboard ICESat-2, measure freeboard, which is the snow and ice thickness above mean sea level. Deriving sea-ice thickness from these data requires estimating the snow depth on the sea ice. Current means of estimating the snow depth are climatological history, daily precipitation products, and/or data from passive microwave sensors such as AMSR-E. Radar altimeters, such as SIRAL aboard CryoSat-2, do not have sufficient vertical range resolution to resolve both the air-snow and snow-ice interfaces over sea ice. Additionally, there is significant ambiguity in the location of the peak return due to penetration into the snow cover. Regardless of the sensor, any error in snow-depth estimation amplifies sea-ice thickness errors due to the assumption of hydrostatic equilibrium used in deriving sea-ice thickness. There is clearly a need for an airborne sensor to provide spatially large-scale measurements of the snow cover in both polar regions to improve the accuracy of sea-ice thickness estimates and provide validation for the satellite-based sensors. 
The Snow Radar was developed at the Center for Remote Sensing of Ice Sheets and has been deployed as part of NASA Operation IceBridge since 2009 to directly measure snow thickness over sea ice. The radar is an ultra-wideband, frequency-modulated, continuous-wave radar now operating over the frequency range of 2 GHz to 8 GHz, resulting in a vertical range resolution of approximately 4 cm after post-processing. The radar has been shown to be capable of detecting snow depth over sea ice from 10 cm to more than 2 meters, and results from the radar compare well to multiple in-situ measurements and passive-microwave measurements. 
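For context, the vertical range resolution of such an FMCW radar follows the usual relation (our notation, not taken from this work)

    \Delta R \approx \frac{k_w\, c}{2 B \sqrt{\varepsilon_r}},

where B is the swept bandwidth (6 GHz here), \varepsilon_r is the relative permittivity of the snow, and k_w is the broadening factor of the post-processing window; for B = 6 GHz this works out to a few centimeters, consistent with the approximately 4 cm quoted above.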
The focus of the proposed research is estimation of useful geophysical properties of snow-covered sea ice beyond snow depth and subsequent refinement and validation of the snow depth extraction. Geophysical properties of interest are: snow density and wetness, air-snow and snow-ice surface roughness, and sea ice temperature and salinity. Through forward modeling of the radar backscatter response and the corresponding inversion, large-scale estimation of these properties may be possible.


GOUTHAM SELVAKUMAR

Constructing an Environment and Providing a Performance Assessment of Android's Dalvik Virtual Machine on x86 and ARM

When & Where:


250 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Victor Frost
Xin Fu


Abstract

Android is one of the most popular operating systems (OS) for mobile touchscreen devices, including smartphones and tablet computers. Dalvik is a process virtual machine (VM) that provides an abstraction layer over the Android OS and runs the Java-based Android applications. The first goal of this project is to construct a development environment for conveniently investigating the properties of Android's Dalvik VM on contemporary x86 and ARM architectures. The normal development environment restricts the Dalvik VM to run on top of Android, and requires an updated Android image to be built and installed on the target device after any change to the Dalvik code. This update-build-install process unnecessarily slows down any Dalvik VM exploration. We have now discovered an (undisclosed) configuration that enables us to study the Dalvik VM as a stand-alone application on top of the Linux OS. 
The second goal of this project is to understand the translation/compilation subsystem in the Dalvik VM, experiment with various modifications to determine the best translation parameters, and compare the Dalvik VM's just-in-time (JIT) compilation characteristics (such as quality of generated code and compilation time) on the x86 and ARM systems with a state-of-the-art Java VM. As expected, we find that JIT compilation is able to significantly improve application performance over basic interpretation. Comparing Dalvik's generated code quality with the Java HotSpot VM, we observe that Dalvik's ARM target is much more mature than Dalvik-x86. However, Dalvik's simple trace-based compilation generates code quality that is much worse than HotSpot's. Finally, our experiments also reveal the most effective JIT compilation parameters for the Dalvik VM and their effect on benchmark performance and memory usage.


ADAM CRIFASI

Framework of Real-Time Optical Nyquist-WDM Receiver using Matlab and Simulink

When & Where:


2001B Eaton Hall

Committee Members:

Ron Hui, Chair
Shannon Blunt
Erik Perrins


Abstract

I investigate an optical Nyquist-WDM Bit Error Rate (BER) detection system. A transmitter and receiver system is simulated using Matlab and Simulink to form a working algorithm and to study the effects of the different processes in the data chain. The inherent lack of phase information in the N-WDM scheme presents unique challenges and requires a precise phase recovery system to accurately decode a message. Furthermore, resource constraints are imposed by a cost-effective Field Programmable Gate Array (FPGA). To compensate for the speed, gate, and memory constraints of a budget FPGA, several techniques are employed to design the best possible receiver. I study the resource-intensive operations and vary their resource utilization to discover the effect on the BER. To conclude, a full VHDL design is delineated, including peripheral initialization, input data sorting and storage, timing synchronization, state machine and control signal implementation, N-WDM demodulation, phase recovery, QAM decoding, and BER calculation.


TIANCHEN LI

Radar Cross-Section Enhancement of a 40 Percent Yak54 Unmanned Aerial Vehicle

When & Where:


2001B Eaton Hall

Committee Members:

Chris Allen, Chair
Ken Demarest
Ron Hui


Abstract

With increasing civilian use of unmanned aerial vehicles (UAVs), the flight safety of these unmanned devices in populated areas has become one of the greatest concerns among operators and users. To reduce the rate of collisions, anti-collision systems based on airborne radar and enhanced autopilot programs have been developed. However, because most civilian UAVs are made of non-metallic materials with a considerably low radar cross-section (RCS), these UAVs are very hard or even impossible for radars to detect. This project aims to design a lightweight, UAV-mounted RCS enhancement device that can increase the visibility of the UAV to airborne radars operating in the frequency band near 1.445 GHz. In this project, a 40% YAK54 radio-controlled UAV is used as the subject UAV. The report also concentrates on the design of a passive Van Atta array reflector approach.