Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.
Upcoming Defense Notices
Andrew Riachi
An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun
Abstract
Web browsers are notorious for consuming large amounts of memory. Yet, they have become the dominant framework for writing GUIs because web languages are ergonomic for programmers and offer cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which have to run on resource-constrained devices, run a web browser under the hood. Therefore, it is important to keep the memory consumption of web browsers as low as practicable.
In this thesis, we investigate the memory consumption of web browsers, in particular in comparison to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments which suggest that most of the extra memory usage compared to native applications could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
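To make concrete the kind of data smaps-profiler draws on, here is a minimal Python sketch (an illustration of the underlying mechanism, not the tool itself; the choice of the Pss field is our assumption) that totals proportional set size across all mappings of a process by parsing /proc/<pid>/smaps:

import re, sys

def pss_total_kb(pid):
    """Sum the proportional set size (Pss) over every mapping of a process.
    Pss divides shared pages among the processes sharing them, so totals
    add up sensibly across a multi-process browser."""
    total = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            m = re.match(r"Pss:\s+(\d+) kB", line)
            if m:
                total += int(m.group(1))
    return total

if __name__ == "__main__":
    print(pss_total_kb(int(sys.argv[1])), "kB")

Summing Pss rather than Rss avoids double-counting pages shared among a browser's many processes.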
Elizabeth Wyss
A New Frontier for Software Security: Diving Deep into npm
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker
Abstract
Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week.
However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.
This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains.
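One of the install-time vectors above can be illustrated mechanically. The short Python sketch below (our own illustrative check, not the tooling presented in the dissertation) flags a package whose package.json declares lifecycle scripts that run arbitrary code during npm install:

import json, sys

# npm lifecycle hooks that execute automatically during `npm install`
INSTALL_HOOKS = ("preinstall", "install", "postinstall")

def install_scripts(package_json_path):
    """Return any install-time lifecycle scripts a package declares."""
    with open(package_json_path) as f:
        manifest = json.load(f)
    return {name: cmd for name, cmd in manifest.get("scripts", {}).items()
            if name in INSTALL_HOOKS}

if __name__ == "__main__":
    for hook, cmd in install_scripts(sys.argv[1]).items():
        print(f"{hook}: {cmd}")

A non-empty result is not proof of malice, since many packages legitimately compile native code at install time, but it marks exactly the unmediated execution point the research examines.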
Alfred Fontes
Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen
Abstract
Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform simultaneous radar and communications service. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements between the sensing and information-bearing signals.
A novel waveform-based DFRC approach is phase attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the combined DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the degradation of PARC radar performance caused by spectral growth due to the communications signal.
The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.
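To illustrate the phase-attachment construction itself (a toy sketch with assumed parameters, not the optimized spectral-template design described above), the following Python attaches a random QPSK-style communications phase to a constant-envelope linear-FM radar pulse and examines the resulting spectrum:

import numpy as np

fs, T, B = 100e6, 10e-6, 20e6             # sample rate, pulse width, LFM bandwidth (assumed)
t = np.arange(int(fs * T)) / fs

phi_radar = np.pi * (B / T) * t**2         # linear-FM (chirp) phase
sym_rate = 2e6                             # communications symbol rate (assumed)
symbols = np.random.default_rng(0).integers(4, size=int(T * sym_rate))
idx = np.minimum((t * sym_rate).astype(int), len(symbols) - 1)
phi_comm = (np.pi / 2) * symbols[idx]      # communications phase attached to the pulse

s = np.exp(1j * (phi_radar + phi_comm))    # constant-envelope PARC-style pulse
psd = np.abs(np.fft.fftshift(np.fft.fft(s)))**2 / len(s)
print("bins above half of peak PSD:", int(np.sum(psd > psd.max() / 2)))

Raising sym_rate spreads the spectrum further, which is precisely the spectral growth the template-matched radar component is designed to counteract.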
Qua Nguyen
Hybrid Array and Privacy-Preserving Signaling Optimization for NextG Wireless Communications
When & Where:
Zoom Defense, please email jgrisafe@ku.edu for link.
Committee Members:
Erik Perrins, Chair
Morteza Hashemi
Zijun Yao
Taejoon Kim
KC Kong
Abstract
This PhD research tackles two critical challenges in NextG wireless networks: hybrid precoder design for wideband sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and privacy-preserving federated learning (FL) over wireless networks.
In the first part, we propose a novel hybrid precoding framework that integrates true-time delay (TTD) devices and phase shifters (PS) to counteract the beam squint effect, a significant challenge in wideband sub-THz massive MIMO systems that leads to considerable loss in array gain. Unlike previous methods that design only the TTD values while fixing the PS values and assuming unbounded time delays, our approach jointly optimizes the TTD and PS values under a realistic time-delay constraint. We determine the minimum number of TTD devices required to achieve a target array gain using our proposed approach. Then, we extend the framework to multi-user wideband systems and formulate a hybrid array optimization problem aiming to maximize the minimum data rate across users. This problem is decomposed into two sub-problems: fair subarray allocation, solved via continuous domain relaxation, and subarray gain maximization, addressed via a phase-domain transformation.
The second part focuses on preserving privacy in FL over wireless networks. First, we design a differentially private FL algorithm that applies time-varying noise variance perturbation. Taking advantage of existing wireless channel noise, we jointly design the differential privacy (DP) noise variances and the users' transmit power to resolve the tradeoffs between privacy and learning utility. Next, we tackle two critical challenges within FL networks: (i) privacy risks arising from model updates and (ii) reduced learning utility due to quantization heterogeneity. Prior work typically addresses only one of these challenges because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. We improve the learning utility of a privacy-preserving FL scheme that allows clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) that ensures a DP guarantee and minimal quantization distortion. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Lastly, inspired by the information-theoretic rate-distortion framework, a privacy-distortion tradeoff problem is formulated to minimize privacy loss under a given maximum allowable quantization distortion. The optimal solution to this problem is identified, revealing that the privacy loss decreases as the maximum allowable quantization distortion increases, and vice versa.
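A minimal sketch of the differentially private update step (the standard Gaussian mechanism with clipping; the clipping norm and noise scale are assumptions for exposition, not the dissertation's time-varying design):

import numpy as np

def dp_update(grad, clip_norm=1.0, sigma=0.8, rng=np.random.default_rng(0)):
    """Clip a client's model update to bound its sensitivity, then add
    Gaussian noise; a time-varying schedule would vary sigma per round."""
    scale = min(1.0, clip_norm / max(np.linalg.norm(grad), 1e-12))
    return grad * scale + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

print(dp_update(0.5 * np.ones(8)).round(3))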
This research advances hybrid array optimization for wideband sub-THz massive MIMO and introduces novel algorithms for privacy-preserving quantized FL with diverse precision. These contributions enable high-throughput wideband MIMO communication systems and privacy-preserving AI-native designs, aligning with the performance and privacy protection demands of NextG networks.
Arin Dutta
Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao
Abstract
As internet services such as high-definition video streaming, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates while maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.
Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path, and it offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient in OSNR improvement: the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps reduce the nonlinear effects on the optical signal and improves OSNR by distributing the Raman gain more uniformly along the transmission span.
The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. A second concern of FW DRA is the rise in signal optical power near the start of the fiber span, which increases the nonlinear phase shift of the signal. These factors, including RIN-transfer-induced noise and nonlinear noise, degrade the performance of FW DRA systems at the receiver.
As the performance of DRA with backward pumping is well understood, with a relatively low impact of RIN transfer, our research focuses on the FW pumping configuration and is intended to provide a comprehensive analysis of the system-performance impact of dual-order FW Raman pumping, including the signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of the 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. Then the performance of the dual-order FW Raman configurations is compared with that of single-order Raman pumping to understand the trade-offs of system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase of linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in efficient fiber-optic communication. Also, the system performance analysis reveals that higher baud rate systems, like those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.
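To make the forward-pumping geometry concrete, the toy integration below uses the standard coupled loss/gain equations with assumed coefficients (illustrative only; it neglects pump depletion, noise, and the 2nd-order pump studied above):

import numpy as np

L, dz = 80e3, 10.0                   # span length and step (m), assumed
alpha_s = 0.2e-3 / 4.343             # signal loss, 0.2 dB/km -> 1/m
alpha_p = 0.25e-3 / 4.343            # pump loss, 0.25 dB/km -> 1/m
gR = 0.4e-3                          # Raman gain efficiency, 1/(W*m), assumed

Ps, Pp = 1e-3, 0.5                   # launch: 0 dBm signal, 500 mW forward pump
for _ in range(int(L / dz)):
    Ps += (-alpha_s * Ps + gR * Pp * Ps) * dz   # signal: fiber loss + Raman gain
    Pp += -alpha_p * Pp * dz                    # co-propagating pump decays
print("signal out: %.1f dBm" % (10 * np.log10(Ps / 1e-3)))

Because the gain is concentrated where the forward pump is strong, the signal power peaks early in the span, which is the origin of both the OSNR benefit and the extra nonlinear phase noted above.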
Past Defense Notices
XIAOLI LI
Constructivism Learning
When & Where:
246 Nichols Hall
Committee Members:
Luke Huan, Chair
Victor Frost
Bo Luo
Richard Wang
Alfred Ho*
Abstract
Aiming to achieve the learning capabilities possessed by intelligent beings, especially humans, researchers in the machine learning field have a long-standing tradition of borrowing ideas from human learning, such as reinforcement learning, active learning, and curriculum learning. Motivated by a philosophical theory called "constructivism", in this work we propose a new machine learning paradigm, constructivism learning. The constructivism theory has had wide-ranging impact on various theories about how humans acquire knowledge. To adapt this human learning theory to the context of machine learning, we first studied how to improve learning performance by exploring inductive bias or prior knowledge from multiple learning tasks with multiple data sources, that is, multi-task multi-view learning, in both offline and lifelong settings. Then we formalized a Bayesian nonparametric approach using sequential Dirichlet Process Mixture Models to support constructivism learning. To further exploit constructivism learning, we also developed a constructivism deep learning method utilizing Uniform Process Mixture Models.
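The sequential Dirichlet Process Mixture Model step can be sketched with its Chinese restaurant process view; the toy Python below (one-dimensional Gaussian clusters and hand-picked parameters, purely illustrative) assigns each arriving observation to an existing cluster or opens a new one:

import numpy as np

rng = np.random.default_rng(0)

def crp_assign(x, clusters, alpha=1.0, sigma=1.0):
    """Chinese restaurant process step: P(cluster k) ~ n_k * likelihood,
    P(new cluster) ~ alpha * a broad prior predictive."""
    w = [len(c) * np.exp(-(x - np.mean(c))**2 / (2 * sigma**2)) for c in clusters]
    w.append(alpha * np.exp(-x**2 / (2 * (sigma**2 + 1.0))))   # open new cluster
    k = rng.choice(len(w), p=np.array(w) / sum(w))
    if k == len(clusters):
        clusters.append([])
    clusters[k].append(x)

clusters = []
for x in [0.1, -0.2, 5.0, 5.3, 0.05]:
    crp_assign(x, clusters)
print([np.round(c, 2).tolist() for c in clusters])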
MOHANAD AL-IBADI
Array Processing Techniques for Ice-Sheet Bottom Tracking
When & Where:
317 Nichols Hall
Committee Members:
Shannon Blunt, Chair
John Paden
Erik Perrins
Jim Stiles
Huazhen Fang*
Abstract
In airborne multichannel radar sounder signal processing, the collected data are most naturally represented in a cylindrical coordinate system: along-track, range, and elevation angle. The data are generally processed in each of these dimensions sequentially to focus or resolve the data in the corresponding dimension such that a 3D image of the scene can be formulated. Pulse-compression is used to process the data along the range dimension, synthetic aperture radar (SAR) processing is used to process the data in the along-track dimension, and array-processing techniques are used for the elevation angle dimension. After the first two steps, the 3D scene is resolved into toroids with constant along-track and constant range that are centered on the flight path. The targets lying in a particular toroid need to be resolved by estimating their respective elevation angles.
In the proposed work, we focus on the array processing step, where several direction of arrival (DoA) estimation methods will be used to resolve the targets in the elevation-angle dimension, such as MUltiple Signal Classification (MUSIC) and maximum-likelihood estimation (MLE). A tracker is then used on the output of the DoA estimation to track the ice-bottom interface. We propose to use the tree re-weighted message passing algorithm or Kalman filtering, based on the array-processing technique, to track the ice-bottom. The outcome of this is a digital elevation model (DEM) of the ice-bottom. While most published work assumes a narrowband model for the array, we will use a wideband model and focus on issues related to wideband arrays. Along these lines, we propose a theoretical study to evaluate the performance of the radar products based on the array characteristics using different array-processing techniques, such as wideband MLE and focusing-matrices methods. In addition, we will investigate tracking targets using a sparse array composed of three sub-arrays, each separated by a large multiwavelength baseline. Specifically, we propose to develop and investigate the performance of a Kalman tracking solution to this wideband sparse array problem when applied to data collected by the CReSIS radar sounder.
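For orientation, a minimal narrowband MUSIC sketch for a uniform linear array (assumed geometry and noise level; the proposed work itself concerns wideband models and sparse sub-arrays):

import numpy as np

def music_doa(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 361)):
    """MUSIC DoA estimates for a ULA; X is (elements, snapshots),
    d is element spacing in wavelengths."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, vecs = np.linalg.eigh(R)                # eigenvalues ascending
    En = vecs[:, : n - n_sources]              # noise subspace
    spec = np.array([1.0 / np.linalg.norm(
        En.conj().T @ np.exp(2j * np.pi * d * np.arange(n) * np.sin(np.deg2rad(th))))**2
        for th in grid])
    peaks = [i for i in range(1, len(spec) - 1) if spec[i - 1] < spec[i] > spec[i + 1]]
    return sorted(grid[i] for i in sorted(peaks, key=lambda i: -spec[i])[:n_sources])

rng = np.random.default_rng(1)
n, snaps, doas = 8, 200, np.deg2rad([-20.0, 30.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(n), np.sin(doas)))
X = A @ rng.standard_normal((2, snaps)) + 0.1 * rng.standard_normal((n, snaps))
print(music_doa(X, n_sources=2))               # approximately [-20.0, 30.0]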
QIAOZHI WANG
Towards the Understanding of Private Content -- Content-based Privacy Assessment and Protection in Social Networks
When & Where:
2001B Eaton Hall
Committee Members:
Bo Luo, Chair
Fengjun Li
Richard Wang
Heechul Yun
Prajna Dhar*
Abstract
In the 2016 presidential election, social networks showed their great power as a “modern form of communication”. With the increasing popularity of social networks, privacy concerns arise. For example, it has been shown that microblogs are revealed to audiences that are significantly larger than users' perceptions. Moreover, when users are emotional, they may post messages with sensitive content and later regret doing so. As a result, users become very vulnerable – private or sensitive information may be accidentally disclosed, even in tweets about trivial daily activities.
Unfortunately, existing research projects on data privacy, such as the k-anonymity and differential privacy mechanisms, mostly focus on protecting an individual’s identity from being discovered in large data sets. We argue that the key component of privacy protection in social networks is protecting sensitive content, i.e., privacy as the ability to control the dissemination of information. The overall objectives of the proposed research are: to understand the sensitive content of social network posts, to facilitate content-based protection of private information, and to identify different types of sensitive information. In particular, we propose a user-centered, quantitative measure of privacy based on textual content, and a customized privacy protection mechanism for social networks.
We consider private tweet identification and classification as dual problems. We propose to develop an algorithm to identify all types of private messages, and, more importantly, to automatically score the sensitiveness of private messages. We first collect the opinions of a diverse group of users w.r.t. the sensitiveness of private information through Amazon Mechanical Turk, and analyze the discrepancies between users' privacy expectations and actual information disclosure. We then develop a computational method to generate the context-free privacy score, which is the “consensus” privacy score for average users. Meanwhile, classification of private tweets is necessary for customized privacy protection. We have made the first attempt to understand different types of private information, and to automatically classify sensitive tweets into 13 pre-defined topic categories. In the proposed research, we will further include personal attitudes, topic preferences, and social context in the scoring mechanism, to generate a personalized, context-aware privacy score, which will be utilized in a comprehensive privacy protection mechanism.
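As a toy illustration of a context-free score (an assumed keyword scheme for exposition only, not the proposed scoring model), one can average crowd-sourced sensitivity ratings per topic and score a post by the most sensitive topic it touches:

# assumed topic keywords and crowd-averaged sensitivity weights in [0, 1]
TOPIC_SENSITIVITY = {
    "health":   (0.9, {"diagnosis", "medication", "surgery"}),
    "location": (0.6, {"home", "address", "gps"}),
    "leisure":  (0.1, {"movie", "lunch", "game"}),
}

def privacy_score(post):
    """Score a post by the most sensitive topic whose keywords it mentions."""
    words = set(post.lower().split())
    scores = [w for w, kws in TOPIC_SENSITIVITY.values() if words & kws]
    return max(scores, default=0.0)

print(privacy_score("Picked up my medication after lunch"))   # 0.9
print(privacy_score("Great game last night"))                 # 0.1

The proposed context-aware score would additionally condition on personal attitudes, topic preferences, and social context rather than keywords alone.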
STEVE HAENCHEN
A Model to Identify Insider Threats Using Growing Hierarchical Self-Organizing Map of Electronic Media Indicators
When & Where:
1 Eaton Hall
Committee Members:
Hossein Saiedian, Chair
Arvin Agah
Prasad Kulkarni
Bo Luo
Reza Barati
Abstract
Fraud from insiders costs an estimated $3.7 trillion annually. Current fraud prevention and detection methods that include analyzing network logs, computer events, emails, and behavioral characteristics have not been successful in reducing the losses. The proposed Occupational Fraud Prevention and Detection Model uses existing data from the field of digital forensics along with text clustering algorithms, machine learning, and a growing hierarchical self-organizing map model to predict insider threats based on computer usage behavioral characteristics.
The proposed research leverages research results from information security, software engineering, data science and information retrieval, context searching, search patterns, and machine learning to build and employ a database server and workstations to support 50+ terabytes of data representing entire hard drives from work computers. Forensic software FTK and EnCase are used to generate disk images and test extraction results. Primary research tools are built using modern programming languages. The research data is derived from disk images obtained from actual investigations when fraud was asserted and other disk images when fraud was not asserted.
The research methodology includes building a data extraction tool that is a disk level reader to store the disk, partition, and operating system data in a relational database. An analysis tool is also created to convert the data into information representing usage patterns including summarization, normalization, and redundancy removal. We build a normalizing tool that uses machine learning to adjust the baselines for company, department, and job deviations. A prediction component is developed to derive insider threat scores reflecting the anomalies from the adjusted baseline. The resulting product will allow identification of the computer users most likely to commit fraud so investigators can focus their limited resources on the suspects.
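A minimal sketch of the baseline-deviation idea (an assumed z-score formulation with invented feature names; the model itself uses a growing hierarchical self-organizing map rather than this simple statistic):

import numpy as np

def threat_score(features, baseline_mean, baseline_std):
    """Mean absolute z-score of a user's usage pattern against the adjusted
    (company/department/job) baseline; larger means more anomalous."""
    z = (features - baseline_mean) / np.where(baseline_std > 0, baseline_std, 1.0)
    return float(np.mean(np.abs(z)))

# assumed features: files copied/day, off-hours logins/week, USB mounts/week
mean, std = np.array([20.0, 1.0, 0.5]), np.array([8.0, 1.0, 0.7])
print(threat_score(np.array([95.0, 6.0, 4.0]), mean, std))   # far from baseline
print(threat_score(np.array([22.0, 1.0, 1.0]), mean, std))   # near baseline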
Our primary plan to evaluate and validate our research results is via empirical study, statistical evaluation, and benchmarking with tests of precision and recall on a second set of disk images.
JAMIE ROBINSON
Code Cache Management in Managed Language VMs to Reduce Memory Consumption for Embedded Systems
When & Where:
129 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Bo Luo
Heechul Yun
Abstract
The compiled native code generated by a just-in-time (JIT) compiler in managed language virtual machines (VM) is placed in a region of memory called the code cache. Code cache management (CCM) in a VM is responsible for finding and evicting methods from the code cache to maintain execution correctness and manage program performance for a given code cache size or memory budget. Effective CCM can also boost program speed by enabling more aggressive JIT compilation, powerful optimizations, and improved hardware instruction cache and I-TLB performance.
Though important, CCM is an overlooked component in VMs. We find that the default CCM policies in Oracle’s production-grade HotSpot VM perform poorly even at modest memory pressure. We develop a detailed simulation-based framework to model and evaluate the potential efficiency of many different CCM policies in a controlled and realistic, but VM-independent environment. We make the encouraging discovery that effective CCM policies can sustain high program performance even for very small cache sizes.
Our simulation study provides the rationale and motivation to improve CCM strategies in existing VMs. We implement and study the properties of several CCM policies in HotSpot. We find that in spite of working within the bounds of the HotSpot VM’s current CCM sub-system, our best CCM policy implementation in HotSpot improves program performance over the default CCM algorithm by 39%, 41%, 55%, and 50% with code cache sizes that are 90%, 75%, 50%, and 25% of the desired cache size, on average.
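To illustrate what a CCM policy decides, here is a toy code cache with least-recently-executed eviction (an illustrative stand-in, not one of the HotSpot policies evaluated in the thesis):

from collections import OrderedDict

class CodeCache:
    """Toy code cache: tracks compiled method sizes and evicts the least
    recently executed methods when a new compilation exceeds the budget."""
    def __init__(self, budget_kb):
        self.budget, self.used = budget_kb, 0
        self.methods = OrderedDict()            # method -> size, LRU order

    def execute(self, method):
        if method in self.methods:
            self.methods.move_to_end(method)    # mark as recently used
            return True
        return False                            # cache miss: needs recompilation

    def compile(self, method, size_kb):
        while self.used + size_kb > self.budget and self.methods:
            _, size = self.methods.popitem(last=False)   # evict LRU method
            self.used -= size
        self.methods[method] = size_kb
        self.used += size_kb

cache = CodeCache(budget_kb=64)
for m, s in [("parse", 40), ("render", 30), ("gc", 10)]:
    cache.compile(m, s)
print(list(cache.methods))   # ['render', 'gc'] -- 'parse' was evicted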
AIME DE BERNER
Application of Machine Learning Techniques to the Diagnosis of Vision Disorders
When & Where:
2001B Eaton Hall
Committee Members:
Arvin Agah, Chair
Nicole Beckage
Jerzy Grzymala-Busse
Abstract
In the age of data collection, numerous techniques have been developed over time to capture, manipulate, and process data in order to uncover the hidden correlations, relations, patterns, and mappings that one may not otherwise be able to see. With the help of improved algorithms, computers have proven able to provide Artificial Intelligence (AI) by applying models that predict outcomes within an acceptable margin of error. Through performance metrics applied to Data Mining and Machine Learning models for predicting human vision disorders, we are able to identify promising models. AI techniques used in this work include an improved version of C4.5 called C4.8, neural networks, k-nearest neighbors, Random Forest, Support Vector Machines, and AdaBoost, among others. The best predictive models were determined for application to the diagnosis of vision disorders, focusing on strabismus and the need for patient referral to a specialist.
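A minimal sketch of this style of model comparison (generic scikit-learn cross-validation on synthetic stand-in data; the clinical dataset and exact configurations are not reproduced here):

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for patient records (features -> refer / don't refer)
X, y = make_classification(n_samples=400, n_features=12, random_state=0)

models = {
    "decision tree (C4.5-style)": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")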
HAO XUE
Understanding Information Credibility in Social Networks
When & Where:
246 Nichols Hall
Committee Members:
Fengjun Li, Chair
Luke Huan
Prasad Kulkarni
Bo Luo
Hyunjin Seo
Abstract
With the advancement of the Internet, increasing portions of people's social and communicative activities now take place in the digital world. The growth and popularity of online social networks have tremendously facilitated online interaction and information exchange. More people now rely on online information for news, opinions, and social networking. As representatives of online social-collaborative platforms, online review systems have enabled people to share information effectively and efficiently. A large volume of user-generated content is produced daily, which allows people to make reasonable judgments about the quality of service or product of an unknown provider. However, the freedom and ease of publishing information online have made these systems no longer sources of reliable information. Not only does biased and misleading information exist, but financial incentives also drive individual and professional spammers to insert deceptive reviews to manipulate review ratings and content. What's worse, advanced Artificial Intelligence has made it possible to generate realistic-looking reviews automatically. In this proposal, we present our work on measuring the credibility of information in online review systems. We first propose to utilize social relationships and rating deviations to assist the computation of users' trustworthiness. Secondly, we propose a content-based trust propagation framework that extracts the opinions expressed in review content. The opinion extraction approach we used was a supervised-learning-based method, which has flexibility limitations. Thus, we propose an enhanced framework that not only automates the opinion mining process but also integrates social relationships with review content. Finally, we propose our study of the credibility of machine-generated reviews.
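A toy version of the rating-deviation signal (an assumed formula for exposition; the proposed model also propagates trust over social relationships and review content):

import statistics

def trust_from_deviation(user_ratings, consensus):
    """Trust in [0, 1] that decays with the user's mean absolute deviation
    from per-item consensus ratings (5-star scale assumed)."""
    dev = statistics.mean(abs(user_ratings[i] - consensus[i]) for i in user_ratings)
    return max(0.0, 1.0 - dev / 4.0)     # 4 = largest possible deviation on 1..5

consensus = {"cafe": 4.2, "hotel": 3.1, "garage": 2.5}
print(trust_from_deviation({"cafe": 4.0, "hotel": 3.0, "garage": 3.0}, consensus))  # near 1
print(trust_from_deviation({"cafe": 1.0, "hotel": 5.0, "garage": 5.0}, consensus))  # much lower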
MOHAMMADREZA HAJIARBABI
A Face Detection and Recognition System for Color Images using Neural Networks with Boosting and Deep Learning
When & Where:
2001B Eaton Hall
Committee Members:
Arvin Agah, Chair
Prasad Kulkarni
Bo Luo
Richard Wang
Sara Wilson*
Abstract
A face detection and recognition system is a biometric identification mechanism which, compared to other methods, has been shown to be more important both theoretically and practically. In principle, biometric identification methods use a wide range of techniques such as machine learning, computer vision, image processing, pattern recognition, and neural networks. A face recognition system consists of two main components: face detection and recognition.
In this dissertation, a face detection and recognition system for color images containing multiple faces is designed, implemented, and evaluated. In color images, skin color information is used to distinguish between skin pixels and non-skin pixels, dividing the image into several components. Neural networks and deep learning methods have been used to detect skin pixels in the image. To improve system performance, bootstrapping and parallel neural networks with voting have been used. Deep learning has been used as another method for skin detection and compared to the other methods. Experiments have shown that, for skin detection, deep learning and neural network methods produce better results in terms of precision and recall than the other methods in this field.
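For orientation, a classical fixed-threshold skin rule in YCbCr space (a well-known baseline heuristic included purely for illustration; the dissertation's detectors are learned from data):

import numpy as np

def skin_mask(rgb):
    """Flag skin pixels with fixed YCbCr chrominance bounds
    (77 <= Cb <= 127, 133 <= Cr <= 173); rgb is an HxWx3 uint8 array."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 140, 120)                    # a skin-like tone
print(skin_mask(img))                          # True only at (0, 0)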
The step after skin detection is to decide which of these components belong to a human face. A template-based method has been modified in order to detect the faces. The designed algorithm also succeeds when there is more than one face in a component. A rule-based method has been designed to detect the eyes and lips in the detected components. After locating the eyes and lips in a component, the face can be detected.
After face detection, the faces detected in the previous step are to be recognized. The appearance-based methods used in this work are among the most important methods in face recognition due to their robustness to head rotation, noise, low-quality images, and other challenges. Different appearance-based methods have been designed, implemented, and tested. Canonical correlation analysis has been used to increase the recognition rate.
JASON GEVARGIZIAN
Automatic Measurement Framework: Expected Outcome Generation and Measurer Synthesis for Remote Attestation
When & Where:
246 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Arvin Agah
Perry Alexander
Andy Gill
Kevin Leonard
Abstract
A system is said to be trusted if it can be unambiguously identified and observed as behaving in accordance with expectations. Remote attestation is a mechanism to establish trust in a remote system.
Remote attestation requires measurement systems that can sample program state from a wide range of applications, each with different program features and expected behavior. Even in cases where applications are similar in purpose, differences in attestation-critical structures and program variables render any one measurer incapable of sampling multiple applications. Furthermore, any set of behavioral expectations vague enough to match multiple applications would be too weak to serve as a rubric to establish trust in any one of them. As such, measurement functionality must be tailored to each and every critical application on the target system.
Establishing behavioral expectations and customizing measurement systems to gather meaningful data to evidence said expectations is difficult. The process requires an expert, typically the application developer or a motivated appraiser, to analyze the application's source in order to detail the program behavioral expectations critical for establishing trust and to identify critical program structures and variables that can be sampled to evidence said trust. The effort required to customize measurement systems manually prohibits widespread adoption of remote attestation in trusted computing.
We propose automatic generation of expected outcomes and synthesis of measurement policies for a configurable general purpose measurer to enable large scale adoption of remote attestation for trusted computing. As such, we mitigate the cost incurred by existing systems that require manual measurement specification and design by an expert sufficiently skilled and knowledgeable regarding the target application and the methods for evidencing trust in the context of remote attestation.
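As a highly simplified picture of what a configurable measurer does (an assumed toy policy format, not the framework's specification language), the sketch below samples named files and compares digests against expected outcomes:

import hashlib, json

def measure(policy):
    """Hash each target named in the policy and compare against its expected
    digest, returning per-target evidence for the appraiser."""
    evidence = {}
    for target, expected in policy["targets"].items():
        with open(target, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        evidence[target] = {"digest": digest, "ok": digest == expected}
    return evidence

# assumed policy: expected outcomes would be generated automatically offline
policy = {"targets": {"/etc/hostname": "<expected sha256 digest>"}}
print(json.dumps(measure(policy), indent=2))

Real measurers sample in-memory program state rather than files, but the division of labor is the same: expected outcomes are produced ahead of time, and the measurer is configured to gather exactly the matching evidence.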
SALLY SAJADIAN
Model Predictive Control of Impedance Source Inverter for Photovoltaic Applications
When & Where:
2001B Eaton Hall
Committee Members:
Reza Ahmadi, Chair
Glenn Prescott
Alessandro Salandrino
Jim Stiles
Huazhen Fang
Abstract
A model predictive controlled power electronics interface (PEI) based on an impedance source inverter for photovoltaic (PV) applications is proposed in this work. The proposed system is capable of operating in both grid-connected and islanded modes. First, a model-predictive maximum power point tracking (MPPT) method is proposed for PV applications based on a single-stage grid-connected Z-source inverter (ZSI). This technique predicts the future behavior of the PV-side voltage and current using a digital observer that estimates the parameters of the PV module. Therefore, by predicting a priori the behavior of the PV module and its corresponding effects on the system, it improves the control efficacy. The proposed method adaptively updates the perturbation size of the PV voltage using the predicted model of the system to reduce oscillations and increase convergence speed. The operation of the proposed method is verified experimentally. The experimental results demonstrate fast dynamic response to changes in solar irradiance level, small oscillations around the maximum power point at steady state, and high MPPT effectiveness from low to high solar irradiance levels. The second part of this work focuses on the dual-mode operation of the proposed PEI based on the ZSI, with the capability to operate in islanded and grid-connected modes. The transition from islanded to grid-connected mode and vice versa can cause significant deviations in voltage and current due to mismatches in the phase, frequency, and amplitude of the voltages. The proposed MPC-based controller offers seamless transition between the two modes of operation. The main predictive controller objectives are direct decoupled power control in grid-connected mode and load voltage regulation in islanded mode. The proposed direct decoupled active and reactive power control in grid-connected mode enables the dual-mode ZSI to behave as a power conditioning unit for ancillary services such as reactive power compensation. The proposed controller features simplicity, seamless transition between modes of operation, fast dynamic response, and small tracking error of the controller objectives in steady-state conditions. The operation of the proposed system is verified experimentally.
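A skeletal illustration of the adaptive-perturbation idea (a generic perturb-and-observe loop on a toy PV curve; the curve, step rule, and constants are assumptions, not the observer-based predictive design above):

def pv_power(v):
    """Toy PV power curve with its maximum power point near 30 V."""
    return max(0.0, 240.0 - 0.5 * (v - 30.0) ** 2)

v, step = 20.0, 2.0                  # start voltage and initial perturbation (V)
p_prev = pv_power(v)
for _ in range(30):
    v += step
    p = pv_power(v)
    if p < p_prev:                   # power fell: overshot the peak, so
        step = -0.5 * step           # reverse direction and shrink the step
    p_prev = p
print(f"operating point ~{v:.1f} V, {p_prev:.1f} W")

Shrinking the step on each reversal is what reduces steady-state oscillation around the maximum power point, the same trade-off the predictive method addresses with a model-driven step update.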