Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy: with a simple command-line directive, developers can quickly fetch and install packages from vast open-source repositories. npm--the largest such repository--alone hosts millions of unique packages and serves billions of package downloads each week.

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
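
To make the tooling aspect concrete, the sketch below (illustrative only, not the dissertation's actual tools) flags two of the signals discussed above in an npm package directory: lifecycle scripts that run at install time, and hard-coded URLs in package code. The directory path and heuristics are hypothetical.

    import json
    import pathlib
    import re

    URL_RE = re.compile(r"https?://[^\s'\"]+")
    # npm runs these lifecycle scripts automatically during installation
    INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

    def audit_package(pkg_dir):
        """Flag install-time scripts and hard-coded URLs in one npm package."""
        pkg_dir = pathlib.Path(pkg_dir)
        findings = []
        manifest = json.loads((pkg_dir / "package.json").read_text())
        scripts = manifest.get("scripts", {})
        for hook in INSTALL_HOOKS & set(scripts):
            findings.append(("install-time script", hook, scripts[hook]))
        for js_file in pkg_dir.rglob("*.js"):
            for url in URL_RE.findall(js_file.read_text(errors="ignore")):
                findings.append(("hard-coded URL", js_file.name, url))
        return findings

    for kind, where, detail in audit_package("./some-package"):
        print(f"{kind}: {where} -> {detail}")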


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual-function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform radar and communications functions simultaneously. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements of the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase-attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the combined DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates PARC radar performance degradation caused by spectral growth from the communications signal.
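
As a toy illustration of this design objective (not the dissertation's optimizer), the sketch below attaches random communication phase to a placeholder radar phase code, estimates the expected PSD over many symbol realizations, and scores it against an assumed Gaussian spectral template; all parameters are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 256                                   # samples per pulse (assumed)
    n = np.arange(N)
    radar_phase = 50 * np.pi * (n / N) ** 2   # placeholder LFM-like phase code

    def parc_pulse(comm_bits):
        # attach BPSK-style communication phase to the radar phase (constant envelope)
        comm_phase = np.repeat(np.pi * comm_bits, N // comm_bits.size)
        return np.exp(1j * (radar_phase + comm_phase))

    # expected PSD, averaged over random communication symbol realizations
    psd = np.mean([np.abs(np.fft.fft(parc_pulse(rng.integers(0, 2, 16)))) ** 2
                   for _ in range(500)], axis=0)
    psd /= psd.sum()

    # assumed Gaussian spectral template (normalized)
    template = np.exp(-0.5 * (np.fft.fftfreq(N) / 0.15) ** 2)
    template /= template.sum()

    # mismatch metric a designer would drive down (log-spectral distance)
    mismatch = np.sqrt(np.mean((10 * np.log10(psd) - 10 * np.log10(template)) ** 2))
    print(f"log-spectral mismatch: {mismatch:.2f} dB")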

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.


Arin Dutta

Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao

Abstract

As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.

Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path. It offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides, and it provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance: it is more efficient in OSNR improvement because the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps reduce the nonlinear effects on the optical signal and improves OSNR by distributing the Raman gain more uniformly along the transmission span.

The major concern with forward distributed Raman amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Another concern with FW DRA is the rise in signal optical power near the start of the fiber span, which increases the nonlinear phase shift of the signal. These factors, including RIN transfer-induced noise and nonlinear noise, contribute to the degradation of system performance in FW DRA systems at the receiver.

As the performance of DRA with backward pumping is well understood, with a relatively low impact of RIN transfer, our research focuses on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including the signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of the 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs among system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase in linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in fiber-optic communication. The system performance analysis also reveals that higher-baud-rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

Pending.


Rich Simeon

Delay-Doppler Channel Estimation for High-Speed Aeronautical Mobile Telemetry Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Morteza Hashemi
Jim Stiles
Craig McLaughlin

Abstract

The next generation of digital communications systems aims to operate in high-Doppler environments such as high-speed trains and non-terrestrial networks that utilize satellites in low-Earth orbit. Current-generation systems use Orthogonal Frequency Division Multiplexing (OFDM) modulation, which is known to suffer from inter-carrier interference (ICI) when different channel paths have dissimilar Doppler shifts.

A new Orthogonal Time Frequency Space (OTFS) modulation (also known as Delay-Doppler modulation) is proposed as a candidate modulation for 6G networks that is resilient to ICI. To date, OTFS demodulation designs have focused on the use cases of popular urban terrestrial channel models where path delay spread is a fraction of the OTFS symbol duration. However, wireless wide-area networks that operate in the aeronautical mobile telemetry (AMT) space can have large path delay spreads due to reflections from distant geographic features. This presents problems for existing channel estimation techniques which assume a small maximum expected channel delay, since data transmission is paused to sound the channel by an amount equal to twice the maximum channel delay. The dropout in data contributes to a reduction in spectral efficiency.

Our research addresses OTFS limitations in the AMT use case. We start with an exemplary OTFS framework with parameters optimized for AMT. Following system design, we focus on two distinct areas to improve OTFS performance in the AMT environment. First, we propose a new channel estimation technique using a pilot signal superimposed over data that can measure large delay spread channels with no penalty in spectral efficiency. A successive interference cancellation algorithm is used to iteratively improve channel estimates and jointly decode data. A second aspect of our research aims to equalize in delay-Doppler space. In the delay-Doppler paradigm, the rapid channel variations seen in the time-frequency domain are transformed into a sparse, quasi-stationary channel in the delay-Doppler domain. We propose to use machine learning, specifically Gaussian Process Regression, to take advantage of this sparse and stationary channel and learn the channel parameters, compensating for the effects of fractional Doppler that simpler channel estimation techniques cannot mitigate. Both areas of research can advance the robustness of OTFS across all communications systems.
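
A minimal sketch of the regression idea, using scikit-learn's Gaussian Process Regression on synthetic delay-Doppler samples (the channel shape, grid, and noise level here are placeholders, not the AMT channel or the proposed estimator):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)

    def true_gain(v):
        # stand-in fractional-Doppler response shape (illustrative only)
        return np.sinc(4 * v).ravel()

    # noisy pilot-based measurements of channel gain vs. (fractional) Doppler
    doppler_obs = rng.uniform(-0.5, 0.5, size=(25, 1))
    gain_obs = true_gain(doppler_obs) + 0.05 * rng.standard_normal(25)

    gpr = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(0.01),
                                   normalize_y=True).fit(doppler_obs, gain_obs)

    # predict the sparse, quasi-stationary response on a fine Doppler grid
    doppler_grid = np.linspace(-0.5, 0.5, 200).reshape(-1, 1)
    gain_hat, gain_std = gpr.predict(doppler_grid, return_std=True)
    print(f"max posterior std on grid: {gain_std.max():.3f}")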


Mohammad Ful Hossain Seikh

AAFIYA: Antenna Analysis in Frequency-domain for Impedance and Yield Assessment

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Jim Stiles, Chair
Rachel Jarvis
Alessandro Salandrino


Abstract

This project presents AAFIYA (Antenna Analysis in Frequency-domain for Impedance and Yield Assessment), a modular Python toolkit developed to automate and streamline the characterization and analysis of radiofrequency (RF) antennas using both measurement and simulation data. Motivated by the need for reproducible, flexible, and publication-ready workflows in modern antenna research, AAFIYA provides comprehensive support for all major antenna metrics, including S-parameters, impedance, gain and beam patterns, polarization purity, and calibration-based yield estimation. The toolkit features robust data ingestion from standard formats (such as Touchstone files and beam pattern text files), vectorized computation of RF metrics, and high-quality plotting utilities suitable for scientific publication.
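
For a flavor of such a workflow, the sketch below ingests a Touchstone file and reports the impedance match using the scikit-rf library; scikit-rf and the file name are assumptions for illustration, since the abstract does not describe AAFIYA's internals.

    import skrf as rf

    ant = rf.Network("antenna.s1p")     # hypothetical measured Touchstone file
    s11_db = ant.s_db[:, 0, 0]          # S11 magnitude in dB vs. frequency
    z_in = ant.z[:, 0, 0]               # complex input impedance

    matched = ant.f[s11_db < -10.0]     # common -10 dB matching criterion
    if matched.size:
        print(f"matched band: {matched[0] / 1e6:.0f}-{matched[-1] / 1e6:.0f} MHz")
    print(f"best S11: {s11_db.min():.1f} dB at {ant.f[s11_db.argmin()] / 1e6:.0f} MHz, "
          f"Z there: {z_in[s11_db.argmin()]:.1f} ohm")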

Validation was carried out using measurements from industry-standard electromagnetic anechoic chamber setups involving both Log Periodic Dipole Array (LPDA) reference antennas and Askaryan Radio Array (ARA) Bottom Vertically Polarized (BVPol) antennas, covering a frequency range of 50–1500 MHz. Key performance metrics, such as broadband impedance matching, S11- and S21-related calculations, 3D realized gain patterns, vector effective lengths, and cross-polarization ratio, were extracted and compared against full-wave electromagnetic simulations (using HFSS and WIPL-D). The results demonstrate close agreement between measurement and simulation, confirming the reliability of the workflow and calibration methodology.

AAFIYA’s open-source, extensible design enables rapid adaptation to new experiments and provides a foundation for future integration with machine learning and evolutionary optimization algorithms. This work not only delivers a validated toolkit for antenna research and pedagogy but also sets the stage for next-generation approaches in automated antenna design, optimization, and performance analysis.


Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle.  The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: transformer models achieve superior contextual understanding at the cost of computational complexity, while the deep learning approaches (LSTM models) offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
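
A hedged baseline sketch of the threshold-adjustment idea: a TF-IDF plus one-vs-rest logistic regression multi-label classifier whose per-label decision thresholds are tuned on a validation split to improve minority-class F1 (a simplification; the project's actual models, embeddings, and dataset handling are not shown).

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsRestClassifier

    def fit_and_tune(texts, labels):
        # texts: list of comment strings; labels: (n_samples, n_labels) 0/1 array
        X_tr, X_va, y_tr, y_va = train_test_split(texts, labels,
                                                  test_size=0.2, random_state=0)
        vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
        clf = OneVsRestClassifier(LogisticRegression(max_iter=1000,
                                                     class_weight="balanced"))
        clf.fit(vec.fit_transform(X_tr), y_tr)
        proba = clf.predict_proba(vec.transform(X_va))
        # pick, per label, the threshold that maximizes validation F1
        thresholds = []
        for j in range(labels.shape[1]):
            grid = np.linspace(0.05, 0.95, 19)
            f1s = [f1_score(y_va[:, j], proba[:, j] >= t, zero_division=0)
                   for t in grid]
            thresholds.append(grid[int(np.argmax(f1s))])
        return vec, clf, np.array(thresholds)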


Past Defense Notices

KENNETH DEWAYNE BROWN

A Mobile Wireless Channel State Recognition Algorithm

When & Where:


2001B Eaton Hall

Committee Members:

Glenn Prescott, Chair
Chris Allen
Gary Minden
Erik Perrins
Richard Hale

Abstract

The scope of this research is a blind mobile wireless channel state recognition (CSR) algorithm that detects channel time and frequency dispersion. Hidden Markov Models (HMMs) are utilized to represent the statistical relationship between the hidden channel dispersive state process and an observed received waveform process, and they provide sufficient sensitivity to detect the hidden state process. First-order and second-order statistical features are assumed to be sufficient to discriminate channel state from the received waveform observations. State hard decisions provide sufficient information and can be combined to increase the reliability of a time-block channel state estimate. To investigate the feasibility of the proposed CSR algorithm, this research effort has architected, designed, and verified a blind statistical feature recognition process capable of detecting whether a mobile wireless channel is coherent, time-dispersive, frequency-dispersive, or dual-dispersive. Channel state waveforms are utilized to compute the transition and output probability parameters for a set of feature recognition HMMs. Time and frequency statistical features are computed from consecutive sample blocks and input into the set of trained HMMs, which compute a state sequence conditional probability for each feature. The conditional probabilities identify how well the input waveform statistically agrees with the previous training waveforms. Hard decisions are produced from each feature state probability estimate and combined to produce a single channel dispersive state estimate for each input time block. To verify CSR algorithm performance, combinations of state sequence blocks were input to the process and state recognition accuracy was characterized. Initial results suggest that CSR based on blind waveform statistical feature recognition is feasible.
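
The recognition loop described above can be sketched as follows, assuming the hmmlearn library and placeholder feature arrays (not the dissertation's implementation): one Gaussian HMM is trained per channel state, and a block is labeled by the model with the highest log-likelihood.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    STATES = ["coherent", "time-dispersive", "freq-dispersive", "dual-dispersive"]

    def train_models(training_blocks):
        # training_blocks: {state: list of (block_len, n_features) feature arrays}
        models = {}
        for state, blocks in training_blocks.items():
            X = np.vstack(blocks)
            lengths = [len(b) for b in blocks]
            models[state] = GaussianHMM(n_components=3, covariance_type="diag",
                                        n_iter=50).fit(X, lengths)
        return models

    def classify_block(models, feature_block):
        # hard decision: the state whose HMM best explains the observed features
        scores = {state: m.score(feature_block) for state, m in models.items()}
        return max(scores, key=scores.get)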


WENRONG ZENG

Content-Based Access Control

When & Where:


250 Nichols Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Jerzy Grzymala-Busse
Prasad Kulkarni
Alfred Tat-Kei Ho

Abstract

In conventional database access control models, access control policies are explicitly specified for each role against each data object manually. Nowadays, in large-scale content-centric data sharing, conventional approaches could be impractical due to exponential explosion and the sensitivity of data objects. In this proposal, we first introduce Content-Based Access Control (CBAC), an innovative access control model for content-centric information sharing. As a complement to conventional access control models, the CBAC model makes access control decisions automatically, based on the content similarity between user credentials and data content. In CBAC, each user is allowed by a meta-rule to access “a subset” of the designated data objects of the whole database, while the boundary of the subset is dynamically determined by the textual content of data objects. We then present an enforcement mechanism for CBAC that exploits Oracle’s Virtual Private Database (VPD). To further improve the performance of the proposed approach, we introduce a content-based blocking mechanism that improves the efficiency of CBAC enforcement by restricting similarity evaluation to the data objects most relevant to the user credentials. We also utilize a tagging mechanism for more accurate textual content matching for short text snippets (e.g., short VarChar attributes), which extracts topics rather than pure word occurrences to represent the content of data. Experimental results show that CBAC makes reasonable access control decisions with a small overhead.
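
As a minimal sketch of the similarity-based decision (the Oracle VPD enforcement, meta-rules, blocking, and tagging mechanisms are not shown), access could be granted when the cosine similarity between credential text and record content clears a threshold; the threshold value here is hypothetical.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def cbac_allows(user_credentials: str, record_content: str,
                    threshold: float = 0.3) -> bool:
        # grant access iff credential text and record content are similar enough
        vec = TfidfVectorizer(stop_words="english")
        tfidf = vec.fit_transform([user_credentials, record_content])
        return cosine_similarity(tfidf[0], tfidf[1])[0, 0] >= threshold

    # e.g., an oncology researcher matching an oncology case record
    print(cbac_allows("oncology clinical trials researcher",
                      "case notes from an oncology clinical trial cohort"))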


MARIANNE JANTZ

Detecting and Understanding Dynamically Dead Instructions for Contemporary Machines

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Xin Fu
Man Kong


Abstract

Instructions executed by the processor are dynamically dead if the values they produce are not used by the program. Researchers have discovered that a surprisingly large fraction of executed instructions are dynamically dead. Dynamically dead instructions (DDI) can potentially slow down program execution and waste power. Unfortunately, although the issue of DDI is well known, there has not been any comprehensive study to understand and explain the occurrence of DDI, evaluate its performance impact, and resolve the problem, especially for contemporary architectures. 
The goals of our research are to quantify and understand the properties of DDI, as well as systematically characterize them for existing state-of-the-art compilers and popular architectures, in order to develop compiler and/or architectural techniques to avoid their execution at runtime. In this thesis, we describe our GCC-based framework to instrument binary programs to generate control-flow and data-flow (registers and memory) traces at runtime. We present the percentage of DDI in our benchmark programs, as well as characteristics of the DDI. We show that context information can have a significant impact on the probability that an instruction will be dynamically dead. We show that a low percentage of static instructions actually contribute to the overall DDI in our benchmark programs. We also describe the outcome of our manual study to analyze and categorize the instances of dead instructions in our x86 benchmarks into seven distinct categories. We briefly describe our plan to develop compiler- and architecture-based techniques to eliminate each category of DDI in future programs. Finally, we find that x86 and ARM programs compiled with GCC generally contain a significant amount of DDI; however, x86 programs present fewer DDI than the ARM benchmarks, which display similar percentages of DDI as earlier research for other architectures. Therefore, we suggest that the ARM architecture observes a non-negligible fraction of DDI and should be examined further. Overall, we believe that a close synergy between static code generation and program execution techniques may be the most effective strategy to eliminate DDI.
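
The core deadness check over a register trace can be sketched as follows (a simplification of the framework above: memory effects and transitively dead instructions are omitted): an executed instruction is dynamically dead if every register it defines is overwritten before being read.

    def find_dead_instructions(trace):
        """trace: list of (instr_id, defs, uses) in execution order,
        where defs/uses are sets of register names."""
        dead = []
        for i, (instr, defs, _) in enumerate(trace):
            if not defs:
                continue                   # no register result in this sketch
            pending = set(defs)            # regs whose fate is still unknown
            used = False
            for _, later_defs, later_uses in trace[i + 1:]:
                if pending & later_uses:   # some result was read: instr is live
                    used = True
                    break
                pending -= later_defs      # overwritten before any read
                if not pending:
                    break
            # regs never read nor overwritten by trace end count as live (conservative)
            if not used and not pending:
                dead.append(instr)
        return dead

    # toy trace: r1 from the first add is clobbered before anyone reads it
    trace = [("add#1", {"r1"}, {"r2", "r3"}),
             ("mov#2", {"r1"}, {"r4"}),
             ("sub#3", {"r5"}, {"r1", "r4"})]
    print(find_dead_instructions(trace))   # -> ['add#1']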


YUHAO YANG

Protecting Attributes and Contents in Online Social Networks

When & Where:


2001B Eaton Hall

Committee Members:

Bo Luo, Chair
Arvin Agah
Luke Huan
Prasad Kulkarni
Alfred Tat-Kei Ho

Abstract

With the extreme popularity of online social networks, security and privacy issues become critical. In particular, it is important to protect users' privacy without preventing them from normal socialization. User privacy in the context of data publishing and structural re-identification attacks has been well studied. However, protection of attributes and data content has been mostly neglected by the research community. While social network data is rarely published, billions of messages are shared in various social networks on a daily basis. Therefore, it is more important to protect attributes and textual content in social networks. 

We first study the vulnerabilities of user attributes and contents, in particular, the identifiability of users when the adversary learns a small piece of information about the target. We have presented two attribute re-identification attacks that exploit information retrieval and web search techniques. We have shown that large portions of users with an online presence are highly identifiable, even with a small piece of seed information, and even when the seed information is inaccurate. 
To protect user attributes and content, we will adopt the social circle model derived from the concepts of “privacy as user perception” and “information boundary”. Users will have different social circles and share different information in different circles. We propose to automatically identify social circles based on three observations: (1) friends in the same circle are connected and share many friends in common; (2) friends in the same circle are more likely to interact; (3) friends in the same circle tend to have similar interests and share similar content. We propose to adopt multi-view clustering to model and integrate these observations to identify implicit circles in a user’s personal network. Moreover, we propose an evaluation mechanism that evaluates the quality of the clusters (circles). 
Furthermore, we propose to exploit such circles for cross-site privacy protection for users: new messages (blogs, micro-blogs, updates, etc.) will be evaluated and distributed to the most relevant circle(s). We monitor information distributed to each circle to protect users against information aggregation attacks, and also enforce circle boundaries to prevent sensitive information leakage.
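
A minimal sketch of the multi-view idea, with a simple weighted average of per-view similarity matrices standing in for the proposed multi-view clustering model (scikit-learn's spectral clustering is an assumed stand-in):

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def find_circles(views, weights, n_circles):
        # views: list of (n_friends, n_friends) nonnegative similarity matrices,
        # one per observation: shared friends, interactions, content similarity
        combined = sum(w * v for w, v in zip(weights, views))
        combined = (combined + combined.T) / 2      # keep the affinity symmetric
        labels = SpectralClustering(n_clusters=n_circles,
                                    affinity="precomputed",
                                    random_state=0).fit_predict(combined)
        return labels   # labels[i] = circle index assigned to friend i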


MICHAEL JANTZ

Automatic Cross-Layer Framework to Improve Memory Power and Efficiency

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Xin Fu
Andy Gill
Bo Luo
Karen Nordheden

Abstract

Recent computing trends include an increased focus on power and energy consumption and the need to support multi-tenant use cases in which physical resources need to be multiplexed efficiently without causing performance interference. Many recent works have focused on how to best allocate CPU, storage and network resources to meet competing service quality objectives and reduce power. At the same time, data-intensive computing is placing larger demands on physical memory systems than ever before. In comparison to other resources, however, it is challenging to obtain precise control over distribution of memory capacity, bandwidth, or power, when virtualizing and multiplexing system memory. That is because these effects intimately depend upon the results of activity across multiple layers of the vertical execution stack, which are often not available in any individual component. 

The goal of our proposed work is to exercise collaboration between the compiler, operating system, and memory controller for a hybrid memory architecture to reduce energy consumption, while balancing performance trade-offs. Analysis, data structure partitioning, and code layout transformations will be conducted by the compiler and two-way communication between the applications and OS will guide memory management. The OS, together with the hardware memory controller, will allocate, map, and migrate pages to minimize energy consumption for a specified performance tolerance.


NIRANJAN SUNDARARAJAN

Study of Balanced and Unbalanced RFID Tags Attached to Charge Pumps

When & Where:


246 Nichols Hall

Committee Members:

Ken Demarest, Chair
Dan Deavours
Jim Stiles


Abstract

Ultra-high-frequency radio frequency identification (UHF RFID) technology has gained wide prominence in recent years. The main drawback of a UHF RFID tag antenna is that it is sensitive to the environment in which it is placed; that is, the performance of an RFID tag deteriorates when it is placed on conductive or dielectric objects. Most UHF RFID antennas use variations of a balanced folded dipole, such as a T-match antenna. In this project, we answer the question: would it be beneficial to use an unbalanced version of a T-match antenna (a Gamma match antenna) in an RFID tag instead of a conventional balanced T-match antenna? To test this, we analyzed the performance of Gamma match and T-match antennas when attached to a charge pump, which generally acts as the load for the antenna in an RFID tag. We also propose a procedure to find the best impedance with which to drive a charge pump, and outline a simple procedure to design a balanced T-match antenna for any desired input impedance. We then transform a balanced T-match antenna into an unbalanced Gamma match antenna and show that the Gamma match antenna is able to deliver more power and voltage to a charge pump than the T-match antenna. Finally, we validate these results by studying and comparing the Z-parameters of the Gamma match and T-match antennas.
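
A back-of-the-envelope sketch of the comparison criterion: average power delivered to the charge-pump load for two antenna source impedances. All impedance values below are hypothetical and only illustrate the calculation, not the measured antennas.

    import numpy as np

    def power_to_load(v_oc, z_ant, z_load):
        # average power delivered by a source v_oc behind impedance z_ant
        i = v_oc / (z_ant + z_load)
        return 0.5 * np.abs(i) ** 2 * z_load.real

    z_pump = complex(15, -320)   # hypothetical charge-pump input impedance
    v_oc = 1.0                   # same open-circuit drive assumed for both antennas
    for name, z_ant in [("T-match", complex(20, 250)),
                        ("Gamma match", complex(15, 320))]:
        p_mw = 1e3 * power_to_load(v_oc, z_ant, z_pump)
        print(f"{name:12s} P_load = {p_mw:.2f} mW")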


HARIPRASAD SAMPATHKUMAR

A Framework for Information Retrieval and Knowledge Discovery from Online Healthcare Social Networks

When & Where:


246 Nichols Hall

Committee Members:

Bo Luo, Chair
Xue-Wen Chen
Jerzy Grzymala-Busse
Prasad Kulkarni
Jie Zhang

Abstract

Information used to assist biomedical research has largely comprised data available in published sources like scientific literature or clinical sources like patient health records. Information from such sources, though extensive and organized, is often not readily available due to its proprietary and/or privacy-sensitive nature. Collecting such information through clinical and pharmaceutical studies is expensive, and the information is limited by the diversity of people involved in the study. With the growth of Web 2.0, more and more people openly share their health experiences with other similar patients on healthcare-related social networks. The data available in these networks can act as a new source that provides unrestricted, high-volume, highly diverse, and up-to-date information for assisting biomedical and pharmaceutical research. However, this data is often unstructured, noisy, and scattered, making it unsuitable for use in its current form. The goal of this research is to develop an Information Retrieval and Knowledge Discovery framework that is capable of automatically collecting such data from online healthcare networks, extracting useful information, and representing it in a form that facilitates knowledge discovery in biomedical and pharmaceutical research. Information retrieval, text mining, and ontology modeling techniques are employed in building this framework. An Adverse Drug Reaction discovery tool and a patient profiling tool are being developed to demonstrate the utility of this framework.
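
As a toy illustration of one knowledge-discovery step (far simpler than the framework's NLP and ontology components), candidate adverse-drug-reaction signals could be surfaced by counting drug/symptom co-mentions in patient posts; the term lists and posts below are placeholders.

    from collections import Counter

    DRUGS = {"metformin", "atorvastatin"}
    SYMPTOMS = {"nausea", "muscle pain", "dizziness"}

    def adr_signals(posts):
        # count how often each (drug, symptom) pair co-occurs within a post
        pairs = Counter()
        for post in posts:
            text = post.lower()
            for drug in (d for d in DRUGS if d in text):
                for symptom in (s for s in SYMPTOMS if s in text):
                    pairs[(drug, symptom)] += 1
        return pairs.most_common()

    posts = ["Started metformin last month and the nausea is rough.",
             "Atorvastatin gave me muscle pain within a week."]
    print(adr_signals(posts))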


SRINIVAS PALGHAT VISWANATH

Design and Development of a Social Media Aggregator

When & Where:


2001B Eaton Hall

Committee Members:

Fengjun Li, Chair
Victor Frost
Prasad Kulkarni


Abstract

There are many social network aggregators available in the market, e.g., SocialNetwork.in, FriendFeed, Pluggio, Postano, and Hootsuite. A social network aggregator is a one-stop shop that provides a single point of entry to manage operations of multiple social network accounts and keep track of social media streams. Once a user establishes the site credentials on the aggregator, it pulls static data like user profile information and dynamic data like news feeds and user posts. 

This project aims to design a unified interface of static and dynamic data from Facebook, Foursquare, and Twitter for a particular user. Unlike other social aggregators that display dynamic social media stream data in different tabs, each corresponding to a social networking site, we merge dynamic data like the timeline from Facebook and sent tweets from Twitter together and display them in a single stream sorted by posting date. Similarly, the news feed from Facebook and the Twitter home timeline are merged together and can be seen in a single stream. To simplify cross-social-network management, we support unified operations such as posting: new posts/tweets can be posted to both sites at the same time through this application. Users can further specify the access privileges (i.e., visible to the public, friends, friends of friends, or only me) of posts on Facebook, for dynamic privacy protection. 

Last but not least, this aggregator supports integration of user profiles from the three social networks. An edit-distance-based similarity score is calculated to determine the likelihood that profiles from the three social networks belong to the same friend. For those with a perfect score, the matched profiles are combined and displayed in an additional dialog.
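
Two of the mechanisms above admit short sketches: merging per-site streams into one date-sorted feed, and scoring profile similarity with an edit-distance-based ratio (difflib's SequenceMatcher stands in for the project's exact edit-distance score; the field names are hypothetical).

    import heapq
    from difflib import SequenceMatcher

    def merged_stream(*site_feeds):
        # each feed is an iterable of posts already sorted newest-first
        return list(heapq.merge(*site_feeds,
                                key=lambda post: post["posted_at"], reverse=True))

    def profile_similarity(profile_a, profile_b,
                           fields=("name", "location", "bio")):
        # average edit-distance ratio across shared fields; 1.0 = perfect match
        scores = [SequenceMatcher(None, profile_a.get(f, ""),
                                  profile_b.get(f, "")).ratio() for f in fields]
        return sum(scores) / len(scores)

    feed = merged_stream(
        [{"site": "facebook", "posted_at": "2013-04-02", "text": "hi"}],
        [{"site": "twitter", "posted_at": "2013-04-03", "text": "hello"}])
    print([p["site"] for p in feed])                      # newest first
    print(profile_similarity({"name": "J. Doe"}, {"name": "Jane Doe"}))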


BRIGID HALLING

Towards a Formal Verification of the Trusted Platform Module

When & Where:


250 Nichols Hall

Committee Members:

Perry Alexander, Chair
Andy Gill
Fengjun Li


Abstract

The Trusted Platform Module (TPM) serves as the root-of-trust in a trusted computing environment, and therefore warrants formal specification and verification. This thesis presents results of an effort to specify and verify an abstract TPM 1.2 model using PVS that is useful for understanding the TPM and verifying protocols that use it. TPM commands are specified as state transformations and sequenced to represent protocols using a state monad. Preconditions, postconditions, and invariants are specified for individual commands and validated. All specifications are written and verified automatically using the PVS decision procedures and rewriting system.
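
The specification style translates naturally into code; the Python analogue below (the actual work is in PVS, so this is only a shape illustration with toy state) models commands as state transformations with preconditions and sequences them the way a state monad would.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class TPMState:
        owner_installed: bool = False
        pcr0: str = "0" * 40          # toy stand-in for a PCR register

    def take_ownership(state: TPMState) -> TPMState:
        assert not state.owner_installed, "precondition: no owner installed"
        return replace(state, owner_installed=True)

    def extend_pcr0(measurement: str):
        def command(state: TPMState) -> TPMState:
            # a real TPM hashes (old PCR || measurement); concatenation stands in
            return replace(state, pcr0=(state.pcr0 + measurement)[-40:])
        return command

    def run(state, *commands):        # sequencing, as the state monad would do
        for cmd in commands:
            state = cmd(state)
        return state

    final = run(TPMState(), take_ownership, extend_pcr0("deadbeef"))
    print(final.owner_installed, final.pcr0[-8:])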


ANNETTE TETMEYER

A POS Tagging Approach to Capture Security Requirements within an Agile Software Development Process

When & Where:


2001B Eaton Hall

Committee Members:

Hossein Saiedian, Chair
Arvin Agah
Prasad Kulkarni


Abstract

Software use is an inescapable reality. Computer systems are embedded into devices from the mundane to the complex and significantly impact daily life. Increased use expands the opportunity for malicious use, which threatens security and privacy. Factors such as high-profile data breaches, rising costs due to security incidents, competitive advantage, and pending legislation are driving software developers to integrate security into software development rather than adding security after a product has been developed. Security requirements must be elicited, modeled, analyzed, documented, and validated beginning at the initial phases of the software engineering process rather than being added at later stages. However, approaches to developing security requirements have been lacking, which presents barriers to security requirements integration during the requirements phase of software development. In particular, software development organizations working within short development lifecycles (often characterized as an agile lifecycle) and with minimal resources need a lightweight and practical approach to security requirements engineering that can be easily integrated into existing agile processes. 
In this thesis, we present an approach for eliciting, analyzing, prioritizing, and developing security requirements which can be integrated into existing software development lifecycles for small, agile organizations. The approach is based on identifying candidate security goals, categorizing security goals based on security perspectives, understanding the stakeholder goals to develop preliminary security requirements, and prioritizing preliminary security requirements. The identification activity applies part-of-speech (POS) tagging to scan requirements artifacts for security terminology and discover candidate security goals. The categorization activity applies a general security perspective to candidate goals. Elicitation activities are undertaken to gain a deeper understanding of the security goals from stakeholders. Elicited goals are prioritized using risk management techniques, and security requirements are developed from validated goals. Security goals may fail the validation activity, requiring further iterations of analysis, elicitation, and prioritization until stakeholders are satisfied with or have eliminated the security requirement. Finally, candidate security requirements are output, which can be further modeled, defined, and validated using other approaches. A security requirements repository is integrated into our proposed approach for future security requirements refinement and reuse. We validate the framework through an industrial case study with a small, agile software development organization.
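
A minimal sketch of the identification activity, assuming NLTK for POS tagging and an illustrative keyword list (the thesis's actual terminology set and artifacts are not shown):

    import nltk  # first run: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

    SECURITY_TERMS = {"authenticate", "password", "encrypt", "audit", "access"}

    def candidate_security_goals(requirements):
        # flag sentences whose nouns or verbs hit the security keyword list
        candidates = []
        for sentence in requirements:
            tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
            hits = {word.lower() for word, tag in tagged
                    if tag.startswith(("NN", "VB")) and word.lower() in SECURITY_TERMS}
            if hits:
                candidates.append((sentence, sorted(hits)))
        return candidates

    reqs = ["The system shall encrypt stored payment data.",
            "Users can customize their dashboard layout."]
    for sentence, terms in candidate_security_goals(reqs):
        print(terms, "->", sentence)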