Epistemic Models for Security
Lead PI:
Robert Harper
Abstract

Noninterference defines a program to be secure if changes to high-security inputs cannot alter low-security outputs, thereby indirectly stating the epistemic property that no low-security principal acquires knowledge of high-security data. We consider a directly epistemic account of information-flow control, focusing on the knowledge flows engendered by the program's execution. Storage effects are of primary interest, since principals acquire and disclose knowledge from the execution only through these effects. The information-flow properties of the individual effectful actions are characterized using a substructural epistemic logic that accounts for the knowledge transferred through their execution. We prove that a low-security principal never acquires knowledge of a high-security input by executing a well-typed program. Moreover, the epistemic approach facilitates going beyond noninterference to account for authorized declassification: we prove that a low-security principal acquires knowledge of a high-security input only if there is an authorization proof.
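
For reference, the classical statement of noninterference that this abstract paraphrases can be sketched as follows (a generic textbook formulation, not the paper's own notation):

    \[
      s_1 \approx_L s_2 \;\Longrightarrow\; \llbracket P \rrbracket(s_1) \approx_L \llbracket P \rrbracket(s_2)
    \]

where \(s_1 \approx_L s_2\) means the two input states agree on all low-security parts. Varying the high-security inputs therefore cannot change anything a low-security principal can observe, which is exactly the indirect epistemic reading the project makes direct.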

TEAM

PI: Robert Harper

An Investigation of Scientific Principles Involved in Attack-Tolerant Software
Lead PI:
Mladen Vouk
Abstract

High-assurance systems, for which security is especially critical, should be designed to (a) auto-detect attacks (even when correlated); (b) isolate or interfere with the activities of a potential or actual attack; and (c) recover a secure state and continue, or fail safely. Fault-tolerant (FT) systems use forward or backward recovery to continue normal operation despite the presence of hardware or software failures. Similarly, an attack-tolerant (AT) system would recognize security anomalies, possibly identify user “intent”, and effect an appropriate defense and/or isolation. Some of the underlying questions in this context are: How is a security anomaly different from a “normal” anomaly, and how does one reliably recognize it? How does one recognize user intent? How does one deal with security failure-correlation issues? What is the appropriate safe response to potential security anomaly detection? The key hypothesis is that all security attacks produce an anomalous state signature that is detectable at run-time, given enough appropriate system, environment, and application provenance information. If that is true (and we plan to test it), then fault-tolerance technology (existing or newly developed) may be used successfully to prevent or mitigate a security attack. A range of AT technologies will be reviewed, developed, and assessed.
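
As a minimal illustration of the run-time detection idea (the features and threshold below are hypothetical, not the project's detector), a system state could be scored against a baseline learned from normal-operation provenance data:

    import numpy as np

    def fit_baseline(samples):
        """Estimate per-feature mean and std from normal-operation data."""
        return samples.mean(axis=0), samples.std(axis=0) + 1e-9

    def anomaly_score(state, mean, std):
        """Max absolute z-score across features; large values suggest an anomalous state."""
        return float(np.max(np.abs((state - mean) / std)))

    # Hypothetical features: e.g., syscall rate, memory churn, outbound I/O volume.
    baseline = fit_baseline(np.random.default_rng(0).normal(size=(1000, 3)))
    state = np.array([0.1, 8.0, 0.2])          # one run-time observation
    if anomaly_score(state, *baseline) > 4.0:  # threshold is illustrative
        print("possible attack signature: isolate and attempt recovery")

Whether real attacks always leave such a signature, and which provenance features expose it, is precisely the hypothesis the project plans to test.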

TEAM

PI: Mladen Vouk
Student: Da Young Lee

Understanding the Fundamental Limits in Passive Inference of Wireless Channel Characteristics
Lead PI:
Huaiyu Dai
Abstract

It is widely accepted that wireless channels decorrelate fast over space, and half a wavelength is the key distance metric used in existing wireless physical-layer security mechanisms for security assurance. We believe that this channel correlation model is incorrect in general: it leads to wrong hypotheses about the inference capability of a passive adversary and results in a false sense of security, which will expose legitimate systems to severe threats with little awareness. In this project, we focus on establishing correct modeling of channel correlation in wireless environments of interest, and on properly evaluating the safety distance metric of existing and emerging wireless security mechanisms, as well as of cyber-physical systems employing these security mechanisms. Upon successful completion of the project, the expected outcomes will allow us to accurately determine key system parameters (e.g., the security zone for secret key establishment from wireless channels) and confidently assess the security assurance of wireless security mechanisms. More importantly, the results will correct the previous misconception of channel decorrelation and help security researchers develop new wireless security mechanisms based on a proven scientific foundation.
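
For context, the half-wavelength rule of thumb comes from the classical Clarke/Jakes uniform-scattering model, under which the spatial correlation of the channel at antenna separation \(d\) is the standard result

    \[
      \rho(d) = J_0\!\left(\frac{2\pi d}{\lambda}\right),
    \]

where \(J_0\) is the zeroth-order Bessel function of the first kind and \(\lambda\) the carrier wavelength; \(\rho(d)\) first vanishes near \(d \approx 0.38\lambda\), which motivates the half-wavelength safety margin. When scattering is not rich and isotropic, as this project argues is common in practice, the model, and hence the margin, can fail.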

TEAM

PIs: Huaiyu Dai, Peng Ning
Student: Xiaofan He

Modeling the Risk of User Behavior on Mobile Devices
Lead PI:
Ben Watson
Abstract

It is already true that the majority of users' computing experience is a mobile one. Unfortunately, that mobile experience is also riskier: users are often multitasking, hurrying, or uncomfortable, leading them to make poor decisions. Our goal is to use mobile sensors to predict when users are distracted in these ways and likely to behave insecurely. We will study this possibility in a series of lab and field experiments.
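
A minimal sketch of the sensing idea, with entirely hypothetical features and synthetic labels (the planned experiments will determine which signals actually predict distraction):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    # Hypothetical per-session features: accelerometer variance (walking),
    # touch error rate, ambient noise level.
    X = rng.normal(size=(200, 3))
    # Synthetic stand-in labels: 1 = user was distracted during the session.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba(X[:1]))  # estimated probability the user is distracted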

TEAM

PIs: Benjamin Watson, Will Enck, Anne McLaughlin, Michael Rappa

An Adoption Theory of Secure Software Development Tools
Lead PI:
Emerson Murphy-Hill
Abstract

Programmers interact with a variety of tools that help them do their jobs, from "undo" to FindBugs' security warnings to entire development environments. However, programmers typically know about only a small subset of tools that are available, even when many of those tools might be valuable to them. In this project, we investigate how and why software developers find out about -- and don't find out about -- software security tools. The goal of the project is to help developers use more relevant security tools, more often.

TEAM

PI: Emerson Murphy-Hill
Student: Jim Witschey

Low-level Analytics Models of Cognition for Novel Security Proofs
Abstract

A key concern in security is identifying differences between human users and “bot” programs that emulate humans. Users with malicious intent will often utilize widespread computational attacks in order to exploit systems and gain control. Conventional detection techniques can be grouped into two broad categories: human observational proofs (HOPs) and human interactive proofs (HIPs). The key distinguishing feature of these techniques is the degree to which human participants are actively engaged with the “proof.” HIPs require explicit action on the part of users to establish their identity (or at least distinguish them from bots). HOPs, on the other hand, are passive: they examine the ways in which users complete the tasks they would normally be completing and look for patterns that are indicative of humans vs. bots. Both HIPs and HOPs have significant limitations. HOPs are susceptible to imitation attacks, in which bots carry out scripted actions designed to look like human behavior. HIPs tend to be more secure because they require explicit action from a user to complete a dynamically generated test, but because humans must expend cognitive effort to pass them, HIPs can be disruptive and reduce productivity. We are developing the knowledge and techniques to enable “Human Subtlety Proofs” (HSPs) that blend the stronger security characteristics of HIPs with the unobtrusiveness of HOPs. HSPs will improve security by providing a new avenue for actively securing systems from non-human users.
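
To make the HOP idea concrete, here is a toy passive check (illustrative only, not the project's technique): scripted bot input often shows inter-event timing that is too regular for a human.

    import statistics

    def looks_scripted(event_times_ms, cv_threshold=0.05):
        """Flag input whose inter-event intervals are suspiciously regular.

        Humans show substantial timing variability; a coefficient of variation
        (std/mean) near zero suggests scripted, bot-like behavior.
        """
        gaps = [b - a for a, b in zip(event_times_ms, event_times_ms[1:])]
        cv = statistics.stdev(gaps) / statistics.mean(gaps)
        return cv < cv_threshold

    print(looks_scripted([0, 100, 200, 300, 400]))  # True: machine-regular timing
    print(looks_scripted([0, 130, 210, 390, 420]))  # False: human-like jitter

A check like this never interrupts the user, which is the HOP advantage; its weakness is exactly the imitation attack noted above, since a bot can inject artificial jitter.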

TEAM

PIs: David Roberts, Robert St. Amant
Students: Titus Barik, Arpan Chakraborty, Brent Harrison

Normative Trust: Toward a Principled Basis for Enabling Trustworthy Decision Making
Lead PI:
Munindar Singh
Abstract

This project seeks to develop a deeper understanding of trust than is supported by current methods, which largely disregard the underlying relationships based on which people do or do not trust each other. Accordingly, we begin from the notion of what we term normative relationships, or norms for short, directed from one principal to another. An example of a normative relationship is a commitment: is the first principal committed to doing something for the second principal? (The other main types of normative relationships are authorizations, prohibitions, powers, and sanctions.) Our broad research hypothesis is that trust can be modeled in terms of the relevant norms being satisfied or violated. To demonstrate the viability of this approach, we are mining commitments from emails (drawn from the well-known Enron dataset) and using them to assess trust. Preliminary results indicate that our methods can effectively estimate the trust-judgment profiles of human subjects.
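
As a minimal sketch of the hypothesis that trust tracks norm outcomes (a hypothetical representation; the project mines such commitments from email text), trust could be scored as the fraction of a principal's commitments that were satisfied:

    from dataclasses import dataclass

    @dataclass
    class Commitment:
        debtor: str      # the principal who committed
        creditor: str    # the principal the commitment is directed to
        satisfied: bool  # whether the debtor discharged the commitment

    def trust(creditor, debtor, history):
        """Fraction of the debtor's commitments to this creditor that were satisfied."""
        relevant = [c for c in history if c.debtor == debtor and c.creditor == creditor]
        if not relevant:
            return 0.5  # no evidence either way
        return sum(c.satisfied for c in relevant) / len(relevant)

    history = [Commitment("alice", "bob", True), Commitment("alice", "bob", False)]
    print(trust("bob", "alice", history))  # 0.5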

TEAM

PI: Munindar Singh
Student: Anup Kalia

Munindar Singh

Dr. Munindar P. Singh is Alumni Distinguished Graduate Professor in the Department of Computer Science at North Carolina State University. He is a co-director of the DoD-sponsored Science of Security Lablet at NCSU, one of six nationwide. Munindar’s research interests include computational aspects of sociotechnical systems, especially as a basis for addressing challenges such as ethics, safety, resilience, trust, and privacy in connection with AI and multiagent systems.

Munindar is a Fellow of AAAI (Association for the Advancement of Artificial Intelligence), AAAS (American Association for the Advancement of Science), ACM (Association for Computing Machinery), and IEEE (Institute of Electrical and Electronics Engineers), and was elected a foreign member of Academia Europaea (honoris causa). He has won the ACM/SIGAI Autonomous Agents Research Award, the IEEE TCSVC Research Innovation Award, and the IFAAMAS Influential Paper Award. He won NC State University’s Outstanding Graduate Faculty Mentor Award as well as the Outstanding Research Achievement Award (twice). He was selected as an Alumni Distinguished Graduate Professor and elected to NCSU’s Research Leadership Academy.

Munindar was the editor-in-chief of the ACM Transactions on Internet Technology from 2012 to 2018 and the editor-in-chief of IEEE Internet Computing from 1999 to 2002. His current editorial service includes IEEE Internet Computing, Journal of Artificial Intelligence Research, Journal of Autonomous Agents and Multiagent Systems, IEEE Transactions on Services Computing, and ACM Transactions on Intelligent Systems and Technology. Munindar served on the founding board of directors of IFAAMAS, the International Foundation for Autonomous Agents and MultiAgent Systems. He previously served on the editorial board of the Journal of Web Semantics. He also served on the founding steering committee for the IEEE Transactions on Mobile Computing. Munindar was a general co-chair for the 2005 International Conference on Autonomous Agents and MultiAgent Systems and the 2016 International Conference on Service-Oriented Computing.

Munindar’s research has been recognized with awards and sponsorship by (alphabetically) Army Research Lab, Army Research Office, Cisco Systems, Consortium for Ocean Leadership, DARPA, Department of Defense, Ericsson, Facebook, IBM, Intel, National Science Foundation, and Xerox.

Twenty-nine students have received Ph.D. degrees and thirty-nine students MS degrees under Munindar’s direction.
