Scalable Trust Semantics & Infrastructure
Lead PI:
Perry Alexander
Abstract

Remote attestation provides a run-time capability for appraising system behavior and establishing trust. Using remote attestation, an appraiser requests evidence describing a target. The target responds by performing measurement to gather evidence, then adding cryptographic signatures to assure integrity and authenticity. The appraiser assesses the evidence to determine whether the target is who and what it claims to be.
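The round trip can be made concrete with a short sketch. The code below is illustrative only: it uses an HMAC over a nonce and a hash of system state as stand-ins for TPM-backed keys, asymmetric signatures, and real measurement agents.

```python
# Minimal sketch of the attestation round trip described above.
# The HMAC "signature" and shared key are illustrative stand-ins; a real
# deployment would use hardware-protected keys and asymmetric signatures.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # stand-in for a hardware-protected signing key

def measure(target_state: dict) -> bytes:
    """Measurement: hash the evidence describing the target."""
    return hashlib.sha256(json.dumps(target_state, sort_keys=True).encode()).digest()

def attest(target_state: dict, nonce: bytes) -> dict:
    """Target side: gather evidence, bind it to the nonce, sign it."""
    evidence = measure(target_state)
    sig = hmac.new(SHARED_KEY, nonce + evidence, hashlib.sha256).digest()
    return {"evidence": evidence, "signature": sig}

def appraise(report: dict, nonce: bytes, golden: bytes) -> bool:
    """Appraiser side: check integrity/authenticity, then compare the
    evidence against the expected ("golden") measurement."""
    expected = hmac.new(SHARED_KEY, nonce + report["evidence"], hashlib.sha256).digest()
    return hmac.compare_digest(expected, report["signature"]) and report["evidence"] == golden

nonce = b"fresh-nonce"                       # freshness: prevents replay
state = {"kernel": "5.15", "app_hash": "abc123"}
golden = measure(state)                      # appraiser's expected value
print(appraise(attest(state, nonce), nonce, golden))  # True
```

The nonce supplies freshness, the signature supplies integrity and authenticity, and the comparison against a golden value is the appraisal step.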

Remote attestation has enormous potential for establishing trust in highly distributed IoT and cyber-physical systems. However, significant work remains to build an overarching science of remote attestation. Successful completion of this project will result in a science of trust and remote attestation for cyber-physical systems. Specifically:

  • Semantics of trust—Definitions of trust and metrics for soundness of evaluation and appraisal
  • Semantics of measurement, attestation and appraisal—Metrics for soundness and sufficiency of evidence, semantic mechanisms for identity and attestation, formal definitions of evidence and meta-evidence appraisal
  • Systematic mechanisms for establishing roots of trust—Metrics for evaluating roots of trust and general mechanisms for establishing roots of trust on cyber-physical systems
  • Attestation protocol representation and semantics—Formal, executable representations for attestation protocols and tools for static analysis
  • Implementing and scaling trust infrastructure—Hierarchical frameworks for trust infrastructure including virtualized TPM implementations, trust aggregation and trust as a service
Perry Alexander

Perry Alexander is the AT&T Foundation Distinguished Professor of Electrical Engineering and Computer Science and Director of the Institute for Information Sciences at the University of Kansas. His teaching interests include formal methods, programming languages and semantics, digital systems design, and software engineering. His research interests include formal methods, system-level design, trusted computing, design and specification language semantics, and component retrieval.

Institution: University of Kansas
Sponsor: National Security Agency
Formal Approaches to the Ontology & Epistemology of Resilience
Lead PI:
John Symons
Abstract

Security Science requires reflection on its foundational concepts. Our contention is that in order to make informed decisions about trade-offs with respect to resilient properties of systems we must first precisely characterize the differences between the mechanisms underlying valuable functions, those functions themselves, and the conditions underlying the persistence of the systems in question.

In practice, we recognize that some systems are more fragile than others. Clearly, some communities, cultural practices, or corporations are more susceptible to disruption than others. Common sense can guide judgments about resilience only in a very narrow range of cases. Common sense and experience tell us, for example, that a book club is likely to be a more fragile community than a scout troop. But beyond a very informal qualitative feel for the distinction between more and less resilient systems, common-sense intuitions are unlikely to serve as a good guide to what is and isn’t resilient.

We are sometimes surprised in dramatic ways. The Soviet Union was far less robust than the intelligence community in the United States had thought in the 1980s, but the global financial system was far more robust than many had expected in 2008.

A system or network can be resilient either by being difficult to destroy or by being able to recover from attacks quickly. Resilient institutions such as Oxford and Cambridge Universities, the Catholic and Eastern Orthodox churches, and long-lasting Japanese and Dutch business enterprises have persisted for centuries or millennia through dramatic shocks and direct attacks. The resilience of these systems resists easy explanation. Security Science has focused on network-based measures of resilience. This is a valuable formal approach, but its range of application is narrower than the general problem requires. To make progress on these questions, a broader theoretical approach is required, and we will need to call on a range of other formal and informal methods.
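For readers unfamiliar with the network-based measures mentioned above, one standard example is the fraction of nodes remaining in the largest connected component after targeted node removal. The sketch below is a toy illustration of that measure, not an analysis from this project.

```python
# One common network-based resilience measure, in plain Python: the
# fraction of nodes left in the largest connected component after a
# targeted removal of the highest-degree node. The graph is invented.
from collections import deque

def largest_component_fraction(adj: dict) -> float:
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        queue, comp = deque([start]), 0
        seen.add(start)
        while queue:                      # BFS over one component
            node = queue.popleft()
            comp += 1
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, comp)
    return best / len(adj) if adj else 0.0

def remove_node(adj: dict, victim) -> dict:
    return {n: [m for m in nbrs if m != victim] for n, nbrs in adj.items() if n != victim}

# A hub-and-spoke network: resilient to random loss, fragile to targeted attack.
graph = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
hub = max(graph, key=lambda n: len(graph[n]))
print(largest_component_fraction(graph))                    # 1.0 intact
print(largest_component_fraction(remove_node(graph, hub)))  # 0.25 after hub removal
```

The example also shows the narrowness noted above: before such a measure can be applied at all, everything about an institution must first be flattened into nodes and edges.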

When we say that a system persists, we can mean a variety of things. If we consider an electrical power system or a communications network, for example, our initial evaluation of persistence might involve deciding whether or not the system continues to function. Is the grid continuing to deliver power where it is needed? Is it still possible to send and receive messages reliably through the communications network? This is a functional account of the individuation of systems. The functional account is foundational to contemporary thinking in the science of security. While it is an intuitively sensible and pragmatically grounded way of thinking about systems, it does not shed light on the question of resilience. Functions are also difficult to capture in a purely network-theoretic strategy, for reasons that this research group will explore and explain.

Resilience is certainly tied to function in important ways. If what we value about a communications network is its functional properties, we are likely to think it more resilient if it continues to perform its functions reliably. While pragmatic considerations are important, conditions for persistence or individuation are not properly understood in terms of our pragmatic preferences with respect to the functional properties of systems. The fact that it is important to us that the network functions in accordance with our interests is distinct from the question of what it is that makes the network resilient. We might have, for example, an invulnerably resilient network with less-than-ideal functionality. As we decide on trade-offs in the context of security, it is necessary to understand distinctions of this kind.

Philosophers have tackled the problem of determining the correct approach to ontological questions (questions about the nature of the kinds of things that exist) and can shed light on many of the questions concerning resilience. Not only are many philosophers familiar with the graph theoretic foundations of network theory, but they are also used to dealing with questions concerning persistence using techniques from modal logic and category theory. More importantly, philosophers are used to recognizing distinctions in these domains that others often miss.

It is the contention of this group, for example, that excessive attention to abstract functional-level descriptions can distract us from other aspects of systems that contribute to resilience and are important to defend.

In order to understand why some systems are resilient and others are not, we propose to apply existing work in philosophy of science and metaphysics. Successful completion of this research effort will result in principled and formally tractable ways to think about the differences between:

  • Conditions for the individuation of systems
  • Conditions for the identification of systems
  • Properties that contribute to the persistence of systems
  • Properties that contribute to the functional reliability of systems
John Symons

Dr. Symons is a professor of philosophy at KU and a member of The Academic Center for Biomedical and Health Humanities (HealthHum). His current work is centered on philosophy of technology, with ties to formal epistemology, philosophy of psychology, and the metaphysics of emergence.

As Director of the Center for Cyber-Social Dynamics, Dr. Symons engages in the interdisciplinary and cross-cultural study of the relationship between internet and data-driven technologies and society, politics, and culture in order to help our communities to mindfully and ethically shape technologies to promote human flourishing.

Institution: University of Kansas
Sponsor: National Security Agency
Cloud-Assisted IoT Systems Privacy
Abstract

The key to realizing the smart functionalities envisioned through the Internet of Things (IoT) is to securely and efficiently communicate, store, and make sense of the tremendous amounts of data generated by IoT devices. Because IoT devices are computational units with strict performance and energy constraints, integrating IoT with the cloud platform for its computing and big-data analysis capabilities becomes increasingly important. However, when data is transferred among interconnected devices or to the cloud, new security and privacy issues arise. In this project, we investigate the privacy threats in cloud-assisted IoT systems, in which heterogeneous and distributed data are collected, integrated, and analyzed by different IoT applications. The goal of the project is to develop a privacy threat analysis framework that provides a systematic methodology for modeling privacy threats in cloud-assisted IoT systems.

Successful completion of this project will result in: (i) a systematic methodology to model privacy threats in the data communication, storage, and analysis processes of cloud-assisted IoT systems; (ii) a privacy threat analysis framework with an extensive catalogue of application-specific privacy needs and a privacy-specific threat categorization; and (iii) a privacy protection framework that maps existing privacy-enhancing technologies (PETs) to the identified privacy needs and threats of IoT applications, simplifying the selection of sound privacy protection countermeasures.
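As a hypothetical illustration of what the mapping in item (iii) might look like in code, the sketch below links stage-specific threat categories to candidate PETs. The stages, threats, and PETs are invented examples, not the project's actual catalogue.

```python
# Hypothetical sketch of a threat-to-PET mapping for cloud-assisted IoT
# data flows. Every entry here is an illustrative example only.
from dataclasses import dataclass, field

@dataclass
class ThreatEntry:
    stage: str                                 # communication, storage, or analysis
    threat: str                                # privacy-specific threat category
    pets: list = field(default_factory=list)   # candidate countermeasures

CATALOGUE = [
    ThreatEntry("communication", "identification via traffic metadata",
                ["onion routing", "traffic padding"]),
    ThreatEntry("storage", "linkage across datasets",
                ["pseudonymization", "k-anonymity"]),
    ThreatEntry("analysis", "inference of sensitive attributes",
                ["differential privacy", "secure aggregation"]),
]

def recommend(stage: str) -> list:
    """Map an application's stage-specific privacy needs to candidate PETs."""
    return [pet for e in CATALOGUE if e.stage == stage for pet in e.pets]

print(recommend("analysis"))  # ['differential privacy', 'secure aggregation']
```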

Institution: University of Kansas
Sponsor: National Security Agency
Uncertainty in Security Analysis
Lead PI:
David Nicol
Abstract

Cyber-physical system (CPS) security lapses may lead to catastrophic failure. We are interested in the scientific basis for discovering unique CPS security vulnerabilities to stepping-stone attacks, which penetrate through a network of intermediate hosts to the ultimate targets, the compromise of which leads to instability, unsafe behaviors, and ultimately diminished availability. Our project advances this scientific basis through design and evaluation of CPS, driven by uncertainty-aware formalization of system models, adversary classes, and security metrics. We propose to define metrics and to develop and study analysis algorithms that provide formal guarantees on those metrics with respect to different adversary classes and different defense mechanisms.
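To show the flavor of one such metric, the sketch below computes the most likely stepping-stone path from an entry host to a CPS target by multiplying per-host compromise probabilities. The topology, probabilities, and the metric itself are assumptions made up for illustration; the project's uncertainty-aware models are far richer.

```python
# Illustrative stepping-stone metric: the highest-probability attack
# path from an entry point ("web") to a CPS target ("plc"), assuming
# independent per-host compromise probabilities. All values invented.
COMPROMISE_P = {"web": 0.6, "hist": 0.4, "eng": 0.3, "plc": 0.2}
EDGES = {"web": ["hist", "eng"], "hist": ["plc"], "eng": ["plc"], "plc": []}

def best_path(node: str, target: str, p: float = 1.0, seen=frozenset()):
    """Depth-first search for the most likely attack path, multiplying
    per-host compromise probabilities along the way."""
    p *= COMPROMISE_P[node]
    if node == target:
        return p, [node]
    best = (0.0, [])
    for nxt in EDGES[node]:
        if nxt not in seen:
            q, path = best_path(nxt, target, p, seen | {node})
            if q > best[0]:
                best = (q, [node] + path)
    return best

prob, path = best_path("web", "plc")
print(path, round(prob, 3))  # ['web', 'hist', 'plc'] 0.048
```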

David Nicol

Prof. David M. Nicol is the Herman M. Dieckamp Endowed Chair in Engineering at the University of Illinois at Urbana-Champaign, and a member of the Department of Electrical and Computer Engineering. He also serves as the Director of the Information Trust Institute (iti.illinois.edu) and the Director of the Advanced Digital Sciences Center (Singapore). He is PI for two national centers for infrastructure resilience: the DHS-funded Critical Infrastructure Resilience Institute (ciri.illinois.edu) and the DoE-funded Cyber Resilient Energy Delivery Consortium (cred-c.org); he is also PI for the Boeing Trusted Software Center, and co-PI for the NSA-funded Science of Security lablet.

Prior to joining UIUC in 2003, he served on the faculties of the computer science departments at Dartmouth College (1996-2003) and, before that, the College of William and Mary (1987-1996). He has won recognition for excellence in teaching at all three universities. His research interests include trust analysis of networks and software, analytic modeling, and parallelized discrete-event simulation, work that has led to the founding of the startup company Network Perception and to his election as Fellow of the IEEE and Fellow of the ACM. He is the inaugural recipient of the ACM SIGSIM Outstanding Contributions award and co-author of the widely used undergraduate textbook “Discrete-Event System Simulation”.

Nicol holds a B.A. (1979) in mathematics from Carleton College, and M.S. (1983) and Ph.D. (1985) degrees in computer science from the University of Virginia.

Institution: University of Illinois at Urbana-Champaign
Sponsor: National Security Agency
Monitoring, Fusion, and Response for Cyber Resilience
Lead PI:
William Sanders
Abstract

We believe that diversity and redundancy can help us prevent an attacker from hiding all of his or her traces. Therefore, we will strategically deploy diverse security monitors and build a set of techniques to combine information originating at the monitors. We have shown that monitor deployment can be formulated as a constrained optimization problem in which the objective function is the utility of monitors in detecting intrusions. In this project, we will develop methods to select and place diverse monitors at different architectural levels in the system and to evaluate the trustworthiness of the data generated by the monitors. We will build event aggregation and correlation algorithms to draw inferences for intrusion detection. Those algorithms will combine the events and alerts generated by the deployed monitors with important system-related information, including information on the system architecture, users, and vulnerabilities. Since rule-based detection systems fail to detect novel attacks, we will adapt and extend existing anomaly detection methods. We will build on our previous SoS-funded work that resulted in the development of special-purpose intrusion detection methods.
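One way to see the constrained-optimization formulation is as a budgeted coverage problem. The sketch below greedily selects monitors that add the most uncovered detection capability under a cost budget; the monitors, costs, and event types are invented for illustration, and the project's actual objective and solution method may differ.

```python
# Toy monitor-deployment optimization: greedily pick monitors that
# maximize marginal coverage of attack-event types under a cost budget.
MONITORS = {
    "net_ids":       {"cost": 3, "detects": {"scan", "exfil"}},
    "host_audit":    {"cost": 2, "detects": {"privesc", "persist"}},
    "vm_introspect": {"cost": 4, "detects": {"privesc", "rootkit", "exfil"}},
}
BUDGET = 5

def deploy(monitors: dict, budget: int):
    chosen, covered = [], set()
    while True:
        best, gain = None, 0
        for name, m in monitors.items():
            new = len(m["detects"] - covered)   # marginal coverage
            if name not in chosen and m["cost"] <= budget and new > gain:
                best, gain = name, new
        if best is None:                        # nothing affordable helps
            return chosen, covered
        chosen.append(best)
        budget -= monitors[best]["cost"]
        covered |= monitors[best]["detects"]

print(deploy(MONITORS, BUDGET))  # (['vm_introspect'], {'privesc', 'rootkit', 'exfil'})
```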

William Sanders
Institution: University of Illinois at Urbana-Champaign
Sponsor: National Security Agency
Automated Synthesis Framework For Network Security and Resilience
Lead PI:
Matt Caesar
Abstract

We propose to develop the analysis methodology needed to support scientific reasoning about the resilience and security of networks, with a particular focus on network control and information/data flow. The core of this vision is an automated synthesis framework (ASF), which will automatically derive network state and repairs from a set of specified correctness requirements and security policies. ASF consists of a set of techniques for performing and integrating security and resilience analyses applied at different layers (i.e., data forwarding, network control, programming language, and application software) in a real-time and automated fashion. The ASF approach is exciting because developing it adds to the theoretical underpinnings of SoS, while using it supports the practice of SoS.
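As a toy rendering of the ASF idea at the data-forwarding layer, the sketch below checks one reachability requirement against forwarding state and proposes a candidate repair when the check fails. The topology, the policy encoding, and the repair strategy are simplified stand-ins for the framework's real analyses.

```python
# Toy synthesis loop: trace a prefix through forwarding state, and when
# a reachability policy fails, propose the missing forwarding entry.
# All names and the repair heuristic are invented for this sketch.
FIB = {("A", "10.0.0.0/24"): "B", ("B", "10.0.0.0/24"): None}  # B drops the prefix
POLICY = {"src": "A", "prefix": "10.0.0.0/24", "must_reach": "C"}
LINKS = {"A": ["B"], "B": ["C"], "C": []}

def trace(node, prefix):
    """Follow forwarding entries until the packet is delivered or dropped."""
    path = [node]
    while True:
        nxt = FIB.get((node, prefix))
        if nxt is None:
            return path
        path.append(nxt)
        node = nxt

def synthesize_repair(policy):
    path = trace(policy["src"], policy["prefix"])
    if path[-1] == policy["must_reach"]:
        return None                      # requirement already holds
    last = path[-1]
    for nbr in LINKS[last]:              # propose the missing entry
        return {"node": last, "prefix": policy["prefix"], "next_hop": nbr}

print(synthesize_repair(POLICY))
# {'node': 'B', 'prefix': '10.0.0.0/24', 'next_hop': 'C'}
```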

Matt Caesar
Institution: University of Illinois at Urbana-Champaign
Sponsor: National Security Agency
Principles of Secure Bootstrapping for IoT
Lead PI:
Ninghui Li
Abstract

This project seeks to aid developers in designing and implementing protocols for establishing mutual trust between users, Internet of Things (IoT) devices, and their intended environment by identifying principles of secure bootstrapping, including tradeoffs among security objectives, device capabilities, and usability.

Ninghui Li
Institution: North Carolina State University
Sponsor: National Security Agency
Predicting the Difficulty of Compromise through How Attackers Discover Vulnerabilities
Lead PI:
Andy Meneely
Abstract

The goal of this project is to aid security engineers in predicting the difficulty of system compromises through the development and evaluation of attack surface measurement techniques based upon attacker-centric vulnerability discovery processes.

Andy Meneely
Institution: North Carolina State University
Sponsor: National Security Agency
Coordinated Machine Learning-Based Vulnerability & Security Patching for Resilient Virtual Computing Infrastructure
Lead PI:
Xiaohui (Helen) Gu
Abstract

This research aims to aid administrators of virtualized computing infrastructures in making services more resilient to security attacks by applying machine learning to reduce both the security and the functionality risks of software patching: patched and unpatched software is continually monitored to discover vulnerabilities and to trigger proper security updates.

Xiaohui (Helen) Gu
Institution: North Carolina State University
Sponsor: National Security Agency
Operationalizing Contextual Integrity
Lead PI:
Serge Egelman
Abstract

According to Nissenbaum’s theory of contextual integrity (CI), protecting privacy means ensuring that personal information flows appropriately; it does not mean that no information flows (e.g., confidentiality), or that it flows only if the information subject allows it (e.g., control). Flow is appropriate if it conforms to legitimate, contextual informational norms. Contextual informational norms prescribe information flows in terms of five parameters: actors (sender, subject, recipient), information types, and transmission principles. Actors and information types range over respective contextual ontologies. Transmission principles (a term introduced by the theory) range over the conditions or constraints under which information flows: for example, confidentially, mandated by law, with notice, with consent, in accordance with the subject’s preference, and so on. The theory holds that our privacy expectations are a product of informational norms, meaning that people will judge particular information flows as respecting or violating privacy according to whether or not, to a first approximation, they conform to contextual informational norms. If they do, we say contextual integrity has been preserved.
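The five-parameter structure lends itself directly to a formal encoding. The sketch below, with an invented healthcare norm, shows the theory's machinery as code: a flow is a five-tuple, and contextual integrity is preserved exactly when the flow conforms to some contextual norm.

```python
# The five CI parameters as a flow tuple, with norm conformance as a
# membership check. The norm and flows are invented examples of the
# theory's machinery, not findings of this project.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    subject: str
    recipient: str
    info_type: str
    transmission_principle: str

# A contextual informational norm: in a healthcare context, a patient's
# medical record may flow from physician to specialist with consent.
NORMS = {
    Flow("physician", "patient", "specialist", "medical record", "with consent"),
}

def preserves_contextual_integrity(flow: Flow) -> bool:
    """A flow is appropriate iff it conforms to some contextual norm."""
    return flow in NORMS

ok = Flow("physician", "patient", "specialist", "medical record", "with consent")
bad = Flow("physician", "patient", "advertiser", "medical record", "for profit")
print(preserves_contextual_integrity(ok), preserves_contextual_integrity(bad))  # True False
```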

The theory has been recognized in policy arenas, has been formalized, has guided empirical social science research, and has shaped system development. Yet, despite resolving many longstanding privacy puzzles and showing promise in practical realms, its direct application to pressing needs of design and policy has proven challenging. One challenge is that the theory requires knowledge of data flows, and in practice systems may not be able to provide this, particularly once data leaves a device. The challenge of bridging theory and practice (in this case, grounding scientific research and design practice in the theory of CI) is not only tractable but, with sufficient effort devoted to operationalizing the relevant concepts, could enhance our methodological toolkit for studying individuals’ understandings and valuations of privacy in relation to data-intensive technologies, and could yield principles to guide design.

In our view, capturing people’s complex attitudes toward privacy, including expectations and preferences in situ, will require methodological innovation and new techniques that apply the theory of contextual integrity. These methodologies and techniques have to accommodate the five independent parameters of contextual norms, scale to the diverse contexts in which privacy decision-making takes place, and be sensitive not only to the variety of preferences and expectations within respective contexts but also to the distinction between preferences and expectations. What we learn about privacy attitudes by following such methods should serve in the discovery and identification of contextual informational norms, and should yield results rigorous enough to serve as a foundation for the design of effective privacy interfaces. The first outcome informs public policy and law with information about what people generally expect and what is generally viewed as objectionable; the second informs designers not only about mechanisms that help people make informed decisions but also about what substantive constraints on flow should or could be implemented within a design. Instead of ubiquitous “notice and choice” regimes, the project will aim to identify situations where clear norms (for example, those identified through careful study) can be embedded in technology (systems, applications, platforms) as constraints on flow, and situations where no such norms emerge, so that variations may be selected according to user preferences. Thus, this project will yield a set of practical, usable, and scalable technologies and tools that can be applied to both existing and future technologies, thereby providing a scientific basis for future privacy research.

Serge Egelman

Serge Egelman is the Research Director of the Usable Security and Privacy group at the International Computer Science Institute (ICSI), an independent research institute affiliated with the University of California, Berkeley. He is also Chief Scientist and co-founder of AppCensus, Inc., which is commercializing his research by performing on-demand privacy analysis of mobile apps for compliance purposes. He conducts research to help people make more informed online privacy and security decisions, and is generally interested in consumer protection. This has included improvements to web browser security warnings, authentication on social networking websites, and, most recently, privacy on mobile devices. Seven of his research publications have received awards at the ACM CHI conference, the top venue for human-computer interaction research; his research on privacy on mobile platforms has received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, the USENIX Security Distinguished Paper Award, and privacy research awards from two different European data protection authorities, CNIL and AEPD. His research has been cited in numerous lawsuits and regulatory actions, as well as featured in the New York Times, Washington Post, Wall Street Journal, Wired, CNET, NBC, and CBS. He received his PhD from Carnegie Mellon University and previously performed research at Xerox PARC, Microsoft, and NIST.

Institution: International Computer Science Institute, Cornell Tech
Sponsor: National Security Agency