ABOUT THE PROJECT:
OUR TEAM:
Frank Pfenning
Norman Sadeh
Garth Gibson
Our ability to design appropriate information security mechanisms and sound security policies depends on our understanding of how end-users actually behave. To improve this understanding, we will establish a large panel of end-users whose complete online behavior will be captured, monitored, and analyzed over an extended period of time. Establishing such a panel will require the design of sound measurement methodologies, while paying particular attention to the protection of end-users' confidential information. Once established, our panel will offer an unprecedented window on real-time, real-life security and privacy behavior "in the wild." The panel will combine tracking, experimental, and survey data, and will provide a foundation on which sound models of both user and attacker behavior can rest. These models will lead to the scientific design of intervention policies and technical countermeasures against security threats. Beyond its academic contributions, this research will also yield actionable recommendations for policy makers and firms.
An important emerging trend in the engineering of complex software-based systems is the ability to incorporate self-adaptive capabilities. Such systems typically include a set of monitoring mechanisms that allow a control layer to observe the running behavior of a target system and its environment, and then repair the system when problems are detected. Substantial results in applying these concepts have emerged over the past decade, addressing quality dimensions such as reliability, performance, and database optimization. In particular, at Carnegie Mellon we have shown how architectural models, updated at runtime, can form the basis for effective and scalable problem detection and correction. However, to date relatively little research has been done to apply these techniques to support detection of security-related problems and identification of remedial actions. In this project we propose to develop scientific foundations, as well as practical tools and techniques, to support self-securing systems, focusing specifically on questions of scalable assurance.
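The observe-detect-repair loop described above can be sketched in miniature as follows. This is an illustrative toy, not the Rainbow platform's API: the component names, the monitored properties, and the threshold are all invented for the example.

```python
# Minimal sketch of an architecture-based adaptation loop: a control layer
# inspects a runtime architectural model and plans a repair when a
# security-related constraint is violated. All names and the threshold
# below are hypothetical, for illustration only.

class ArchModel:
    """Runtime architectural model: components with observed properties."""
    def __init__(self):
        self.components = {"web": {"latency_ms": 40, "auth_failures": 0}}

    def update(self, name, prop, value):
        """Record a new observation from the monitoring layer."""
        self.components[name][prop] = value

def detect_problems(model):
    """Evaluate architectural constraints; return the violated ones."""
    problems = []
    for name, props in model.components.items():
        if props["auth_failures"] > 100:  # illustrative constraint
            problems.append((name, "possible brute-force attack"))
    return problems

def plan_repair(problem):
    """Map a detected problem to a remedial action (stub strategy)."""
    component, issue = problem
    return f"rate-limit {component}: {issue}"

model = ArchModel()
model.update("web", "auth_failures", 250)  # simulated observation
repairs = [plan_repair(p) for p in detect_problems(model)]
print(repairs)  # ['rate-limit web: possible brute-force attack']
```

In a real self-securing system, the detection step would run continuously against a model kept in sync with the deployed system, and the planned repairs would be enacted through effectors rather than printed.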
Prof. David Garlan and Dr. Bradley Schmerl have been working in the area of architecture-based self-adaptation for over a decade. They have developed both foundations and tools – specifically, a platform called “Rainbow” – that are considered seminal in this area. Ivan Ruchkin is a Ph.D. candidate working under the direction of Prof. Garlan in the area of formal modeling of dynamic changes in systems from an architectural perspective. His work will support assurances that operations that change a system at run-time are sound and do not violate the properties and rules defined by the architecture.
PI: Prof. David Garlan (Faculty),
Staff: Dr. Bradley Schmerl (Research Faculty)
Students: Ivan Ruchkin (Ph.D. Student), new student to be recruited.
David Garlan is a Professor in the School of Computer Science at Carnegie Mellon University.
Dr. Garlan is a member of the Institute for Software Research and the Computer Science Department in the School of Computer Science.
He received his Ph.D. from Carnegie Mellon in 1987 and worked as a software architect in industry between 1987 and 1990. His research interests include software architecture, self-adaptive systems, formal methods, and cyber-physical systems. He is recognized as one of the founders of the field of software architecture and, in particular, of the formal representation and analysis of architectural designs. He is a co-author of two books on software architecture: "Software Architecture: Perspectives on an Emerging Discipline" and "Documenting Software Architecture: Views and Beyond." In 2005 he received a Stevens Award Citation for “fundamental contributions to the development and understanding of software architecture as a discipline in software engineering.” In 2011 he received the Outstanding Research award from ACM SIGSOFT for “significant and lasting software engineering research contributions through the development and promotion of software architecture.” In 2016 he received the Allen Newell Award for Research Excellence. In 2017 he received the IEEE TCSE Distinguished Education Award and the Nancy Mead Award for Excellence in Software Engineering Education. He is a Fellow of the IEEE and the ACM.
The objective of this project is to develop a theory of system resiliency for complex adaptive socio-technical systems. A secondary objective is to develop a modeling framework and associated metrics for examining the resiliency of complex socio-technical systems in the face of various cyber and non-cyber attacks, such that the methodology can support both basic simulation-based experimentation and the assessment of actual socio-technical systems.
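One common style of resiliency metric for networked socio-technical systems can be sketched as follows. The metric choice (fraction of agents remaining in the largest connected component after an attack removes nodes) and the example network are our illustrative assumptions, not the project's actual framework:

```python
# Toy resiliency metric: after an attack removes a set of nodes, what
# fraction of the original agents still sit in the largest connected
# component? The network below is invented for illustration.
from collections import deque

def largest_component_fraction(adj, removed):
    """adj: node -> list of neighbors; removed: set of attacked nodes."""
    nodes = [n for n in adj if n not in removed]
    if not nodes:
        return 0.0
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        # Breadth-first search over the surviving subgraph.
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best / len(adj)  # fraction of the original population

# A 5-node network in which node "c" is a cut vertex.
net = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
       "d": ["c", "e"], "e": ["d"]}
print(largest_component_fraction(net, removed=set()))  # 1.0
print(largest_component_fraction(net, removed={"c"}))  # 0.4
```

Removing the single cut vertex "c" fragments the network into two two-node components, so the metric drops from 1.0 to 0.4; comparing such curves under targeted versus random removals is one standard way to compare the resiliency of alternative system designs.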
Professor Kathleen M. Carley
Geoffrey Morgan (Student)
Mike Kowalchuk (Research Staff)
Dr. Kathleen M. Carley is a Professor of Computation, Organizations and Society in the Institute for Software Research in the School of Computer Science at Carnegie Mellon University, and CEO of Carley Technologies Inc. Dr. Carley is the director of the Center for Computational Analysis of Social and Organizational Systems (CASOS), which has over 25 members, both students and research staff. Dr. Carley received her Ph.D. in Mathematical Sociology from Harvard University and her undergraduate degrees in Economics and Political Science from MIT. Her research combines cognitive science, organization science, social networks, and computer science to address complex social and organizational problems. Her specific research areas are dynamic network analysis; computational social and organization theory; adaptation and evolution; text mining; and the impact of telecommunication technologies and policy on communication, information diffusion, disease contagion, and response within and among groups, particularly in disaster or crisis situations. She and the members of the CASOS center have developed infrastructure tools for analyzing large-scale dynamic networks, as well as various multi-agent simulation systems. The infrastructure tools include ORA, AutoMap, and SmartCard. ORA is a statistical toolkit for analyzing and visualizing multi-dimensional networks; its results are organized into reports that meet various needs, such as the management report, the mental model report, and the intelligence report. AutoMap is a text-mining system for extracting semantic networks from texts and then cross-classifying them, using an organizational ontology, into the underlying social, knowledge, resource, and task networks. SmartCard is a network and behavioral estimation system for cities in the U.S.
Carley’s simulation models meld multi-agent technology with network dynamics and empirical data, resulting in reusable large-scale models: BioWar, a city-scale model for understanding the spread of disease and illness due to natural epidemics, chemical spills, and weaponized biological attacks; and Construct, an information and belief diffusion model that enables assessment of interventions. She is the current and a founding editor of the journal Computational Organization Theory and has published over 200 papers and co-edited several books using computational and dynamic network models.
Research
We propose to address the scale problem in security analysis by developing representation and reasoning techniques that support higher-level structure and that enable the security community to factor out common core reasoning principles from the particular elements, rules, and data facts that are specific to the application at hand. This principle, the separation of the reasoning engine from the problem specification, has been pursued with great success in satisfiability modulo theories (SMT) solving. Probabilistic reasoning is powerful in many specific domains, but it lacks full-fledged, scalable extensions to representations that are truly first-order. Based on our preliminary results, we propose to develop first-order probabilistic programs and study how they can represent security analysis questions for systems with both distributed aspects and quantitative uncertainty. We propose to study instance-based methods for reasoning about first-order probabilistic programs. Instance-based methods enjoy a good trade-off between generality and efficiency: they can leverage classical advances in probabilistic reasoning for finite-dimensional representations and lift them to the full expressive and descriptive power of first-order representations. With this approach, we achieve inter-system scaling by decoupling the generics from the specifics, and we hope to improve intra-system scaling by combining more powerful reasoning techniques from classically disjoint domains.
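The instance-based idea can be illustrated with a toy sketch: a first-order rule is stated once over a domain of hosts, then grounded over a finite instance so that classical finite-dimensional reasoning (here, brute-force enumeration) can answer a query. The rule, the two-host domain, and all probabilities below are invented for illustration and are not from the project:

```python
# Toy grounding of a first-order probabilistic rule. Generic rule (stated
# once, for any host h): compromised(h) := vulnerable(h) AND attacked(h).
# Specific instance: a finite host set with invented probabilities.
from itertools import product

hosts = ["h1", "h2"]
p_vuln = {"h1": 0.3, "h2": 0.5}  # P(vulnerable(h)), independent per host
p_attack = 0.4                    # P(attacked(h)), independent per host

def p_some_compromised():
    """P(at least one host is compromised), by enumerating all groundings."""
    total = 0.0
    for vuln in product([True, False], repeat=len(hosts)):
        for att in product([True, False], repeat=len(hosts)):
            # Probability of this joint grounding of the predicates.
            p = 1.0
            for h, v, a in zip(hosts, vuln, att):
                p *= p_vuln[h] if v else 1 - p_vuln[h]
                p *= p_attack if a else 1 - p_attack
            # Apply the (grounded) first-order rule to this world.
            if any(v and a for v, a in zip(vuln, att)):
                total += p
    return total

print(round(p_some_compromised(), 4))  # 0.296
```

The point of the separation is that the rule is generic while the host set and probabilities are the problem-specific data; enumeration is exponential in the instance size, which is exactly where more powerful lifted or instance-based reasoning techniques would take over.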
Relevance
This project is of relevance to the security community because, if successful, it would provide more general and more flexible off-the-shelf solutions that can be used to address security analysis questions for particular systems. To simplify the design of problem-specific computer-aided security analysis procedures, this project pursues a separation of the problem description from the reasoning techniques applied to it. At the same time, it increases representational and computational power to scale to systems with uncertainty and distributed effects.
Impact
This project has the potential to help solve security analysis questions that scale to distributed systems and the presence of uncertainty. Both aspects are central in security challenges like Stuxnet. Because each security analysis question is different, it is more economically feasible to assemble particular security analyses from suitable reasoning components. This project addresses one such component with a good trade-off between generality and efficiency.
The project team includes Andre Platzer, an assistant professor in the Computer Science Department at Carnegie Mellon University. He is an expert in the verification and analysis of hybrid, distributed, and stochastic dynamic systems, including cyber-physical systems. The team further includes Erik P. Zawadzki, a fourth-year graduate student in the Computer Science Department at Carnegie Mellon University, who is developing reasoning techniques for first-order MILPs and fast propositional solvers for probabilistic model counting.
Compositional security is a recognized central scientific challenge for trustworthy computing. Contemporary systems are built up from smaller components. However, even if each component is secure in isolation, the composed system may not achieve the desired end-to-end security property: an adversary may exploit complex interactions between components to compromise security. Such attacks have shown up in the wild in many different settings, including web browsers and infrastructure, network protocols and infrastructure, and application and systems software. While there has been progress on understanding secure composition in some settings, such as information flow control for non-interference-style properties, cryptographic protocols, and certain classes of software systems, a number of important scientific questions pertaining to secure composition remain open. The broad goal of this project is to improve our scientific understanding of secure composition.
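A minimal illustration of such a compositional failure (our toy example, not drawn from the settings cited above): two components that each do their job correctly in isolation, yet whose composition in one order reintroduces a path-traversal vulnerability.

```python
# Two individually "secure" components: a path-traversal filter and a URL
# decoder. Each satisfies its own specification; the composed system's
# security depends on the order of composition.
from urllib.parse import unquote

def strip_traversal(path):
    """Component A: removes the '../' sequences it can see."""
    return path.replace("../", "")

def decode(path):
    """Component B: URL-decodes a request path."""
    return unquote(path)

payload = "%2e%2e%2fetc/passwd"  # URL-encoding of "../etc/passwd"

safe_order = strip_traversal(decode(payload))  # decode, then filter
bad_order = decode(strip_traversal(payload))   # filter sees nothing to strip

print(safe_order)  # etc/passwd      -- traversal removed
print(bad_order)   # ../etc/passwd   -- traversal survives the composition
```

The filter's implicit environmental assumption — that its input is already in canonical, decoded form — is exactly the kind of assumption a compositional analysis must surface: the attack exploits the interaction between the components, not a bug in either one.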
This project will focus on the following research directions:
Compositional Reasoning Principles for Higher-order Code. We propose to develop sound secure composition principles for trusted systems that interact with adversaries who have the capability to supply executable code to, and modify code of, the trusted system. We will build on prior work by Datta and collaborators on sound composition principles for trusted systems that interact with adversaries that can supply data (but not code) to tackle this problem.

Theoretical Categorization of Compositionality. We will generalize from the insights acquired during the development of compositional reasoning principles and prior technical work in the computer security community on challenges in composing certain specific classes of security policies to arrive at semantic characterizations of classes of security policies that are and are not amenable to a compositional analysis. We plan to start from the security policy algebra in the prior work by Pincus and Wing, and expand the theory to larger classes of policy compositions.
Correspondence Between High-level Policies and Low-level Properties. We will develop a technical connection—a correspondence theorem—between high-level security policies (expressed either in the policy algebra or a logic) and lower-level security properties expressed in our program logic. This result will connect our two lines of work and might also be useful in explaining failures of security policy composition in terms of failures of environmental assumptions.

Case Studies. We will conduct a compositional analysis of a class of systems that combine trusted computing technology and memory protection mechanisms to provide strong security guarantees against powerful adversaries. We will also revisit the real-world examples in the Pincus-Wing paper using the new formalism.
Secure software depends upon the ability of software developers to respond to security risks early in the software development process. Despite a wealth of security requirements, often called security controls, there is a shortfall in the adoption and implementation of these requirements. This shortfall is due to the extensive expertise and higher-level cognitive skillsets required to comprehend, decompose, and reassemble security requirements concepts in the context of an emerging system design. To address this shortfall, we propose to develop two empirical methods: (1) a method to derive security requirements patterns from requirements catalogues using expert knowledge; and (2) a method to empirically evaluate these patterns for their "usability" by novice software developers against a set of common problem descriptions, including the developers' ability to formulate problems and to select and instantiate patterns. The study results will yield a framework for discovering and evaluating security requirements patterns, as well as new scientific knowledge about the limitations of pattern-based approaches when applied by novice software developers.
Hard Problem:
Security requirements are difficult to apply in design and must incorporate system architecture, functional requirements, security policies, regulations, and standards.
PI(s): Travis Breaux, Laurie Williams, Jianwei Niu
Dr. Breaux is the Director of the CMU Requirements Engineering Lab, where his research program investigates how to specify and design software to comply with policy and law in a trustworthy, reliable manner. His work historically concerned the empirical extraction of legal requirements from policies and law, and has recently studied how to use formal specifications to reason about privacy policy compliance, how to measure and reason over ambiguous and vague policies, and how security and privacy experts and novices estimate the risk of system designs.
Mobile applications are a critical emerging segment of the software industry, and security for web-based mobile applications is of increasing concern. We hypothesize that many of the most important security vulnerabilities in web-based mobile applications are a consequence of expressing programs at a low level of abstraction, in which important security properties are implicit and only indirectly related to code. In order to test this hypothesis, we are building a system for expressing web-based mobile applications at a higher level of abstraction, in which security properties are made explicit through expressions of design intent, and in which those properties are more directly related to code. We will evaluate whether such an approach can reduce or eliminate the most common vulnerabilities of web-based mobile software, while imposing a low or even negative marginal cost on developers.
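The hypothesis can be illustrated with a familiar toy case (ours, not the project's system): at a low level of abstraction, the security property "user input is data, not code" is implicit in how a query string is assembled; a slightly higher-level API makes that intent explicit and lets the platform enforce it.

```python
# Low-level vs. higher-level expression of the same query, illustrating
# how an implicit security property becomes explicit design intent.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic SQL-injection payload

# Low level: the query is assembled as a string, so the boundary between
# code and data exists only in the developer's head.
low = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'").fetchall()

# Higher level: the placeholder declares "this is data", and the database
# driver enforces that declaration.
high = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()

print(low)   # [('s3cret',)] -- the injection leaks every row
print(high)  # []            -- no user is literally named "' OR '1'='1"
```

The project's approach generalizes this pattern beyond queries: by expressing design intent explicitly in the program, whole classes of vulnerabilities that stem from implicit properties can be ruled out by construction rather than found by inspection.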
Jonathan Aldrich is an Associate Professor in the School of Computer Science. He works at the intersection of programming languages and software engineering, exploring how the way we express software affects our ability to engineer software at scale. A particular theme of his work is improving software quality and programmer productivity through better ways to express structural and behavioral aspects of software design within source code, typically through language design and type systems. Aldrich has contributed to object-oriented typestate verification, modular reasoning techniques for aspects and stateful programs, and new object-oriented language models. For his work specifying and verifying architecture, he received a 2006 NSF CAREER award and the 2007 Dahl-Nygaard Junior Prize. Currently, Aldrich is excited to be working on the design of Wyvern, a new modularly extensible programming language.