Trust, Recommendation Systems, and Collaboration
Lead PI:
John Baras
Co-PI:
Abstract

Our goal is to develop a transformational framework for a science of trust, and its impact on local policies for collaboration, in networked multi-agent systems. The framework will take human behavior into account from the start by treating humans as integrated components of these networks, interacting dynamically with other elements. The new analytical framework will be integrated, and validated, with empirical methods of analyzing experimental data on trust, recommendation, and reputation from several datasets available to us, in order to capture fundamental trends and patterns of human behavior, including trust and mistrust propagation, confidence in trust, phase transitions in the dynamic graph models involved in the new framework, and the stability or instability of collaborations.

Trust, as a concept, has been developed and used in several settings and in various forms. It has been developed and applied in social and economic networks as well as in information and communication networks. An important challenge is the diversity of descriptions and uses of trust that have appeared in prior work. Another challenge is the relative scarcity of quantitative and formal methods for modeling and evaluating trust. Methods for modeling trust have varied from simple empirical models based on statistical experiments, to simple scalar weights, to more sophisticated policy-based methods. Furthermore, very few works attempt to link empirical data on trust (in particular, data on human behavior) to formal and quantitative models.

Our new framework is based on our recently developed foundational model for networked multi-agent systems in which we consider three interacting dynamic graphs on the same underlying set of nodes: a social/agent network, which is relational; an information network, which is also relational; and a communication network that is physical. These graphs are directed and their links and nodes are annotated with dynamically changing "weights" representing trust metrics whose formal definition and mathematical representation can take one of several options, e.g. weights can be scalars, vectors, or even policies (i.e. rules). Such models, in much simpler mathematical form, have been used in social- and economic-network studies under the name of value directed graphs. The model we are developing is far more sophisticated, and thus much more expressive. We will incorporate within such models complex human behavior in various forms.
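As an illustration only, the three-layer structure can be sketched as a data type. All names below, and the choice of scalar weights, are assumptions made for this sketch; the project's actual models also admit vector- and policy-valued weights.

```python
# Illustrative sketch only: three directed graphs over one shared node
# set, with dynamically updatable trust annotations on links. Scalar
# weights are just one of the representation options named above
# (vectors and policies are the others).

class MultiLayerNetwork:
    LAYERS = ("social", "information", "communication")

    def __init__(self, nodes):
        self.nodes = set(nodes)
        # layer -> {(u, v): trust weight on the directed link u -> v}
        self.links = {layer: {} for layer in self.LAYERS}

    def add_link(self, layer, u, v, trust):
        assert u in self.nodes and v in self.nodes
        self.links[layer][(u, v)] = trust

    def update_trust(self, layer, u, v, new_trust):
        # weights change dynamically as the agents interact
        self.links[layer][(u, v)] = new_trust

net = MultiLayerNetwork(["alice", "bob", "carol"])
net.add_link("social", "alice", "bob", 0.9)         # relational layer
net.add_link("communication", "alice", "bob", 0.7)  # physical layer
net.update_trust("social", "alice", "bob", 0.8)     # trust evolves
```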

Within this new framework that we are developing, we are specifically focusing on investigating the following fundamental problems: (a) Theories and principles governing the spreading dynamics of trust and mistrust among members of a network; (b) Design and analysis of recommendation systems, their dynamics and integrity; (c) Development of a framework for understanding the composition of trust across various networks at the different layers of our basic model; (d) Analysis of the effects of trust on collaboration in networked multi-agent systems, using game-theoretic and economic principles.

We also pursue practical applications to demonstrate these results in a variety of settings.

In these investigations we principally use the following analytical methods and appropriate extensions: (i) Multiple partially ordered semirings; (ii) Constrained-coalitional games on dynamic networks; (iii) Embeddings of complex annotated graphs in nonlinear parametric spaces for the development of scalable and fast algorithms (e.g. hyperbolic networks and hyperbolic embeddings); (iv) Sophisticated statistical analysis of experimental data on trust and associated human behavioral patterns.
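Method (i) specializes, in its simplest form, to trust-propagation computations over a network. The sketch below is an illustration under assumptions, not the project's actual formulation: it uses the (max, min) semiring, where trust along a path is its weakest link ("multiplication" is min) and indirect trust aggregates the best path ("addition" is max).

```python
# Illustrative trust-propagation sketch over the (max, min) semiring.
# Trust along a path is the weakest link (min); indirect trust between
# two agents is the best achievable over all paths (max). This is one
# simple semiring instance; the project's models are far more general.

def indirect_trust(edges, source, target, nodes):
    # Bellman-Ford-style fixpoint iteration over the semiring
    trust = {n: 0.0 for n in nodes}
    trust[source] = 1.0
    for _ in range(len(nodes) - 1):
        for (u, v), w in edges.items():
            candidate = min(trust[u], w)   # semiring "multiplication"
            if candidate > trust[v]:       # semiring "addition" = max
                trust[v] = candidate
    return trust[target]

edges = {("a", "b"): 0.9, ("b", "c"): 0.6, ("a", "c"): 0.5}
# best path a -> c: direct link 0.5 vs. a -> b -> c with min(0.9, 0.6)
print(indirect_trust(edges, "a", "c", {"a", "b", "c"}))  # 0.6
```

Swapping in a different semiring (probabilistic, vector-valued, or policy-valued) changes the propagation semantics without changing the structure of the algorithm, which is what makes the semiring viewpoint attractive.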

John Baras
Understanding Developers' Reasoning about Privacy and Security
Lead PI:
Katherine Shilton
Co-PI:
Abstract

Cloud and mobile computing create new platforms where applications developed by third-party vendors can access users' devices and private data. Examples include iPhone and Android apps, and cloud-based application marketplaces. This project is a synergistic effort combining social and behavioral science with secure software systems design. The first thrust of the project seeks to understand users' privacy expectations for their private data, and how privacy policies vary across social contexts. With this understanding, we will investigate how to build a platform such that 1) app developers can develop applications that respect users' privacy without being security experts; and 2) the system can understand and enforce users' fine-grained privacy policies with minimal interruptions to a user's normal workflow. The second thrust of the project seeks to understand how developers make decisions about incorporating privacy and security features into applications, and to test interventions that encourage data protection. This project will ask: 1. What encourages developers to adopt new privacy and security practices? 2. How do mobile application developers make choices between privacy, security, and other priorities? 3. How can interventions (such as education, availability of best practices, or new software tools) encourage privacy and security by design?

Katherine Shilton
User-Centered Design for Security
Lead PI:
Jennifer Golbeck
Co-PI:
Abstract

Human choice and behavior are critical to the effectiveness of many security systems; unfortunately, security designers often give little consideration to user preferences, perceptions, abilities, and usability workflows. To address these challenges, we propose research on the user-centric design of security applications, and the development of new usable-security measurement techniques and metrics to inform the design and development of new cybersecurity applications. We will focus on two primary tasks: (1) empirical measurements of human behavior, i.e., the gathering of empirical data about human behavior vis-à-vis cybersecurity systems; and (2) the development of user-based security and usability metrics, i.e., new metrics for measuring security based on user perceptions of security and usability, using data collected from the empirical studies.

Jennifer Golbeck
Does the Presence of Honest Users Affect Intruder Behavior?
Lead PI:
Michel Cukier
Co-PI:
Abstract

More appropriate and efficient security solutions against system trespassing incidents can be developed once the attack threat is better understood. However, few empirical studies exist to assess the attack threat. Our proposed research applies "soft science" models (i.e., sociological, psychological, and criminological) in an effort to better understand the threat of system trespassing. The proposed research will draw on data collected, during a randomized experiment, on attackers who gain illegitimate access to computers by finding the correct username/password combination over SSH on a computer running Unix. Once an attacker has access to the computer, he/she can build the attack over a period of 30 days. Previous research has shown that a warning banner has no effect on whether attackers launch an attack, but does affect which computer they decide to use to develop an attack.

Michel Cukier

Michel Cukier is the director for the Advanced Cybersecurity Experience for Students (ACES) undergraduate Honors College program. He is a professor of reliability engineering with a joint appointment in the Department of Mechanical Engineering.

His research covers dependability and security issues. His latest research focuses on the empirical quantification of cybersecurity. He has published more than 70 papers in journals and refereed conference proceedings in those areas.

He was the program chair of the 21st IEEE International Symposium on Software Reliability Engineering (ISSRE 2010) and the program chair of the Dependable Computing and Communication Symposium of the IEEE International Conference on Dependable Systems and Networks (DSN-2012).

Cukier is the primary investigator of a National Science Foundation REU Site on cybersecurity in collaboration with Women in Engineering, where more than 85 percent of the participants are female students. He co-advises the UMD Cybersecurity Club, which has a membership of more than 400 students.

He received a degree in physics engineering from the Free University of Brussels, Belgium, in 1991, and a doctorate in computer science from the National Polytechnic Institute of Toulouse, France, in 1996. From 1996 to 2001, he was a researcher in the Perform research group in the Coordinated Science Laboratory at the University of Illinois, Urbana-Champaign. He joined the University of Maryland in 2001 as an assistant professor.

Abstract

Past studies have shown that vulnerabilities in software are often exploited for years after the existence of the vulnerability is disclosed. Our project will leverage Symantec's WINE data set to understand the rate at which vulnerabilities are patched and how the number of affected machines changes over time. We will also conduct a study with system administrators to statistically investigate various hypotheses related to how sys-admins prioritize which vulnerabilities to patch. Finally, we are conducting user studies to determine the reasons why users choose to patch software and examine whether this qualitative data is supported by the WINE data set. Our goal is to develop guidelines to improve the rate of patching from both the technical and user perspectives.
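To make the intended measurement concrete, here is a hedged sketch of how a vulnerable-population curve might be computed over time. The record format is a made-up stand-in for illustration and bears no relation to WINE's actual schema.

```python
# Hedged sketch: how the vulnerable population might shrink over time
# as machines apply a patch. The input format is a made-up stand-in,
# not the actual WINE schema.

def vulnerable_fraction(patch_days, horizon):
    # patch_days[i]: day on which machine i applied the patch,
    # or None if it stayed unpatched within the observation window
    total = len(patch_days)
    return [sum(1 for d in patch_days if d is None or d > day) / total
            for day in range(horizon + 1)]

# four machines: patched on days 2, 5, and 30; one never patched
curve = vulnerable_fraction([2, 5, 30, None], horizon=7)
print(curve)  # the fraction decays as patches are applied
```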

V Subrahmanian
Empirical Models for Vulnerabilities and Attacks
Lead PI:
Tudor Dumitras
Co-PI:
Abstract

The security of deployed and actively used systems is a moving target, influenced by factors that are not captured in existing security models and metrics. For example, estimating the number of vulnerabilities in source code does not account for the fact that cyber attackers never exploit some of the discovered vulnerabilities, given reduced attack surfaces and technologies that render exploits less likely to succeed. Conversely, old vulnerabilities continue to impact security in the wild because some users do not deploy the corresponding software patches. As such, we currently do not know how to assess the security of systems in active use. In this project, we will conduct empirical studies of security in the real world, seeking to understand the deployment-specific factors and the user behaviors that influence the security of systems in active use. We will employ a variety of data sources, including public vulnerability databases, malware analysis platforms, and Symantec's Worldwide Intelligence Network Environment (WINE), which includes field data collected on 10+ million real hosts targeted by cyber attacks (rather than honeypots or small-scale lab settings).

Tudor Dumitras

Tudor Dumitras is an Assistant Professor in the Electrical & Computer Engineering Department at the University of Maryland, College Park. His research focuses on Big Data approaches to problems in system security and dependability. In his previous role at Symantec Research Labs he built the Worldwide Intelligence Network Environment (WINE), a platform for experimenting with Big Data techniques. He received an Honorable Mention in the NSA competition for the Best Scientific Cybersecurity Paper of 2012. He also received the 2011 A. G. Jordan Award from the ECE Department at Carnegie Mellon University, the 2009 John Vlissides Award from ACM SIGPLAN, and the Best Paper Award at ASP-DAC'03. Tudor holds a Ph.D. degree from Carnegie Mellon University.

Trustworthy and Composable Software Systems with Contracts
Lead PI:
David Van Horn
Co-PI:
Abstract

Over the past decade, language-based security mechanisms—such as type systems, model checkers, symbolic executors, and other program analyses—have been successfully used to uncover or prevent many important (exploitable) software vulnerabilities, such as buffer overruns, side channels, unchecked inputs (leading to code injection), and race conditions, among others. But despite significant advances, current work makes two unrealistic assumptions: (1) the analyzed code comprises a complete program (as opposed to a framework or set of components), and (2) the software is written in a single programming language. These assumptions ignore the reality of modern software, which is composed of large sets of interacting components constructed in several programming languages that provide varying degrees of assurance that the components are well-behaved. In this project, we aim to address these limitations by developing new static-analysis techniques based on software contracts, which provide a way to extend the analysis of components to reason about security of an entire heterogeneous system.
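As a rough illustration of the contract idea (sketched in Python rather than the languages the project targets, and with invented names throughout), a contract sits at a component boundary, checks each side's obligations at run time, and assigns blame to whichever party violated its obligation:

```python
# Illustrative sketch of a behavioral contract at a component boundary.
# The contract checks inputs (the caller's obligation) and outputs (the
# component's obligation), assigning blame on violation. All names are
# invented for illustration; the project studies static analyses that
# reason about such contracts, not this runtime checker.

def contract(pre, post):
    def wrap(component):
        def guarded(*args):
            if not pre(*args):
                raise AssertionError("blame caller: precondition violated")
            result = component(*args)
            if not post(result):
                raise AssertionError("blame component: postcondition violated")
            return result
        return guarded
    return wrap

@contract(pre=lambda n: isinstance(n, int) and n >= 0,
          post=lambda r: r >= 1)
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))   # 120
# factorial(-1) would raise, blaming the caller at the boundary
```

The point relevant to the project is that the contract is a component-level specification: an analysis can use it to reason about a component's security behavior without seeing the code on the other side of the boundary, even when that code is written in another language.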

David Van Horn
Verification of Hyperproperties
Lead PI:
Michael Hicks
Co-PI:
Abstract

Hyperproperties [Clarkson and Schneider 2010] can express security policies, such as secure information flow and service level agreements, which the standard kinds of trace properties used in program verification cannot.
Our objective is to develop verification methodologies for hyperproperties.
We intend to apply those methodologies to the construction of secure systems from components with known security properties, thereby addressing the problem of compositional security.
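For instance, noninterference relates pairs of executions, so no single trace can witness a violation. The following is a minimal testing (not verification) sketch of that 2-run structure, with illustrative program names:

```python
# Sketch: noninterference is a 2-safety hyperproperty -- it constrains
# PAIRS of runs, not individual traces. We test (not verify) whether a
# program's public output changes when only the secret input varies.

def leaky(public, secret):
    return public + (1 if secret > 100 else 0)  # leaks a bit of secret

def secure(public, secret):
    return public * 2  # ignores the secret entirely

def violates_noninterference(program, public, secret_a, secret_b):
    # two runs that agree on public input but differ on secret input
    return program(public, secret_a) != program(public, secret_b)

print(violates_noninterference(leaky, 10, 50, 200))   # True
print(violates_noninterference(secure, 10, 50, 200))  # False
```

Verifying such a property compositionally means establishing it for components once and reusing those guarantees when the components are combined, which is the goal stated above.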

Michael Hicks
Limiting Recertification in Highly Configurable Systems: Analyzing Interactions and Isolation among Configuration Options
Lead PI:
Juergen Pfeffer
Co-PI:
Abstract

In highly configurable systems, the configuration space is too large to (re-)certify every configuration in isolation. In this project, we combine software analysis with network analysis to detect which configuration options interact and which have only local effects. Instead of analyzing a system such as Linux or SELinux for every combination of configuration settings one by one (>10^2000 combinations, even considering compile-time configurations only), we analyze the effect of each configuration option once for the entire configuration space. The analysis will guide us toward designs that separate interacting configuration options into a core system and isolate orthogonal and less trusted configuration options from this core.
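The notion of an "interaction" between options can be illustrated on a black-box toy system. Note the hedging: the project's actual analysis works statically on data-dependence graphs rather than by executing configurations, and all names below are invented.

```python
# Toy sketch of "interaction" between boolean configuration options:
# options a and b interact if the effect of toggling a depends on how
# b is set. The project's real analysis is static (over dependence
# graphs); this merely illustrates the notion by brute force.

def effect_of(system, option, cfg):
    # does toggling `option` change the system's observable output?
    return system({**cfg, option: True}) != system({**cfg, option: False})

def interacts(system, a, b, base):
    # a and b interact if a's effect differs across b's settings
    return (effect_of(system, a, {**base, b: True})
            != effect_of(system, a, {**base, b: False}))

def system(cfg):
    # 'verbose' only matters when 'logging' is on, so the two
    # interact; 'color' has a purely local effect
    out = []
    if cfg["logging"]:
        out.append("log:verbose" if cfg["verbose"] else "log:basic")
    if cfg["color"]:
        out.append("ansi")
    return tuple(out)

base = {"logging": False, "verbose": False, "color": False}
print(interacts(system, "verbose", "logging", base))  # True: interact
print(interacts(system, "verbose", "color", base))    # False: orthogonal
```

Options shown to be orthogonal in this sense are exactly the ones that can be certified once and isolated from the core, which is what makes the analysis pay off for recertification.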

HARD PROBLEM(S) ADDRESSED

Scalability and composability: Isolating configuration options or controlling their interactions will lead us toward composable analysis with regard to configuration options.
Predictive security metrics: To what degree can configuration-related metrics indicate implementations that are more prone to vulnerabilities, or in which vulnerabilities have more severe consequences?

Impact on Science of Security

We complement the Science of Security endeavor with a focus on the often overlooked problem of configuration options in systems. Whereas current approaches work on specific snapshots and require expensive recertification, our approaches extend the underlying mathematical models (data-dependence graphs) with configuration knowledge and will thus scale analyses and reduce the need for repeating them. Furthermore, we expect that configuration complexity and configuration-specific program dependence are suitable empirical predictors of the likelihood and severity of vulnerabilities in complex systems. Finally, the technical and empirical results of our work will also bring new approaches to the field of social network analysis that can be powerful and applicable to the Science of Security far beyond the scope of the current Lablet.

PUBLICATIONS

1. Kaestner, Christian & Pfeffer, Juergen (2014). Limiting Recertification in Highly Configurable Systems. Analyzing Interactions and Isolation among Configuration Options. HotSoS 2014: 2014 Symposium and Bootcamp on the Science of Security, April 8-9, Raleigh, NC.

ACCOMPLISHMENT HIGHLIGHTS

  • Short paper (poster) presentation at HotSoS 2014

OUR TEAM

  • PI: Juergen Pfeffer

    Co-PI: Christian Kaestner

Juergen Pfeffer
Multi-model run-time security analysis
Lead PI:
Juergen Pfeffer
Co-PI:
Abstract

Our research focuses on creating the scientific foundations to support model-based run-time diagnosis and repair of security attacks. Specifically, our research develops models that (a) scale gracefully with the size of system and have appropriate real-time characteristics for run-time use, and (b) support composition through multi-model analysis. Network models will complement architectural models in two ways: (a) to characterize the organizational context of a system, and (b) to detect anomalies through network representations of architectural behavior. The former can be particularly effective, for example, in detecting and preventing insider attacks, which are often linked to organizational issues. The latter will lead to the creation of a new set of architectural metrics (e.g., based on network measures) to rapidly detect anomalous behaviors.
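One network measure that could feed such an architectural metric is a simple degree-based anomaly score over observed component interactions. The sketch below is only an illustration of the network-measure idea, with invented names; it is not the project's actual metric.

```python
# Hedged sketch of one network-based anomaly metric: flag components
# whose interaction degree deviates sharply from the population. The
# project derives its metrics from architectural models; this merely
# illustrates the underlying network-measure idea.

from statistics import mean, stdev

def anomalous_nodes(interactions, threshold=2.0):
    # interactions: list of (src, dst) observed component interactions
    degree = {}
    for src, dst in interactions:
        degree[src] = degree.get(src, 0) + 1
        degree[dst] = degree.get(dst, 0) + 1
    mu, sigma = mean(degree.values()), stdev(degree.values())
    return [n for n, d in degree.items()
            if sigma > 0 and (d - mu) / sigma > threshold]

# eight components each talk to one unusually busy component
interactions = [("hub", peer) for peer in "abcdefgh"] + [("a", "b")]
print(anomalous_nodes(interactions))  # the high-degree hub stands out
```

At run time, such a score would be recomputed as interactions stream in, so that a component whose behavior suddenly departs from its architectural role is flagged quickly.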

PI: Juergen Pfeffer
Co-PIs: David Garlan, Bradley Schmerl
 

Hard Problem(s) Addressed

  • Composability through multiple semantic models (here, architectural, organizational, and behavioral), which provide separation of concerns, while supporting synergistic benefits through integrated analyses.
  • Scalability to large complex distributed systems using architectural models.
  • Resilient architectures through the use of adaptive models that can be used at run-time to predict, detect and repair security attacks.
  • Predictive security metrics by adapting social network-based metrics to the problem of architecture-level anomaly detection.

Impact on Science of Security

We address composability through multiple semantic models (here, architectural, organizational, and behavioral), which provide separation of concerns, while supporting synergistic benefits through integrated analyses. Our work is related to the thrust of resilience, through the use of adaptive models that can be used at run-time to predict, detect and repair security attacks. Finally, our work also bears on the topic of security metrics, since we will be adapting social network-based metrics to the problem of architecture-level anomaly detection.

Juergen Pfeffer