Decentralization in Security: Consequences and Incentive Design
Lead PI:
Yevgeniy Vorobeychik
Abstract

In security, our concern is typically with securing a particular network or eliminating security holes in a particular piece of software. These are important, but they miss the fact that being secure is fundamentally about the security of all constituent parts of a system, rather than any single part in isolation. In principle, if we could control all the pieces of a system, we could secure all possible channels of attack. In practice, however, the design and security of the various components are handled by different agents with varying and often conflicting interests. Our goal is to develop a framework, and associated computational tools, for addressing security holistically, accounting for the incentives of all parties.

In particular, the project aspires to investigate the many facets of decentralization in security. The overarching aim is to answer the following three questions in a variety of relevant settings: 1) what does decentralization of security decisions and the associated incentive misalignment imply for overall system security; 2) in a world of decentralized security decisions, how should an organization optimally secure itself; and 3) how can one design incentives or constraints to improve overall system security. Much of the project focus will be on the interdependence of security decisions, which gives rise to competing decision externalities: positive externalities, where securing one's system reduces exposure risk for others, and negative externalities, where securing one system incentivizes the attacker to attack another. The former will tend to lead to under-investment in security; the latter are expected to push organizations to invest too much.
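The following is a minimal sketch, not the project's actual model, of how the two externalities can play out in a two-organization security game. All losses, compromise probabilities, and investment costs below are illustrative assumptions; the sketch only shows why a coordination failure (under-investment) can arise under positive externalities and an arms race (over-investment) under negative, target-shifting externalities.

```python
# Illustrative sketch (not the project's model): a two-organization security
# game where each player chooses whether to invest in security (True/False).
# All numeric parameters are assumptions chosen for illustration only.

from itertools import product

LOSS = 10.0  # assumed damage if an organization is compromised


def payoff_positive(me: bool, other: bool, cost: float = 2.7) -> float:
    """Contagion-style positive externality: my investment removes my direct
    risk, but spillover risk from an unprotected partner remains."""
    p_direct = 0.0 if me else 0.3
    p_spillover = 0.0 if other else 0.2
    p_compromise = 1 - (1 - p_direct) * (1 - p_spillover)
    return -p_compromise * LOSS - (cost if me else 0.0)


def payoff_negative(me: bool, other: bool, cost: float = 2.0) -> float:
    """Target-shifting negative externality: the attacker prefers whichever
    organization is less protected."""
    if me and not other:
        p_compromise = 0.05   # attacker diverted to the other organization
    elif other and not me:
        p_compromise = 0.60   # I become the preferred target
    else:
        p_compromise = 0.30   # symmetric case
    return -p_compromise * LOSS - (cost if me else 0.0)


def pure_nash(payoff):
    """Pure-strategy Nash equilibria of the symmetric two-player game."""
    equilibria = []
    for a, b in product([False, True], repeat=2):
        a_stable = payoff(a, b) >= payoff(not a, b)   # a has no better reply
        b_stable = payoff(b, a) >= payoff(not b, a)   # b has no better reply
        if a_stable and b_stable:
            equilibria.append((a, b))
    return equilibria


if __name__ == "__main__":
    # Positive externality: (False, False) survives as an equilibrium even
    # though mutual investment would leave both organizations better off.
    print("positive externality:", pure_nash(payoff_positive))
    # Negative externality: mutual investment is the unique equilibrium even
    # though joint expected cost is higher than if neither invested.
    print("negative externality:", pure_nash(payoff_negative))
```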

Yevgeniy Vorobeychik

Yevgeniy Vorobeychik is an Assistant Professor of Computer Science and Computer Engineering at Vanderbilt University. Previously, he was a Principal Member of Technical Staff at Sandia National Laboratories. Between 2008 and 2010 he was a post-doctoral research associate in the Computer and Information Science department at the University of Pennsylvania. He received his Ph.D. (2008) and M.S.E. (2004) degrees in Computer Science and Engineering from the University of Michigan, and a B.S. degree in Computer Engineering from Northwestern University. His work focuses on game-theoretic modeling of security, algorithmic and behavioral game theory and incentive design, optimization, complex systems, epidemic control, network economics, and machine learning. Dr. Vorobeychik has published over 60 research articles on these topics. Dr. Vorobeychik was nominated for the 2008 ACM Doctoral Dissertation Award and received honorable mention for the 2008 IFAAMAS Distinguished Dissertation Award. In 2012 he was nominated for the Sandia Employee Recognition Award for Technical Excellence. He was also a recipient of an NSF IGERT interdisciplinary research fellowship at the University of Michigan, as well as a distinguished Computer Engineering undergraduate award at Northwestern University.

Reasoning about Protocols with Human Participants
Lead PI:
Jonathan Katz
Abstract

Existing protocol analyses are typically confined to the electronic messages exchanged among the computer systems running at the endpoints. In this project we take a broader view in which a protocol additionally encompasses both physical technologies and human participants. Our goal is to develop techniques for analyzing and proving security of protocols involving all of these entities, with open-audit, remote voting systems such as Remotegrity as our starting point.

Jonathan Katz

Jonathan Katz is a professor in the Department of Computer Science and a core faculty member in the Maryland Cybersecurity Center with an appointment in the University of Maryland Institute for Advanced Computer Studies. He is also a Fellow of the Joint Center for Quantum Information and Computer Science. 

Katz's research interests include cryptography, computer and network security, and theoretical computer science.

He is a recipient of the Humboldt Research Award, the ACM SIGSAC Outstanding Contribution Award, a University of Maryland Distinguished Teacher-Scholar Award, an NSF CAREER award, and more. Katz is also a Fellow of the International Association for Cryptologic Research (IACR). He co-authored the textbook "Introduction to Modern Cryptography" and a monograph on digital signature schemes.

Katz has held visiting appointments at UCLA, the École normale supérieure in Paris, France, and IBM in Hawthorne, NY.

He received his doctorate in computer science from Columbia University. 

Trust, Recommendation Systems, and Collaboration
Lead PI:
John Baras
Abstract

Our goal is to develop a transformational framework for a science of trust, and its impact on local policies for collaboration, in networked multi-agent systems. The framework will take human behavior into account from the start by treating humans as integrated components of these networks, interacting dynamically with other elements. The new analytical framework will be integrated, and validated, with empirical methods of analyzing experimental data on trust, recommendation, and reputation from several datasets available to us, in order to capture fundamental trends and patterns of human behavior, including trust and mistrust propagation, confidence in trust, phase transitions in the dynamic graph models involved in the new framework, and the stability or instability of collaborations.

Trust, as a concept, has been developed and used in several settings and in various forms: it has been studied and applied in social and economic networks as well as in information and communication networks. An important challenge is the diversity of descriptions and uses of trust that have appeared in prior work. Another challenge is the relative scarcity of quantitative and formal methods for modeling and evaluating trust. Methods for modeling trust have varied from simple empirical models based on statistical experiments, to simple scalar weights, to more sophisticated policy-based methods. Furthermore, there are very few works attempting to link empirical data on trust (in particular, data on human behavior) to various formal and quantitative models.

Our new framework is based on our recently developed foundational model for networked multi-agent systems in which we consider three interacting dynamic graphs on the same underlying set of nodes: a social/agent network, which is relational; an information network, which is also relational; and a communication network that is physical. These graphs are directed and their links and nodes are annotated with dynamically changing "weights" representing trust metrics whose formal definition and mathematical representation can take one of several options, e.g. weights can be scalars, vectors, or even policies (i.e. rules). Such models, in much simpler mathematical form, have been used in social- and economic-network studies under the name of value directed graphs. The model we are developing is far more sophisticated, and thus much more expressive. We will incorporate within such models complex human behavior in various forms.
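The following is a minimal data-structure sketch of the three-graph model described above: three directed graphs over one shared node set, with edges annotated by trust "weights" that may be scalars, vectors, or richer objects such as policies. Class names, field names, and the example weights are illustrative assumptions, not the project's implementation.

```python
# Minimal sketch of a multi-layer annotated graph: social, information, and
# communication layers over the same node set, with untyped edge weights so a
# layer can carry scalars, vectors, or policy objects. Names are illustrative.

from dataclasses import dataclass, field
from typing import Any, Dict, Tuple

Edge = Tuple[str, str]  # directed edge (source node, target node)


@dataclass
class MultiLayerTrustNetwork:
    nodes: set = field(default_factory=set)
    social: Dict[Edge, Any] = field(default_factory=dict)         # relational
    information: Dict[Edge, Any] = field(default_factory=dict)    # relational
    communication: Dict[Edge, Any] = field(default_factory=dict)  # physical

    def add_edge(self, layer: str, src: str, dst: str, weight: Any) -> None:
        """Annotate a directed edge in one layer with a trust weight."""
        self.nodes.update((src, dst))
        getattr(self, layer)[(src, dst)] = weight


net = MultiLayerTrustNetwork()
net.add_edge("social", "alice", "bob", 0.8)               # scalar trust
net.add_edge("information", "alice", "bob", (0.9, 0.4))   # vector weight
net.add_edge("communication", "alice", "bob", {"latency_ms": 12})
```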

Within this new framework, we are specifically focusing on investigating the following fundamental problems: (a) theories and principles governing the spreading dynamics of trust and mistrust among members of a network; (b) design and analysis of recommendation systems, their dynamics, and their integrity; (c) development of a framework for understanding the composition of trust across the various networks at the different layers of our basic model; (d) analysis of the effects of trust on collaboration in networked multi-agent systems, using game-theoretic and economic principles.

Practical applications are also pursued to demonstrate the results in a variety of settings.

In these investigations we principally use the following analytical methods and appropriate extensions: (i) Multiple partially ordered semirings; (ii) Constrained-coalitional games on dynamic networks; (iii) Embeddings of complex annotated graphs in nonlinear parametric spaces for the development of scalable and fast algorithms (e.g. hyperbolic networks and hyperbolic embeddings); (iv) Sophisticated statistical analysis of experimental data on trust and associated human behavioral patterns.
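As a concrete illustration of method (i), the sketch below evaluates trust over a small directed graph using one semiring operator to combine trust along a path and another to combine alternative paths. The specific (max, min) pair, meaning "a chain is as trustworthy as its weakest link" and "take the most trustworthy route", is only one illustrative choice of semiring, not the project's definition.

```python
# Semiring-style trust aggregation sketch: "times" = min along a path,
# "plus" = max over alternative paths. The semiring choice is illustrative.

from typing import Dict

Graph = Dict[str, Dict[str, float]]  # adjacency: src -> {dst: trust weight}


def path_trust(graph: Graph, src: str, dst: str) -> float:
    """Best achievable trust from src to dst under the (max, min) semiring,
    explored by depth-first search over simple paths."""
    best = 0.0

    def dfs(node: str, along: float, visited: frozenset) -> None:
        nonlocal best
        if node == dst:
            best = max(best, along)              # "plus": best path so far
            return
        for nxt, w in graph.get(node, {}).items():
            if nxt not in visited:
                dfs(nxt, min(along, w), visited | {nxt})  # "times": weakest link

    dfs(src, 1.0, frozenset({src}))
    return best


# Small illustrative social-trust graph (weights in [0, 1]).
g: Graph = {
    "alice": {"bob": 0.9, "carol": 0.6},
    "bob":   {"dave": 0.5},
    "carol": {"dave": 0.8},
}
print(path_trust(g, "alice", "dave"))  # -> 0.6 via alice -> carol -> dave
```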

John Baras
Understanding Developers' Reasoning about Privacy and Security
Lead PI:
Katherine Shilton
Abstract

Cloud and mobile computing create new platforms where applications developed by third-party vendors can access users' devices and private data. Examples include iPhone and Android apps and cloud-based application marketplaces. This project is a synergistic effort combining social and behavioral science with secure software systems design. The first thrust seeks to understand users' privacy expectations for their private data and how privacy policies vary across social contexts. With this understanding, we will investigate how to build a platform such that 1) app developers can develop applications that respect users' privacy without being security experts, and 2) the system can understand and enforce users' fine-grained privacy policies with minimal interruption to a user's normal workflow. The second thrust seeks to understand how developers make decisions about incorporating privacy and security features into applications, and to test interventions that encourage data protection. This project asks: 1) What encourages developers to adopt new privacy and security practices? 2) How do mobile application developers make choices between privacy, security, and other priorities? 3) How can interventions (such as education, availability of best practices, or new software tools) encourage privacy and security by design?
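The following is a toy sketch of the kind of fine-grained, context-dependent policy check the platform thrust envisions: the platform, not the app developer, decides whether a data request matches the user's stated policy. The policy vocabulary, field names, and example apps are hypothetical.

```python
# Hypothetical sketch: a platform-side check that grants an app's data request
# only if the user's policy allows that data type, in that context, at no more
# than the permitted precision. All names and the policy vocabulary are assumed.

from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class PrivacyRule:
    data_type: str        # e.g., "location", "contacts"
    allowed_context: str  # e.g., "navigation", "social"
    max_precision: str    # e.g., "coarse", "city", "exact"


@dataclass(frozen=True)
class DataRequest:
    app: str
    data_type: str
    context: str
    precision: str


def is_allowed(request: DataRequest, policy: List[PrivacyRule]) -> bool:
    """Grant the request only if some user rule covers this data type in this
    context at (or above) the requested precision."""
    precision_rank = {"coarse": 0, "city": 1, "exact": 2}
    for rule in policy:
        if (rule.data_type == request.data_type
                and rule.allowed_context == request.context
                and precision_rank[request.precision]
                <= precision_rank[rule.max_precision]):
            return True
    return False


user_policy = [PrivacyRule("location", "navigation", "exact"),
               PrivacyRule("location", "social", "city")]

print(is_allowed(DataRequest("maps_app", "location", "navigation", "exact"),
                 user_policy))  # True: exact location allowed for navigation
print(is_allowed(DataRequest("chat_app", "location", "social", "exact"),
                 user_policy))  # False: social context is capped at city level
```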

Katherine Shilton
User-Centered Design for Security
Lead PI:
Jennifer Golbeck
Abstract

Human choice and behavior are critical to the effectiveness of many security systems; unfortunately, security designers often give little consideration to user preferences, perceptions, abilities, and workflow. To address these challenges, we propose research on the user-centric design of security applications and the development of new usable-security measurement techniques and metrics to inform the design and development of new cybersecurity applications. We will focus on two primary tasks: (1) empirical measurement of human behavior, gathering empirical data about how humans interact with cybersecurity systems; and (2) development of user-based security and usability metrics, i.e., new metrics for measuring security based on users' perceptions of security and usability, using data collected from the empirical studies.

Jennifer Golbeck