Cloud and mobile computing create new platforms where applications developed by third-party vendors can access users' devices and private data. Examples include iPhone and Android apps and cloud-based application marketplaces. This project is a synergistic effort combining social behavioral science and secure software systems design. The first thrust seeks to understand users' privacy expectations for their private data and how privacy policies vary across social contexts. With this understanding, we will investigate how to build a platform such that 1) app developers can develop applications that respect users' privacy without being security experts, and 2) the system can understand and enforce users' fine-grained privacy policies with minimal interruption to a user's normal workflow. The second thrust seeks to understand how developers make decisions about incorporating privacy and security features into applications, and tests interventions to encourage data protection. This project asks:
1. What encourages developers to adopt new privacy and security practices?
2. How do mobile application developers make choices between privacy, security, and other priorities?
3. How can interventions (such as education, availability of best practices, or new software tools) encourage privacy and security by design?
Human choice and behavior are critical to the effectiveness of many security systems; unfortunately, security designers often give little consideration to user preferences, perceptions, abilities, and workflows. To address these challenges, we propose research on the user-centric design of security applications and the development of new usable-security measurement techniques and metrics to inform the design and development of new cybersecurity applications. We will focus on two primary tasks: (1) empirical measurement of human behavior: gathering empirical data about human behavior vis-a-vis cybersecurity systems; and (2) user-based security and usability metrics: developing new metrics for measuring security based on users' perceptions of security and usability, using data collected from the empirical studies.
More appropriate and efficient security solutions against system trespassing incidents can be developed once the attack threat is better understood; however, few empirical studies exist that assess this threat. Our proposed research applies “soft science” models (i.e., sociological, psychological, and criminological) in an effort to better understand the threat of system trespassing. The research will draw on data collected, during a randomized experiment, on attackers who gain illegitimate access to computers by finding the correct username/password combination over SSH to a computer running Unix. Once an attacker has access to the computer, he or she can build the attack over a period of 30 days. Previous research has shown that a warning banner has no effect at the moment attackers launch an attack, but does influence their decision about which computer to use to develop an attack.
Michel Cukier is the director of the Advanced Cybersecurity Experience for Students (ACES) undergraduate Honors College program. He is a professor of reliability engineering with a joint appointment in the Department of Mechanical Engineering.
His research covers dependability and security issues. His latest research focuses on the empirical quantification of cybersecurity. He has published more than 70 papers in journals and refereed conference proceedings in those areas.
He was the program chair of the 21st IEEE International Symposium on Software Reliability Engineering (ISSRE 2010) and the program chair of the Dependable Computing and Communication Symposium of the IEEE International Conference on Dependable Systems and Networks (DSN-2012).
Cukier is the principal investigator of a National Science Foundation REU Site on cybersecurity in collaboration with Women in Engineering, where more than 85 percent of the participants are female students. He co-advises the UMD Cybersecurity Club, which has more than 400 student members.
He received a degree in physics engineering from the Free University of Brussels, Belgium, in 1991, and a doctorate in computer science from the National Polytechnic Institute of Toulouse, France, in 1996. From 1996 to 2001, he was a researcher in the Perform research group in the Coordinated Science Laboratory at the University of Illinois, Urbana-Champaign. He joined the University of Maryland in 2001 as an assistant professor.
Past studies have shown that software vulnerabilities are often exploited for years after they are disclosed. Our project will leverage Symantec's WINE data set to understand the rate at which vulnerabilities are patched and how the number of affected machines changes over time. We will also conduct a study with system administrators to statistically investigate hypotheses about how they prioritize which vulnerabilities to patch. Finally, we are conducting user studies to determine why users choose to patch software, and to examine whether this qualitative data is supported by the WINE data set. Our goal is to develop guidelines that improve the rate of patching from both the technical and the user perspective.
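As a simple illustration of one measurement this involves (a sketch with hypothetical records, not WINE data), the fraction of hosts still unpatched a given number of days after disclosure can be estimated as follows:

```python
# A hedged sketch with invented data: estimate the fraction of vulnerable
# hosts still unpatched t days after a vulnerability's disclosure.
from datetime import date

# hypothetical (disclosure_date, patch_date) per host for one vulnerability;
# None means the host was never patched in the observation window
records = [
    (date(2013, 1, 10), date(2013, 1, 15)),
    (date(2013, 1, 10), date(2013, 3, 2)),
    (date(2013, 1, 10), None),
]

def unpatched_fraction(records, t_days):
    """Fraction of hosts whose patch lag exceeds t_days (None = unpatched)."""
    lags = [(p - d).days if p else float("inf") for d, p in records]
    return sum(lag > t_days for lag in lags) / len(lags)

print(unpatched_fraction(records, 30))  # 0.666...: two of three hosts remain exposed
```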
The security of deployed and actively used systems is a moving target, influenced by factors that are not captured in existing security models and metrics. For example, estimating the number of vulnerabilities in source code does not account for the fact that cyber attackers never exploit some of the discovered vulnerabilities, owing to reduced attack surfaces and technologies that render exploits less likely to succeed. Conversely, old vulnerabilities continue to impact security in the wild because some users do not deploy the corresponding software patches. As such, we currently do not know how to assess the security of systems in active use. In this project, we will conduct empirical studies of security in the real world, seeking to understand the deployment-specific factors and the user behaviors that influence the security of systems in active use. We will employ a variety of data sources, including public vulnerability databases, malware analysis platforms, and Symantec’s Worldwide Intelligence Network Environment (WINE), which includes field data collected on more than 10 million real hosts targeted by cyber attacks (rather than honeypots or small-scale lab settings).
Tudor Dumitras is an Assistant Professor in the Electrical & Computer Engineering Department at the University of Maryland, College Park. His research focuses on Big Data approaches to problems in system security and dependability. In his previous role at Symantec Research Labs he built the Worldwide Intelligence Network Environment (WINE) - a platform for experimenting with Big Data techniques. He received an Honorable Mention in the NSA competition for the Best Scientific Cybersecurity Paper of 2012. He also received the 2011 A. G. Jordan Award from the ECE Department at Carnegie Mellon University, the 2009 John Vlissides Award from ACM SIGPLAN, and the Best Paper Award at ASP-DAC'03. Tudor holds a Ph.D. degree from Carnegie Mellon University.
Over the past decade, language-based security mechanisms—such as type systems, model checkers, symbolic executors, and other program analyses—have been successfully used to uncover or prevent many important (exploitable) software vulnerabilities, such as buffer overruns, side channels, unchecked inputs (leading to code injection), and race conditions, among others. But despite significant advances, current work makes two unrealistic assumptions: (1) the analyzed code comprises a complete program (as opposed to a framework or set of components), and (2) the software is written in a single programming language. These assumptions ignore the reality of modern software, which is composed of large sets of interacting components constructed in several programming languages that provide varying degrees of assurance that the components are well-behaved. In this project, we aim to address these limitations by developing new static-analysis techniques based on software contracts, which provide a way to extend the analysis of components to reason about security of an entire heterogeneous system.
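As a rough sketch of the contract idea (dynamic checking in Python for illustration only; the project develops static analyses, and all names below are hypothetical), a contract at a component boundary lets each side be analyzed, or here checked, against the contract rather than against the other component's code:

```python
# Illustrative only: a contract attached at a component boundary, so callers
# can rely on the postcondition and the component can rely on the precondition
# without inspecting (or even being written in the same language as) the
# other side.
from functools import wraps

def contract(pre, post):
    """Attach a precondition and a postcondition to a component entry point."""
    def decorate(f):
        @wraps(f)
        def wrapper(*args):
            assert pre(*args), f"precondition of {f.__name__} violated"
            result = f(*args)
            assert post(result), f"postcondition of {f.__name__} violated"
            return result
        return wrapper
    return decorate

# hypothetical foreign component: trusted only up to its stated contract
@contract(pre=lambda n: isinstance(n, int) and n >= 0,
          post=lambda r: isinstance(r, int) and r >= 1)
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))   # 120; both contract checks pass at every call
```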
Hyperproperties [Clarkson and Schneider 2010] can express security policies, such as secure information flow and service-level agreements, that the standard trace properties used in program verification cannot. Our objective is to develop verification methodologies for hyperproperties. We intend to apply those methodologies to the construction of secure systems from components with known security properties, thereby addressing the problem of compositional security.
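As a concrete example, noninterference is naturally a hyperproperty: it constrains pairs of traces rather than each trace individually. A standard formulation, with illustrative notation, is:

```latex
% Noninterference as a 2-safety hyperproperty over a system S:
% any two traces that agree on low-security inputs must agree on
% low-security outputs, so high-security inputs cannot be observed.
\forall t_1, t_2 \in \mathit{Traces}(S).\;
  t_1 \approx_L^{\mathrm{in}} t_2 \implies t_1 \approx_L^{\mathrm{out}} t_2
```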
In highly configurable systems, the configuration space is too large to (re-)certify every configuration in isolation. In this project, we combine software analysis with network analysis to detect which configuration options interact and which have only local effects. Instead of analyzing a system such as Linux with SELinux for every combination of configuration settings one by one (more than 10^2000 combinations, even considering compile-time configurations only), we analyze the effect of each configuration option once for the entire configuration space. The analysis will guide us toward designs that separate interacting configuration options into a core system and isolate orthogonal and less trusted configuration options from this core.
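A minimal sketch of the idea, with hypothetical dependence data (not our actual analysis): record for each program element which options influence it, and flag options that meet at the same element as interaction candidates to be kept in, or separated from, the core:

```python
# Illustrative sketch: instead of evaluating each of the 2^n configurations,
# analyze each program element once, track which options influence it, and
# report option pairs that co-occur at some element as potential interactions.
from itertools import combinations

# hypothetical map: program element -> configuration options influencing it
dependence = {
    "net_init":  {"IPV6", "TLS"},
    "log_write": {"DEBUG"},
    "auth":      {"TLS", "PAM"},
}

def interacting_pairs(dep):
    pairs = set()
    for opts in dep.values():
        pairs.update(combinations(sorted(opts), 2))
    return pairs

print(interacting_pairs(dependence))
# {('IPV6', 'TLS'), ('PAM', 'TLS')}: candidates for the trusted core;
# DEBUG interacts with nothing and can be isolated from it.
```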
HARD PROBLEM(S) ADDRESSED
Scalability and composability: Isolating configuration options or controlling their interactions will lead us toward composable analysis with regard to configuration options.
Predictive security metrics: To what degree can configuration-related complexity indicate implementations that are more prone to vulnerabilities, or in which vulnerabilities have more severe consequences?
IMPACT ON SCIENCE OF SECURITY
We complement the Science of Security endeavor with a focus on the often-overlooked problem of configuration options in systems. Whereas current approaches work on specific snapshots and require expensive recertification, our approach extends the underlying mathematical models (data-dependence graphs) with configuration knowledge and will thus scale analyses and reduce the need to repeat them. Furthermore, we expect that configuration complexity and configuration-specific program dependence are suitable empirical predictors of the likelihood and severity of vulnerabilities in complex systems. Finally, the technical and empirical results of our work will also bring to the field of social network analysis new approaches that can be powerful and applicable to the Science of Security far beyond the scope of the current Lablet.
PUBLICATIONS
1. Kaestner, Christian & Pfeffer, Juergen (2014). Limiting Recertification in Highly Configurable Systems: Analyzing Interactions and Isolation among Configuration Options. HotSoS 2014: 2014 Symposium and Bootcamp on the Science of Security, April 8-9, Raleigh, NC.
ACCOMPLISHMENT HIGHLIGHTS
- Short paper (poster) presentation at HotSoS 2014
OUR TEAM
- PI: Juergen Pfeffer
- Co-PI: Christian Kaestner
Our research focuses on creating the scientific foundations to support model-based run-time diagnosis and repair of security attacks. Specifically, our research develops models that (a) scale gracefully with the size of the system and have appropriate real-time characteristics for run-time use, and (b) support composition through multi-model analysis. Network models will complement architectural models in two ways: (a) to characterize the organizational context of a system, and (b) to detect anomalies through network representations of architectural behavior. The former can be particularly effective, for example, in detecting and preventing insider attacks, which are often linked to organizational issues. The latter will lead to the creation of a new set of architectural metrics (e.g., based on network measures) to rapidly detect anomalous behaviors.
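A minimal sketch of the second idea, assuming the networkx library and an invented component-interaction graph (the real models and metrics are the subject of this research):

```python
# Illustrative sketch: represent observed architectural behavior as a graph
# of component interactions, compute a network measure (betweenness
# centrality), and flag components that drift strongly from a baseline.
import networkx as nx

def centrality_profile(edges):
    """Betweenness centrality per component for one observation window."""
    g = nx.DiGraph(edges)
    return nx.betweenness_centrality(g)

baseline = centrality_profile([("client", "gateway"), ("gateway", "auth"),
                               ("gateway", "db"), ("auth", "db")])
observed = centrality_profile([("client", "gateway"), ("gateway", "auth"),
                               ("auth", "db"), ("auth", "client")])

threshold = 0.2  # hypothetical tuning parameter
for node in baseline:
    drift = abs(observed.get(node, 0.0) - baseline[node])
    if drift > threshold:
        print(f"anomalous component: {node} (drift {drift:.2f})")
# prints: anomalous component: auth (drift 0.50)
```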
PI: Juergen Pfeffer
Co-PIs: David Garlan, Bradley Schmerl
Hard Problem(s) Addressed
- Composability through multiple semantic models (here, architectural, organizational, and behavioral), which provide separation of concerns, while supporting synergistic benefits through integrated analyses.
- Scalability to large complex distributed systems using architectural models.
- Resilient architectures through the use of adaptive models that can be used at run-time to predict, detect and repair security attacks.
- Predictive security metrics by adapting social network-based metrics to the problem of architecture-level anomaly detection.
Impact on Science of Security
We address composability through multiple semantic models (here, architectural, organizational, and behavioral), which provide separation of concerns, while supporting synergistic benefits through integrated analyses. Our work is related to the thrust of resilience, through the use of adaptive models that can be used at run-time to predict, detect and repair security attacks. Finally, our work also bears on the topic of security metrics, since we will be adapting social network-based metrics to the problem of architecture-level anomaly detection.
Noninterference defines a program to be secure if changes to high-security inputs cannot alter low-security outputs, thereby indirectly stating the epistemic property that no low-security principal acquires knowledge of high-security data. We consider a directly epistemic account of information-flow control, focusing on the knowledge flows engendered by the program's execution. Storage effects are of primary interest, since principals acquire and disclose knowledge from the execution only through these effects. The information-flow properties of the individual effectful actions are characterized using a substructural epistemic logic that accounts for the knowledge transferred through their execution. We prove that a low-security principal never acquires knowledge of a high-security input by executing a well-typed program. Moreover, the epistemic approach facilitates going beyond noninterference to account for authorized declassification: we prove that a low-security principal acquires knowledge of a high-security input only if there is an authorization proof.
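For intuition, a toy illustration (not the project's formalism) of the two kinds of flows that noninterference rules out:

```python
# Toy examples of insecure flows from a high-security input to a
# low-security output, which a sound information-flow type system rejects.

def explicit_flow(high_secret: int) -> int:
    # direct flow: the low output is computed from the high input
    return high_secret

def implicit_flow(high_secret: int) -> int:
    # indirect flow: branching on high data leaks one bit into low output
    low = 0
    if high_secret > 0:
        low = 1
    return low
```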
PI: Robert Harper