Low-level Analytics Models of Cognition for Novel Security Proofs
Abstract

A key concern in security is identifying differences between human users and “bot” programs that emulate humans. Users with malicious intent will often deploy widespread computational attacks to exploit systems and gain control. Conventional detection techniques fall into two broad categories: human observational proofs (HOPs) and human interactive proofs (HIPs). The key distinguishing feature of these techniques is the degree to which human participants are actively engaged with the “proof.” HIPs require explicit action on the part of users to establish their identity (or at least distinguish them from bots). HOPs, in contrast, are passive: they examine the ways in which users complete the tasks they would normally be completing and look for patterns that are indicative of humans rather than bots. Both approaches have significant limitations. HOPs are susceptible to imitation attacks, in which bots carry out scripted actions designed to look like human behavior. HIPs tend to be more secure because they require explicit action from a user to complete a dynamically generated test, but because humans must expend cognitive effort to pass them, they can be disruptive and reduce productivity. We are developing the knowledge and techniques to enable “Human Subtlety Proofs” (HSPs), which blend the stronger security characteristics of HIPs with the unobtrusiveness of HOPs. HSPs will improve security by providing a new avenue for actively securing systems from non-human users.
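As a concrete illustration of the passive, HOP-style analysis described above, the sketch below flags sessions whose inter-event timing is implausibly regular for a human. It is a minimal, hypothetical example: the function name, threshold, and sample sessions are ours, not the project's.

```python
import statistics

def looks_like_bot(event_times, cv_threshold=0.05):
    """Flag a session as bot-like if its inter-event intervals are nearly
    constant. Humans show high timing variability; naive scripts often
    fire events at regular intervals. The threshold is illustrative."""
    intervals = [b - a for a, b in zip(event_times, event_times[1:])]
    if len(intervals) < 2:
        return False  # too little evidence to decide
    mean = statistics.mean(intervals)
    if mean == 0:
        return True  # simultaneous events: not a human
    cv = statistics.stdev(intervals) / mean  # coefficient of variation
    return cv < cv_threshold

# A script clicking every 100 ms exactly vs. a human's irregular clicks.
bot_session = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
human_session = [0.0, 0.35, 0.52, 1.4, 1.62, 2.9]
```

A single statistic like this is, of course, exactly what imitation attacks defeat by adding jitter; that weakness is the motivation for HSPs above.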

TEAM

PIs: David Roberts, Robert St. Amant
Students: Titus Barik, Arpan Chakraborty, Brent Harrison

Normative Trust: Toward a Principled Basis for Enabling Trustworthy Decision Making
Lead PI:
Munindar Singh
Abstract

This project seeks to develop a deeper understanding of trust than is supported by current methods, which largely disregard the underlying relationships on the basis of which people do or do not trust each other. Accordingly, we begin from the notion of what we term normative relationships—or norms for short—directed from one principal to another. An example of a normative relationship is a commitment: is the first principal committed to doing something for the second principal? (The other main types of normative relationships are authorizations, prohibitions, powers, and sanctions.) Our broad research hypothesis is that trust can be modeled in terms of the relevant norms being satisfied or violated. To demonstrate the viability of this approach, we are mining commitments from emails (drawn from the well-known Enron dataset) and using them to assess trust. Preliminary results indicate that our methods can effectively estimate the trust-judgment profiles of human subjects.
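To make the hypothesis concrete, here is a minimal sketch of estimating trust from commitment outcomes. The data model and the smoothed-ratio estimate are illustrative assumptions, not the project's actual method.

```python
from dataclasses import dataclass

@dataclass
class Commitment:
    debtor: str      # principal who is committed
    creditor: str    # principal the commitment is directed to
    satisfied: bool  # outcome observed (e.g., mined from emails)

def trust_estimate(commitments, debtor, creditor):
    """Estimate the creditor's trust in the debtor as the smoothed
    fraction of satisfied commitments (Laplace smoothing, so a single
    observation does not swing the estimate to 0 or 1)."""
    relevant = [c for c in commitments
                if c.debtor == debtor and c.creditor == creditor]
    satisfied = sum(c.satisfied for c in relevant)
    return (satisfied + 1) / (len(relevant) + 2)

history = [
    Commitment("alice", "bob", True),
    Commitment("alice", "bob", True),
    Commitment("alice", "bob", False),
    Commitment("carol", "bob", False),
]
```

Under this toy model, bob's estimated trust in alice (2 of 3 commitments satisfied) exceeds his trust in carol (0 of 1 satisfied).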

TEAM

PI: Munindar Singh
Student: Anup Kalia

Munindar Singh

Dr. Munindar P. Singh is Alumni Distinguished Graduate Professor in the Department of Computer Science at North Carolina State University. He is a co-director of the DoD-sponsored Science of Security Lablet at NCSU, one of six nationwide. Munindar’s research interests include computational aspects of sociotechnical systems, especially as a basis for addressing challenges such as ethics, safety, resilience, trust, and privacy in connection with AI and multiagent systems.

Munindar is a Fellow of AAAI (Association for the Advancement of Artificial Intelligence), AAAS (American Association for the Advancement of Science), ACM (Association for Computing Machinery), and IEEE (Institute of Electrical and Electronics Engineers), and was elected a foreign member of Academia Europaea (honoris causa). He has won the ACM/SIGAI Autonomous Agents Research Award, the IEEE TCSVC Research Innovation Award, and the IFAAMAS Influential Paper Award. He won NC State University’s Outstanding Graduate Faculty Mentor Award as well as the Outstanding Research Achievement Award (twice). He was selected as an Alumni Distinguished Graduate Professor and elected to NCSU’s Research Leadership Academy.

Munindar was the editor-in-chief of the ACM Transactions on Internet Technology from 2012 to 2018 and the editor-in-chief of IEEE Internet Computing from 1999 to 2002. His current editorial service includes IEEE Internet Computing, Journal of Artificial Intelligence Research, Journal of Autonomous Agents and Multiagent Systems, IEEE Transactions on Services Computing, and ACM Transactions on Intelligent Systems and Technology. Munindar served on the founding board of directors of IFAAMAS, the International Foundation for Autonomous Agents and MultiAgent Systems. He previously served on the editorial board of the Journal of Web Semantics. He also served on the founding steering committee for the IEEE Transactions on Mobile Computing. Munindar was a general co-chair for the 2005 International Conference on Autonomous Agents and MultiAgent Systems and the 2016 International Conference on Service-Oriented Computing.

Munindar’s research has been recognized with awards and sponsorship by (alphabetically) Army Research Lab, Army Research Office, Cisco Systems, Consortium for Ocean Leadership, DARPA, Department of Defense, Ericsson, Facebook, IBM, Intel, National Science Foundation, and Xerox.

Twenty-nine students have received Ph.D. degrees and thirty-nine students MS degrees under Munindar’s direction.

A Science of Timing Channels in Modern Cloud Environments
Lead PI:
Michael Reiter
Abstract

The eventual goal of our research is to develop a principled design for comprehensively mitigating access-driven timing channels in modern compute clouds, particularly of the "infrastructure as a service" (IaaS) variety. This type of cloud permits the cloud customer to deploy arbitrary guest virtual machines (VMs) to the cloud. The security of the cloud-resident guest VMs depends on the virtual machine monitor (VMM), e.g., Xen, to adequately isolate guest VMs from one another. While modern VMMs are designed to logically isolate guest VMs, there remains the possibility of timing "side channels" that permit one guest VM to learn information about another simply by observing features that reflect the other's effects on the hardware platform. Such attacks are sometimes referred to as "access-driven" timing attacks.
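A toy model can illustrate how such an access-driven channel works. The sketch below simulates the classic prime-and-probe pattern against a shared cache; the cache model, the "cycle" timings, and all names are invented for illustration, and a real attack would measure hardware access latencies rather than query a simulated object.

```python
# Toy model of an access-driven "prime+probe" attack. The cache and
# the timings are simulated; real attacks time hardware accesses.

CACHE_SETS = 4

class ToyCache:
    def __init__(self):
        self.owner = [None] * CACHE_SETS  # who last touched each set
    def access(self, who, addr):
        s = addr % CACHE_SETS
        evicted = self.owner[s] is not None and self.owner[s] != who
        self.owner[s] = who
        return 50 if evicted else 1  # slow (miss) vs. fast (hit), "cycles"

def prime_probe(cache, victim_addrs):
    # Prime: the attacker fills every cache set with its own data.
    for s in range(CACHE_SETS):
        cache.access("attacker", s)
    # The victim runs and touches its secret-dependent addresses.
    for a in victim_addrs:
        cache.access("victim", a)
    # Probe: sets now slow for the attacker were used by the victim,
    # leaking which addresses the victim touched.
    return [s for s in range(CACHE_SETS)
            if cache.access("attacker", s) > 10]
```

The point of the model is that the attacker never reads the victim's data; it infers the victim's memory activity purely from its own access times, which is why logical isolation alone does not close the channel.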

TEAM

PI: Michael Reiter (UNC)
Students: Yinqian Zhang, Peng Li

Studying Latency and Stability of Closed-Loop Sensing-Based Security Systems
Lead PI:
Rudra Dutta
Abstract

In this project, our focus is on understanding a class of security systems in analytical terms at a certain level of abstraction. Specifically, the systems we intend to look at are (i) multipath routing (for increasing reliability) and (ii) dynamic firewalls. For multipath routing, the threat scenario is jamming: the nodes disabled by jamming play the role of compromised components in that they fail to perform their proper function. The multipath and diverse-path mechanisms are intended to allow the system to perform its overall function (critical message delivery) despite this. The project will focus on quantifying and bounding this ability to function redundantly. For the firewall, the compromise consists of an attacker guessing the firewall rules and thereby circumventing them. The system is designed to withstand this by dynamically changing the ruleset applied over time. Our project will focus on quantifying or characterizing this ability.
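Assuming, purely for illustration, that node-disjoint paths fail independently under jamming, the redundancy the project aims to quantify can be bounded with elementary probability: delivery fails only if every path fails.

```python
def delivery_probability(p_path_fail, k_paths):
    """Probability that a critical message gets through at least one of
    k node-disjoint paths, assuming each path is independently disabled
    (e.g., by jamming) with probability p_path_fail."""
    return 1 - p_path_fail ** k_paths

def paths_needed(p_path_fail, target):
    """Smallest number of disjoint paths achieving at least the target
    delivery probability."""
    k = 1
    while delivery_probability(p_path_fail, k) < target:
        k += 1
    return k

# With each path jammed with probability 0.3, three disjoint paths
# deliver with probability 1 - 0.3**3 = 0.973.
```

The independence assumption is exactly what correlated jamming violates, which is why the project's analytical treatment matters.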

TEAM

PIs: Rudra Dutta, Meeko Oishi (UNM-Albuquerque)
Student: Trisha Biswas

Rudra Dutta

Rudra Dutta was born in Kolkata, India, in 1968. After completing elementary schooling in Kolkata, he received a B.E. in Electrical Engineering from Jadavpur University, Kolkata, India, in 1991, an M.E. in Systems Science and Automation from the Indian Institute of Science, Bangalore, India, in 1993, and a Ph.D. in Computer Science from North Carolina State University, Raleigh, USA, in 2001. From 1993 to 1997 he worked for IBM as a software developer and programmer on various networking-related projects. Since 2001 he has been on the faculty of the Department of Computer Science at North Carolina State University, Raleigh, as Assistant Professor (2001-2007), Associate Professor (2007-2013), and Professor (since 2013). During the summer of 2005, he was a visiting researcher at the IBM WebSphere Technology Institute in RTP, NC, USA. His current research interests focus on the design and performance optimization of large networking systems, Internet architecture, wireless networks, and network analytics.

His research is currently supported by grants from the National Science Foundation, the National Security Agency, and industry, including a recent GENI grant and an FIA grant from NSF. He has served as a reviewer for many premier journals; on NSF, DoE, ARO, and NSERC (Canada) review panels; and on the organizing committees of many premier conferences, including as Program Co-chair for the Second International Workshop on Traffic Grooming. Most recently, he has served as Program Chair for the Optical Networking Symposium at IEEE Globecom 2008, as General Chair of IEEE ANTS 2010, and as guest editor of a special issue of the Elsevier Journal of Optical Switching and Networking on Green Networking and Communications. He is currently serving on the Steering Committee of IEEE ANTS 2013 and on the editorial board of the Elsevier Journal of Optical Switching and Networking.

He is married with two children and lives in Cary, North Carolina with his family. His father and his sister's family live in Kolkata, India.

Spatiotemporal Security Analytics and Human Cognition
Lead PI:
David L. Roberts
Abstract

A key concern in security is identifying differences between human users and “bot” programs that emulate humans. Users with malicious intent will often deploy widespread computational attacks to exploit systems and gain control. Conventional detection techniques fall into two broad categories: human observational proofs (HOPs) and human interactive proofs (HIPs). The key distinguishing feature of these techniques is the degree to which human participants are actively engaged with the “proof.” HIPs require explicit action on the part of users to establish their identity (or at least distinguish them from bots). HOPs, in contrast, are passive: they examine the ways in which users complete the tasks they would normally be completing and look for patterns that are indicative of humans rather than bots. Both approaches have significant limitations. HOPs are susceptible to imitation attacks, in which bots carry out scripted actions designed to look like human behavior. HIPs tend to be more secure because they require explicit action from a user to complete a dynamically generated test, but because humans must expend cognitive effort to pass them, they can be disruptive and reduce productivity. We are developing the knowledge and techniques to enable “Human Subtlety Proofs” (HSPs), which blend the stronger security characteristics of HIPs with the unobtrusiveness of HOPs. HSPs will improve security by providing a new avenue for actively securing systems from non-human users.

TEAM

PI: David Roberts
Student: Titus Barik

Towards a Scientific Basis for User-Centric Security Design
Abstract

Human interaction is an integral part of any system. Users have daily interactions with a system and make many decisions that affect its overall security state. The fallibility of users has been well documented, but there is little research on the fundamental principles for optimizing the usability of security mechanisms. We plan to develop a framework to design, develop, and evaluate user interaction in a security context. We will (a) examine current security mechanisms and develop basic principles that can inform security interface design; (b) introduce new paradigms for security interfaces that utilize those principles; (c) design new human-centric security mechanisms for several problem areas to illustrate the paradigms; and (d) conduct repeatable human-subject experiments to evaluate and refine the principles and paradigms developed in this research.

TEAM

PIs: Ting Yu, Ninghui Li (Purdue), Robert Proctor (Purdue)
Student: Zach Jorgensen

Quantifying Mobile Malware Threats
Abstract

In this project, we aim to systematize the knowledge base about existing mobile malware (especially on Android) and quantify its threats so that we can develop principled solutions to provably determine its presence or absence in existing marketplaces. The hypothesis is that certain fundamental commonalities exist among existing mobile malware. Accordingly, we propose a mobile malware genome project, called MalGenome, built on a large collection of mobile malware samples. Based on this collection, we can precisely systematize the malware's fundamental commonalities (in terms of violated security properties and behaviors) and quantify its possible threats on mobile devices. We can then develop principled solutions to scalably and accurately determine its presence in existing marketplaces. Moreover, to predict or uncover unknown (zero-day) malware, we can leverage the systematized knowledge base to generate an empirical prediction model, which can be rigorously and thoroughly evaluated for repeatability and accuracy.
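As one way such systematized commonalities could be used for detection, the sketch below matches a sample's observed behaviors against known families' characteristic behavior sets using Jaccard similarity. The behavior names, families, and threshold are hypothetical, not drawn from the MalGenome data.

```python
def jaccard(a, b):
    """Similarity between two behavior sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

def classify(sample_behaviors, families, threshold=0.5):
    """Match a sample's observed behaviors against known families;
    return the best-matching family above the threshold, or None,
    suggesting a potential zero-day."""
    best, score = None, 0.0
    for name, behaviors in families.items():
        s = jaccard(sample_behaviors, behaviors)
        if s > score:
            best, score = name, s
    return best if score >= threshold else None

# Hypothetical family profiles built from shared behavioral commonalities.
families = {
    "sms_fraud": {"send_sms", "read_contacts", "hide_icon"},
    "spyware":   {"read_contacts", "record_audio", "exfiltrate_http"},
}
```

A sample exhibiting none of the known behaviors falls below every threshold, which is exactly the case the project's empirical prediction model is meant to address.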

TEAM

PI: Xuxian Jiang
Student: Yajin Zhou

An Investigation of Scientific Principles Involved in Software Security Engineering
Lead PI:
Laurie Williams
Abstract

The fault-elimination part of software security engineering hinges on proactive detection of potential vulnerabilities during the software development stages. This project is currently working on (a) an attack operational profile definition based on known software vulnerability classifications, and (b) assessment of software testing strategies, guided by two questions: first, given that funding and time constraints place a practical limit on the quality of security engineering, how can that limit be assessed and leveraged; and second, how can test cases be generated automatically that are as efficient as human non-operational testing of software.
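One simple way an attack operational profile can be made actionable under a fixed budget is to allocate test effort across attack classes in proportion to their profile weights. This is an illustrative sketch with a made-up profile, not the project's method.

```python
def allocate_tests(attack_profile, budget):
    """Split a limited test budget across attack classes in proportion
    to their weight in the operational profile, using largest-remainder
    rounding so the counts sum exactly to the budget."""
    total = sum(attack_profile.values())
    raw = {k: budget * w / total for k, w in attack_profile.items()}
    alloc = {k: int(v) for k, v in raw.items()}
    leftover = budget - sum(alloc.values())
    # Give remaining tests to the classes with the largest fractional parts.
    for k in sorted(raw, key=lambda k: raw[k] - alloc[k], reverse=True):
        if leftover == 0:
            break
        alloc[k] += 1
        leftover -= 1
    return alloc

# Hypothetical profile: half of observed attacks are SQL injection, etc.
profile = {"sql_injection": 0.5, "xss": 0.3, "path_traversal": 0.2}
```

This mirrors classic operational-profile testing: effort tracks expected field exposure, so the limited budget buys the largest reduction in exploitable faults.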

TEAM

PIs: Mladen Vouk, Laurie Williams, Jeffrey Carver
Student: Patrick Morrison

Laurie Williams

Laurie Williams is a Distinguished University Professor in the Computer Science Department of the College of Engineering at North Carolina State University (NCSU). Laurie is a co-director of the NCSU Secure Computing Institute and the NCSU Science of Security Lablet. She is also the Chief Cybersecurity Technologist of the SecureAmerica Institute. Laurie's research focuses on software security; agile software development practices and processes, particularly continuous deployment; and software reliability, testing, and analysis. Laurie has more than 240 refereed publications.

Laurie is an IEEE Fellow. She was named an ACM Distinguished Scientist in 2011 and is an NSF CAREER award winner. In 2009, she received the ACM SIGSOFT Influential Educator Award. At NCSU, Laurie was named a University Faculty Scholar in 2013. She was inducted into the Research Leadership Academy and received an Alumni Association Outstanding Research Award in 2016. In 2006, she won the Outstanding Teaching Award for her innovative teaching and is an inductee of NC State's Academy of Outstanding Teachers.

Laurie leads the Software Engineering Realsearch research group at NCSU. With her students in the Realsearch group, Laurie has worked collaboratively with high-tech companies such as ABB Corporation, Cisco, IBM Corporation, Merck, Microsoft, Nortel Networks, Red Hat, Sabre Airline Solutions, SAS, and Tekelec (now Oracle), as well as several healthcare IT companies. They also extensively evaluate open source software.

Laurie is one of the foremost researchers in agile software development and in the security of healthcare IT applications. She was one of the founders of the first XP/Agile conference, XP Universe, held in 2001 in Raleigh, which has since grown into the annual Agile 200x conference. She is the lead author of the book Pair Programming Illuminated and a co-editor of Extreme Programming Perspectives. Laurie is also the instructor of a highly rated professional agile software development course that has been widely taught in Fortune 500 companies, and she is a certified instructor of John Musa's software reliability engineering course, More Reliable Software Faster and Cheaper.

Laurie received her Ph.D. in Computer Science from the University of Utah, her MBA from Duke University Fuqua School of Business, and her BS in Industrial Engineering from Lehigh University. She worked for IBM Corporation for nine years in Raleigh, NC and Research Triangle Park, NC before returning to academia.

Argumentation as a Basis for Reasoning about Security
Lead PI:
Munindar Singh
Abstract

This project involves the application of argumentation techniques for reasoning about policies, and security decisions in particular. Specifically, we are producing a security-enhanced argumentation framework that (a) provides not only inferences to draw but also actions to take; (b) considers multiparty argumentation; (c) measures the mass of evidence on both attacking and supporting arguments in order to derive a defensible conclusion with confidence; and (d) develops suitable critical questions as the basis for argumentation. The end result would be a tool that helps system administrators and other stakeholders capture and reason about their rationales as a way of ensuring that they make sound decisions regarding policies.
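A minimal sketch of point (c), weighing the mass of evidence on supporting versus attacking arguments to reach a conclusion with confidence. The weights, threshold, and three-way outcome are illustrative assumptions, not the project's framework.

```python
def conclusion_confidence(supporting, attacking):
    """Weigh the mass of evidence on supporting vs. attacking arguments.
    Each argument carries an evidence weight; confidence in the
    conclusion is the supporting share of the total mass."""
    support = sum(supporting)
    attack = sum(attacking)
    if support + attack == 0:
        return 0.5  # no evidence either way
    return support / (support + attack)

def decide(supporting, attacking, accept_at=0.7):
    """Accept, reject, or defer a policy decision based on confidence,
    modeling the tool's recommendation to a system administrator."""
    c = conclusion_confidence(supporting, attacking)
    if c >= accept_at:
        return "accept"
    if c <= 1 - accept_at:
        return "reject"
    return "defer"
```

The "defer" outcome reflects the framework's goal of recommending actions to take, not just inferences to draw: when evidence is balanced, the right action is to gather more of it.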

TEAM

PIs: Munindar P. Singh, Simon D. Parsons (CUNY)
Student: Nirav Ajmeri

Shared Perceptual Visualizations For System Security
Abstract

We are studying how to harness human visual perception in information display, with a specific focus on ways to combine layers of data in a common, well-understood display framework. Our visualization techniques are designed to present data in ways that are efficient and effective, allowing an analyst to explore large amounts of data rapidly and accurately.
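As a small illustration of combining data layers through separable visual channels, the sketch below maps one layer to luminance and a second to glyph size, so an analyst can read both attributes at each location. The encoding choices and names are illustrative, not the project's actual techniques.

```python
def layer_to_glyphs(layer_a, layer_b):
    """Combine two co-registered data layers in one display by mapping
    each to a separable visual channel: layer A to a gray level
    (luminance) and layer B to glyph size. Values assumed in [0, 1]."""
    glyphs = []
    for a, b in zip(layer_a, layer_b):
        gray = round(255 * a)    # luminance encodes layer A
        size = 2 + round(8 * b)  # glyph size encodes layer B
        glyphs.append({"gray": gray, "size": size})
    return glyphs
```

Using channels that perception research shows to be separable (luminance, size, hue) is what lets the layers be read independently rather than interfering with one another.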

TEAM

PI: Christopher G. Healey
Student: Terry Rogers
