C3E Challenge Problems


C3E brings together a diverse set of experts in creative collaboration to tackle tough intellectual cybersecurity challenges and point toward novel and practical solutions. The discussions held at the symposium help to inform the Challenge Problems (CPs) for the coming year. At the C3E Symposium on 17-19 October 2022, the tracks examined cybersecurity in compromised environments, focusing on the role of AI and ML in critical systems and the human element, and on impacts on resilience through the lens of system risk and mitigation. These two themes were developed from presentations and discussions during the breakout sessions. 

The challenge problems for each year are listed below, with links to more information about each problem. 

2022: Cybersecurity and Software in the Supply Chain
2020/2021: Economics of Security and Continuous Assurance
2019: Cognitive Security and Human-Machine Teaming
2018: AI/ML Cyber Defense
2016 & 2017: Modeling Consequences of Ransomware on Critical Infrastructures
2015: Cyber Security
2014: Metadata-based Malicious Cyber Discovery



2023 Challenge Problems


A follow-on program is available for researchers to address issues raised at the Symposium. For 2022-2023, the CPs focus on the following themes: (1) how AI and ML can enable rapid, collaborative, human-machine symbiotic decision-making in critical systems across ubiquitous domains, including an exploration of the human element in understanding and shaping important outcomes, and (2) the associated impacts on the emerging resilience posture through the lenses of system risk and mitigation. We will be engaging 5-10 researchers on a part-time basis to identify and explore specific issues developed around these themes during C3E. Researchers will then present their findings at the 2023 C3E Symposium. We are seeking a funding grant to pay a small honorarium to these researchers for their efforts over the next 9-10 months.


Overall Challenge. The overall challenge is to improve cybersecurity and software by better understanding issues in AI/ML, resilience in critical cyber systems, and human factors. 


Objectives. The anticipated outcomes include a description of the critical security events addressed and of the research process followed for the effort. That description may include details on how the research was conducted and any issues or limitations associated with the chosen theme. The results might include new models or working software code.


Deliverables. Researchers are required to prepare a ten-minute video presentation to be shown at C3E 2023, a poster that describes their research, and a technical article suitable for publication in a major academic venue.


Researchers might also provide models, working software applications (apps) for open-source systems, narratives that define improvements to current processes, or AI/ML tools or techniques for operational assessments and risk analysis.


The C3E Team encourages researchers to think creatively about how to improve cybersecurity using AI/ML and human participation, or how to improve cybersecurity resilience. Researchers may choose either theme for their proposed work, or work at the intersection of both, in addressing one of the given challenges. There are multiple options for each CP.


At the recent C3E Symposium, each track focused on its theme and on possible topics for the CPs. The tracks' outcomes have shaped the CPs as shown below.


Challenge Problem 1: AI/ML and the Human Element


The first group responded to three questions:

  1. How can we develop AI effectively and with safety guarantees? 
  2. How can AI engage with humans in joint decision-making?
  3. How do we have humans build a mental model of AI that is consistent with AI's behavior, and vice versa?

The group identified the following research questions and issues.


Option #1: Closing the Understanding Gap Between Simulations and Real-World Situations using Realistic Human Models


Problem:

  • Cyber-human interaction data is not readily accessible
  • Overly rational, simplistic models of human behavior are used in simulated environments
  • These models are missing realistic assumptions about human behavior, such as attention and memory limitations, errors, biases, stress, and workload
  • Collecting such data and modeling it are difficult
  • What are the modeling techniques that can solve such problems?
  • Cognitive models from fields such as neuroscience and psychology could be combined with data-driven and neural-network models (a minimal sketch follows at the end of this option)

Challenge:

  • Define an approach to collect human interaction data for cyber situations
  • What human interaction behaviors should be included in a dataset?
  • Define metrics and identify tools and methodologies to measure different behavior/cognitive factors.
  • Define approaches to model the interaction using cognitive/AI approaches
  • What are the relevant cognitive models?
  • Alternatively, develop new cyber-specific models
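
To illustrate the modeling direction above, the following minimal Python sketch simulates an analyst whose behavior departs from a purely rational model in two of the ways listed under Problem: a bounded attention capacity and an ACT-R-style memory-decay term. All names, parameters, and probabilities here are hypothetical placeholders, not results from the symposium.

    import math
    import random

    # Hypothetical illustration: a simulated analyst whose behavior includes
    # bounded attention and memory decay, rather than perfect rationality.

    ATTENTION_CAPACITY = 5   # max alerts examined per time step (assumed)
    MEMORY_DECAY = 0.5       # ACT-R-style power-law decay exponent (assumed)

    class SimulatedAnalyst:
        def __init__(self):
            self.seen = {}   # alert pattern -> time steps when previously seen

        def recall_strength(self, pattern, now):
            # Base-level activation: familiarity decays with time since exposure.
            return sum((now - t) ** -MEMORY_DECAY
                       for t in self.seen.get(pattern, []) if now > t)

        def triage(self, alerts, now):
            # Attention limit: only a bounded subset of alerts gets examined.
            examined = random.sample(alerts, min(ATTENTION_CAPACITY, len(alerts)))
            flagged = []
            for pattern in examined:
                # Familiar patterns are more likely to be recognized as malicious.
                if random.random() < 1 - math.exp(-self.recall_strength(pattern, now)):
                    flagged.append(pattern)
                self.seen.setdefault(pattern, []).append(now)
            return flagged

    analyst = SimulatedAnalyst()
    for step in range(1, 16):
        alerts = [random.choice(["scan", "phish", "sqli", "brute"]) for _ in range(10)]
        print(step, analyst.triage(alerts, step))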

Option #2: Cyber Analyst Assistant AI


Problem:

  • How do we develop AI that continuously builds models of the human users and other AI agents teaming with it?
  • Given different ways to solve the same problem, how do we enable AI to build, maintain, and utilize models of human users, and to delegate subtasks to different members of the human-machine team? How do we minimize the generation of false positives or unnecessary feedback to the cyber analyst?
  • Can AI methods create models and keep them up to date as tasks evolve and/or the team composition changes?
  • Current limitations:
      • Analysts learn historical attack models and work through hypotheses to enrich alerts and evidence to make their own assessments.
      • Practice today is rule-based, with a lack of AI tools.
      • Tools and algorithms exist but are narrow and not aligned to the cyber analyst domain.

Challenge:

  • Create a Cyber Analyst Assistant AI
  • Develop a shared model framework (a minimal sketch follows at the end of this option)
  • System to represent hypotheses that an analyst is exploring
  • Shared model includes hypotheses and options to enrich the model
  • AI can suggest alternative hypotheses and/or perform analysis to enrich and improve the model
  • Model represents the incident, history, steps taken, and predicted next steps
  • Model library is continuously updated
  • Cooperative human-AI problem solving
  • Design appropriate environments, testbeds, and datasets for training such cyber assistants       

Big Challenge:

  • Create a cyber analyst domain taxonomy of errors and analysis delays.
  • Support a hackathon that brings together a team of a cyber analyst, a psychologist, and an AI expert.
  • Develop AI-driven provenance methods that can aid in understanding advanced persistent threats
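
As one possible starting point for the shared model framework described above, the minimal Python sketch below represents an incident, its history, and competing hypotheses that both the analyst and an AI can enrich. The class names, fields, and the toy enrichment rule are all hypothetical illustrations, not an agreed design.

    from dataclasses import dataclass, field

    # Hypothetical sketch of a shared analyst/AI incident model.

    @dataclass
    class Hypothesis:
        description: str
        source: str                                # "analyst" or "ai"
        evidence: list = field(default_factory=list)
        confidence: float = 0.5

    @dataclass
    class IncidentModel:
        incident_id: str
        history: list = field(default_factory=list)     # steps taken so far
        hypotheses: list = field(default_factory=list)

        def add_evidence(self, index, item, weight=0.1):
            # Analyst or AI attaches evidence; confidence is nudged, not set.
            hypo = self.hypotheses[index]
            hypo.evidence.append(item)
            hypo.confidence = min(1.0, hypo.confidence + weight)

        def ai_suggest(self):
            # Toy enrichment rule: propose alternatives to weak hypotheses.
            for hypo in list(self.hypotheses):
                if hypo.source == "analyst" and hypo.confidence < 0.4:
                    self.hypotheses.append(Hypothesis(
                        "alternative to: " + hypo.description, "ai"))

    model = IncidentModel("INC-001")
    model.hypotheses.append(
        Hypothesis("credential theft via phishing", "analyst", confidence=0.3))
    model.ai_suggest()                                  # AI enriches the model
    model.add_evidence(0, "suspicious OAuth grant in auth logs")
    for h in model.hypotheses:
        print(h.source, "|", h.description, "|", round(h.confidence, 2))

A real framework would also need provenance for each enrichment, confidence calibration, and a continuously updated model library, as the Challenge items above note.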

Option #3: Trust in AI


Problem:

  • Are there different levels of trust (confidence, understanding, reliance, acceptance, etc.) for different applications of AI in cybersecurity?
  • What can be done to increase and decrease trust in this area?
  • What are specific vulnerabilities that trust in AI produces?
  • How can we exploit the trust in AI for cybersecurity operations? What is the balance between AI and the human?
  • How can we make adversaries not trust their tools or trust them inappropriately?
  • What is the AI confidence level at any given time in the OODA loop?

Challenge:

  • Evaluate current research in Explainable Artificial Intelligence (XAI).
  • Is it sufficient for establishing trust in AI?
  • What needs to be added to XAI to establish trust in AI?
  • Develop a qualitative study of AI cybersecurity users to understand the factors that lead to trust and distrust.
  • Develop a taxonomy of uses of AI in cybersecurity (website analysis, phishing detection, malware detection); a minimal sketch follows below.
  • Identify gaps humans could potentially fill based on human cognitive (mental) models that relate to real world situation awareness.
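
As a seed for the taxonomy item above, the tiny Python sketch below encodes a few AI uses in cybersecurity alongside the trust factors a study might probe for each. The autonomy levels and trust factors are illustrative placeholders that a real qualitative study would derive from user interviews.

    # Hypothetical taxonomy of AI uses in cybersecurity; autonomy levels and
    # trust factors are placeholders a qualitative study would refine.
    TAXONOMY = {
        "website analysis":   {"autonomy": "advisory",  "trust_factors": ["explainability", "understanding"]},
        "phishing detection": {"autonomy": "filtering", "trust_factors": ["false-positive rate", "reliance"]},
        "malware detection":  {"autonomy": "blocking",  "trust_factors": ["confidence scores", "acceptance"]},
    }

    for use, attrs in TAXONOMY.items():
        print(f"{use}: autonomy={attrs['autonomy']}; "
              f"probe: {', '.join(attrs['trust_factors'])}")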

Option #4: Follow-up Option from the 2022 Challenge Problem on Static Analysis Coverage


Problem:

  • Current static and dynamic software testing products provide detailed indications of potential vulnerabilities, whether malicious or unintentional.  
  • Most static analysis packages do not report what was and was not actually evaluated for the analysis report. 
  • Reporting on the actual coverage of the static analysis tools, with respect to which kinds of vulnerabilities (buffer overflow, memory leakage, SQL injection, hardcoded passwords, etc.) were examined and which code segments or modules were examined (and which were not), is a missing output.
  • The previous 2022 Challenge Problem proposed an initial reporting mechanism for coverage metrics, but only as simple print statements. For meaningful results interpretable by humans, what is needed is an analysis of the raw reporting material with a visual presentation.

Challenge:

  • Augment a readily available open-source static analyzer to report vulnerabilities checked and modules examined.
  • Apply AI/ML and other techniques to use this new coverage report to create a human-understandable “Coverage Dashboard” that detects emerging trends in the potential vulnerabilities and gaps in the static analyzer's source code examination (a minimal sketch of the aggregation step follows below). 
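
To make the dashboard idea above concrete, the Python sketch below aggregates hypothetical per-module coverage records, of the kind an augmented static analyzer might emit, into a simple text report of checks performed and gaps. The record format is invented for illustration; a real dashboard would add visualization and trend detection.

    from collections import defaultdict

    # Hypothetical coverage records, as might be parsed from an augmented
    # open-source static analyzer's output (record format invented here).
    records = [
        {"module": "auth.c", "check": "buffer_overflow",    "examined": True},
        {"module": "auth.c", "check": "hardcoded_password", "examined": True},
        {"module": "db.c",   "check": "sql_injection",      "examined": True},
        {"module": "db.c",   "check": "buffer_overflow",    "examined": False},
        {"module": "net.c",  "check": "memory_leak",        "examined": False},
    ]
    ALL_CHECKS = {"buffer_overflow", "memory_leak",
                  "sql_injection", "hardcoded_password"}

    examined = defaultdict(set)
    for r in records:
        if r["examined"]:
            examined[r["module"]].add(r["check"])

    # Text-only stand-in for the envisioned visual "Coverage Dashboard".
    print(f"{'module':<8} {'checks run':<11} gaps")
    for module in sorted({r["module"] for r in records}):
        done = examined.get(module, set())
        gaps = ", ".join(sorted(ALL_CHECKS - done)) or "none"
        print(f"{module:<8} {len(done)}/{len(ALL_CHECKS):<9} {gaps}")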

Your proposal for the AI/ML and the Human Element topic may address research on any of these topics or combinations of them.


Challenge Problem 2: Resilience, Architecture, and Autonomy


The second group addressed resilience, architecture, and autonomy, including the question of how to design and analyze system architectures that deliver required service in the face of compromised components. The group defined three option areas for research activity.


Option #1: Active Agents for Resiliency and Autonomy

  • Design and create a Recommender System based on AI/ML or related technology to help defensive operators make better decisions through the use of data from sensors and other operational metrics (see the sketch after this list)
      • What are the meaningful metrics to review/consider that are understandable/explainable and in the correct context?
      • How can the generated candidate solutions be curated and collaborated on?
  • Design and create an Agent based on AI/ML or related technology to ensure correct operations that follow the “Commander’s Intent” (rules, strategies, decisions, processes, etc.)
      • How does the Agent handle long time scales and disconnected operations over time?
      • How can the Agent be made resilient to changes in behavior or inputs?
  • Design and create an Attribution (Friend or Foe) System based on AI/ML or related technology that identifies good vs. bad actors in a compromised environment
      • Consider that the Agent must decide actions as it works autonomously.
      • The Agent must learn and react to the behaviors of both humans and systems.
  • Develop appropriate metrics to drive design decisions and validate that the implementation meets the design specifications.
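
As a strawman for the Recommender System item above, the Python sketch below scores candidate defensive actions against normalized sensor metrics and reports the per-metric contributions so the recommendation remains explainable to the operator. The metric names, actions, and weights are hypothetical placeholders.

    # Hypothetical sensor readings, normalized to [0, 1].
    metrics = {"failed_logins": 0.9, "egress_volume": 0.2, "cpu_anomaly": 0.4}

    # Candidate defensive actions, each weighting the metrics differently.
    actions = {
        "lock_accounts":   {"failed_logins": 0.8, "egress_volume": 0.1, "cpu_anomaly": 0.1},
        "throttle_egress": {"failed_logins": 0.1, "egress_volume": 0.8, "cpu_anomaly": 0.1},
        "isolate_host":    {"failed_logins": 0.2, "egress_volume": 0.3, "cpu_anomaly": 0.5},
    }

    def score(weights):
        return sum(weights[m] * value for m, value in metrics.items())

    # Rank actions and show per-metric contributions, so the operator can
    # see why an action was recommended (explainability in context).
    for action in sorted(actions, key=lambda a: score(actions[a]), reverse=True):
        detail = ", ".join(f"{m}={actions[action][m] * v:.2f}"
                           for m, v in metrics.items())
        print(f"{action}: total={score(actions[action]):.2f} ({detail})")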

Option #2: Resilient Architectures

  • Provide examples of resilient architectures for network offense and defense
      • Provide concrete examples and evidence to drive new developments, e.g., what needs to change?
      • Provide details to understand trade-offs between resiliency and other performance goals
  • Provide research on the consequences of automation
      • What are the consequences of automation?
      • How does the human gain greater understanding of automation failures involving internal states or modes?
      • Does automation require more human vigilance to detect failures?
      • Is automation of automation feasible, e.g., can automation triage automation for failures?
      • What human-AI interfaces would be useful to enhance robustness and response to failures? 
  • How do humans effectively manage and understand resilient architectures for scalability? 
  • Research and develop adaptable Honeypots through AI/ML or related technology that react to and learn from on-going attacks (see the sketch after this list)
      • How could Honeypots absorb attack information and then adapt/reconfigure (dynamically regenerate) based on this new information?
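
As a toy illustration of the adaptable Honeypot item above, the Python sketch below reallocates a honeypot's limited emulation capacity toward the services attackers probe most, keeping an epsilon-greedy exploration slot. The service list, parameters, and simulated attacker are hypothetical.

    import random
    from collections import Counter

    # Hypothetical adaptive honeypot: reallocate limited emulation capacity
    # toward the services attackers probe most (epsilon-greedy style).
    SERVICES = ["ssh", "http", "smb", "rdp"]
    CAPACITY = 3      # services emulated at once (assumed)
    EPSILON = 0.2     # chance of reserving a slot for exploration (assumed)

    observed = Counter()

    def reconfigure():
        ranked = [s for s, _ in observed.most_common()]
        keep = CAPACITY - 1 if random.random() < EPSILON else CAPACITY
        chosen = ranked[:keep]
        # Fill any remaining slots with randomly chosen other services.
        for s in random.sample(SERVICES, len(SERVICES)):
            if len(chosen) >= CAPACITY:
                break
            if s not in chosen:
                chosen.append(s)
        return chosen

    for step in range(10):
        active = reconfigure()
        probe = random.choices(SERVICES, weights=[5, 2, 1, 1])[0]  # toy attacker
        if probe in active:
            observed[probe] += 1   # honeypot absorbs attack information
        print(step, "emulating", active, "| probed:", probe)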


Option #3: Trust Factor in Resilient and Autonomous Systems

  • What is compelling evidence of trustworthiness?
      • Is avoiding unintended consequences sufficient?
      • Is providing an informed risk decision calculus (positive vs. negative outcomes, in real time) sufficient?
      • Is composing AI functions with safe/trusted components sufficient? 
  • How do you give AI a “Voice” in strategy decisions?
      • How do you synthesize inputs from the “AI Staff” and the humans? Does each input get equal consideration?
      • Given that systems and processes are risk averse, how can reward/punishment asymmetry be overcome?
  • What are the automation tradeoffs relative to objectives?
      • What are efficient automation tradeoffs?
      • Which model: fast when needed, or deep when needed?
      • What roles do meta-reasoning and time awareness play?
  • How do you develop autonomous systems with imperfect/incomplete information?
      • How do you overcome limitations of models, sensors, and data (exploit “common sense”)?
      • Are there solutions from economics or game theory?
      • Does decomposing functions and targeting them for maximum effectiveness provide a solution?

Participants are encouraged to review the aforementioned option areas for research activity and to propose one or more of these topics for follow-up research for the 2023 C3E Workshop.


Proposal Process – Next Steps


If you are interested, please send a short description (1 to 5 pages) of your proposal, including metrics to measure your research success, to Dr. Don Goff, Co-PI with Dan Wolf for the CPs, at dgoff@cyberpackventures.com by February 13, 2023. The proposals will go through a peer review process, with 8-10 selected for funding, if available, in the range of $2,000 to $10,000 per effort. Announcement of the approved funding will be made in late February 2023. The awards will be in the form of an honorarium and will not provide sufficient support for full-time engagement.


Our plan is to submit a proposal to NSF for another funding grant. If approved, these funds would be used for the honoraria.


Please send any questions to the Co-PIs: Don at dgoff@cyberpackventures.com or Dan at dwolf@cyberpackventures.com.


Additional details will be provided via email to the workshop participants and on the SOS-VO web site.