The Twenty-Sixth
High Confidence Software and Systems (HCSS) Conference
May 11-13, 2026 | Annapolis, Maryland
Call for Presentations
Introduction
The twenty-sixth annual High Confidence Software and Systems (HCSS) Conference will be held May 11-13, 2026, at the Historic Inns of Annapolis in Annapolis, Maryland. We solicit proposals to present talks at the conference.
Background
Our security, safety, privacy, and well-being increasingly depend upon the correctness, reliability, resilience, and integrity of software-intensive systems of all kinds, including cyber-physical systems (CPS). These systems must be capable of interacting correctly, safely, and securely with humans, with diverse other systems, and with the physical world even as they operate in changing, difficult-to-predict, and possibly malicious environments. New foundations in science, technology, and methodologies continue to be needed. Moreover, these methods and tools must be transitioned into mainstream use to build and assure these systems—and to move towards more effective models for acceptance and certification.
Conference Scope, Goals, and Vision
The High Confidence Software and Systems (HCSS) Conference draws together researchers, practitioners, and management leaders from government, universities, non-profits, and industry. The conference provides a forum for dialogue centered upon the development of scientific foundations for the assured engineering of software-intensive complex computing systems and the transition of science into practice. The technical emphasis of the HCSS conference is on mathematically-based tools and techniques, scientific foundations supporting evidence creation, systems assurance, and security. The HCSS vision is one of engaging and growing a community—including researchers and skilled practitioners—that is focused around the creation of dependable systems that are capable, efficient, and responsive; that can work in dangerous or inaccessible environments; that can support large-scale, distributed coordination; that augment human capabilities; that can advance the mission of national security; and that enhance quality of life, safety, and security.
Conference Themes
We invite submissions on any topic related to high-confidence software and systems that aligns with the conference scope and goals listed above. In addition, the 2026 HCSS Conference will highlight the following three themes: (1) Beyond AI, (2) AI as an Enabler, and (3) Assuring AI.
Beyond AI – This theme emphasizes classical techniques, high confidence software challenges, and technology transitions and case studies. Submissions are sought that push the boundaries of established methods without reliance on AI, focusing on the fundamentals of rigorous engineering. Focus areas include:
· Models and representations for software knowledge: The recent joint report on software understanding (CISA, DARPA, NSA, OUSD(R&E)) highlighted the long-standing challenge of capturing, expressing, and applying formal and informal software knowledge. We seek submissions on ways to express such knowledge, which can range from models, analyses, and proofs to hazard and threat analyses, test cases, coverage analyses, and informal design rationale. This includes ways to organize and analyze this information, such as modeling comprehensiveness and consistency through argumentation structures. Analyses may also focus on debloating and reducing abstraction.
· Verifiable automatic code generation/translation: In the High Confidence Systems community, we have a variety of tools that allow us to achieve formal guarantees about different artifacts such as specifications, models, and code. These guarantees, however, can be hard to transfer between paradigms or languages, with the result that verification and proof work may need to be replicated as a system's development proceeds from one stage to the next, with corresponding shifts in model choices. We seek submissions on ways to confidently translate guarantees of security in one proof ecosystem into other target formalisms. We encourage submissions covering work along these lines, not necessarily dependent on AI, including generation of security artifacts such as verified protocol implementations or trustworthy data parsers from high-level specifications, translation of code between different languages, and unifying verification toolchains.
· Formal methods for weird networks: To reduce the potential for censorship and monitoring of network communications, both the national security and internet freedom communities have designed and deployed hidden networks intended to keep users from being readily discoverable (Tor and DARPA's RACE and PWND2 programs are examples). We seek submissions on formal models of emergent communication pathways (so-called weird networks) that fundamentally improve the deployment and detection of robust and resilient hidden networks. We also seek submissions on technologies that improve confidence in the information domain, including combining formal definitions of hidden networks to yield mathematical guarantees of privacy and performance.
AI as an Enabler – This theme emphasizes the role of AI in accelerating and transforming processes integral to the development of high-confidence software systems that are currently expensive and bottlenecked on human labor. Across these application domains, a key question is how to go beyond “vibe coding” and use AI in a way that produces known-trustworthy results. Focus areas include:
· Security and Resilience Engineering: We seek submissions that focus on leveraging AI to fundamentally change the economics and effectiveness of securing high-confidence systems. The research should explore AI’s role in augmenting human efforts in red teaming, vulnerability discovery, and threat modeling, including AI that can identify logical flaws and vulnerabilities in complex codebases or deployed systems. We also seek submissions on AI-augmented approaches for automated patch generation and validation, as well as system-scale mitigations of discovered vulnerabilities.
· Software Development and Systems Engineering: We seek submissions that use AI to generate, refine, and verify artifacts that are crucial for building high-confidence software systems. The research should focus on generating rigorous specifications from high-level requirements, creating digital twins or comprehensive system models, or generating high-assurance code that adheres to strict safety or security standards (e.g., in avionics or autonomous vehicles). Research on AI-assisted test case generation and coverage analysis is also of interest: generating test suites that systematically pose and probe corner cases.
· Accelerating applications of formal verification and provers: We seek submissions that leverage AI to accelerate integration and application of formal verification and provers in software development processes. The research should focus on assisting developers in identifying critical properties, generating formal assertions, or translating informal requirements into formal specifications. The research could also focus on approaches to tackle the computational complexity of formal methods by automatically identifying abstractions and generating proof outlines for properties of complex systems.
Assuring AI – This theme emphasizes the critical need for rigor in the post-AI software supply chain. The introduction of AI coding assistants and models as supply chain components requires equivalent, or greater, assurance mechanisms than those applied to traditional software. Focus areas include:
· Extrinsic evidence for AI components: We seek submissions on developing systems to track and assure the use of AI in software supply chains. We have standards like SLSA for tracking and assuring the use of code generators, build systems, and compilers, and we catalogue open-source usage via Software Bills of Materials (SBOMs). We seek research on similar methods to track AI model provenance and the usage of AI in code development.
· Intrinsic Evidence for AI Artifacts: Model provenance and usage tracking provides extrinsic evidence of trustworthiness, but we must go beyond this to produce intrinsic evidence of model or software quality. We seek submissions on approaches that analyze the results of AI coding assistants and AI models to show that they are high quality and devoid of any malicious behavior. This includes novel static and dynamic analyses tailored towards AI-generated code, as well as methods to probe AI models directly for security and correctness properties.
Conference Presentations
The conference program features invited speakers, panel discussions, poster presentations, and a technical track of contributed talks.
Technical Track Presentations
The technical track features two kinds of talks:
Experience reports. These talks inform participants about how emerging HCSS and CPS techniques play out in real-world applications, focusing especially on lessons learned and insights gained. Although experience reports do not have to be highly technical, they should emphasize substantive and comprehensive reflection, building on data and direct experience. Experience reports focus on topics such as transitioning science into practice, architecture and requirements, use of advanced languages and tools, evaluation and assessment, team practice and tooling, supply-chain issues, etc.
Technical talks. These talks highlight state-of-the-art techniques and methods for high-confidence software systems with an emphasis on how those techniques and methods can be used in practice. Presenters of these talks should strive to make their material accessible to the broader HCSS community even as they discuss deep technical results in areas as diverse as concurrency analysis, hybrid reasoning approaches, theorem proving, separation logic, synthesis, analytics, various modeling techniques, etc.
If you are interested in offering a talk—or nominating someone else to be invited to do so—please upload an abstract of one page or less for your proposed talk, or a one-paragraph description of your nominee’s proposed talk, by January 19, 2026 to [HCSS Conference Submit]. Abstracts and nomination paragraphs should clearly indicate why the talk would be relevant to HCSS and which, if any, conference themes the talk would address. Notifications of accepted presentations will be made by February 17, 2026.
Further instructions for electronically submitting print-ready abstracts and final slide presentations will be provided in the acceptance notification messages. Abstracts of accepted presentations will be published on the HCSS Conference website.
Important Dates
CfP Abstracts Due: January 19, 2026
Notification of Decisions: February 17, 2026
HCSS Conference: May 11-13, 2026
Planning Committee
Co-Chairs
Mike Dodds, Galois, Inc.
Sandeep Neema, Vanderbilt University
Steering Group
Perry Alexander, University of Kansas
June Andronick, Proofcraft
Darren Cofer, Collins Aerospace
Kathleen Fisher, RAND
John Hatcliff, Kansas State University
John Launchbury, Galois, Inc.
Patrick Lincoln, DARPA
Stephen Magill, Amazon Web Services
Brad Martin, Galois, Inc.
Lee Pike, Logothetica
Ray Richards, RTX BBN Technologies
Bill Scherlis, Carnegie Mellon University
Eric Smith, Kestrel Institute
Adam W, National Cyber Security Centre
Sean Weaver, DARPA
Matt Wilding, DARPA
Kristin Yvonne Rozier, Iowa State University
Organization
Katie Dey, Vanderbilt University
Sponsor Agency
National Security Agency