advancing cybersecurity through science

The National Security Agency Research Directorate sponsors the Science of Security (SoS) Initiative to promote the foundational cybersecurity science needed to mature the cybersecurity discipline and to underpin advances in cyberdefense. The SoS initiative works in three ways: (1) engaging the academic community in foundational research, (2) promoting rigorous scientific principles, and (3) growing the SoS community. The SoS Virtual Organization is the SoS initiative's online home.

Research
promoting foundational cybersecurity science needed to mature the cybersecurity discipline and advance cyberdefense
HotSoS
providing a focal point for security-science-related work and fostering a collaborative community to advance security science
Competitions
sponsoring engaging competitions and rewarding demonstrated excellence in the cybersecurity community

Upcoming Events

  • 03/07/2026 – 03/09/2026

    MODELSWARD 2026

    The International Conference on Model-Based Software and Systems Engineering provides a platform for participants from all over the world to present research results and application experience in using model-based techniques for developing all sorts of…
  • 03/23/2026 – 03/26/2026

    RSAC 2026 Conference

    RSAC brings together thousands of security professionals — from researchers to executives and vendors — to share insights, strategies, and innovations that shape the future of cybersecurity. Expect a full agenda: keynotes, technical sessions, interactive…
  • 04/13/2026 – 04/14/2026

    SEAMS 2026

    SEAMS is a CORE-A ranked conference that applies software engineering methods, techniques, processes, and tools to support the construction of safe, performant, and cost-effective self-adaptive and autonomous systems that provide self-* properties like…
  • 04/26/2026 – 04/30/2026

    Assurance and Security for AI-enabled Systems 2026

    This conference will broadly focus on AI assurance and security and associated risks stemming from a variety of factors depending on use context, including AI system safety, equity, reliability, interpretability, robustness, privacy, and governability.