Remote attestation is the process of gathering evidence from a remote system with the intent of establishing its trustworthiness. A relying party requests evidence from a target. The target responds by gathering allowable evidence and meta-evidence. Target evidence and meta-evidence are appraised together to establish whether the target is in a good operational state. Any modern attestation target comprises many subsystems and depends on many others, so attestation of a single component provides only a limited picture of an appraisal target. Instead, attestation targets should be treated as collections of interacting, distributed components, and attestation should gather and compose evidence for entire systems. Layered attestation is an enhanced attestation process in which attestation managers execute protocols that perform multiple component measurements and bundle the resulting evidence for appraisal. The MAESTRO tool suite provides a mechanism for building layered attestation systems around the execution of Copland protocols. Users specify a protocol to be executed, and MAESTRO configures a common attestation manager core and attestation service providers to execute that protocol on target systems. With few exceptions, MAESTRO components are either formally verified or synthesized from formal specifications, providing assurance that protocol execution faithfully implements the Copland semantics. Our presentation will give an overview of layered attestation using MAESTRO. We will present a brief overview of layered attestation and the Copland attestation protocol language. We will then present an attestation architecture for a cross-domain system. The attestation architecture includes a measured boot using TPM, IMA, and LKIM that transitions to run-time attestation using MAESTRO, which reports execution state. We will cover both formal treatment and empirical evaluation results.
https://content-cdn.sessionboard.com/event-files/2JiUl2lQWjdqtzvc7ygA_alexander-ku.mp4
Authored by Perry Alexander
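To make the layered-attestation flow above concrete, here is a minimal Python sketch (not MAESTRO's actual API or the Copland syntax): a hypothetical attestation manager runs measurement service providers in sequence and hash-chains their results into an evidence bundle, so an appraiser can check both the individual measurements and the order in which they were taken. All component names and the bundle format are illustrative.

```python
# Hypothetical sketch of a layered-attestation manager bundling evidence.
# Component names and the bundle format are illustrative, not MAESTRO's API.
import hashlib
import json
from typing import Callable, Dict, List


def measure_kernel() -> bytes:
    # Stand-in for a real measurement ASP (e.g., an LKIM-style kernel measurer).
    return b"kernel-measurement-bytes"


def measure_app_config() -> bytes:
    # Stand-in for a userspace/configuration measurement.
    return b"app-config-bytes"


def run_protocol(asps: List[Callable[[], bytes]]) -> Dict:
    """Run measurement ASPs in sequence, hash-chaining the evidence so the
    appraiser can verify both the values and the order of measurement."""
    chain = hashlib.sha256(b"nonce-from-relying-party")  # freshness
    evidence = []
    for asp in asps:
        value = asp()
        chain.update(value)
        evidence.append({"asp": asp.__name__,
                         "measurement": value.hex(),
                         "running_hash": chain.hexdigest()})
    return {"evidence": evidence, "final_hash": chain.hexdigest()}


if __name__ == "__main__":
    bundle = run_protocol([measure_kernel, measure_app_config])
    print(json.dumps(bundle, indent=2))
```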
This talk will present an empirical study of layered attestation for a cross-domain system. The presentation will cover how we boot the system into a trusted state and extend trust to a runtime. Using IMA and TPM 2.0, we boot a verified attestation manager into a measured state where it may access its signing key. We prove the key can be used only if the right attestation system makes a request in a good state. Thus, a signature's presence on evidence strongly binds that evidence to the attestation manager. Once booted, the attestation manager measures and appraises the cross-domain system according to a Copland attestation protocol. It calls LKIM and checks SELinux policy to ensure the underlying Linux system is in a good state. Then it measures CDS components and configurations for runtime appraisal. We then discuss formal verification and empirical study of the attestation system, specifically why we should trust the link from boot to runtime and the signing key's signature. Finally, we discuss empirical studies that simulate various attacks, illustrating design choices, assumptions, and limitations.
https://youtu.be/3evdvaB5Le0
Authored by William Thomas, Logan Schmalz, Sarah Johnson, Adam Petz, Perry Alexander, Joshua Guttman
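The appraisal side of the story above can be sketched as follows: the relying party first checks a signature that binds the evidence bundle to the attestation manager, then compares each measurement against a known-good value. This is only a rough illustration under simplifying assumptions; an HMAC stands in for the TPM-protected signing key, and the bundle layout and component names are hypothetical.

```python
# Simplified appraisal sketch: verify the evidence signature, then compare
# measurements against golden values. An HMAC stands in for the TPM-backed
# signature described in the talk; all names are illustrative.
import hashlib
import hmac

GOLDEN = {
    "kernel": hashlib.sha256(b"known-good-kernel").hexdigest(),
    "cds_filter_config": hashlib.sha256(b"known-good-config").hexdigest(),
}


def appraise(bundle: dict, signature: bytes, key: bytes) -> bool:
    # 1. The signature binds the evidence to the attestation manager:
    #    only an AM booted into the measured state could have produced it.
    payload = repr(sorted(bundle.items())).encode()
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False
    # 2. Appraise each measurement against its golden value.
    return all(bundle.get(name) == digest for name, digest in GOLDEN.items())


if __name__ == "__main__":
    key = b"shared-demo-key"  # in reality: a TPM-released signing key
    bundle = {"kernel": GOLDEN["kernel"],
              "cds_filter_config": GOLDEN["cds_filter_config"]}
    sig = hmac.new(key, repr(sorted(bundle.items())).encode(),
                   hashlib.sha256).digest()
    print("appraisal passed:", appraise(bundle, sig, key))
```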
Software systems are composed of interacting processes that send data to satisfy system-level requirements. When these processes become unavailable due to deviations in the system – software bugs, hardware failures, or security attacks – these systems should be resilient and continue delivering safety-critical services. Identifying the processes required to satisfy these requirements is not a trivial task, as there may exist multiple, alternate dataflow paths that satisfy them. In this work, we propose a formal modeling and analysis technique to compute the sets of minimal processes required for dataflow satisfaction. The computation of these sets is reduced to a maximum satisfiability problem via translation to AlloyMax, a formal modeling language that performs bounded model checking. We then present a method to formally define the resilience requirements of a system by constraining the minimal dataflow required under maximum system deviation. The efficacy of this work is evaluated with four case studies motivated by real-world systems, with promising results.
Authored by Abigail Hammer, Changjiang Wang, Vick Dini, Ryan Wagner, Eunsuk Kang, Bradley Schmerl, David Garlan
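As a rough illustration of the analysis described above, the sketch below brute-forces the minimal process sets for a toy architecture; it is a naive stand-in for the paper's AlloyMax/MaxSAT encoding, and the process names, requirements, and dataflow paths are invented for the example.

```python
# Naive stand-in for the AlloyMax/MaxSAT encoding: enumerate subsets of
# processes and keep the minimal ones that cover some dataflow path for
# every requirement. Process and requirement names are illustrative.
from itertools import combinations

PROCESSES = {"sensor", "filter", "logger", "planner", "backup_planner", "actuator"}

# A requirement is satisfied if all processes on at least one of its
# alternative dataflow paths are available.
REQUIREMENTS = {
    "deliver_command": [{"sensor", "filter", "planner", "actuator"},
                        {"sensor", "filter", "backup_planner", "actuator"}],
    "audit_trail": [{"sensor", "logger"}],
}


def satisfies(available: set) -> bool:
    return all(any(path <= available for path in paths)
               for paths in REQUIREMENTS.values())


def minimal_process_sets() -> list:
    minimal = []
    # Enumerate by increasing size, so every satisfying set found here is minimal.
    for k in range(len(PROCESSES) + 1):
        for subset in map(set, combinations(sorted(PROCESSES), k)):
            if satisfies(subset) and not any(m <= subset for m in minimal):
                minimal.append(subset)
    return minimal


if __name__ == "__main__":
    for s in minimal_process_sets():
        print(sorted(s))
```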
Successful attacks are nearly inevitable, as sophisticated threat actors are committed to inflicting damage, leaving digital and physical destruction in their wakes. As defenders recognize the inevitability of successful attacks, they must change their defense paradigms from only preventing attacks to also weathering the attacks that penetrate first-line defenses. Instead of focusing solely on prevention, systems' ability to provide functionality should be minimally disrupted while simultaneously containing an attacker. The engineering challenge is to build and operate systems that are resilient to attack, able to adapt by trading off some functionality to preserve trust in more-critical functionality. We refer to this concept as graceful degradation. Defenders would be in a far better position to address the increasingly dire situation confronting them if they had a method and tool to support graceful degradation. However, this requires the ability to reason despite uncertainties at architecture and design time and at run time. Automation can be supported by formal modeling of systems, but it must not be labor-intensive. We propose and develop an approach that directly addresses these challenges. By formally modeling systems and system behavior at an architectural level of abstraction to explore hypothetical attacks and the systems' abilities to respond, we automate the evaluation of systems' security properties and enable effective automated graceful degradation in the presence of uncertainty, allowing us to architect and operate systems that are better able to weather attacks. We describe our approach and provide tooling to demonstrate our concept.
Authored by Ryan Wagner
This report explores the architecture and threat modeling implications of an autonomous malware agent powered by AI decision-making logic. Unlike traditional operator-controlled malware, this framework enables self-adaptive behavior, encrypted C2 infrastructure via domain generation, surveillance, persistence, and modular action execution, all without human intervention. Designed for educational and defensive research only, this blueprint outlines the next frontier of AI-native threats and provides blue-team mitigation strategies against future autonomous adversarial agents.
Project [REDACTED]
Authored by Skyler Piatiak
Authored by Dung Nguyen, Taylor Johnson, Kevin Leach
Software continues to be vulnerable to adversaries attempting to shut down systems or obtain sensitive information from them. An analysis that identifies these threats ahead of time may prevent cascading failures that harm users. One approach to identifying security flaws is computing the amount of compromise that a software architecture can handle without a complete stoppage of service delivery. We show that this method qualitatively identifies parts of a software architecture that are not robust against security attacks and do not meet robustness standards. We call this method robustness through trust boundaries and define it formally in the Alloy modeling language. Three architectures taken from real-world systems are used to demonstrate the effectiveness of trust boundaries in identifying security vulnerabilities of an architecture and evaluating the robustness of a system.
Authored by Abigail Hammer, Changjian Zhang, Vick Dini, Ryan Wagner, Bradley Schmerl, Eunsuk Kang, David Garlan
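The core computation behind this robustness analysis can be illustrated with a small sketch (Python rather than Alloy, with the trust-boundary grouping omitted): given a toy architecture, it finds how many component compromises the system can absorb before service delivery stops entirely. All component names and edges are hypothetical.

```python
# Illustrative (non-Alloy) sketch: find the largest number of compromised
# components the architecture tolerates before every path delivering the
# service is cut. The architecture is made up for the example.
from itertools import combinations

# Directed dataflow edges from a client-facing entry point to the service.
EDGES = {("entry", "gw1"), ("entry", "gw2"),
         ("gw1", "app_primary"), ("gw2", "app_replica"),
         ("app_primary", "database"), ("app_replica", "database")}
COMPONENTS = {c for edge in EDGES for c in edge} - {"entry", "database"}


def service_reachable(compromised: set) -> bool:
    # Graph search from "entry" to "database", skipping compromised components.
    frontier, seen = ["entry"], {"entry"}
    while frontier:
        node = frontier.pop()
        for src, dst in EDGES:
            if src == node and dst not in compromised and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return "database" in seen


def tolerated_compromise() -> int:
    """Largest k such that every k-component compromise leaves service intact."""
    for k in range(1, len(COMPONENTS) + 1):
        if any(not service_reachable(set(c))
               for c in combinations(COMPONENTS, k)):
            return k - 1
    return len(COMPONENTS)


if __name__ == "__main__":
    print("compromises tolerated:", tolerated_compromise())
```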
Autonomous agents for cyber applications take advantage of modern defense techniques by adopting intelligent agents with conventional and learning-enabled components. These intelligent agents are trained via reinforcement learning (RL) algorithms, and can learn, adapt to, reason about and deploy security rules to defend networked computer systems while maintaining critical operational workflows. However, the knowledge available during training about the state of the operational network and its environment may be limited. The agents should be trustworthy so that they can reliably detect situations they cannot handle, and hand them over to cyber experts. In this work, we develop an out-of-distribution (OOD) Monitoring algorithm that uses a Probabilistic Neural Network (PNN) to detect anomalous or OOD situations of RL-based agents with discrete states and discrete actions. To demonstrate the effectiveness of the proposed approach, we integrate the OOD monitoring algorithm with a neurosymbolic autonomous cyber agent that uses behavior trees with learning-enabled components. We evaluate the proposed approach in a simulated cyber environment under different adversarial strategies. Experimental results over a large number of episodes illustrate the overall efficiency of our proposed approach.
Authored by Ankita Samaddar, Nicholas Potteiger, Xenofon Koutsoukos
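A minimal sketch of the monitoring idea, assuming discrete states and actions: a smoothed empirical frequency model stands in for the Probabilistic Neural Network's density estimate, and situations whose estimated likelihood falls below a threshold are flagged as out-of-distribution and handed to an analyst. Thresholds, state names, and data are illustrative.

```python
# Minimal OOD-monitoring sketch for an RL agent with discrete states/actions.
# A smoothed frequency model stands in for the PNN density estimate.
from collections import Counter


class DiscreteOODMonitor:
    def __init__(self, train_pairs, threshold=1e-3, alpha=0.1):
        self.counts = Counter(train_pairs)
        self.total = len(train_pairs)
        self.alpha = alpha                    # Laplace-style smoothing
        self.n_cells = len(self.counts)
        self.threshold = threshold

    def likelihood(self, state, action):
        c = self.counts[(state, action)]
        return (c + self.alpha) / (self.total + self.alpha * (self.n_cells + 1))

    def is_ood(self, state, action):
        # Below-threshold likelihood => hand the situation to a human analyst.
        return self.likelihood(state, action) < self.threshold


if __name__ == "__main__":
    training = [("scan_detected", "block_ip")] * 40 + [("idle", "monitor")] * 60
    monitor = DiscreteOODMonitor(training, threshold=0.01)
    print(monitor.is_ood("scan_detected", "block_ip"))   # False: seen in training
    print(monitor.is_ood("worm_outbreak", "monitor"))    # True: unfamiliar situation
```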
Authored by Dung Nguyen, Ngoc Tran, Taylor Johnson, Kevin Leach
Authored by Preston Robinette, Daniel Moyer, Taylor Johnson
Developers rely on the static safety guarantees of the Rust programming language to write secure and performant applications. However, Rust is frequently used to interoperate with other languages which allow design patterns that conflict with Rust's evolving aliasing models. Miri is currently the only dynamic analysis tool that can validate applications against these models, but it does not support foreign functions, indicating that there may be a critical correctness gap across the Rust ecosystem. We conducted a large-scale evaluation of Rust libraries that call foreign functions to determine whether Miri's dynamic analyses remain useful in this context. We used Miri and an LLVM interpreter to jointly execute applications that call foreign functions, where we found 47 instances of undefined or undesired behavior in 37 libraries. Three bugs were found in libraries that had more than 10,000 daily downloads on average during our observation period, and one was found in a library maintained by the Rust Project. Many of these bugs were violations of Rust's aliasing models, but the latest Tree Borrows model was significantly more permissive than the earlier Stacked Borrows model. The Rust community must invest in new, production-ready tooling for multi-language applications to ensure that developers can detect these errors.
Authored by Ian McCormack, Joshua Sunshine, Jonathan Aldrich
Distributed Denial of Service (DDoS) attacks have grown in complexity, with attackers dynamically adapting their strategies to maximize disruption. Dynamic DDoS adversaries evolve their attacks by changing targets, modifying botnet infrastructure, or altering traffic patterns to evade detection and maintain attack effectiveness. This dynamic nature poses significant challenges for DDoS defense, particularly in developing scalable and robust adaptive systems capable of real-time response. This paper introduces a novel, robust, multi-layered defense system called DosSink that integrates detection and mitigation through variational autoencoders (VAE) and actor-critic deep reinforcement learning (DRL). The VAE reduces the feature space, which would otherwise make learning intractable, and characterizes traffic to estimate a risk score for each flow. At the same time, the DRL agent uses these risk scores to optimize mitigation policies that include traffic limiting, flow redirection, or puzzle-based source verification actions. Feedback from puzzle inquiries refines VAE risk assessments, enhancing detection accuracy. Key innovations of this framework include (1) the VAE's adaptability as an anomaly detector that evolves with DRL actions, avoiding reliance on static rules or predefined thresholds and enhancing the robustness of the overall system adaptation; (2) the separation of traffic characterization (VAE) and decision-making (DRL), improving scalability by reducing the state space; and (3) real-time adaptability to evolving attacker strategies through dynamic collaboration between the VAE and DRL. Our evaluation experiments show that this framework accurately identifies malicious traffic flows, with a true positive rate of over 98% and a false positive rate below 1%. Moreover, it efficiently learns the optimal mitigation strategy in under 20,000 episodes across most experimental settings.
Authored by Qi Duan, Ehab Al-Shaer, David Garlan
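A highly simplified sketch of the detection/mitigation loop described above: a stand-in "reconstruction error" plays the role of the trained VAE, and a fixed threshold policy plays the role of the learned actor-critic agent. All numbers, feature conventions, and action names are invented for the example.

```python
# Illustrative detection/mitigation loop. A fixed "reconstruction error"
# stands in for the trained VAE; a threshold policy stands in for the DRL
# agent. All values and action names are made up.
import random


def reconstruction_error(flow_features):
    # Stand-in for the VAE: anomalous flows reconstruct poorly.
    return sum((x - 0.5) ** 2 for x in flow_features)


def risk_score(flow_features):
    return min(1.0, reconstruction_error(flow_features))


def mitigation_action(risk):
    # Stand-in for the DRL policy over risk scores.
    if risk < 0.2:
        return "forward"
    if risk < 0.5:
        return "rate_limit"
    if risk < 0.8:
        return "puzzle_challenge"   # puzzle-based source verification
    return "redirect_to_sink"


if __name__ == "__main__":
    random.seed(0)
    benign = [random.gauss(0.5, 0.05) for _ in range(8)]
    attack = [random.gauss(0.9, 0.05) for _ in range(8)]
    for name, flow in [("benign", benign), ("attack", attack)]:
        r = risk_score(flow)
        print(f"{name}: risk={r:.2f}, action={mitigation_action(r)}")
```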
Machine Translation (MT) is the backbone of a plethora of systems and applications that are present in users' everyday lives. Despite the research efforts and progress in the MT domain, translation remains a challenging task, and MT systems struggle when translating rare words, named entities, domain-specific terminology, idiomatic expressions, and culturally specific terms. Thus, to meet the translation performance expectations of users, engineers are tasked with periodically updating (fine-tuning) MT models to guarantee high translation quality. However, with ever-growing machine learning models, fine-tuning operations become increasingly more expensive, raising serious concerns from a sustainability perspective. Furthermore, not all fine-tunings are guaranteed to lead to increased translation quality, thus corresponding to wasted compute resources. To address this issue and enhance the sustainability of MT systems, we present FLEXICO, a new approach to engineer self-adaptive MT systems, which leverages (i) ML-based regressors to estimate the expected benefits of fine-tuning MT models; and (ii) probabilistic model checking techniques to automate the reasoning about when the benefits of fine-tuning outweigh its costs. Our empirical evaluation on two MT models and language pairs and across up to 9 domains demonstrates the predictive performance of the black-box models that estimate the expected benefits of fine-tuning, as well as their domain generalizability. Finally, we show that FLEXICO improves the sustainability of MT systems compared to naive baselines, decreasing the number of fine-tunings while preserving high translation quality.
Authored by Maria Casimiro, Paolo Romano, Jose de Souza, Amin Khan, David Garlan
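FLEXICO's core trade-off can be illustrated with a toy decision rule: fine-tune only when the predicted quality gain, valued against the cost of fine-tuning, is worth it. The sketch below uses a made-up linear predictor in place of the paper's learned regressors and omits the probabilistic model checking step; all numbers are illustrative.

```python
# Toy fine-tuning decision rule: fine-tune only when the predicted benefit
# outweighs the cost. The linear "predictor" and all numbers are illustrative
# stand-ins for FLEXICO's learned regressors and model-checking analysis.
def predicted_quality_gain(drift_score: float, new_domain_sentences: int) -> float:
    """Stand-in for the ML regressor estimating post-fine-tuning quality gain."""
    return 0.8 * drift_score + 0.00001 * new_domain_sentences


def should_finetune(drift_score: float, new_domain_sentences: int,
                    finetune_cost: float, value_per_quality_point: float) -> bool:
    gain = predicted_quality_gain(drift_score, new_domain_sentences)
    return gain * value_per_quality_point > finetune_cost


if __name__ == "__main__":
    # Little domain drift: skip the (expensive) fine-tuning.
    print(should_finetune(0.05, 2_000, finetune_cost=100.0,
                          value_per_quality_point=400.0))   # False
    # Strong drift into a new domain: fine-tuning pays off.
    print(should_finetune(0.60, 50_000, finetune_cost=100.0,
                          value_per_quality_point=400.0))   # True
```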
Temporal logic specifications play an important role in a wide range of software analysis tasks, such as model checking, automated synthesis, program comprehension, and runtime monitoring. Given a set of positive and negative examples, specified as traces, LTL learning is the problem of synthesizing a specification, in linear temporal logic (LTL), that evaluates to true over the positive traces and false over the negative ones. In this paper, we propose a new type of LTL learning problem called constrained LTL learning, where the user, in addition to positive and negative examples, is given an option to specify one or more constraints over the properties of the LTL formula to be learned. We demonstrate that the ability to specify these additional constraints significantly increases the range of applications for LTL learning, and also allows efficient generation of LTL formulas that satisfy certain desirable properties (such as minimality). We propose an approach for solving the constrained LTL learning problem through an encoding in first-order relational logic and reduction to an instance of the maximal satisfiability (MaxSAT) problem. An experimental evaluation demonstrates that ATLAS, an implementation of our proposed approach, is able to solve new types of learning problems while performing better than or competitively with the state-of-the-art tools in LTL learning.
Authored by Changjian Zhang, Parv Kapoor, Ian Dardik, Leyi Cui, Romulo Meira-Goes, David Garlan, Eunsuk Kang
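To give a feel for the problem, the sketch below is a naive, brute-force stand-in for the relational-logic/MaxSAT encoding used by ATLAS: it enumerates a tiny space of candidate formulas, keeps those consistent with the positive and negative traces, and returns the smallest one (a minimality constraint). The fragment of LTL, the candidate templates, and the example traces are all illustrative.

```python
# Toy stand-in for constrained LTL learning: enumerate a small candidate
# space, keep formulas consistent with the examples, return the smallest.
# Traces are finite sequences of sets of atoms.
from itertools import product


def holds(formula, trace, i=0):
    """Finite-trace evaluation for a tiny LTL fragment."""
    op = formula[0]
    if op == "atom":
        return i < len(trace) and formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "G":
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "F":
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "implies":
        return (not holds(formula[1], trace, i)) or holds(formula[2], trace, i)
    raise ValueError(op)


def size(formula):
    return 1 + sum(size(f) for f in formula[1:] if isinstance(f, tuple))


def candidates(atoms):
    leaves = [("atom", a) for a in atoms]
    for f in leaves:
        yield from (f, ("G", f), ("F", f), ("G", ("not", f)))
    for p, q in product(leaves, repeat=2):
        yield ("G", ("implies", p, ("F", q)))   # response pattern


def learn(atoms, positives, negatives):
    consistent = [f for f in candidates(atoms)
                  if all(holds(f, t) for t in positives)
                  and not any(holds(f, t) for t in negatives)]
    return min(consistent, key=size) if consistent else None


if __name__ == "__main__":
    pos = [[{"req"}, set(), {"ack"}], [{"req"}, {"ack"}]]
    neg = [[{"req"}, set(), set()]]
    print(learn({"req", "ack"}, pos, neg))
```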
Objective: This survey aims to understand frontline healthcare professionals' perceptions of artificial intelligence (AI) in healthcare and assess how AI familiarity influences these perceptions. Materials and Methods: We conducted a survey from February to March 2023 of physicians and physician assistants registered with the Kansas State Board of Healing Arts. Participants rated their perceptions toward AI-related domains and constructs on a 5-point Likert scale, with higher scores indicating stronger agreement. Two sub-groups were created for analysis to assess the impact of participants' familiarity and experience with AI on the survey results. Results: From 532 respondents, key concerns were Perceived Communication Barriers (median=4.0, IQR=2.8-4.8), Unregulated Standards (median=4.0, IQR=3.6-4.8), and Liability Issues (median=4.0, IQR=3.5-4.8). Lower levels of agreement were noted for Trust in AI Mechanisms (median=3.0, IQR=2.2-3.4), Perceived Risks of AI (median=3.2, IQR=2.6-4.0), and Privacy Concerns (median=3.3, IQR=2.3-4.0). Positive correlations existed between Intention to Use AI and Perceived Benefits (r=0.825) and Trust in AI Mechanisms (r=0.777). Perceived Risk negatively correlated with Intention to Use AI (r=−0.718). There was no difference in perceptions between AI-experienced and AI-naïve subgroups. Discussion: The findings suggest that perceptions of benefits, trust, risks, communication barriers, regulation, and liability issues influence healthcare professionals' intention to use AI, regardless of their AI familiarity. Conclusion: The study highlights key factors affecting AI adoption in healthcare from the frontline healthcare professionals' perspective. These insights can guide strategies for successful AI implementation in healthcare.
Authored by Tanner Dean, Rajeev Seecheran, Robert Badgett, Rosey Zackula, John Symons
Around the world there has been a rapid advancement of IoT edge devices, which in turn have enabled the collection of rich datasets as part of the Mobile Crowd Sensing (MCS) paradigm, which in practice is implemented in a variety of safety-critical applications. In spite of the advantages of such datasets, there exists an inherent data trustworthiness challenge due to the interference of malevolent actors. In this context, there is a great body of proposed solutions that capitalize on conventional machine learning algorithms for sifting through faulty data without any assumptions on the trustworthiness of the source. However, a number of open issues remain, such as how to cope with strong colluding adversaries while efficiently managing the sizable influx of user data. In this work we suggest that the use of explainable artificial intelligence (XAI) can lead to even more efficient performance, as we tackle the limitations of conventional black-box models by enabling the understanding and interpretation of a model's operation. Our approach enables reasoning about the model's accuracy in the presence of adversaries and has the ability to shun faulty or malicious data, thus enhancing the model's adaptation process. To this end, we provide a prototype implementation coupled with a detailed performance evaluation under different attack scenarios, employing both real and synthetic datasets. Our results suggest that the use of XAI leads to improved performance compared to other existing schemes.
Authored by Sam Afzal-Houshmand, Dimitrios Papamartzivanos, Sajad Homayoun, Entso Veliou, Christian Jensen, Athanasios Voulodimos, Thanassis Giannetsos
The aim of this study is to review XAI studies in terms of their solutions, applications, and challenges in renewable energy and resources. The results show that XAI genuinely helps explain how decisions are made by AI models, increases confidence and trust in the models, makes decision-making more reliable, and exposes the transparency of the decision-making mechanism. Even though there are a number of XAI solutions such as SHAP, LIME, ELI5, DeepLIFT, and rule-based approaches, a number of problems in metrics, evaluation, performance, and explanation remain domain-specific and require domain experts to develop new models or to apply available techniques. It is hoped that this article will help researchers develop XAI solutions for their energy applications and improve their AI approaches in further studies.
Authored by Betül Ersöz, Şeref Sağıroğlu, Halil Bülbül
As we know, change is the only constant in healthcare services. In this rapidly developing world, the need to drastically improve healthcare performance is essential. Secure real-time health data monitoring, analysis, and storage give us a highly efficient healthcare system for diagnosing, predicting, and preventing deadly diseases. Integrating IoT data with blockchain storage technology provides safety and security for the data. The current bottlenecks we face while integrating blockchain and IoT are primarily interoperability, scalability, and the lack of regulatory frameworks. By integrating Explainable AI into the system, it is possible to overcome some of these bottlenecks between IoT devices and the blockchain. XAI acts as a middleware solution, helping to interpret predictions and enforce the standard data communication protocol.
Authored by CH Murthy V, Lawanya Shri
Sixth generation (6G)-enabled massive network MANO orchestration, alongside distributed supervision and fully reconfigurable control logic that manages the dynamic arrangement of network components such as cell-free, Open-Air Interface (OAI), and RIS, is a potent enabler for the upcoming pervasive digitalization of vertical use cases. In such a disruptive domain, artificial intelligence (AI)-driven zero-touch "Network of Networks" intent-based automation must be able to guarantee a high degree of security, efficiency, scalability, and sustainability, especially in cross-domain and interoperable deployment environments (i.e., where points of presence (PoPs) are non-independent and identically distributed (non-IID)). To this end, this paper presents a novel, open, and fully reconfigurable networking architecture for 6G cellular paradigms, named 6G-BRICKS. 6G-BRICKS will deliver the first open and programmable O-RAN Radio Unit (RU) for 6G networks, termed the OpenRU, based on an NI USRP-based platform. Moreover, 6G-BRICKS will integrate the RIS concept into the OAI alongside Testing as a Service (TaaS) capabilities, multi-tenancy, disaggregated Operations Support Systems (OSS), and Deep Edge adaptation at the forefront. The overall ambition of 6G-BRICKS is to offer evolvability and granularity while tackling major challenges such as interdisciplinary efforts and large investments in 6G integration.
Authored by Kostas Ramantas, Anastasios Bikos, Walter Nitzold, Sofie Pollin, Adlen Ksentini, Sylvie Mayrargue, Vasileios Theodorou, Loizos Christofi, Georgios Gardikis, Md Rahman, Ashima Chawla, Francisco Ibañez, Ioannis Chochliouros, Didier Nicholson, Mario Montagud, Arman Shojaeifard, Alexios Pagkotzidis, Christos Verikoukis
This study addresses the critical need to secure VR network communication from non-immersive attacks, employing an intrusion detection system (IDS). While deep learning (DL) models offer advanced solutions, their opacity as "black box" models raises concerns. Recognizing this gap, the research underscores the urgency of DL-based explainability, enabling data analysts and cybersecurity experts to grasp model intricacies. Leveraging sensed data from IoT devices, our work trains a DL-based model for attack detection and mitigation in the VR network. Importantly, we extend our contribution by providing comprehensive global and local interpretations of the model's decisions after evaluation, using SHAP-based explanations.
Authored by Urslla Izuazu, Dong-Seong Kim, Jae Lee
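A small sketch of the kind of local, SHAP-based explanation described above, under simplifying assumptions: a random-forest "attack score" model stands in for the deep model, the traffic features are synthetic, and the libraries used are scikit-learn and shap.

```python
# Sketch of a SHAP-based local explanation for an ML attack detector. A
# random-forest score model stands in for the deep model; the traffic
# features and data are synthetic. Requires numpy, scikit-learn, and shap.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["pkt_rate", "mean_pkt_size", "conn_duration", "retransmissions"]

# Synthetic benign vs. attack traffic: attacks have high packet rates.
X_benign = rng.normal([100, 500, 30, 2], [20, 50, 10, 1], size=(500, 4))
X_attack = rng.normal([900, 120, 5, 15], [100, 30, 2, 4], size=(500, 4))
X = np.vstack([X_benign, X_attack])
y = np.concatenate([np.zeros(500), np.ones(500)])   # 1 = attack

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Local explanation for one suspicious flow: which features drove the score?
suspicious = np.array([[850.0, 130.0, 4.0, 12.0]])
shap_values = shap.TreeExplainer(model).shap_values(suspicious)
for name, contribution in zip(features, shap_values[0]):
    print(f"{name:16s} {contribution:+.3f}")
```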
The procedure for obtaining an equivalency certificate for international educational recognition is typically complicated and opaque, and it differs across nations and systems. To overcome these issues and empower students, this study proposes a novel assessment tool that makes use of blockchain technology, chatbots, the European Credit Transfer and Accumulation System (ECTS), and Explainable Artificial Intelligence (XAI). Educational equivalency assessments frequently face difficulties and a lack of openness in a variety of settings. The suggested solution uses blockchain for tamper-proof record keeping and secure data storage, building on the capabilities of each component. This improves the blockchain's ability to securely store application data and evaluation results, fostering immutability and trust. Using the distributed-ledger feature of blockchain promotes fairness in evaluations by preventing tampering and guaranteeing data integrity. The blockchain ensures data security and privacy by encrypting and storing data. We discuss, by reviewing pertinent material in each domain, how XAI can explain AI-driven equivalence decisions, promoting fairness and trust. Chatbots can improve accessibility by streamlining data collection and assisting students along the way. Transparency and efficiency are provided via ECTS computations that integrate XAI and chatbots. Emphasizing the availability of multilingual support for international students, we also address issues such as data privacy and system adaptation. The study recommends further research to assess this multifaceted method in practical contexts and to refine the technology for ethical and efficient application. Ultimately, both students and institutions will benefit, as the approach can empower individuals and promote the international mobility of degree equivalization.
Authored by Sumathy Krishnan, R Surendran
Many studies of the adoption of machine learning (ML) in Security Operation Centres (SOCs) have pointed to a lack of transparency and explanation – and thus trust – as a barrier to ML adoption, and have suggested eXplainable Artificial Intelligence (XAI) as a possible solution. However, there is a lack of studies addressing the degree to which XAI actually helps SOC analysts. Focusing on two XAI techniques, SHAP and LIME, we have interviewed several SOC analysts to understand how XAI can be used and adapted to explain ML-generated alerts. The results show that XAI can provide valuable insights for the analyst by highlighting features and information deemed important for a given alert. As far as we are aware, we are the first to conduct such a user study of XAI usage in a SOC, and this short paper provides our initial findings.
Authored by Håkon Eriksson, Gudmund Grov
Explainable AI is an emerging field that aims to address how black-box decisions of AI systems are made, by attempting to understand the steps and models involved in this decision-making. Explainable AI in manufacturing is supposed to deliver predictability, agility, and resiliency across targeted manufacturing apps. In this context, large amounts of data, which can be of high sensitivity and various formats need to be securely and efficiently handled. This paper proposes an Asset Management and Secure Sharing solution tailored to the Explainable AI and Manufacturing context in order to tackle this challenge. The proposed asset management architecture enables an extensive data management and secure sharing solution for industrial data assets. Industrial data can be pulled, imported, managed, shared, and tracked with a high level of security using this design. This paper describes the solution's overall architectural design and gives an overview of the functionalities and incorporated technologies of the involved components, which are responsible for data collection, management, provenance, and sharing as well as for overall security.
Authored by Sangeetha Reji, Jonas Hetterich, Stamatis Pitsios, Vasilis Gkolemi, Sergi Perez-Castanos, Minas Pertselakis