As cyber attacks grow in complexity and frequency, cyber threat intelligence (CTI) remains a priority objective for defenders. A critical component of CTI at the strategic level of defensive operations is attack attribution. Attributing an attack to a threat group informs defenders about the adversaries actively engaging them and advances their ability to respond. In this paper, we propose a data-analytic approach to threat attribution using adversary playbooks of tactics, techniques, and procedures (TTPs). Specifically, our approach uses association rule mining on a large real-world CTI dataset to extend known threat TTP playbooks with statistically probable TTPs the adversary may deploy. The benefits are twofold. First, we offer a dataset of learned TTP associations and extended threat playbooks. Second, we show that we can attribute attacks using a weighted Jaccard similarity with 96\% accuracy.
Authored by Kelsie Edie, Cole Mckee, Adam Duby
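As a rough illustration of the attribution step described above, the following minimal Python sketch computes a weighted Jaccard similarity between an observed TTP set and known playbooks and attributes the attack to the best-matching group; the ATT&CK technique IDs, weights, and group names are hypothetical, not taken from the paper.

```python
# Minimal sketch of attribution via weighted Jaccard similarity.
# Playbooks map technique IDs to weights; known TTPs could carry weight
# 1.0 and mined (extended) TTPs their association-rule confidence.

def weighted_jaccard(a: dict, b: dict) -> float:
    """J_w(a, b) = sum_i min(a_i, b_i) / sum_i max(a_i, b_i)."""
    keys = set(a) | set(b)
    num = sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    den = sum(max(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    return num / den if den else 0.0

def attribute(observed: dict, playbooks: dict) -> str:
    """Return the threat group whose playbook best matches the observed TTPs."""
    return max(playbooks, key=lambda g: weighted_jaccard(observed, playbooks[g]))

playbooks = {
    "APT-A": {"T1059": 1.0, "T1566": 1.0, "T1027": 0.7},  # 0.7: mined TTP
    "APT-B": {"T1190": 1.0, "T1505": 1.0, "T1027": 0.4},
}
observed = {"T1059": 1.0, "T1027": 1.0}
print(attribute(observed, playbooks))  # -> APT-A
```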
Advanced Persistent Threat (APT) attacks are complex, employing diverse attack elements and increasingly intelligent techniques. This paper introduces a tool for security risk assessment specifically designed for these attacks. This tool assists security teams in systematically analyzing APT attacks to derive adaptive security requirements for mission-critical target systems. Additionally, the tool facilitates the assessment of security risks, providing a comprehensive understanding of their impact on target systems. By leveraging this tool, security teams can enhance defense strategies, mitigating potential threats and ensuring the security of target systems.
Authored by Sihn-Hye Park, Dongyoon Kim, Seok-Won Lee
Federated edge learning can be essential in supporting privacy-preserving, artificial intelligence (AI)-enabled activities in digital twin 6G-enabled Internet of Things (IoT) environments. However, we must also consider the potential of attacks targeting the underlying AI systems (e.g., adversaries seeking to corrupt data on the IoT devices during local updates or to corrupt the model updates); hence, in this article, we propose an anticipatory study of poisoning attacks in federated edge learning for digital twin 6G-enabled IoT environments. Specifically, we study the influence of adversaries on the training and development of federated learning models in these environments. We demonstrate that attackers can carry out poisoning attacks in two different learning settings, namely centralized learning and federated learning, and that successful attacks can severely reduce the model's accuracy. We comprehensively evaluate the attacks on a new cyber security dataset designed for IoT applications with three deep neural networks under both non-independent and identically distributed (Non-IID) data and independent and identically distributed (IID) data. On an attack classification problem, the poisoning attacks can reduce accuracy from 94.93\% to 85.98\% with IID data and from 94.18\% to 30.04\% with Non-IID data.
Authored by Mohamed Ferrag, Burak Kantarci, Lucas Cordeiro, Merouane Debbah, Kim-Kwang Choo
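To make the federated setting concrete, here is a minimal sketch of FedAvg with a single poisoned client, using synthetic data and a linear model in place of the paper's deep networks and IoT dataset; the client count, learning rate, and poisoning strategy are illustrative assumptions, not the paper's setup.

```python
# Minimal FedAvg sketch: client 0 poisons its local update by training
# on flipped targets, dragging the averaged global model off target.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

def client_data(n=200):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.1, steps=10):
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w = w - lr * grad
    return w

clients = [client_data() for _ in range(5)]
w_global = np.zeros(2)
for _ in range(20):
    updates = []
    for i, (X, y) in enumerate(clients):
        if i == 0:                       # client 0 is the adversary:
            y = -y                       # it trains on flipped targets
        updates.append(local_update(w_global, X, y))
    w_global = np.mean(updates, axis=0)  # FedAvg aggregation

print("global weights:", w_global, "vs true:", w_true)
```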
This paper presents a blockchain-based framework for enhancing traceability and transparency within the global agrifood supply chain. By integrating blockchain technology with the Ethereum Virtual Machine (EVM), the framework offers a robust solution to the industry's challenges: each product's journey is securely documented in an unalterable digital ledger accessible to all stakeholders, while real-time Internet of Things (IoT) sensors monitor variables crucial to product quality. With millions afflicted by foodborne diseases, substantial food wastage, and strong consumer demand for transparency, the need for such a framework is pressing. Moreover, the framework's data-driven approach not only strengthens consumer confidence and product authenticity but also lays the groundwork for robust sustainability and toxicity assessments. The paper details an architecture that combines blockchain and the EVM to support a sustainable, transparent, and trustworthy agrifood landscape.
Authored by Prasanna Kumar, Bharati Mohan, Akilesh S, Jaikanth Y, Roshan George, Vishal G, Vishnu P, Elakkiya R
As a recent breakthrough in generative artificial intelligence, ChatGPT can create new data, images, audio, or text content based on user context. In the field of cybersecurity, it provides generative, automated AI services such as network detection, malware protection, and privacy compliance monitoring. However, it also faces significant security risks during its design, training, and operation phases, including privacy breaches, content abuse, prompt word attacks, model stealing attacks, abnormal structure attacks, data poisoning attacks, model hijacking attacks, and sponge attacks. Starting from the risks and incidents that ChatGPT has recently faced, this paper proposes a framework for analyzing its cybersecurity in cyberspace and envisions adversarial models and systems. It puts forward a new evolutionary relationship in which attackers and defenders use ChatGPT to enhance their own capabilities in a changing environment, and predicts the future development of ChatGPT from a security perspective.
Authored by Chunhui Hu, Jianfeng Chen
As the use of machine learning continues to grow in prominence, so does the need for increased awareness of the threats posed by artificial intelligence. Now more than ever, people are worried about poisoning attacks, one of the many AI-related dangers that have already been made public. To fool a classifier during testing, an attacker may "poison" it by altering a portion of the dataset it utilised for training. This article presents a novel poison-resistance strategy. The approach uses a recently developed primitive, the keyed nonlinear probability test, to determine whether or not the training input is consistent with a previously learnt distribution D, even in adversarial settings. The scheme relies on a secret key unknown to the adversary. Since the key is kept hidden, an adversary cannot use it to fool the keyed nonparametric normality test into concluding that a (substantially) modified dataset really originates from the designated distribution D.
Authored by Ramesh Saini
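As a hedged illustration of the idea of a keyed nonparametric consistency test (not the paper's exact primitive), the sketch below seeds a random projection from a secret key and applies a two-sample Kolmogorov-Smirnov test against reference samples from D; the projection, threshold, and synthetic data are all illustrative assumptions.

```python
# Minimal sketch: without the key, an adversary cannot predict which
# key-seeded statistic of the data will be checked for consistency.
import hashlib
import numpy as np
from scipy.stats import ks_2samp

def keyed_projection(key: bytes, dim: int) -> np.ndarray:
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return np.random.default_rng(seed).normal(size=dim)

def keyed_test(key: bytes, reference: np.ndarray, incoming: np.ndarray,
               alpha: float = 0.01) -> bool:
    v = keyed_projection(key, reference.shape[1])
    result = ks_2samp(reference @ v, incoming @ v)
    return result.pvalue >= alpha        # True: consistent with D

rng = np.random.default_rng(1)
ref = rng.normal(size=(1000, 10))
clean = rng.normal(size=(500, 10))
poisoned = np.vstack([clean, rng.normal(loc=5.0, size=(100, 10))])
key = b"defender-secret"
print(keyed_test(key, ref, clean))     # expected True
print(keyed_test(key, ref, poisoned))  # likely False for a shift this large
```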
The adoption of IoT in a multitude of critical infrastructures is revolutionizing several sectors, ranging from smart healthcare systems to financial organizations and thermal and nuclear power plants. Yet, the progressive growth of IoT devices in critical infrastructure without considering security risks can damage the privacy, confidentiality, and integrity of both individuals and organizations. To overcome these security threats, we propose an AI and onion routing-based secure architecture for IoT-enabled critical infrastructure. First, we employ AI classifiers that classify IoT data as attack or non-attack, discarding attack data from further communication. In addition, the AI classifiers are protected from data poisoning attacks by an isolation forest algorithm that efficiently detects poisoned data and eradicates it from the dataset's feature space. Only non-attack data is forwarded to the onion routing network, which applies triple encryption to the IoT data. Because the onion routing only processes non-attack data, it is less computationally expensive than other baseline works. Moreover, each onion router is associated with blockchain nodes that store verification tokens for the IoT data. The proposed architecture is evaluated using performance parameters such as accuracy, precision, recall, training time, and compromise rate. Among the evaluated classifiers, SVM performs best, achieving 97.7\% accuracy.
Authored by Nilesh Jadav, Rajesh Gupta, Sudeep Tanwar
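A minimal sketch of the filter-then-train step described above, assuming tabular IoT features: an isolation forest flags outlying (potentially poisoned) rows, which are removed before training the SVM classifier. The synthetic data, contamination rate, and classifier settings are illustrative, not the paper's.

```python
# Isolation-forest filtering of suspected poison, then SVM training
# on the retained rows only.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_clean = rng.normal(size=(900, 8))
y_clean = (X_clean[:, 0] > 0).astype(int)
X_poison = rng.normal(loc=6.0, size=(100, 8))     # injected poison
y_poison = rng.integers(0, 2, size=100)

X = np.vstack([X_clean, X_poison])
y = np.concatenate([y_clean, y_poison])

iso = IsolationForest(contamination=0.1, random_state=0).fit(X)
mask = iso.predict(X) == 1               # +1 = inlier, -1 = outlier
svm = SVC().fit(X[mask], y[mask])        # train only on retained data
print(f"retained {mask.sum()} of {len(X)} samples")
print("accuracy on clean data:", svm.score(X_clean, y_clean))
```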
This survey paper provides an overview of the current state of AI attacks and the risks they pose to AI security and privacy as artificial intelligence becomes more prevalent in various applications and services. The risks associated with AI attacks and security breaches are becoming increasingly apparent and cause significant financial and social losses. This paper categorizes the different types of attacks on AI models, including adversarial attacks, model inversion attacks, poisoning attacks, data poisoning attacks, data extraction attacks, and membership inference attacks. The paper also emphasizes the importance of developing secure and robust AI models to ensure the privacy and security of sensitive data. Through a systematic literature review, this survey comprehensively analyzes the current state of AI attacks, the associated risks to AI security and privacy, and detection techniques.
Authored by Md Rahman, Aiasha Arshi, Md Hasan, Sumayia Mishu, Hossain Shahriar, Fan Wu
AI technology is widely used in different fields due to the effectiveness and accuracy of its results. This diversity of usage attracts many attackers who target AI systems to reach their goals. One of the most important and powerful attacks launched against AI models is the label-flipping attack, which allows the attacker to compromise the integrity of the training dataset, degrading the accuracy of ML models or producing specific outputs chosen by the attacker. This paper therefore studies the robustness of several Machine Learning models against targeted and non-targeted label-flipping attacks on the dataset during the training phase. It also checks whether the results reported in the existing literature are reproducible. The results are observed and explained in the context of the cybersecurity domain.
Authored by Alanoud Almemari, Raviha Khan, Chan Yeun
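To make the two attack variants studied above concrete, the following sketch flips labels both non-targeted (to a random other class) and targeted (one source class mapped to an attacker-chosen class); the class indices and flip budgets are illustrative.

```python
# Minimal sketch distinguishing non-targeted from targeted label flipping.
import numpy as np

def flip_labels(y, rate, rng, source=None, target=None):
    """Non-targeted: flip to a random other class. Targeted: source -> target."""
    y = y.copy()
    if source is None:
        idx = rng.choice(len(y), int(rate * len(y)), replace=False)
        for i in idx:
            y[i] = rng.choice([c for c in np.unique(y) if c != y[i]])
    else:
        cand = np.flatnonzero(y == source)
        idx = rng.choice(cand, int(rate * len(cand)), replace=False)
        y[idx] = target
    return y

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=1000)
y_nt = flip_labels(y, 0.2, rng)                      # non-targeted
y_t = flip_labels(y, 0.5, rng, source=0, target=2)   # targeted: 0 -> 2
print("non-targeted labels changed:", (y != y_nt).sum())
print("targeted labels changed:    ", (y != y_t).sum())
```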
Federated learning is a typical distributed AI technique proposed to protect user privacy and data security: it trains machine learning models on decentralized datasets by sharing model gradients rather than user data. However, while this machine learning approach safeguards data from being shared, it also increases the likelihood that servers will be attacked. Federated learning models are sensitive to poisoning attacks, and an attacker who passes poisoned gradients can directly contaminate the global model, posing an effective threat to it. In this paper, we propose a federated learning poisoning attack method based on feature selection. Unlike traditional poisoning attacks, it modifies only the important features of the data and ignores the others, which keeps the attack effective while remaining highly stealthy and able to bypass general defense methods. Experiments demonstrate the feasibility of the method.
Authored by Zhengqi Liu, Ziwei Liu, Xu Yang
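A minimal sketch of the feature-selective idea, assuming an attacker with a surrogate model: only the top-ranked features of the poisoned rows are perturbed, keeping the modification sparse and stealthy. The surrogate, the choice of k, and the perturbation scale are illustrative assumptions, not the paper's method.

```python
# Feature-selective poisoning: rank feature importance with a surrogate,
# then perturb only the top-k features of a small set of rows.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # only 3 features matter

surrogate = LogisticRegression(max_iter=1000).fit(X, y)
importance = np.abs(surrogate.coef_).ravel()
top_k = np.argsort(importance)[-3:]          # attacker's selected features

X_poison = X.copy()
X_poison[:50, top_k] -= 4.0 * np.sign(surrogate.coef_.ravel()[top_k])
print("features modified:", sorted(top_k.tolist()))
print("fraction of matrix entries changed:",
      round((X != X_poison).mean(), 4))      # small: the attack is sparse
```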
Machine learning models are susceptible to a class of attacks known as adversarial poisoning where an adversary can maliciously manipulate training data to hinder model performance or, more concerningly, insert backdoors to exploit at inference time. Many methods have been proposed to defend against adversarial poisoning by either identifying the poisoned samples to facilitate removal or developing poison agnostic training algorithms. Although effective, these proposed approaches can have unintended consequences on the model, such as worsening performance on certain data sub-populations, thus inducing a classification bias. In this work, we evaluate several adversarial poisoning defenses. In addition to traditional security metrics, i.e., robustness to poisoned samples, we also adapt a fairness metric to measure the potential undesirable discrimination of sub-populations resulting from using these defenses. Our investigation highlights that many of the evaluated defenses trade decision fairness to achieve higher adversarial poisoning robustness. Given these results, we recommend our proposed metric to be part of standard evaluations of machine learning defenses.
Authored by Nathalie Baracaldo, Farhan Ahmed, Kevin Eykholt, Yi Zhou, Shriti Priya, Taesung Lee, Swanand Kadhe, Mike Tan, Sridevi Polavaram, Sterling Suggs, Yuyang Gao, David Slater
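As a rough sketch of the kind of fairness measurement the authors advocate, the code below compares the worst-case accuracy gap across sub-populations before and after a toy, deliberately biased filtering defense; the groups, data, and gap metric are illustrative stand-ins for the paper's evaluation.

```python
# Measuring sub-population accuracy disparity induced by a defense that
# filters training data unevenly across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

def subgroup_gap(model, X, y, groups):
    accs = [model.score(X[groups == g], y[groups == g])
            for g in np.unique(groups)]
    return max(accs) - min(accs)     # disparity across sub-populations

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
groups = rng.integers(0, 2, size=2000)           # two sub-populations
y = (X[:, 0] + 0.5 * groups > 0).astype(int)

base = LogisticRegression(max_iter=1000).fit(X, y)
# A defense that discards "suspicious" samples may discard one group more:
keep = (groups == 0) | (rng.random(2000) < 0.3)  # toy, biased filter
defended = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

print("gap before defense:", round(subgroup_gap(base, X, y, groups), 3))
print("gap after defense: ", round(subgroup_gap(defended, X, y, groups), 3))
```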
AI-based code generators have gained a fundamental role in assisting developers in writing software starting from natural language (NL). However, since these large language models are trained on massive volumes of data collected from unreliable online sources (e.g., GitHub, Hugging Face), AI models become an easy target for data poisoning attacks, in which an attacker corrupts the training data by injecting a small amount of poison into it, i.e., astutely crafted malicious samples. In this position paper, we address the security of AI code generators by identifying a novel data poisoning attack that results in the generation of vulnerable code. Next, we devise an extensive evaluation of how these attacks impact state-of-the-art models for code generation. Lastly, we discuss potential solutions to overcome this threat.
Authored by Cristina Improta
The technology described in this paper allows two or more air-gapped computers with passive speakers to discreetly exchange data while they are in the same room. The suggested solution takes advantage of the audio chip's HDA Jack Retask capability, which enables attached speakers to be switched from output devices to input devices, turning them into microphones. Details of the implementation, technical background, and attack model are discussed. Despite not being intended to function as microphones, the reversed speakers operate effectively in the near-ultrasonic frequency range (18 kHz to 24 kHz). Practical factors for the effective application of the suggested strategy are then analyzed. The findings have important ramifications for safe data transfer between air-gapped systems and emphasise the necessity of extra security measures to thwart such assaults.
Authored by S Suraj, Meenu Mohan, Suma S
Urban Air Mobility is envisioned as an on-demand, highly automated and autonomous air transportation modality. It requires the use of advanced sensing and data communication technologies to gather, process, and share flight-critical data. While this sharing of mixed-critical data brings opportunities, it also presents serious cybersecurity threats and safety risks if compromised, due to the cyber-physical nature of the airborne vehicles. The avionics system design approach of adhering to functional safety standards (DO-178C) alone is therefore inadequate to protect mission-critical avionics functions from cyber-attacks. To address this challenge, the DO-326A/ED-202A standard provides a baseline to effectively manage cybersecurity risks and to ensure the airworthiness of airborne systems. In this regard, this paper pursues holistic cybersecurity engineering and bridges the security gap by mapping the DO-326A/ED-202A system security risk assessment activities to the Threat Analysis and Risk Assessment process. It introduces a Resilient Avionics Architecture as an experimental use case for Urban Air Mobility, following the DO-326A/ED-202A standard guidelines. It also presents a comprehensive system security risk assessment of the use case and derives appropriate risk mitigation strategies. The presented work helps avionics system designers to identify, assess, protect, and manage cybersecurity risks across the avionics system life cycle.
Authored by Fahad Siddiqui, Alexander Ahlbrecht, Rafiullah Khan, Sena Tasdemir, Henry Hui, Balmukund Sonigara, Sakir Sezer, Kieran McLaughlin, Wanja Zaeske, Umut Durak
Wireless Sensor Networks (WSNs) have gained prominence in technology for diverse applications, such as environmental monitoring, health care, smart agriculture, and industrial automation. Comprising small, low-power sensor nodes that sense and collect data from the environment, process it locally, and communicate wirelessly with a central sink or gateway, WSNs face challenges related to limited energy resources, communication constraints, and data processing requirements. This paper presents a comprehensive review of the current state of research in WSNs, focusing on aspects such as network architecture, communication protocols, energy management techniques, data processing and fusion, security and privacy, and applications. Existing solutions are critically analysed regarding their strengths and weaknesses, and research gaps and future directions for WSNs are identified.
Authored by Santosh Jaiswal, Anshu Dwivedi
AMLA is the novel Auckland Model for Logical Airgaps developed at the University of Auckland. IT-OT convergence use cases are rapidly being implemented, mostly in an ad-hoc manner that leaves large security holes. This paper introduces the first AMLA logical airgap design pattern and showcases AMLA's layered defense system via a New Zealand case study in the electricity distribution sector, proposing how logical airgaps can be beneficial in New Zealand. Logical airgaps can thus provide security even to legacy methods and devices, without replacing them, while allowing the newer convergence use cases to work economically and securely.
Authored by Abhinav Chopra, Nirmal-Kumar Nair, Rizki Rahayani
Wireless communication enables an ingestible device to send sensor information and support external on-demand operation while in the gastrointestinal (GI) tract. However, it is challenging to maintain stable wireless communication with an ingestible device that travels inside the dynamic GI environment, as this environment easily detunes the antenna and decreases the antenna gain. In this paper, we propose an air-gap based antenna solution to stabilize the antenna gain inside this dynamic environment. By surrounding a chip antenna with 1–2 mm of air, the antenna is isolated from the environment, recovering its gain and the received signal strength by 12 dB or more according to our in vitro and in vivo evaluations in swine. The air gap provides margin for the high path loss, enabling stable wireless communication at 2.4 GHz that allows users to easily access their ingestible devices using mobile devices with Bluetooth Low Energy (BLE). On the other hand, data sent or received over the wireless medium is vulnerable to eavesdropping by nearby devices other than authorized users. Therefore, we also propose a lightweight security protocol. The proposed protocol is implemented with low energy consumption and without compromising the security level, thanks to its base protocol of symmetric challenge-response and Speck, a cipher optimized for software implementation.
Authored by Yeseul Jeon, Saurav Maji, So-Yoon Yang, Muhammed Thaniana, Adam Gierlach, Ian Ballinger, George Selsing, Injoo Moon, Josh Jenkins, Andrew Pettinari, Niora Fabian, Alison Hayward, Giovanni Traverso, Anantha Chandrakasan
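A minimal sketch of the symmetric challenge-response handshake the protocol above builds on, assuming a pre-shared key between the ingestible device and the user's phone; HMAC-SHA256 stands in for Speck here because Speck is not available in Python's standard library, and the key handling is illustrative.

```python
# Symmetric challenge-response: a fresh random nonce prevents replay,
# and only a holder of the pre-shared key can compute the response.
import hashlib
import hmac
import os

KEY = os.urandom(16)                      # pre-shared symmetric key

def respond(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Reader (phone) authenticates the device:
challenge = os.urandom(16)                # fresh nonce from the phone
device_response = respond(KEY, challenge) # computed on the device
assert hmac.compare_digest(device_response, respond(KEY, challenge))
print("device authenticated")
```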
The notion that ships, marine vessels and off-shore structures are digitally isolated is quickly disappearing. Affordable and accessible wireless communication technologies (e.g., short-range radio, long-range satellite) are quickly removing any air-gaps these entities have. Commercial, defence, and personal ships have a wide range of communication systems to choose from, yet some can weaken the overall ship security. One of the most significant information technologies (IT) being used today is satellite-based communications. While the backbone of this technology is often secure, third-party devices may introduce vulnerabilities. Within maritime industries, the market for satellite communication devices has also grown significantly, with a wide range of products available. With these devices and services, marine cyber-physical systems are now more interconnected than ever. However, some of these off-the-shelf products can be more insecure than others and, as shown here, can decrease the security of the overall maritime network and other connected devices. This paper examines the vulnerability of an existing, off-the-shelf product, how a novel attack-chain can compromise the device, how that introduces vulnerabilities to the wider network, and then proposes solutions to the found vulnerabilities.
Authored by Jordan Gurren, Avanthika Harish, Kimberly Tam, Kevin Jones
Air-gapped workstations are separated from the Internet because they contain confidential or sensitive information. Studies have shown that attackers can leak data from air-gapped computers with covert ultrasonic signals produced by loudspeakers. To counteract the threat, speakers might not be permitted on highly sensitive computers or may be disabled altogether, a measure known as an 'audio gap.' This paper presents an attack enabling adversaries to exfiltrate data over ultrasonic waves from air-gapped, audio-gapped computers without external speakers. The malware on the compromised computer uses the built-in buzzer to generate sonic and ultrasonic signals. This component is mounted on many systems, including PC workstations, embedded systems, and server motherboards, and allows software and firmware to provide error notifications to a user, such as memory and peripheral hardware failures. We examine the different types of internal buzzers and their hardware and software controls. Despite their limited technological capabilities, such as 1-bit sound, we show that sensitive data can be encoded in sonic and ultrasonic waves. This is done using pulse width modulation (PWM) techniques to maintain a carrier wave with a dynamic range. We also show that malware can evade detection by hiding in the frequency bands of other components (e.g., fans and power supplies). We implement the attack using a PC transmitter and a smartphone app receiver. We discuss transmission protocols, modulation, encoding, and reception, and present an evaluation of the covert channel. Based on our tests, sensitive data can be exfiltrated from air-gapped computers through their built-in buzzers. A smartphone can receive the data from up to six meters away at 100 bits per second.
Authored by Mordechai Guri
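To illustrate the modulation side of such a channel, the sketch below encodes bits as two near-ultrasonic tones (binary FSK) at 100 bits per second, the kind of waveform a 1-bit buzzer can approximate with PWM; the frequencies, bit time, and sample rate are illustrative choices, not the paper's exact parameters.

```python
# Binary FSK over near-ultrasonic tones: 10 ms/bit -> 100 bits per second.
import numpy as np

FS = 48_000                 # receiver sample rate (Hz)
F0, F1 = 18_000, 19_000     # tone frequency per bit value
BIT_TIME = 0.01             # seconds per bit

def modulate(bits: str) -> np.ndarray:
    t = np.arange(int(FS * BIT_TIME)) / FS
    return np.concatenate(
        [np.sin(2 * np.pi * (F1 if b == "1" else F0) * t) for b in bits])

def demodulate(signal: np.ndarray) -> str:
    n = int(FS * BIT_TIME)
    bits = []
    for i in range(0, len(signal) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(signal[i:i + n]))
        freq = np.fft.rfftfreq(n, 1 / FS)[spectrum.argmax()]
        bits.append("1" if abs(freq - F1) < abs(freq - F0) else "0")
    return "".join(bits)

payload = "1011001110001011"
print(demodulate(modulate(payload)) == payload)   # True
```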
The rapid advancement of technology in aviation business management, notably through the implementation of location-independent aerodrome control systems, is reshaping service efficiency and cost-effectiveness. However, this emphasis on operational enhancements has left a notable gap in cybersecurity incident management proficiency. This study addresses the escalating sophistication of the cybersecurity threat landscape, where malicious actors target critical safety information, posing risks ranging from disruptions to potentially catastrophic incidents. The paper employs a specialized conceptualization technique, derived from prior research, to analyze the interplay between malicious software and degraded-mode operations in location-independent aerodrome control systems. Rather than predicting attack trajectories, this approach prioritizes the development of training paradigms to rigorously evaluate expertise at the engineering, operational, and administrative levels of the air traffic management domain. This strategy offers a proactive framework to safeguard critical infrastructures, ensuring uninterrupted, reliable services and fortifying resilience against potential threats. The methodology promises to cultivate a more secure and adept environment for aerodrome control operations, mitigating vulnerabilities associated with malicious interventions.
Authored by Gabor Horvath
Medium-voltage (MV) power distribution networks have a complex topology, which can easily give rise to air arc faults. However, the current of the air arc is low, and the arc temperature is only a few thousand kelvin. In this case, the arc is in non-local thermodynamic equilibrium (non-LTE). The LTE state of the arc is the basis for establishing arc models and calculating transport coefficients. In this paper, the non-LTE effect of the MV AC air arc is studied using moiré deflection and optical emission spectroscopy (OES) techniques.
Authored by Tong Zhou, Qing Yang, Tao Yuan
This paper presents AirKeyLogger, a novel radio frequency (RF) keylogging attack on air-gapped computers. Our keylogger exploits radio emissions from a computer's power supply to exfiltrate real-time keystroke data to a remote attacker. Unlike hardware keylogging devices, our attack does not require physical hardware; it can be conducted via a software supply-chain attack and is based solely on software manipulations. Malware on a sensitive, air-gapped computer can intercept keystrokes by using global hooking techniques or by injecting malicious code into a running process. To leak confidential data, the processor's working frequencies are manipulated to generate a pattern of electromagnetic emissions from the power unit, modulated by the keystrokes. The keystroke information can be received at distances of several meters via an RF receiver or a smartphone with a simple antenna. We review related work, discuss keylogging methods, and present multi-key modulation techniques. We evaluate our method at various typing speeds and with on-screen keyboards as well. We show the design and implementation of the transmitter and receiver components and present evaluation findings. Our tests show that malware can eavesdrop on keylogging data in real time over radio signals several meters away, even behind concrete walls, from highly secure and air-gapped systems.
Authored by Mordechai Guri
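As a rough sketch of the carrier mechanism (not the paper's full attack), the code below modulates CPU load to shape power-supply activity, alternating busy loops and idle intervals per bit; mapping keystrokes to bit patterns and the RF reception side are out of scope here, and the bit time is an illustrative choice.

```python
# Load-modulation carrier: "1" saturates a core for one bit period,
# "0" idles, shaping the power unit's activity pattern over time.
import time

def busy(duration: float) -> None:
    end = time.perf_counter() + duration
    while time.perf_counter() < end:      # spin to saturate one core
        pass

def transmit(bits: str, bit_time: float = 0.05) -> None:
    for b in bits:
        if b == "1":
            busy(bit_time)                # "1": burst of full CPU load
        else:
            time.sleep(bit_time)          # "0": idle interval

transmit("10110010")                      # one illustrative byte
```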
Specific Emitter Identification (SEI) is advantageous for its ability to passively identify emitters by exploiting distinct, unique, and organic features unintentionally imparted upon every signal during formation and transmission. These features are attributed to the slight variations and imperfections that exist in the Radio Frequency (RF) front end, thus SEI is being proposed as a physical layer security technique. The majority of SEI work assumes the targeted emitter is a passive source with immutable and difficult-to-mimic signal features. However, Software-Defined Radio (SDR) proliferation and Deep Learning (DL) advancements require a reassessment of these assumptions, because DL can learn SEI features directly from an emitter’s signals and SDR enables signal manipulation. This paper investigates a strong adversary that uses SDR and DL to mimic an authorized emitter’s signal features to circumvent SEI-based identity verification. The investigation considers three SEI mimicry approaches, two different SDR platforms, the presence or lack of signal energy as well as a "decoy" emitter. The results show that "off-the-shelf" DL achieves effective SEI mimicry. Additionally, SDR constraints impact SEI mimicry effectiveness and suggest an adversary’s minimum requirements. Future SEI research must consider adversaries capable of mimicking another emitter’s SEI features or manipulating their own.
Authored by Donald Reising, Joshua Tyler, Mohamed Fadul, Matthew Hilling, Daniel Loveless
In a one-way secret key agreement (OW-SKA) protocol in the source model, Alice and Bob have private samples of two correlated variables X and Y that are partially leaked to Eve through a variable Z, and use a single message from Alice to Bob to obtain a shared secret key. We propose an efficient, secure OW-SKA protocol for the setting where the message sent over the public channel can be tampered with by an active adversary. Our construction uses a specially designed hash function that serves both reconciliation and tamper detection. For tamper detection, the function acts as a Message Authentication Code (MAC) that maintains its security when the key is partially leaked. We prove the secrecy of the established key and the robustness of the protocol, and discuss our results.
Authored by Somnath Panja, Shaoquan Jiang, Reihaneh Safavi-Naini
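A minimal sketch of the one-message flow, under strong simplifications: the correlated samples X and Y are bit strings differing in a few positions, Bob reconciles by brute-forcing the few errors against a short check value, and standard SHA-256/HMAC stand in for the paper's specially designed leakage-resilient hash/MAC.

```python
# Toy OW-SKA flow: Alice sends (check, tag) over a public, tamperable
# channel; Bob reconciles his noisy Y to X and verifies the tag before
# accepting the derived key.
import hashlib
import hmac
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=32)                          # Alice's sample X
y = x.copy(); y[rng.choice(32, 2, replace=False)] ^= 1   # Bob's noisy Y

def h(bits, label: bytes) -> bytes:
    return hashlib.sha256(label + np.packbits(bits).tobytes()).digest()

# Alice -> Bob (public message): short check value plus MAC tag.
check = h(x, b"check")[:8]
key_a = h(x, b"key")
tag = hmac.new(key_a, check, hashlib.sha256).digest()

# Bob: search over up to 2 bit flips of Y, then verify the tag.
candidates = [()] + list(combinations(range(32), 1)) \
                  + list(combinations(range(32), 2))
for flips in candidates:
    cand = y.copy(); cand[list(flips)] ^= 1
    if h(cand, b"check")[:8] == check:
        key_b = h(cand, b"key")
        assert hmac.compare_digest(
            hmac.new(key_b, check, hashlib.sha256).digest(), tag)
        print("shared key established:", key_b.hex()[:16])
        break
```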
Can we hope to provide provable security against model extraction attacks? As a step towards a theoretical study of this question, we unify and abstract a wide range of “observational” model extraction defenses (OMEDs) - roughly, those that attempt to detect model extraction by analyzing the distribution over the adversary's queries. To accompany the abstract OMED, we define the notion of complete OMEDs - when benign clients can freely interact with the model - and sound OMEDs - when adversarial clients are caught and prevented from reverse engineering the model. Our formalism facilitates a simple argument for obtaining provable security against model extraction by complete and sound OMEDs, using (average-case) hardness assumptions for PAC-learning, in a way that abstracts current techniques in the prior literature. The main result of this work establishes a partial computational incompleteness theorem for the OMED: any efficient OMED for a machine learning model computable by a polynomial size decision tree that satisfies a basic form of completeness cannot satisfy soundness, unless the subexponential Learning Parity with Noise (LPN) assumption does not hold. To prove the incompleteness theorem, we introduce a class of model extraction attacks called natural Covert Learning attacks based on a connection to the Covert Learning model of Canetti and Karchmer (TCC '21), and show that such attacks circumvent any defense within our abstract mechanism in a black-box, nonadaptive way. As a further technical contribution, we extend the Covert Learning algorithm of Canetti and Karchmer to work over any “concise” product distribution (albeit for juntas of a logarithmic number of variables rather than polynomial size decision trees), by showing that the technique of learning with a distributional inverter of Binnendyk et al. (ALT '22) remains viable in the Covert Learning setting.
Authored by Ari Karchmer