Web 3.0 is defined by a decentralized, secure architecture made possible by blockchain technology. By offering a trustworthy platform for transactions and data storage, this paradigm shift promises to transform the way we interact with the internet. Data is often called the new oil, and protecting it is correspondingly crucial. Blockchain technology is the foundation of the Web 3.0 ecosystem, providing a secure and open method of managing user data. With the launch of Web 3.0, demand for seamless communication across numerous platforms and technologies has increased, and blockchain offers a common framework through which disparate systems can interoperate. The decentralized nature of blockchain makes unauthorized access to the system far more difficult, ushering in a highly secure Web 3.0, and by preserving the integrity and validity of data and transactions it builds trust in online interactions. AI can be integrated with blockchain to extend these capabilities and improve the overall user experience: merging the two allows us to build a safe, intelligent web that empowers users with greater privacy and more control over their online data. In this article, we emphasize the value of blockchain and AI in realizing Web 3.0's full potential for a secure internet and propose a blockchain- and AI-empowered framework. The future of technology is now driven by the combined power of blockchain, AI, and Web 3.0, providing a secure and efficient way to manage digital assets and data.
Authored by Akshay Suryavanshi, Apoorva G, Mohan N, Rishika M, Abdul N
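The integrity property this abstract attributes to blockchain can be pictured with a minimal hash chain: each block stores the hash of its predecessor, so tampering with any record invalidates every later link. The sketch below is purely illustrative; the block fields are invented and not drawn from the paper's framework.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Link a new block to the hash of the previous one."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Any modification to an earlier block breaks every later prev_hash."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append_block(chain, "alice pays bob 5")
append_block(chain, "bob pays carol 2")
assert verify(chain)
chain[0]["data"] = "alice pays bob 500"   # tampering with history...
assert not verify(chain)                  # ...is immediately detectable
```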
The HTTP protocol is the backbone of how traffic is communicated over the Internet between web applications and users. Since HTTP/1.0 and HTTP/1.1 were standardized in the late 1990s, HTTP has gone through several developmental changes. HTTP/1.1 suffers from several issues, most notably allowing only one outstanding request per connection. HTTP/2 introduced multiplexed connections and also addressed the security shortcomings of the prior versions by allowing administrators to enable HTTPS and deploy certificates to ensure the encryption and protection of data between users and the web application. A major remaining issue is that while HTTP/2 multiplexes streams over a single TCP connection, an error requiring retransmission stalls all streams, causing head-of-line blocking. HTTP/3, standardized by the IETF in June 2022, departs from prior versions in that it transmits data over UDP (via QUIC) rather than TCP. Prior research has focused on the protocol itself or on how certain types of attacks affect web architectures that use QUIC and HTTP/3. One under-researched area is how to secure cloud web architecture that uses this new protocol. To this end, we will investigate how logging can be used to secure web architecture running this protocol in the cloud.
Authored by Jacob Koch, Emmanuel Gyamfi
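As a companion to the logging idea above, here is a minimal sketch of scanning cloud access logs for HTTP/3 traffic and flagging unusually chatty clients. The log format, field order, and threshold are assumptions for illustration, not the paper's method or any particular cloud provider's format.

```python
from collections import Counter

# Hypothetical access-log lines: client_ip, protocol, status.
LOG_LINES = [
    "203.0.113.7 HTTP/3 200",
    "203.0.113.7 HTTP/3 200",
    "198.51.100.2 HTTP/2 404",
    "203.0.113.7 HTTP/3 403",
]

REQUEST_THRESHOLD = 2  # flag clients above this many HTTP/3 requests

def flag_noisy_http3_clients(lines):
    """Count HTTP/3 requests per client and flag unusually chatty ones."""
    counts = Counter()
    for line in lines:
        ip, proto, _status = line.split()
        if proto == "HTTP/3":
            counts[ip] += 1
    return [ip for ip, n in counts.items() if n > REQUEST_THRESHOLD]

print(flag_noisy_http3_clients(LOG_LINES))  # ['203.0.113.7']
```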
The exponential growth of web documents has left traditional search engines producing results with high recall but low precision. In the contemporary internet landscape, resources are made available via hyperlinks that may or may not meet the user's expectations. To mitigate this issue and improve relevance, it is necessary to examine the challenges of querying the semantic web and to advance toward semantic search engines, which rank results by the semantic significance of the context rather than the structural composition of the content. This paper outlines a proposed architecture for a semantic search engine that uses semantics to refine web search results, yielding ontologically grounded, contextually relevant results for the user's query.
Authored by Ganesh D, Ajay Rastogi
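A toy sketch of the semantic-over-syntactic ranking idea: documents and queries are compared in a concept space by cosine similarity rather than by keyword overlap. The tiny hand-made vectors below stand in for a real ontology-backed or learned embedding model, which the paper's architecture would supply.

```python
import math

# Hypothetical concept-space vectors (a real system would derive these
# from an ontology or a trained embedding model).
DOCS = {
    "jaguar the animal":   [0.9, 0.1, 0.0],  # wildlife concept
    "jaguar the car":      [0.1, 0.9, 0.0],  # automobile concept
    "python the language": [0.0, 0.1, 0.9],  # programming concept
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def semantic_search(query_vec, docs):
    """Rank documents by conceptual closeness, not keyword overlap."""
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)

# A query meaning roughly "big cat" ranks the wildlife sense of "jaguar" first.
print(semantic_search([0.8, 0.2, 0.0], DOCS))
```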
With the rapid growth of information technology in what is widely called the Digital Era, day-to-day operations and activities increasingly depend on the internet and ICT advancements. The latest technology trends in the market and industry include computing power, smart devices, artificial intelligence, robotic process automation, the metaverse, the Internet of Things (IoT), cloud computing, edge computing, blockchain, and more to come. Across all these advancements, one common element is cloud computing and the data it holds, which must be protected and safeguarded; this creates the need for cyber and cloud security. Cloud security challenges have therefore become an omnipresent concern for organizations of any size, growing from isolated incidents into a full threat landscape. Safeguarding data raises many challenges, and addressing them requires awareness of the latest technological advancements, evolving cyber threats, data as a valuable asset, the human factor, regulatory compliance, and cyber resilience. To handle these challenges, this paper proposes a security and risk prediction framework, PRCSAM (Predictive Risk and Complexity Score Assessment Model), which considers factors such as the impact and likelihood of the main risks, threats, and attacks foreseen in cloud security, and recommends a risk management framework with automatic risk assessment and scoring covering information security and privacy risks. The framework will help management and organizations make informed decisions on cyber security strategy through a data-driven, dynamic, and proactive approach to security and its complexity calculation. The paper also discusses prediction techniques using generative AI.
Authored by Kavitha Ayappan, J.M Mathana, J Thangakumar
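PRCSAM's actual scoring internals are defined in the paper itself; purely to illustrate the impact-and-likelihood idea the abstract describes, a generic risk-register calculation looks like the following. All weights and entries below are invented.

```python
# Illustrative impact x likelihood scoring (entries are made up; PRCSAM's
# complexity calculation is specified in the paper, not here).
RISKS = {
    "data breach":       {"impact": 5, "likelihood": 0.6},
    "misconfiguration":  {"impact": 4, "likelihood": 0.8},
    "account hijacking": {"impact": 5, "likelihood": 0.3},
}

def risk_score(impact: int, likelihood: float) -> float:
    return impact * likelihood

def ranked_register(risks: dict) -> list:
    """Return risks ordered from highest to lowest score."""
    scored = {name: risk_score(**r) for name, r in risks.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, score in ranked_register(RISKS):
    print(f"{name}: {score:.1f}")
```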
Procurement is a critical step in the setup of systems, as reverting decisions made at this point is typically time-consuming and costly. AI-based systems in particular face many challenges, starting with unclear and unknown side parameters at design time, changing ecosystems and regulations, and vendors overselling system capabilities. Furthermore, the AI Act imposes a great number of additional requirements on operators of critical AI systems, such as risk management and transparency measures, making procurement even more complex. In addition, the number of providers of AI systems is increasing drastically. In this paper we provide guidelines for the procurement of AI-based systems that support the decision maker in identifying the key elements for procuring secure AI systems, depending on the respective technical and regulatory environment. We also provide additional resources for applying these guidelines in practice.
Authored by Peter Kieseberg, Christina Buttinger, Laura Kaltenbrunner, Marlies Temper, Simon Tjoa
Over the years, mobile applications have transformed how users interact with digital services. Many of these apps are free, offering convenience in exchange for personal data, and that convenience comes with inherent risks to user privacy and security. This paper introduces a comprehensive methodology for evaluating the risks of sharing sensitive data through mobile applications. Building upon the Hierarchical Weighted Risk Scoring Model (HWRSM), it proposes an evaluation methodology for HWRSM, keeping in mind the implications of such risk scoring for real-world security scenarios. The methodology employs innovative risk scoring that considers various factors to assess potential security vulnerabilities related to sensitive terms. Practical assessments of a diverse set of Android applications, particularly in data-intensive categories, reveal insights into data privacy practices, vulnerabilities, and alignment with HWRSM scores. By covering testing, validation, real-world findings, and model effectiveness, the paper aims to bring practical considerations to mobile application security discussions and to facilitate informed approaches to security and privacy concerns.
Authored by Trishla Shah, Raghav Sampangi, Angela Siegel
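HWRSM's published weights and hierarchy belong to the paper; the sketch below only conveys the general shape of hierarchical weighted scoring, where leaf scores roll up through weighted parent categories into a single app-level risk value. The categories and numbers are illustrative.

```python
# Generic hierarchical weighted roll-up (weights and categories are
# illustrative, not HWRSM's published values).
TREE = {
    "weight": 1.0,
    "children": {
        "permissions": {
            "weight": 0.6,
            "children": {
                "location": {"weight": 0.7, "score": 8},
                "contacts": {"weight": 0.3, "score": 5},
            },
        },
        "network": {
            "weight": 0.4,
            "children": {
                "cleartext traffic": {"weight": 1.0, "score": 9},
            },
        },
    },
}

def rollup(node: dict) -> float:
    """Leaf nodes carry a score; inner nodes aggregate weighted children."""
    if "score" in node:
        return node["score"]
    return sum(c["weight"] * rollup(c) for c in node["children"].values())

print(f"app risk score: {rollup(TREE):.2f}")  # 0.6*7.1 + 0.4*9.0 = 7.86
```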
The use of artificial intelligence (AI) in cyber security [1] has proven very effective, helping security professionals better understand, examine, and evaluate possible risks and mitigate them. It also provides guidelines for implementing solutions that protect assets and safeguard the technology used. As cyber threats continue to evolve in complexity and scope, and as international standards are continuously updated, the need to generate new policies or update existing ones efficiently has increased [1] [2]. Using AI to develop cybersecurity policies and procedures can be key to assuring their correctness and effectiveness, a need shared by private organizations and governmental agencies alike. This study sheds light on the power of AI-driven mechanisms in enhancing digital defense procedures by demonstrating in depth how AI can aid in generating policies quickly and to the needed level.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
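As a rough sketch of the policy-generation workflow the study describes, the snippet below asks an LLM for a draft policy via the OpenAI Python client. The model name, prompt, and standard referenced are placeholders, and the paper's own pipeline may differ substantially.

```python
# Sketch of LLM-assisted policy drafting (model name and prompt are
# placeholders; requires the `openai` package and an API key).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_policy(topic: str, standard: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a cybersecurity policy writer."},
            {"role": "user",
             "content": f"Draft a concise {topic} policy aligned with {standard}."},
        ],
    )
    return response.choices[0].message.content

print(draft_policy("password management", "ISO/IEC 27001"))
```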
Artificial intelligence (AI) has been successfully used in cyber security to enhance the comprehension, investigation, and evaluation of cyber threats, to anticipate cyber risks more efficiently, and to help put strategies in place to safeguard assets and data. Because cybersecurity controls are complex and constantly developing, it has been difficult to comprehend them and to adopt the corresponding cyber training, security policies, and plans. Given that both cyber academics and cyber practitioners need a deep comprehension of cybersecurity rules, AI can be a crucial tool in both education and awareness. By offering an in-depth demonstration of how AI may help in cybersecurity education and awareness and in creating policies quickly and to the needed level, this study focuses on the efficiency of AI-driven mechanisms in strengthening the entire cyber security education life cycle.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
Artificial Intelligence (AI) holds great potential for enhancing Risk Management (RM) through automated data integration and analysis. While the positive impact of AI in RM is acknowledged, concerns are rising about unintended consequences. This study explores factors like opacity, technology and security risks, revealing potential operational inefficiencies and inaccurate risk assessments. Through archival research and stakeholder interviews, including chief risk officers and credit managers, findings highlight the risks stemming from the absence of AI regulations, operational opacity, and information overload. These risks encompass cybersecurity threats, data manipulation uncertainties, monitoring challenges, and biases in algorithms. The study emphasizes the need for a responsible AI framework to address these emerging risks and enhance the effectiveness of RM processes. By advocating for such a framework, the authors provide practical insights for risk managers and identify avenues for future research in this evolving field.
Authored by Abdelmoneim Metwally, Salah Ali, Abdelnasser Mohamed
Artificial intelligence (AI) has emerged as one of the most formative technologies of the century, gaining further importance as a means to address major societal challenges (e.g., achieving the sustainable development goals) and to stay competitive in today's global markets. Its role as a key enabler in many areas of daily life leads to a growing dependence, which must be managed accordingly to mitigate negative economic, societal, or privacy impacts. The European Union is therefore working on an AI Act, which defines concrete governance, risk, and compliance (GRC) requirements. One of the key demands of this regulation is the operation of a risk management system for high-risk AI systems. In this paper, we present a detailed analysis of relevant literature in this domain and introduce our proposed approach for an AI Risk Management System (AIRMan).
Authored by Simon Tjoa, Peter Temper, Marlies Temper, Jakob Zanol, Markus Wagner, Andreas Holzinger
We propose a new security risk assessment approach for machine learning-based AI systems (ML systems). Assessing the security risks of ML systems requires expertise in ML security, so ML system developers, who may not know much about ML security, cannot assess the risks of their own systems. With our approach, an ML system developer can easily assess these risks: in performing the assessment, the developer only has to answer yes/no questions about the specification of the ML system. In our trial, we confirmed that our approach works correctly.
Authored by Jun Yajima, Maki Inui, Takanori Oikawa, Fumiyoshi Kasahara, Ikuya Morikawa, Nobukazu Yoshioka
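The paper's actual question set is its contribution; the sketch below only shows the mechanism the abstract describes, mapping a developer's yes/no answers about an ML system's specification to candidate risks. The questions and risk labels here are invented examples.

```python
# Toy questionnaire-to-risk mapping (questions and risks are illustrative,
# not the paper's actual assessment items).
QUESTIONS = {
    "accepts_user_supplied_inputs": "evasion via adversarial examples",
    "exposes_prediction_confidence": "model extraction / membership inference",
    "retrains_on_production_data":  "data poisoning",
}

def assess(answers: dict) -> list:
    """Return the risks implied by every 'yes' answer."""
    return [risk for q, risk in QUESTIONS.items() if answers.get(q)]

answers = {
    "accepts_user_supplied_inputs": True,
    "exposes_prediction_confidence": False,
    "retrains_on_production_data": True,
}
for risk in assess(answers):
    print("potential risk:", risk)
```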
The effective use of artificial intelligence (AI) to enhance cyber security has been demonstrated in various areas, including cyber threat assessments, cyber security awareness, and compliance. AI also provides mechanisms for writing cybersecurity training, plans, policies, and procedures. Cyber security risk assessment and cyber insurance, however, remain very complicated to manage and measure, and cybersecurity professionals need a thorough understanding of risk factors and assessment techniques. For this reason, AI can be an effective tool for producing a more thorough and comprehensive analysis. This study focuses on the effectiveness of AI-driven mechanisms in enhancing the complete cyber security insurance life cycle by examining and implementing a demonstration of how AI can aid in cybersecurity resilience.
Authored by Shadi Jawhar, Craig Kimble, Jeremy Miller, Zeina Bitar
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions that address the breadth of AI security risk sustainably and systematically in the emerging context of the computing continuum and continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum in layers of the overall architecture including AI, the level of AI automation, and the level of AI security measures. We also sketch an engineering foundation that can efficiently and effectively advance each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
Cloud computing has become increasingly popular in the modern world. While it has brought many benefits to today's innovative technological era, it has also shown some drawbacks, particularly in the security of the cloud and its many services. Security practices differ in cloud computing because the role of securing information systems passes to a third party; this reduces managerial strain on those who adopt cloud computing but also brings risk to their data and the services they provide. Cloud services have become a large target for malicious actors because of the high density of valuable data stored in one relative location. By enlisting honeynets, cloud service providers can effectively improve their intrusion detection systems and gain the opportunity to study the attack vectors used by malicious actors, further improving security controls. Implementing honeynets in cloud-based networks is an investment in cloud security that will provide ever-increasing returns in hardening information systems against cyber threats.
Authored by Eric Toth, Md Chowdhury
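A single-service toy honeypot conveys the core idea above: listen on an otherwise unused port, record every connection attempt, and never serve real data. A production honeynet is an orchestrated network of many such decoys feeding an intrusion detection system; the port and logging destination here are arbitrary choices for illustration.

```python
# Minimal TCP honeypot: records connection attempts to an unused port.
# (Port and log destination are arbitrary; a real honeynet coordinates
# many such sensors and correlates their data.)
import socket
from datetime import datetime, timezone

HONEYPOT_PORT = 2222  # looks like an alternate SSH port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", HONEYPOT_PORT))
    server.listen()
    while True:
        conn, (ip, port) = server.accept()
        with conn:
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp} connection attempt from {ip}:{port}")
```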
A fast-expanding area of automated-AI research focuses on predicting and preventing cyber-attacks using machine learning algorithms. In this study, we examined the research on applying machine learning to strategic cyber defense and attack forecasting, and we provide a technique for assessing and choosing the best machine learning models for anticipating cyber-attacks. Our findings show that machine learning methods, especially random forest and neural network models, are very accurate in predicting cyber-attacks. We also identified several crucial features, such as source IP, packet size, and malicious traffic, that are strongly associated with the likelihood of cyber-attacks. Our results imply that automated AI research on cyber-attack prediction and security planning holds tremendous promise for enhancing cyber-security and averting cyber-attacks.
Authored by Ravikiran Madala, N. Vijayakumar, Nandini N, Shanti Verma, Samidha Chandvekar, Devesh Singh
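A hedged sketch of the modeling step this study reports: a random forest trained on flow features such as packet size. The synthetic data below merely demonstrates the scikit-learn workflow and is not the paper's dataset, features, or reported accuracy.

```python
# Random-forest attack prediction on synthetic flow features
# (stand-ins for the study's real traffic data; requires scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Features: packet size, packets/sec, fraction of flagged traffic.
X = np.column_stack([
    rng.normal(800, 200, n),
    rng.exponential(50, n),
    rng.uniform(0, 1, n),
])
y = (X[:, 2] + rng.normal(0, 0.2, n) > 0.7).astype(int)  # toy attack label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))
print("feature importances:", model.feature_importances_)
```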
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), guaranteeing resilient and transparent security measures is of the utmost importance. This study highlights the necessity of a Zero Trust Architecture (ZTA) to enhance UAV security, departing from conventional perimeter defences that may expose vulnerabilities. The ZTA paradigm requires rigorous, continuous authentication of all network entities and communications. Our methodology detects and identifies UAVs with 84.59% accuracy by applying a deep learning framework to Radio Frequency (RF) signals, a unique method. Precise identification is crucial in ZTA, as it determines network access. In addition, eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) improve the model's transparency and interpretability. Adherence to ZTA standards ensures that UAV classifications are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
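The zero-trust angle here is that classification feeds an access decision. Below is a schematic (not the paper's system) of a ZTA-style gate that grants network access only when the RF classifier both identifies an authorized UAV and is sufficiently confident, re-evaluating on every observation; the classifier stub, whitelist, and threshold are illustrative stand-ins.

```python
# Schematic zero-trust gate for UAV network access (classifier, whitelist,
# and threshold are illustrative stand-ins for the paper's RF/DL pipeline).
AUTHORIZED_MODELS = {"uav-model-a", "uav-model-b"}
CONFIDENCE_THRESHOLD = 0.90

def classify_rf_signal(signal) -> tuple[str, float]:
    """Placeholder for the deep-learning RF classifier."""
    return "uav-model-a", 0.93

def zero_trust_decision(signal) -> bool:
    """Never trust by default: require a confident, authorized identification."""
    label, confidence = classify_rf_signal(signal)
    return label in AUTHORIZED_MODELS and confidence >= CONFIDENCE_THRESHOLD

# ZTA re-authenticates continuously, not once at the perimeter.
for observation in [b"rf-frame-1", b"rf-frame-2"]:
    action = "grant" if zero_trust_decision(observation) else "deny"
    print(f"{action} network access for {observation!r}")
```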
The integration of IoT with cellular wireless networks is expected to deepen as cellular technology progresses from 5G to 6G, enabling enhanced connectivity and data exchange. This evolution, however, raises security concerns, including data breaches, unauthorized access, and increased exposure to cyber threats; the complexity of 6G networks may introduce new vulnerabilities, highlighting the need for robust measures to safeguard sensitive information and user privacy. Addressing these challenges is critical for the massively IoT-connected systems of 5G networks as well as any that will operate in the 6G environment. Artificial intelligence is expected to play a vital role in the operation and management of 6G networks, and because of the complex interaction of IoT and 6G, explainable AI (XAI) is expected to emerge as an important tool for enhancing security. This study presents an AI-powered security system for the Internet of Things (IoT) that applies XGBoost together with SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) to the CICIoT 2023 dataset. These explanations empower administrators to deploy more resilient security measures tailored to specific threats and vulnerabilities, improving overall system security against cyber threats and attacks.
Authored by Navneet Kaur, Lav Gupta
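A compact sketch of the explainability pipeline the abstract describes: an XGBoost detector plus SHAP attributions. Synthetic data replaces CICIoT 2023, whose loading is out of scope here; requires the `xgboost` and `shap` packages.

```python
import numpy as np
import shap
import xgboost

# Synthetic stand-in for CICIoT 2023 flow features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy attack label

model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

# SHAP attributions tell an administrator which features drove each alert.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print("per-feature attributions for first alert:", shap_values[0])
```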
The authors clarified in 2020 that the relationship between AI and security can be classified into four categories: (a) attacks using AI, (b) attacks by AI itself, (c) attacks on AI, and (d) security measures using AI, and summarized research trends for each. Subsequently, ChatGPT became available in November 2022, and the potential applications of ChatGPT and other generative AIs, along with the associated risks, have attracted attention. In this study, we examined how the emergence of generative AI affects the relationship between AI and security. The results show that (a) the need for the four perspectives on AI and security remains unchanged in the era of generative AI; (b) the generalization of AI targets and automatic program generation brought about by generative AI will greatly increase the risk of attacks by AI itself; and (c) generative AI makes it possible to produce easy-to-understand answers to various questions in natural language, which may lead to the spread of fake news and phishing e-mails that can easily fool many people, and to an increase in AI-based attacks. It also became clear that (1) attacks using AI and (2) responses to attacks by AI itself are highly important. Analyzing attacks by AI itself with an attack tree revealed that the following measures are needed: (a) establishing penalties for developing inappropriate programs, (b) introducing a reporting system for signs of attacks by AI, (c) preventing AI revolt by incorporating Asimov's three principles of robotics, and (d) establishing a mechanism to prevent AI from attacking humans even when it becomes confused.
Authored by Ryoichi Sasaki
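The attack-tree analysis this study applies can be pictured with a tiny AND/OR tree evaluator like the one below; the nodes shown are invented examples, not the paper's actual tree of attacks by generative AI.

```python
# Minimal AND/OR attack-tree evaluation (node contents are invented;
# the paper's tree models attacks launched by generative AI itself).
TREE = ("OR", [
    ("AND", [
        ("LEAF", "AI generates malicious program"),
        ("LEAF", "program evades review"),
    ]),
    ("LEAF", "AI is manipulated via prompt injection"),
])

ACHIEVABLE = {"AI generates malicious program", "program evades review"}

def feasible(node) -> bool:
    """An AND node needs all children; an OR node needs at least one."""
    kind, payload = node
    if kind == "LEAF":
        return payload in ACHIEVABLE
    results = [feasible(child) for child in payload]
    return all(results) if kind == "AND" else any(results)

print("attack goal reachable:", feasible(TREE))  # True via the AND branch
```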
The complex landscape of multi-cloud settings is the result of the fast growth of cloud computing and the ever-changing needs of contemporary organizations, and strong cyber defenses are of fundamental importance in this setting. In this study, we investigate the use of AI in hybrid cloud settings for multi-cloud security management. To help businesses improve their productivity and resilience, we provide a mathematical model for optimal resource allocation. Our methodology streamlines dynamic threat assessments, making it easier for security teams to prioritize vulnerabilities efficiently, and the incorporation of AI-driven security tactics heralds a new age of real-time threat response. The technique has real-world implications that can help businesses stay ahead of constantly changing threats. Future work will focus on autonomous security systems, interoperability, ethics, and cutting-edge AI models validated in the real world. This study provides a detailed road map for businesses navigating the complex cybersecurity landscape of multi-cloud settings, promoting resilience and agility in this era of digital transformation.
Authored by Srimathi. J, K. Kanagasabapathi, Kirti Mahajan, Shahanawaj Ahamad, E. Soumya, Shivangi Barthwal
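The abstract does not reproduce the paper's allocation model; as a stand-in in the same spirit, here is a linear-programming toy that spends a fixed security budget across three clouds to maximize covered risk. All coefficients are invented; requires SciPy.

```python
# Toy linear program: allocate a security budget across three clouds to
# maximize risk coverage (coefficients are invented; requires scipy).
from scipy.optimize import linprog

risk_reduction = [-0.8, -0.6, -0.9]  # negated because linprog minimizes
cost_per_unit = [[3.0, 2.0, 4.0]]    # budget constraint row
budget = [10.0]
bounds = [(0, 3)] * 3                # at most 3 units of effort per cloud

result = linprog(c=risk_reduction, A_ub=cost_per_unit, b_ub=budget,
                 bounds=bounds, method="highs")
print("effort per cloud:", result.x)
print("total risk covered:", -result.fun)
```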
With the rapid advancement of technology and the expansion of available data, AI has permeated many aspects of people's lives. Large language models (LLMs) such as ChatGPT are increasing the accuracy of their responses and achieving a high level of communication with humans. These AIs can benefit business, for example in customer support and documentation tasks, allowing companies to respond to customer inquiries efficiently and consistently. AI can also generate digital content, including text, images, and a wide range of digital materials based on its training data, and is expected to see broad business use. However, the widespread use of AI also raises ethical concerns: the potential for unintentional bias, discrimination, and privacy and security implications must be carefully considered. While AI can improve our lives, it has the potential to exacerbate social inequalities and injustices. This paper explores the unintended outputs of AI and assesses their impact on society. By identifying the potential for unintended output, developers and users can take appropriate precautions; such experiments are essential to efforts to minimize the potential negative social impacts of AI through transparency, accountability, and careful use. We also discuss social and ethical aspects with the aim of finding sustainable solutions regarding AI.
Authored by Takuho Mitsunaga
Generative Artificial Intelligence (AI) has increasingly been used to enhance threat intelligence and cyber security measures for organizations. Generative AI is a form of AI that creates new data without relying on existing data or expert knowledge. This technology provides decision support systems with the ability to automatically and quickly identify threats posed by hackers or malicious actors by taking into account various sources and data points. In addition, generative AI can help identify vulnerabilities within an organization s infrastructure, further reducing the potential for a successful attack. This technology is especially well-suited for security operations centers (SOCs), which require rapid identification of threats and defense measures. By incorporating interesting and valuable data points that previously would have been missed, generative AI can provide organizations with an additional layer of defense against increasingly sophisticated attacks.
Authored by Venkata Saddi, Santhosh Gopal, Abdul Mohammed, S. Dhanasekaran, Mahaveer Naruka
Penetration testing (pen-testing) detects potential vulnerabilities and exploits by imitating black-hat hackers in order to stop cyber crimes. Despite recent attempts to automate pen-testing, the automation problem remains unresolved: existing attempts are highly case-specific and ignore the unique characteristics of pen-testing, their accuracy is limited and very sensitive to variations, and non-automated exploit-detection algorithms exhibit redundancies. This paper summarizes recent studies in the penetration testing field and illustrates the importance of a comprehensive hybrid AI automation framework for pen-testing.
Authored by Verina Saber, Dina ElSayad, Ayman Bahaa-Eldin, Zt Fayed
Deep neural networks have been widely applied in various critical domains. However, they are vulnerable to the threat of adversarial examples. It is challenging to make deep neural networks inherently robust to adversarial examples, while adversarial example detection offers advantages such as not affecting model classification accuracy. This paper introduces common adversarial attack methods and provides an explanation of adversarial example detection. Recent advances in adversarial example detection methods are categorized into two major classes: statistical methods and adversarial detection networks. The evolutionary relationship among different detection methods is discussed. Finally, the current research status in this field is summarized, and potential future directions are highlighted.
Authored by Chongyang Zhao, Hu Li, Dongxia Wang, Ruiqi Liu
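To make the two detection families concrete, here is a minimal sketch of the statistical kind: craft an FGSM example against a small PyTorch network, then compute the maximum-softmax statistic a simple baseline detector would threshold. This is a generic baseline, not any specific method from the survey; requires PyTorch.

```python
# FGSM perturbation plus a max-softmax statistic, a simple baseline in the
# "statistical methods" detection family (illustrative; requires PyTorch).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 10))

x = torch.randn(1, 20, requires_grad=True)
y = torch.tensor([3])

# FGSM: a single signed-gradient step that increases the model's loss.
F.cross_entropy(model(x), y).backward()
x_adv = (x + 0.25 * x.grad.sign()).detach()

def max_softmax(inp: torch.Tensor) -> float:
    """Detection statistic: the model's highest softmax confidence."""
    with torch.no_grad():
        return F.softmax(model(inp), dim=1).max().item()

# A detector flags inputs whose statistic crosses a threshold tuned on
# clean validation data; with a trained model, adversarial examples
# typically shift this statistic noticeably.
print("clean:      ", max_softmax(x.detach()))
print("adversarial:", max_softmax(x_adv))
```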