https://publications.eai.eu/index.php/sesa/issue/feedEAI Endorsed Transactions on Security and Safety2025-06-19T11:21:50+00:00EAI Publications Departmentpublications@eai.euOpen Journal Systems<div class="abstract"> <p>Growing threats, and increasingly also failures due to complexity, may compromise the security and resilience of network and service infrastructures. Applications and services require secure data handling, which calls for new security architectures and scalable, interoperable security policies. There is a need to guarantee end-to-end security in data communications and storage, including identity management and authentication.</p> <p><strong>INDEXING</strong>: DOAJ, CrossRef, Google Scholar, ProQuest, EBSCO, CNKI, Dimensions</p> <p> </p> </div>https://publications.eai.eu/index.php/sesa/article/view/2854SeFS: A Secure and Efficient File Sharing Framework based on the Trusted Execution Environment2022-12-01T00:29:20+00:00Yun He1224533208@qq.comXiaoqi JiaJiaxiaoqi@iie.ac.cnShengzhi Zhangshengzhi@bu.eduLou ChitkushevLTC@bu.edu<p>As cloud-based file sharing becomes increasingly popular, it is crucial to protect outsourced data against unauthorized access. Existing cryptography-based approaches suffer from expensive re-encryption upon permission revocation. Other solutions that utilize a Trusted Execution Environment (TEE) to enforce access control either expose plaintext keys to users or prove incapable of handling concurrent requests. In this paper, we propose SeFS, a secure and practical file sharing framework that leverages the cooperation of server-side and client-side enclaves to enforce access control, with the former responsible for registration, authentication, and access control enforcement, and the latter performing file decryption. This design significantly reduces the computation workload of the server-side enclave, making it capable of handling concurrent requests. 
Meanwhile, it also supports immediate permission revocation, since the file decryption keys inside the client-side enclaves are destroyed immediately after use. We implement a prototype of SeFS, and the evaluation demonstrates that it enforces access control securely with high throughput and low latency.</p>2025-07-18T00:00:00+00:00Copyright (c) 2025 EAI Endorsed Transactions on Security and Safetyhttps://publications.eai.eu/index.php/sesa/article/view/9298SoK: The Psychology of Insider Threats2025-05-13T09:21:19+00:00Mubashrah Saddiqamsad@mmmi.sdu.dkJukka Ruohonenjuk@mmmi.sdu.dk<p>This paper presents a systematic literature review on the psychology of insider threats, i.e., security risks originating from individuals within organizations. While this is a well-established research area, psychological perspectives remain underdeveloped. The extended version adds background to better contextualize the role of personality traits, psychological states, and situational factors in insider threats. The paper also highlights research gaps and the need for stronger theoretical foundations in this domain.</p>2025-06-19T00:00:00+00:00Copyright (c) 2025 EAI Endorsed Transactions on Security and Safetyhttps://publications.eai.eu/index.php/sesa/article/view/9502Breaking the Loop: Adversarial Attacks on Cognitive-AI Feedback via Neural Signal Manipulation2025-06-07T15:12:39+00:00Dhaya Rdhaya.kanthavel@pnguot.ac.pgkanthavel Rradakrishnan.kanthavel@pnguot.ac.pg<p class="ICST-abstracttext" style="text-align: left;" align="left"><span lang="EN-GB">INTRODUCTION: Brain-Computer Interfaces (BCIs) embedded with Artificial Intelligence (AI) have created powerful closed-loop cognitive systems in the fields of neurorehabilitation, robotics, and assistive technologies. 
However, such tightly coupled human-AI integration exposes these systems to new security vulnerabilities, including adversarial distortions of neural signals.<br />OBJECTIVES: This paper seeks to formally develop and assess neuro-adversarial attacks, a new class of attack vectors that target AI cognitive feedback systems by perturbing electroencephalographic (EEG) signals. The goal of the research is to simulate such attacks, measure their effects, and propose countermeasures.</span></p><p class="ICST-abstracttext" style="text-align: left;" align="left"><span lang="EN-GB">METHODS: Adversarial machine learning (AML) techniques, including the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), were applied to open EEG datasets using Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Transformer-based models. Closed-loop simulations of BCI-AI systems with real-time feedback were conducted, and both the attack vectors and the countermeasure approaches (e.g., variational autoencoders (VAEs), wavelet denoising, and adversarial detectors) were tested.<br />RESULTS: Neuro-adversarial perturbations yielded up to a 30% reduction in classification accuracy and over 35% user-intent misalignment. Transformer-based models performed relatively better, but overall performance degradation was significant. Defense strategies such as variational autoencoders and real-time adversarial detectors restored classification accuracy to over 80% and reduced successful attacks to below 10%.<br />CONCLUSION: The threat model presented in this paper is a significant addition to the fields of neuroscience and AI security. Neuro-adversarial attacks pose a real risk to cognitive-AI systems by misaligning human intent with machine response. Multi-layer signal sanitization and detection are recommended as defenses.</span></p>2025-09-30T00:00:00+00:00Copyright (c) 2025 EAI Endorsed Transactions on Security and Safety
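The FGSM attack named in the METHODS section above can be sketched on a toy logistic classifier standing in for an EEG decoder. Everything below is illustrative: the weights, the synthetic "EEG" feature vector, and the epsilon budget are placeholders, not the paper's models or data. FGSM perturbs the input by epsilon times the sign of the loss gradient with respect to that input; for a logistic model the input gradient of the cross-entropy loss has the closed form `(p - y) * w`.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder linear "EEG decoder": weights w and bias b are random,
# not a trained BCI model.
w = rng.normal(size=64)
b = 0.0
x = rng.normal(size=64)  # one synthetic "EEG" feature vector
y = 1.0                  # assumed true label

def loss(v):
    """Binary cross-entropy of the logistic model on input v."""
    p = sigmoid(w @ v + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# FGSM step: for a logistic model, d(loss)/dx = (p - y) * w,
# so the adversarial input is x + epsilon * sign((p - y) * w).
p = sigmoid(w @ x + b)
grad_x = (p - y) * w
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean loss: {loss(x):.4f}, adversarial loss: {loss(x_adv):.4f}")
```

Because the model is linear in its input, this single signed step is guaranteed to increase the loss within the epsilon ball; PGD, also used in the paper, iterates smaller such steps with projection back onto that ball.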