EAI Endorsed Transactions on Security and Safety
https://publications.eai.eu/index.php/sesa

Growing threats, and increasingly also failures caused by complexity, may compromise the security and resilience of network and service infrastructures. Applications and services require secure data handling, which calls for new security architectures and for scalable, interoperable security policies. End-to-end security must be guaranteed in data communication and storage, including identity management and authentication.

INDEXING: DOAJ, CrossRef, Google Scholar, ProQuest, EBSCO, CNKI, Dimensions

Articles are open access and distributed under the terms of the Creative Commons Attribution (CC BY 4.0) license (https://creativecommons.org/licenses/by/4.0/), which permits unlimited use, distribution, and reproduction in any medium, so long as the original work is properly cited.

Contact: publications@eai.eu (EAI Publications Department); publications@eai.eu (EAI Support)

SeFS: A Secure and Efficient File Sharing Framework based on the Trusted Execution Environment
https://publications.eai.eu/index.php/sesa/article/view/2854

As cloud-based file sharing becomes increasingly popular, it is crucial to protect outsourced data against unauthorized access. Existing cryptography-based approaches suffer from expensive re-encryption upon permission revocation, while solutions that use a Trusted Execution Environment (TEE) to enforce access control either expose plaintext keys to users or cannot handle concurrent requests. In this paper, we propose SeFS, a secure and practical file sharing framework in which server-side and client-side enclaves cooperate to enforce access control: the former handles registration, authentication, and access-control enforcement, while the latter performs file decryption. This design significantly reduces the computational workload of the server-side enclave, enabling it to handle concurrent requests. It also supports immediate permission revocation, since the file decryption keys inside the client-side enclaves are destroyed immediately after use. We implement a prototype of SeFS, and our evaluation demonstrates that it enforces access control securely with high throughput and low latency.

Yun He, Xiaoqi Jia, Shengzhi Zhang, Lou Chitkushev
Copyright (c) 2025 EAI Endorsed Transactions on Security and Safety (CC BY 4.0)
Published: 18 Jul 2025
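The abstract's core mechanism, destroying the client-side decryption key immediately after use so that revocation takes effect at once, can be illustrated outside a TEE. The following is a minimal sketch under stated assumptions, not the authors' implementation: it assumes AES-GCM file encryption via Python's cryptography package, and the zeroization shown is best-effort, since only a real enclave would actually protect the key in memory.

```python
# Hypothetical illustration of the "decrypt once, then destroy the key"
# idea described in the SeFS abstract. Names and flow are assumptions for
# exposition; outside a real TEE this gives no hardware protection.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

def decrypt_once(key: bytearray, nonce: bytes, ciphertext: bytes) -> bytes:
    """Decrypt a file blob, then wipe the key so revocation is immediate."""
    try:
        plaintext = AESGCM(bytes(key)).decrypt(nonce, ciphertext, None)
    finally:
        # Best-effort zeroization; a Python process cannot guarantee that
        # no copies of the key remain elsewhere in memory.
        for i in range(len(key)):
            key[i] = 0
    return plaintext

# Demo: encrypt a blob, decrypt it once; the key buffer is wiped afterwards.
raw = AESGCM.generate_key(bit_length=256)
key = bytearray(raw)
nonce = os.urandom(12)
ct = AESGCM(raw).encrypt(nonce, b"shared file contents", None)
print(decrypt_once(key, nonce, ct))
assert all(b == 0 for b in key)
```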
SoK: The Psychology of Insider Threats
https://publications.eai.eu/index.php/sesa/article/view/9298

This paper presents a systematic literature review on the psychology of insider threats, that is, security risks originating from individuals within organizations. While insider threats are a well-established research area, psychological perspectives remain underdeveloped. This extended version adds background that better contextualizes the role of personality traits, psychological states, and situational factors in insider threats. The paper also highlights research gaps and the need for stronger theoretical foundations in this domain.

Mubashrah Saddiqa, Jukka Ruohonen
Copyright (c) 2025 EAI Endorsed Transactions on Security and Safety (CC BY 4.0)
Published: 19 Jun 2025

Breaking the Loop: Adversarial Attacks on Cognitive-AI Feedback via Neural Signal Manipulation
https://publications.eai.eu/index.php/sesa/article/view/9502

INTRODUCTION: Brain-Computer Interfaces (BCIs) embedded with Artificial Intelligence (AI) have created powerful closed-loop cognitive systems in neurorehabilitation, robotics, and assistive technologies. However, this tight human-AI integration exposes such systems to new security vulnerabilities, including adversarial distortion of neural signals.

OBJECTIVES: The paper formally develops and assesses neuro-adversarial attacks, a new class of attack vector that targets AI cognitive-feedback systems by manipulating electroencephalographic (EEG) signals. The research aims to simulate such attacks, measure their effects, and propose countermeasures.

METHODS: Adversarial machine learning (AML) techniques, including the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), were applied to open EEG datasets using Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Transformer-based models. Closed-loop simulations of BCI-AI systems with real-time feedback were conducted, and both the attack vectors and the countermeasure approaches (e.g., variational autoencoders, wavelet denoising, adversarial detectors) were tested.

RESULTS: Neuro-adversarial perturbations caused up to a 30% reduction in classification accuracy and over 35% user-intent misalignment. Transformer-based models performed relatively better, but overall performance degradation remained significant. Defense strategies such as variational autoencoders and real-time adversarial detectors restored classification accuracy to over 80% and reduced successful attacks to below 10%.

CONCLUSION: The threat model presented in this paper is a significant addition to neuroscience and AI security. Neuro-adversarial attacks pose a real risk to cognitive-AI systems by misaligning human intent and action with machine response; multi-layer signal sanitization and real-time detection are recommended as defenses.

Dhaya R, Kanthavel R
Copyright (c) 2025 EAI Endorsed Transactions on Security and Safety (CC BY 4.0)
Published: 30 Sep 2025
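For readers unfamiliar with the attack primitives named in METHODS, the sketch below shows a single FGSM step against a toy EEG classifier. Everything here, the model, tensor shapes, and epsilon, is an illustrative assumption rather than the paper's actual setup; FGSM itself is simply one perturbation of the input in the direction of the loss gradient's sign.

```python
# Minimal FGSM sketch on EEG-shaped data (illustrative, not the paper's code).
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    """Toy 1-D CNN standing in for the paper's EEG classifiers."""
    def __init__(self, channels=32, classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(16, classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm(model, x, y, eps=0.01):
    """One FGSM step: move x by eps along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

model = TinyEEGNet()
x = torch.randn(8, 32, 256)      # 8 EEG windows: 32 channels x 256 samples
y = torch.randint(0, 4, (8,))    # hypothetical intent labels
x_adv = fgsm(model, x, y)        # perturbed signals fed back into the loop
```

PGD, the other attack named in METHODS, iterates this same step several times while projecting the perturbation back into an epsilon-ball around the original signal.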