Resonant Risk: A Sociotechnical Model for Understanding Cyber Threat Perception and Amplification in the Age of Artificial Intelligence (AI)
DOI: https://doi.org/10.4108/eetss.9915
Keywords: Cybersecurity, Risk Perception, Social Amplification of Risk, Artificial Intelligence, Deepfakes, Misinformation, Risk Communication, Resonant Risk, Risk Management
Abstract
INTRODUCTION: Cyber incidents are increasingly shaped not only by technical severity but also by how risk signals are amplified through AI-mediated information ecosystems. Deepfakes, synthetic media, algorithmic amplification, and AI-generated misinformation can intensify public fear, distort trust, and trigger disproportionate societal responses.
OBJECTIVES: This paper develops the Resonant Risk Model as a sociotechnical extension of the Social Amplification of Risk Framework for AI-era cybersecurity. It also proposes the Resonant Risk Management Framework to support perception-aware cyber risk governance.
METHODS: The study uses theory-building, interdisciplinary literature synthesis, structured case selection, and comparative case analysis. Six cases are assessed across dimensions including technical severity, amplification channels, resonance factors, public perception, societal ripple effects, and feedback loops, as illustrated in the sketch following the abstract.
RESULTS: The analysis shows that high technical severity does not always produce high public resonance. SolarWinds showed very high technical severity but moderate public resonance, while AI-driven misinformation and deepfake cases produced high trust erosion despite lower direct technical impact.
CONCLUSION: The paper argues that cyber resilience must include perception monitoring, rapid communication, misinformation correction, and trust recovery alongside technical controls.
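The following is a minimal illustrative sketch of how the comparative dimensions and the severity-versus-resonance contrast described above could be encoded for analysis. The ordinal 1–4 scale, field names, and scores are assumptions for demonstration only; they are not the paper's published instrument or case data, and the two entries merely restate the qualitative contrast reported in the results (SolarWinds: very high technical severity, moderate resonance; AI misinformation/deepfake cases: lower technical impact, high trust erosion).

```python
# Illustrative sketch only: a hypothetical ordinal encoding of the case-analysis
# dimensions named in the abstract (1 = low, 2 = moderate, 3 = high, 4 = very high).
from dataclasses import dataclass


@dataclass
class CaseAssessment:
    name: str
    technical_severity: int   # direct technical impact of the incident
    public_resonance: int     # intensity of amplified public perception
    trust_erosion: int        # societal ripple effect on trust


def resonance_gap(case: CaseAssessment) -> int:
    """Difference between perceived (resonant) impact and technical severity."""
    return case.public_resonance - case.technical_severity


# Hypothetical scores encoding the contrast stated in the results section.
cases = [
    CaseAssessment("SolarWinds", technical_severity=4, public_resonance=2, trust_erosion=2),
    CaseAssessment("AI deepfake/misinformation case", technical_severity=2, public_resonance=3, trust_erosion=4),
]

for c in cases:
    print(f"{c.name}: resonance gap = {resonance_gap(c):+d}, trust erosion = {c.trust_erosion}")
```

A positive resonance gap in this sketch would flag incidents whose public impact outruns their technical severity, which is the pattern the Resonant Risk Model is intended to capture.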
License
Copyright (c) 2025 EAI Endorsed Transactions on Security and Safety

This work is licensed under a Creative Commons Attribution 4.0 International License.
This is an open-access article distributed under the terms of the Creative Commons Attribution CC BY 4.0 license, which permits unlimited use, distribution, and reproduction in any medium so long as the original work is properly cited.