RISK AND RESPONSIBILITY AT THE FRONTIER OF AI: A THEMATIC ANALYSIS OF DEEP LEARNING PIONEERS’ PERSPECTIVES ON ARTIFICIAL INTELLIGENCE THREATS AND GOVERNANCE

ТЕМАТСКА АНАЛИЗА СТАВОВА ПИОНИРА ДУБОКОГ УЧЕЊА О ПРЕТЊАМА И УПРАВЉАЊУ ВИ

  • Ljubiša Bojić, University of Belgrade, Institute for Philosophy and Social Theory
Keywords: artificial intelligence risk, deep learning pioneers, AI governance, existential threats, thematic analysis

Abstract


As artificial intelligence (AI) reshapes global societies, understanding its associated risks and governance imperatives is of urgent social importance. This study fills a critical gap by systematically analyzing extended interviews with Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, the chief architects of deep learning, to elucidate their firsthand perspectives on AI’s existential, ethical, social, and governance challenges. Employing qualitative thematic analysis across six longitudinal interview transcripts, the research identifies both convergences and divergences: Hinton and Bengio strongly emphasize existential threats, superintelligence hazards, AI weapons risks, and the need for robust global regulation, while LeCun expresses technological optimism and favors decentralized, open development. All acknowledge economic disruption, misuse potential, and fractures in democratic discourse. The findings reveal that expert opinion on AI risk is far from monolithic, and they highlight actionable, innovative governance proposals, from regulated compute access to “diversity engines” in social media feeds. Implications include the necessity for adaptive, internationally coordinated AI governance and greater professional accountability among developers. Limitations include a focus on elite, Anglophone experts and inherent subjectivity in qualitative coding. Future research should expand to multi-stakeholder and cross-national perspectives and test proposed regulatory frameworks in real-world contexts, addressing the ongoing evolution of risk as AI permeates new domains.

References

Alipour, Shayan, Alessandro Galeazzi, Emanuele Sangiorgio, Michele Avalle, Ljubisa Bojic, Matteo Cinelli, and Walter Quattrociocchi. 2024. “Cross-Platform Social Dynamics: An Analysis of ChatGPT and COVID-19 Vaccine Conversations.” Scientific Reports 14 (1): 2789. DOI: 10.1038/s41598-024-53124-x

Altmann, Jürgen, and Frank Sauer. 2017. “Autonomous Weapon Systems and Strategic Stability.” Survival 59: 117–42. DOI: 10.1080/00396338.2017.1375263

Autor, David H. 2015. “Why Are There Still So Many Jobs? The History and Future of Workplace Automation.” Journal of Economic Perspectives 29 (3): 3–30.

Bengio_BBC. 2025. “The worst case scenario is human extinction - Godfather of AI on rogue AI. Yoshua Bengio.” BBC. YouTube. https://youtu.be/c4Zx849dOiY

Bengio_WSF. 2024. “Why a Forefather of AI Fears the Future. Yoshua Bengio.” World Science Festival. YouTube. https://youtu.be/KcbTbTxPMLc

Bessen, James E. 2018. “AI and Jobs: The Role of Demand.” NBER Working Paper. https://www.nber.org/papers/w24235

Bodroža, Bojana, Bojana M. Dinić, and Ljubiša Bojić. 2024. “Personality Testing of Large Language Models: Limited Temporal Stability, but Highlighted Prosociality.” Royal Society Open Science 11: 240180. DOI: 10.1098/rsos.240180

Bojic, Ljubisa. 2022. “Metaverse through the prism of power and addiction: What will happen when the virtual world becomes more attractive than reality?” European Journal of Futures Research 10 (1): 22. DOI: 10.1186/s40309-022-00208-4

Bojic, Ljubisa. 2024. “AI alignment: Assessing the global impact of recommender systems.” Futures 160: 103383. DOI: 10.1016/j.futures.2024.103383

Bojic, Ljubisa, and Jean-Louis Marie. 2017. “Addiction to Old versus New Media.” Srpska politička misao 56 (2): 33–48. DOI: 10.22182/spm.5622017.2

Bojić, Ljubiša, Irena Stojković, and Zorana Jolić Marjanović. 2024. “Signs of Consciousness in AI: Can GPT-3 Tell How Smart It Really Is?” Humanities and Social Sciences Communications 11: 1631. DOI: 10.1057/s41599-024-04154-3

Bojić, Ljubiša, Matteo Cinelli, Dubravko Ćulibrk, and Boris Delibašić. 2024. “CERN for AI: A Theoretical Framework for Autonomous Simulation-Based Artificial Intelligence Testing and Alignment.” European Journal of Futures Research 12: 15. DOI: 10.1186/s40309-024-00238-0

Bojić, Ljubiša, Miloš Agatonović, and Jelena Guga. 2024. “The Immersion in the Metaverse: Cognitive Load and Addiction.” In Augmented and Virtual Reality in the Metaverse, ed. Vladimir Geroimenko. Cham: Springer Nature Switzerland. DOI: 10.1007/978-3-031-57746-8_11

Bojić, Ljubiša, Olga Zagovora, Asta Zelenkauskaite, Vuk Vuković, Milan Čabarkapa, Selma Veseljević Jerković, and Ana Jovančević. 2025. “Comparing Large Language Models and Human Annotators in Latent Content Analysis of Sentiment, Political Leaning, Emotional Intensity and Sarcasm.” Scientific Reports 15: 11477. DOI: 10.1038/s41598-025-96508-3

Bojić, Ljubiša, Predrag Kovačević, and Milan Čabarkapa. 2025. “Does GPT-4 Surpass Human Performance in Linguistic Pragmatics?” Humanities and Social Sciences Communications 12: 794. DOI: 10.1057/s41599-025-04912-x

Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Braun, Virginia, and Victoria Clarke. 2006. “Using thematic analysis in psychology.” Qualitative Research in Psychology 3 (2): 77–101.

Brundage, Miles, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, SJ Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, and Dario Amodei. 2018. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” arXiv 1802.07228. DOI: 10.48550/arXiv.1802.07228

Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton.

Buolamwini, Joy, and Timnit Gebru. 2018. “Gender shades: Intersectional accuracy disparities in commercial gender classification.” Proceedings of Machine Learning Research 81: 1–15.

Cath, Corinne. 2018. “Governing artificial intelligence: ethical, legal and technical opportunities and challenges.” Philosophical Transactions of the Royal Society A 376 (2133): 20180080.

Chui, Michael, James Manyika, and Mehdi Miremadi. 2016. “Where Machines Could Replace Humans—and Where They Can’t (Yet).” McKinsey Quarterly. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/where-machines-could-replace-humans-and-where-they-cant-yet

De Filippi, Primavera, Samer Hassan, and Roberto Zicari. 2023. “Smart contracts meet legal responsibility: The role of Reg-LLMs in adaptive law.” Stanford Journal of Blockchain Law & Policy 6 (1): 77–104.

Doshi-Velez, Finale, and Been Kim. 2017. “Towards a rigorous science of interpretable machine learning.” arXiv 1702.08608. DOI: 10.48550/arXiv.1702.08608

Esteva, Andre, Brett Kuprel, Roberto Novoa, Justin Ko, Susan Swetter, Helen Blau, and Sebastian Thrun. 2017. “Dermatologist-level classification of skin cancer with deep neural networks.” Nature 542 (7639): 115–118. DOI: 10.1038/nature21056

Feldstein, Steven. 2019. “The Global Expansion of AI Surveillance.” Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2019/09/the-global-expansion-of-ai-surveillance?lang=en

Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Lütge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena. 2018. “AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations.” Minds and Machines 28 (4): 689–707.

Gabriel, Iason. 2020. “Artificial intelligence, values, and alignment.” Minds and Machines 30 (3): 411–437.

Helberger, Natali. 2019. “On the democratic role of news recommenders.” Digital Journalism 7 (8): 993–1012.

Hinton_60Minutes. 2023. “‘Godfather of AI’ Geoffrey Hinton: The 60 Minutes Interview.” 60 Minutes. YouTube. https://youtu.be/qrvK_KuIeJk

Hinton_Diary. 2025. “Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! Geoffrey Hinton.” The Diary of A CEO. YouTube. https://youtu.be/giT0ytynSqg

Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The global landscape of AI ethics guidelines.” Nature Machine Intelligence 1 (9): 389–399.

Korinek, Anton, and Joseph E. Stiglitz. 2018. “Artificial intelligence and its implications for income distribution and unemployment.” In The Economics of Artificial Intelligence: An Agenda, eds. Ajay Agrawal, Joshua Gans, and Avi Goldfarb, 259–290. Chicago: University of Chicago Press.

Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2017. “ImageNet classification with deep convolutional neural networks.” Communications of the ACM 60 (6): 84–90.

LeCun_Brian. 2024. “Yann LeCun: AI Doomsday Fears Are Overblown [Ep. 473].” Dr Brian Keating. YouTube. https://youtu.be/u7e0YUcZYbE

LeCun_Lex. 2024. “Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI | Lex Fridman Podcast #416.” Lex Fridman. YouTube. https://youtu.be/5t1vTLU7s40

LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015. “Deep learning.” Nature 521 (7553): 436–444.

Metz, Cade. 2023. “'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead.” The New York Times. https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

Moe, Hallvard, Jan Fredrik Hovden, and Kari Karppinen. 2021. “Operationalizing Exposure Diversity.” European Journal of Communication 36 (2): 148–67. DOI: 10.1177/0267323120966849

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

O’Connor, Cailin, and James Owen Weatherall. 2019. The Misinformation Age: How False Beliefs Spread. New Haven: Yale University Press.

O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Broadway Books.

Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. “Dissecting racial bias in an algorithm used to manage the health of populations.” Science 366 (6464): 447–453.

OpenAI. 2023. “GPT-4 Technical Report.” OpenAI. https://cdn.openai.com/papers/gpt-4.pdf

OSF. 2025. “Risk and Responsibility at the Frontier of AI.” OSF. https://osf.io/s4gva/?view_only=b42e04616959401082178f6c4c8376ce

Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin Press.

Pavlovic, Maja, and Ljubisa Bojic. 2020. “Political Marketing and Strategies of Digital Illusions – Examples from Venezuela and Brazil.” Sociološki pregled 54 (4): 1391–1414. DOI: 10.5937/socpreg54-27846

Reinhardt, Anne, Jörg Matthes, Ljubisa Bojic, Helle T. Maindal, Corina Paraschiv, and Knud Ryom. 2025. “Help Me, Doctor AI? A Cross-National Experiment on the Effects of Disease Threat and Stigma on AI Health Information-Seeking Intentions.” Computers in Human Behavior 172: 108718. DOI: 10.1016/j.chb.2025.108718

Rudin, Cynthia. 2019. “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.” Nature Machine Intelligence 1 (5): 206–215.

Russell, Stuart J. 2019. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking.

Sandvig, Christian, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. 2016. “Auditing algorithms: Research methods for detecting discrimination on Internet platforms.” Data and Discrimination: Converting Critical Concerns into Productive Inquiry. Social Science Research Council. https://ai.equineteurope.org/system/files/2022-02/ICA2014-Sandvig.pdf

Schmidhuber, Jürgen. 2015. “Deep learning in neural networks: An overview.” Neural Networks 61: 85–117.

Silver, David, Aja Huang, Christopher Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. 2016. “Mastering the game of Go with deep neural networks and tree search.” Nature 529 (7587): 484–489.

Standing, Guy. 2018. Basic Income: And How We Can Make It Happen. London: Pelican Books.

Susskind, Daniel. 2020. A World Without Work: Technology, Automation, and How We Should Respond. New York: Metropolitan Books.

Turing. 2018. “Fathers of the Deep Learning Revolution Receive ACM A.M. Turing Award. Bengio, Hinton and LeCun Ushered in Major Breakthroughs in Artificial Intelligence.” Association for Computing Machinery. https://awards.acm.org/about/2018-turing

Turing_Bengio. 2018. “Yoshua Bengio.” A.M. Turing Award. https://amturing.acm.org/award_winners/bengio_3406375.cfm

Turing_Hinton. 2018. “Geoffrey Hinton.” A.M. Turing Award. https://amturing.acm.org/award_winners/hinton_4791679.cfm

Turing_LeCun. 2018. “Yann LeCun.” A.M. Turing Award. https://amturing.acm.org/award_winners/lecun_6017366.cfm

West, Sarah Myers. 2019. “Data Capitalism: Redefining the Logics of Surveillance and Privacy.” Business & Society 58 (1): 20–41.

Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.

Published
2025/09/30
Section
Articles