ARTIFICIAL INTELLIGENCE AS A SOURCE OF ALIENATION AND EXISTENTIAL THREATS

  • Oleksandr Petrovych DZOBAN, State Scientific Institution «Institute of Information, Security and Law of the National Academy of Legal Sciences of Ukraine», https://orcid.org/0000-0002-2075-7508
Keywords: artificial intelligence, existential risks, alienation, social injustice, discrimination, human autonomy, meaning of life.

Abstract

This article offers a comprehensive socio-philosophical reflection on the impact of artificial intelligence (AI) systems on modern individuals and society. It substantiates the thesis that AI is a multifaceted phenomenon and a source of large-scale socio-cultural transformations, offering both opportunities for self-actualization and profound existential risks. The study emphasizes that the pace of technological progress significantly outstrips philosophical reflection on, and theoretical understanding of, the consequences of implementing these technologies. Three fundamental threats that AI poses to humanity are examined. The first is the transformation of the labour market and the loss of meaning in life: mass unemployment caused by AI's ability to perform not only routine but also complex cognitive tasks in creative and intellectual fields. The second is social injustice and algorithmic bias: the 'black box' phenomenon, in which the logic of AI decision-making remains opaque, can lead to discrimination on ethnic, gender or religious grounds in areas such as law enforcement, education and recruitment. The third is the violation of personal autonomy: AI systems infringe on the human right to self-determination through manipulation, total surveillance and restrictions on the information space. The conclusions stress the necessity of developing philosophical principles and regulatory frameworks that ensure a human-centered approach to AI development. Only by maintaining the central position of the human being in the world and establishing clear ethical boundaries for the use of intelligent systems can negative consequences be minimized and the potential of AI be directed toward the benefit of civilization.
Particular attention is paid to the fact that the existential threat arising from the deep integration of AI into social life requires the development and adoption of a comprehensive set of documents (national strategies, legislative acts, ethical guidelines, and standards) issued by state institutions, commercial structures, and international organizations to regulate the ethico-legal aspects of AI utilization.

References

1. AI Conf 2025. URL: https://dou.ua/calendar/54093/ (accessed: 20.02.2026).
2. Angwin J., Larson J. et al. Machine Bias. There is software that is used across the country to predict future criminals. And it is biased against blacks. URL: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (accessed: 10.02.2026).
3. Bian Y. et al. Artificial intelligence–assisted system in postoperative follow-up of orthopedic patients: exploratory quantitative and qualitative study. URL: https://pubmed.ncbi.nlm.nih.gov/32452807/ (accessed: 18.02.2026).
4. Bogen M., Rieke A. Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias. URL: https://apo.org.au/node/210071 (accessed: 20.02.2026).
5. Clark A., Oswald A. Unhappiness and unemployment. Economic Journal. 1994. Vol. 104 (424). P. 648–659.
6. Etzioni A., Etzioni O. Should artificial intelligence be regulated? URL: https://www.issues.org/334/perspective-should-artificial-intelligence-be-regulated/ (accessed: 14.02.2026).
7. Floridi L., Cowls J., Beltrametti M. et al. AI4People ‒ An ethical framework for a good AI society. URL: https://ai4people.org/PDF/AI4People_Ethical_Framework_For_A_Good_AI_Society.pdf (accessed: 20.02.2026).
8. Hawking S. Will artificial intelligence outsmart us? URL: https://ru.scribd.com/document/470378723/Will-Artificial-Intelligence-Outsmart-Us (accessed: 14.02.2026).
9. Holstein K., Wortman Vaughan J. et al. Improving fairness in machine learning systems: What do industry practitioners need? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019. P. 1–16.
10. Iamus, a music-making computer, could be the next Mozart. URL: https://www.vice.com/en/article/pgg8yy/iamus-a-music-making-computer-could-be-the-next-mozart (accessed: 20.02.2026).
11. Kellogg K. C., Valentine M. A., Christin A. Algorithms at work: The new contested terrain of control. Academy of Management Annals. 2020. Vol. 14 (1). P. 366–410.
12. Kim T. W., Scheller-Wolf A. Technological unemployment, meaning in life, purpose of business, and the future of stakeholders. Journal of Business Ethics. 2019. Vol. 160 (2). P. 319–337.
13. Kleinberg J., Mullainathan S., Raghavan M. Inherent Trade-Offs in the Fair Determination of Risk Scores. 8th Innovations in Theoretical Computer Science Conference. Schloss Dagstuhl – Leibniz-Zentrum fuer Informatik. 2018. P. 1–23.
14. Veale M., Van Kleek M., Binns R. Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 2018. P. 1–14.
15. Pardes A. AI can run your work meetings now. URL: https://www.wired.com/story/ai-can-run-work-meetings-now-headroom-clockwise/ (accessed: 20.02.2026).
16. Pushback against AI policing in Europe heats up over. URL: https://www.globaltimes.cn/page/202110/1237232.shtml (accessed: 22.02.2026).
17. Ryan M., Antoniou J. et al. Research and Practice of AI Ethics: A Case Study Approach Juxtaposing Academic Discourse with Organisational Reality. URL: https://edepot.wur.nl/543861 (accessed: 20.02.2026).
18. Wachter S., Mittelstadt B., Russell C. Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI. Computer Law & Security Review. 2021. Vol. 41. Art. 105567.
Published
2026-04-08
How to Cite
DZOBAN O. P. (2026). ARTIFICIAL INTELLIGENCE AS A SOURCE OF ALIENATION AND EXISTENTIAL THREATS. Dnipro Academy of Continuing Education Herald. Series: Philosophy, Pedagogy, 1(1), 7-17. https://doi.org/10.54891/2786-7013/2026-1-1
Section
Dnipro Academy of Continuing Education Herald. Series: Philosophy, Pedagogy