CRITICAL THINKING AND FORMAL LOGIC IN ARTIFICIAL INTELLIGENCE SYSTEMS: A PHILOSOPHICAL ANALYSIS
Abstract
The article provides a comprehensive philosophical account of critical thinking and formal logic within contemporary artificial intelligence (AI) systems against the backdrop of the pervasive «hallucination» phenomenon: confidently produced yet factually incorrect outputs. It argues that purely statistical language models, lacking intrinsic verification mechanisms, cannot deliver epistemic reliability and therefore require «disciplining» via logical and metacognitive components. Methodologically, the study combines a history-of-philosophy perspective, spanning Aristotle, Descartes, Kant, and Popper as well as current debates on the semantic limits of syntactic procedures (e.g., Searle’s «Chinese Room»), with a survey of technical strategies for embedding logic in AI. The paper traces the pendulum swing from symbolic (GOFAI) to neural approaches and shows an emerging consensus in favor of neuro-symbolic architectures that couple statistical power with the transparency of formal inference. It identifies three components of human critical thinking relevant to algorithmic implementation: a normative one (logical laws and rules), a descriptive one (awareness of common cognitive biases), and a prescriptive one (procedures for self-correction). Building on this mapping, it formulates principles of «logical discipline» for AI: explicit multi-step reasoning, internal self-verification and confidence estimation, tool-assisted fact-checking (calls to external modules and knowledge bases), causal modeling, and explainability. The analysis also clarifies key limitations: valid formal inference does not ensure truth under false premises; the absence of consciousness and intentionality precludes an «intrinsic» drive toward truth; and long reasoning chains may rationalize an initial error. The paper contends that a human-in-the-loop paradigm is currently optimal: AI acts as a rapid logical filter and hypothesis generator, while the human agent remains the final epistemic arbiter. The practical significance spans education, science, and managerial decision-making: introducing logical and metacognitive control loops can reduce factual errors and strengthen trust in AI. The contribution lies in an interdisciplinary mapping of critical-thinking components onto implementable algorithmic mechanisms and in outlining a research program for «thinking» models that integrate logical rigor, causal reasoning, and transparent explanations while remaining under human oversight.
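To make the proposed mapping concrete, the minimal Python sketch below illustrates one way the «logical discipline» loop described above could be organized: the model produces a draft answer with a self-reported confidence score, an external tool verifies it, and anything that fails either check is escalated to a human reviewer. This is an illustrative sketch under stated assumptions, not an implementation from the article or from any particular AI system; the names Draft, generate, verify, and the 0.8 threshold are all hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Draft:
    """A model's draft answer plus its self-estimated confidence in [0, 1].
    (Hypothetical structure; real systems expose confidence differently.)"""
    answer: str
    confidence: float

def disciplined_answer(
    question: str,
    generate: Callable[[str], Draft],    # statistical model: question -> draft
    verify: Callable[[str, str], bool],  # external fact-check or logic check
    threshold: float = 0.8,              # hypothetical confidence cutoff
) -> str:
    """Sketch of a logical-discipline control loop: the AI serves as a rapid
    filter and hypothesis generator, and anything it cannot verify with
    sufficient confidence defers to the human epistemic arbiter."""
    draft = generate(question)
    if draft.confidence >= threshold and verify(question, draft.answer):
        return draft.answer
    # Human-in-the-loop: low confidence or failed verification is escalated.
    return f"[escalate to human review] {draft.answer}"

# Toy usage with stub components (illustration only):
if __name__ == "__main__":
    stub_generate = lambda q: Draft(answer="42", confidence=0.55)
    stub_verify = lambda q, a: a.isdigit()
    print(disciplined_answer("What is 6 * 7?", stub_generate, stub_verify))

The design choice worth noting is that verification and confidence estimation sit outside the generative model itself, mirroring the article's claim that purely statistical generation lacks intrinsic verification and must be coupled with external logical and metacognitive controls.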
References
2. Kant I. Critique of Pure Reason / trans. from German. Kyiv: Univers, 2000. 504 p.
3. Krulevskyi A. V. Formation of a Digitalization Strategy for the Higher Education System in Ukraine: PhD thesis. Ternopil, 2025. 291 p. URL: https://www.wunu.edu.ua/svr/disertatcia/2025/Krulevskyi/Dis_Krulevskyi.pdf (accessed: 08.10.2025).
4. Matviienko I. Critical Thinking and Artificial Intelligence: Contemporary Possibilities of Interaction. Pedagogy, Psychology, Philosophy. 2025. Vol. 13, No. 2. URL: https://humstudios.com.ua/uk/journals/tom-13-2-2025/kritichne-mislennya-ta-shtuchny-intelekt-suchasni-mozhlivosti-vzayemodiyi (accessed: 07.10.2025).
5. Nadurak V. V. Critical Thinking: Concept and Practice. Philosophy of Education. 2022. No. 28 (2). P. 129–147. DOI: https://doi.org/10.31874/2309-1606-2022-28-2-7
6. Popper K. The Logic of Scientific Discovery / trans. from English. Kyiv: Osnovy, 1994. 432 p.
7. Altman S. Don’t trust ChatGPT too much. NDTV. URL: https://www.ndtv.com/world-news/dont-trust-that-much-openai-ceo-sam-altman-admits-chatgpt-can-be-wrong-8808530 (accessed: 07.10.2025).
8. Colelough B. C., Regli W. C. Neuro-Symbolic AI in 2024: A Systematic Review. CEUR-WS, 2024. URL: https://ceur-ws.org/Vol-3819/paper3.pdf (accessed: 09.10.2025).
9. Dang H. A., et al. Survey and Analysis of Hallucinations in Large Language Models. Frontiers in Artificial Intelligence, 2025. URL: https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1622292/full (accessed: 10.10.2025).
10. Delvecchio G. P., et al. Neuro-Symbolic AI: A Task-Directed Survey in the Black-Box Models Era. IJCAI 2025 Proceedings, 2025. URL: https://www.ijcai.org/proceedings/2025/1157.pdf (accessed: 10.10.2025).
11. Google. Gemini 2.0 Flash / Gemini thinking models (Vertex AI). 2025. URL: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash (accessed: 10.10.2025).
12. Google DeepMind. AlphaGo. URL: https://deepmind.google/research/alphago/ (accessed: 11.10.2025).
13. Huang L., et al. A Survey on Hallucination in Large Language Models. arXiv:2311.05232, 2023. URL: https://arxiv.org/pdf/2311.05232 (accessed: 07.10.2025).
14. Kahneman D. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.
15. Liu F., et al. Self-Reflection Makes Large Language Models Safer, Less Biased, and Ideologically Neutral. arXiv:2406.10400, 2024. URL: https://arxiv.org/html/2406.10400v2 (accessed: 17.10.2025).
16. Nawaz U., et al. A review of neuro-symbolic AI integrating reasoning and learning for advanced cognitive systems. AI Open, 2025. URL: https://doi.org/10.1016/j.iswa.2025.200541 (accessed: 07.10.2025).
17. OpenAI. Introducing OpenAI o1 / Learning to reason with LLMs. 12 Sept. 2024. URL: https://openai.com/index/learning-to-reason-with-llms/ (accessed: 09.10.2025).
18. Pearl J. Causality: Models, Reasoning, and Inference. 2nd ed. Cambridge: Cambridge University Press, 2009.
19. Pearl J., Mackenzie D. The Book of Why: The New Science of Cause and Effect. New York: Basic Books, 2018.
20. Sahoo P., et al. A Comprehensive Survey of Hallucination in Large Language Models. Findings of EMNLP 2024, 2024. URL: https://aclanthology.org/2024.findings-emnlp.685.pdf (accessed: 06.10.2025).
21. Searle J. R. Minds, Brains, and Programs. Behavioral and Brain Sciences. 1980. Vol. 3, No. 3. P. 417–457.
22. Smith P. An Introduction to Gödel’s Theorems. Cambridge: Cambridge University Press, 2007.
23. Wang Y., Zhao Y. Metacognitive Prompting Improves Understanding in Large Language Models. NAACL 2024, 2024. URL: https://aclanthology.org/2024.naacl-long.106.pdf (accessed: 12.10.2025).