An Analysis of the Consequences of Delegating Cognitive Tasks to Artificial Intelligence and Its Impact on Moral Agency

Document Type: Research paper

Author

Assistant Professor, Department of Educational Sciences, Farhangian University, P.O. Box 14665-889 Tehran, Iran

10.22034/rahyaft.2026.12218.1641

Abstract

The integration of Artificial Intelligence (AI) into everyday decision-making has become ubiquitous: individuals increasingly outsource their cognitive processing to external technological systems in the course of daily activities. While such delegation enhances efficiency and expedites computational tasks, its extension into the normative realm of ethics raises profound philosophical questions. This study analyzes the consequences of habitually delegating cognitive processes to AI and examines the structural impact of this delegation on human moral agency. To this end, it adopts a philosophical-analytical approach, employing a tripartite theoretical framework grounded in Aristotelian practical wisdom (phronesis), Kantian autonomy, and the concept of responsibility to examine the relationship between humans and AI. The findings indicate that delegating moral deliberation to AI algorithms undermines the fundamental structure of the moral agent in three ways: 1) by eliminating the "moral friction" necessary for decision-making, it erodes practical wisdom (moral deskilling); 2) by substituting opaque machine outputs for independent self-legislation, it fosters a state of algorithmic heteronomy; and 3) by obscuring accountability, it creates a psychological responsibility gap. The study concludes that outsourcing moral reflection to AI risks entrenching an extreme instrumentalist view of technology: while AI can function as an informational aid or support system, treating it as a moral proxy or decision-maker risks hollowing out the moral subject. To address these challenges, the study recommends fostering critical AI literacy among users and incorporating "reflective friction" into the design of AI tools.

Keywords