The Rise of AI and Diverging Human Reactions
Artificial intelligence (AI) has become an umbrella term that generally refers to machines capable of exercising psychological skills to perform actions traditionally understood to require human minds. These human-like functions lead many users to imagine that AI could one day replace humans in every sphere of life. This optimistic position on AI technology has been intensified by the advent and wide deployment of Large Language Models (LLMs), such as ChatGPT, Microsoft Copilot, and Google Gemini. The permeation of LLMs into human society fosters the impression that AI has brought, and will continue to bring forth, blessings to human flourishing.
However, not all human users subscribe to the hype surrounding AI. An increasing number of people are paying attention to moral and social issues related to AI applications, and AI governance has sparked much debate over the past few years.1
From Ethical Reflections to Ontological Foundations: Why AI Is Not Value-Neutral
Recent studies on Christian theology and AI have looked into the risks arising from the extensive application of AI, while remaining mindful of its favorable consequences for human life. For example, Eric Stoddart helpfully reminds us that, even while we recognize the value of AI systems, AI-enhanced caregiving practices should not undermine human responsibilities for pastoral care.2
Another instance is Andrea Vicini’s study on AI and social control. Following the Roman Catholic ethical engagement with AI, he spells out the social benefits that AI contributes to the common good of humanity through social control technology (e.g., facial recognition systems), which, in turn, raises ethical challenges to employment, the marketplace, justice systems, and other societal domains.3
These studies are all aligned with the belief that AI is not value-neutral but oriented towards certain purposes by values loaded through data, algorithms, model architecture, and other aspects of the development of AI systems. From this, two questions arise: How would Christian theology expand upon the implications of value-loaded AI? What theological legacies can we draw on to address ethical and social issues related to AI?
In my most recent monograph, I have leveraged the theological concepts of archetype-and-ectype and the image of God to articulate a theological ontology for AI.4 In what follows, I will summarize the takeaway points relevant to this article.
Archetype (ἀρχέτυπος) literally means the ultimate exemplar or pattern, and ectype (ἔκτυπος) literally refers to a copy, replica, or reflection of that ultimate pattern. God is the archetype, and the human being is the ectype. This archetypal–ectypal relationship between God and humans conforms to the theological concept of the human being as the image of God, revealing the ontological distinction between God as creator and humans as creatures. From this it logically follows that there is an indelible distinction between divine and human actions on all levels. That said, the concept of archetype-and-ectype also implies a connection between God and humans. As the ectype of the divine archetype, humans image divine creative activities, crafting their own ectypes such as AI.5 Furthermore, AI, as the ectype of the human being (the image of God), is created with multi-layered values insofar as the image of God indicates the meaning of being human in both religious and moral terms. In short, AI copies and reflects the values held by its designers, providers, users, and other stakeholders.
AI Mirrors Humanity—Virtues and Vices Alike
The conceptual apparatus crafted in my monograph lays out the implications of value-loaded AI and its link to humans. Since AI is the ectype of humans, the values loaded within AI systems mirror those held by humans, be they virtuous or vicious. Theologically speaking, value-loaded AI goes some way to reflecting the nature of humans as both created and fallen. Both the moral virtues endowed through creation and the moral vices within corrupted humanity coexist in humans and in AI systems.
How can virtues and vices co-exist in humanity and then be mediated to AI? Augustine’s twofold interpretation of corruption is useful here. He argues, “what is corrupted is deprived of integrity and purity.” Following this, he adds that “what is corrupted is perverted, and what is perverted is deprived of order. But the order is good. Therefore, what is corrupted is not devoid of good, and it is for this very reason that it can be deprived of good in being corrupted.”6 Augustine makes it clear that the fall does not overthrow the good created within human substance but perverts the order of human nature. As such, the corruption of human nature refers to the perversion of the order of good human substance.
It stands to reason that post-fall humans are still the image of God, though the image is distorted. This explains the concurrence of virtues and vices in human life, which is in turn mirrored in the artificial ectype of humanity, that is, AI. From this vantage point, the blessings and threats of AI should be considered, reviewed, and treated in light of humanity’s good yet corrupted nature.
Facebook and the Automation of Social Division
Thus, the challenges AI poses to human society must be assessed with reference to the perverted order, integrity, and purity of the being of humans. With this in mind, we proceed to explore how the perverted order may manifest in the application of AI systems. By way of illustration, consider Facebook’s feed algorithm and ElliQ, an AI-driven carebot for elderly care.
Meta (formerly Facebook) has been accused of intensifying ideological segregation and undermining societal harmony through its news feed algorithm during the 2020 US presidential election.7 AI algorithms rank news content in Facebook users’ feeds through reinforcement learning models and statistical analysis of users’ data regarding political ideas and behavior. Users’ online profiles and choices of political news continually feed the algorithms with data for prediction. As a result, users keep receiving their preferred political posts and emotionally engaging content, contributing to a sharp division within American society and a weakening of social cohesion. Needless to say, Facebook’s algorithms are inherently problematic. However, it should also be conceded that users’ preferences and online actions together exacerbate social fragmentation. If AI algorithms automate social division, it is Facebook users who keep endorsing the rupture of social harmony.
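The feedback loop described above can be sketched as a toy simulation. This is an illustrative model only, not Facebook’s actual ranking system: the topic labels, feed size, and learning rate are invented for demonstration. The point is that even a very simple ranker that allocates feed slots according to an estimate of user interest, and updates that estimate from clicks, narrows the feed toward preference-confirming content.

```python
TOPICS = ["left", "right", "neutral"]  # hypothetical content categories

def simulate_feed(user_pref="left", rounds=40, feed_size=6, lr=0.1):
    """Toy feedback loop: clicks update the ranker's estimate of a
    user's interests, so the feed drifts toward one preferred topic."""
    # The ranker starts with a uniform estimate of the user's interests.
    estimate = {t: 1.0 / len(TOPICS) for t in TOPICS}
    shares = []  # fraction of the feed matching the user's preference
    for _ in range(rounds):
        # Allocate feed slots in proportion to the estimated interest.
        counts = {t: int(estimate[t] * feed_size) for t in TOPICS}
        shares.append(counts[user_pref] / feed_size)
        # The user clicks only posts matching their actual preference;
        # each click reinforces the estimate for that topic.
        estimate[user_pref] += lr * counts[user_pref]
        # Renormalise the estimate so it stays a distribution.
        total = sum(estimate.values())
        estimate = {t: v / total for t, v in estimate.items()}
    return shares

shares = simulate_feed()
print(f"preferred-topic share of feed: round 1 = {shares[0]:.2f}, "
      f"round {len(shares)} = {shares[-1]:.2f}")
```

Running the sketch shows the preferred topic’s share of the feed growing round after round even though the user’s behavior never changes. The narrowing is a joint product of the ranking rule and the user’s own clicks, which is precisely the division of responsibility noted above.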
ElliQ and the Ethics of Artificial Care
ElliQ, an AI-driven companion robot developed by Intuition Robotics, is designed for elderly care. Traditional artificial companions are largely passive in their interaction with humans, responding only when prompted by users. By contrast, ElliQ initiates conversations and actively interacts with elderly people. Grace Andruszkiewicz reported in 2023, “ElliQ has been welcomed into the homes of older adults around the US for the past few years and has achieved more than 10 million interactions with users. We’ve been privileged to see some unbelievably powerful results. In fact, 95% of users find ElliQ useful in reducing their loneliness and improving their well-being, and 90% report that ElliQ has improved their quality of life.”8 Despite this, recent studies indicate that some users are annoyed by ElliQ’s proactive operation because of its frequent daily invitations to converse.9 Amanda Sharkey and Noel Sharkey make a more critical observation on AI-driven elderly carebots. The elderly person is encouraged to interact with artificial companions as if they were human companions, which is a deception and may even infantilize the elderly.10 Such infantilization may also enable those responsible for an elderly person to escape their responsibility to provide care and to delegate their caregiving practice entirely to ElliQ.
When AI Reflects a Distorted Humanity
These two examples illustrate how AI may, albeit to varying degrees, mirror the perverted order of human nature. Underlying Facebook’s news feed algorithm is an egocentric political ideology. Political solipsism fences Facebook users off to shield them from different or, even, opposing political views. Social division emerges and is rapidly exacerbated when solipsistic political movements are automated and amplified by AI systems. Social fragmentation is aggravated as the egocentric political ideology continues to generate solipsistic data, leading to broken and irreconcilable human-human relationships. In short, Facebook’s news feed AI algorithms exemplify the perverted order of human nature in social and political spheres.
The issues related to ElliQ are categorically different from the political and social impacts generated by Facebook’s AI algorithms. They reveal another aspect of the perverted order of human nature: the vulnerability of caring relationships. What is neglected in ElliQ-human interaction is that caring relationships are part of the meaning of being human. To be human is to be in relationship with fellow humans. ElliQ is indeed faster and more efficient in its interactions with humans, and the tremendous volume of data available to it enables it to initiate informative and insightful conversations. That said, such caring relationships are not authentic in light of the meaning of being human. Moreover, ElliQ causes harm to those who bear the responsibility of providing care for the elderly. Shannon Vallor, a leading voice in the field of AI ethics, draws our attention to this issue with notable insight.
And if we are exhorted by carebot marketers or nursing home managers to “let the robot do its job”, to surrender caring tasks to a being which will surely not forget a pill, will not lose patience, will not let a loved one fall, we might be mistakenly led to think that both we and our cared-for are best served by this surrender. Yet … I enslave myself to a particularly unhappy task when I make this choice. As I chop away at the chains that bind me to loved others, asserting my freedom, I move into a wilderness of strangers and loneliness, leaving behind all who cared for me and even, perhaps, my own self.11
Returning to the Image of God
Vallor makes it clear that humans lose the authentic meaning of humanity when they escape their responsibility to care. The inclination to evade caring responsibility can be theologically understood as a manifestation of the vulnerability of caring relationships, resulting from the distorted image of God and corrupted human nature. For the corruption of humanity deforms the God-human relationship and, in turn, human-human relationships.
The recent rapid progress of AI technology drives people to espouse an optimistic position towards AI and to over-trust the benefits it can bring forth. Although AI has profoundly reshaped numerous facets of our lives and generated a wide array of demonstrable advantages, Facebook’s feed algorithms and ElliQ illustrate that AI is loaded with values that mirror human interests, desires, aspirations, and inclinations. The concepts of the image of God and corrupted human nature theologically temper human over-confidence in technological advancement. More importantly, this pair of theological concepts reminds us to circle back to humanity itself while delving into AI research. Following this, we are compelled to scratch beneath the surface of mirrored humanity in AI: How may the perverted order of humanity mirrored in AI be rectified and restored? It is at this point that we are motivated to explore how God’s grace can enable us to perform the image of God with our ectype, that is, AI.
- One of the most recent works comprehensively examining AI governance is Justin B. Bullock, et al., eds., The Oxford Handbook of AI Governance (Oxford: Oxford University Press, 2024).
- Eric Stoddart, “Artificial Pastoral Care: Abdication, Delegation or Collaboration?,” Studies in Christian Ethics 36, no. 3 (2023): 660–74.
- Andrea Vicini, “Artificial Intelligence and Social Control: Ethical Issues and Theological Resources,” Journal of Moral Theology 11, Special Issue 1 (2022): 41–69.
- Xu Ximian 徐西面, The Digitalised Image of God: Artificial Intelligence, Liturgy, and Ethics (London: Routledge, 2024).
- As Philip Hefner suggests, all technology can be viewed as the mirror or image of human beings. Philip Hefner, Technology and Human Becoming (Minneapolis: Fortress, 2003).
- Augustine, The Catholic and Manichaean Ways of Life, trans. Donald A. Gallagher and Idella J. Gallagher, ed. Roy Joseph Deferrari, vol. 56 (Washington, DC: The Catholic University of America Press, 1966), 2.5.7.
- David Garcia, “Effect of Facebook’s Feed Algorithm Put to the Test,” Nature 620, no. 7972 (2023): 39–41.
- Grace Andruszkiewicz, “ElliQ, a proven intervention for loneliness and social isolation,” ElliQ (2023).
- E. Broadbent, et al., “ElliQ, an AI-Driven Social Robot to Alleviate Loneliness: Progress and Lessons Learned,” Journal of Aging Research and Lifestyle 13 (2024): 22–28.
- Amanda Sharkey and Noel Sharkey, “Granny and the Robots: Ethical Issues in Robot Care for the Elderly,” Ethics and Information Technology 14, no. 1 (2012): 35.
- Shannon Vallor, “Carebots and Caregivers: Sustaining the Ethical Ideal of Care in the Twenty-First Century,” Philosophy & Technology 24, no. 3 (2011): 261–262.