The role of artificial intelligence (AI) in our society has received much attention in recent years. As Christians, we are concerned with the impact of AI on Christianity and churches. Yet meaningful discussion should be rooted in a solid understanding of AI. This article attempts to demystify AI technologies and to offer guidance on discerning and regulating their use.
Artificial Intelligence vs. Human Intelligence
The significant role of AI in modern society is self-evident, especially following the widespread deployment of ChatGPT.1 However, AI remains mysterious to most people. Many do not know what AI can and cannot do, or how to use this tool effectively, and its capabilities are often greatly exaggerated in the media. Let us unravel the mystery of AI first.
The Turing test2 evaluates a machine’s ability to exhibit intelligent behavior equivalent to that of a human under the following setting: a human evaluator judges conversations between another human and a machine. If the evaluator cannot tell the two apart, the machine is said to pass the Turing test. Today’s large language models (LLMs) arguably pass the Turing test under this definition (hallucinations aside). However, is the Turing test a good definition of AI?
Searle3 raised a concern about this definition with a thought experiment called the “Chinese Room Argument.” Consider a person in a closed room who does not know Chinese, yet has auxiliary tools such as a language translator and a vast knowledge database. He communicates with people outside by exchanging slips of paper with Chinese text under the door. He converts the Chinese questions into his familiar language, looks up answers, and translates the answers back into Chinese. To the people outside, the person inside the room appears to know Chinese. But he does not.
Weak AI vs. Strong AI
Although today’s chatbots may pass the Turing test, the above example highlights the differences between machine intelligence and human intelligence (HI). The notions of “weak AI” and “strong AI” are used to explain these differences. Weak AI simulates the human mind but does not understand semantics. Humans have minds that understand semantics and reason through logical steps. Strong AI is expected to come as close to HI as possible.
It is common to link weak AI and strong AI with the “heavily supervised” and “weakly supervised” learning paradigms, respectively. Humans learn quickly from a few training samples, whereas machines demand a large number of them. For example, the Kaggle 2013 cat/dog dataset4 contains 12,500 dog and 12,500 cat images for training a dog/cat classifier. Children can recognize cats and dogs from far fewer examples. Learning from fewer (or more) examples implies greater (or weaker) generalizability.
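To make the role of labeled data concrete, here is a minimal sketch in Python. It is an illustrative assumption on our part, using scikit-learn’s small handwritten-digits dataset rather than the Kaggle cat/dog images: the same classifier is trained on ever-larger labeled subsets, and accuracy climbs with the number of labeled samples.

```python
# Minimal sketch (illustrative, not the Kaggle cat/dog task): train the
# same classifier on ever-larger labeled subsets and watch accuracy grow
# with the number of labeled training samples.
from sklearn.datasets import load_digits
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 labeled 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

for n in [10, 50, 200, 800]:  # number of labeled training samples used
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X_train[:n], y_train[:n])
    print(f"{n:4d} labeled samples -> accuracy {clf.score(X_test, y_test):.2f}")
```

A child needs nothing like 800 examples to tell cats from dogs; the machine’s hunger for labeled samples is exactly the “heavily supervised” character of weak AI.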
AI research progressed slowly in its first half-century (from 1956 to 2012). In contrast, development has been rapid in the last 13 years. One may attribute this recent success to the availability of big data, powerful hardware such as GPUs, and the maturity of deep learning. All of these are partially responsible. Most important, however, is the change in the learning paradigm: moving from weakly supervised learning (strong AI) to heavily supervised learning (weak AI).
ChatGPT is an example of weak AI. It can answer many questions, but it does not possess human intelligence. Strong AI, as represented by human intelligence, can think, understand, and analyze logically. There is a large gap between the two: weak AI “manages complexity with complexity,” while strong AI “manages complexity with simplicity.” For example, ChatGPT has trillions of parameters; it grasps a complex world with a very complex model, yet it cannot generalize well.
Because it is based on heavily supervised learning, today’s AI requires data labeling, which is time-consuming. Another type of supervised learning exploits self-supervision. For example, we can take many raw sentences, hide a word in each (in ChatGPT’s case, the next word in the sequence), and train a model to predict the hidden word from its context; the sentences themselves supply the labels. Such a model, called a Generative Pre-trained Transformer (GPT), serves as the backbone of today’s ChatGPT.
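The following minimal sketch (plain Python; the sentences are our own illustrative choices) shows how self-supervision turns raw text into training pairs without any human labeling: each prefix of a sentence becomes an input, and the word that follows becomes the target.

```python
# Minimal sketch of self-supervision: raw text supplies its own labels.
# Each prefix of a sentence is an input, and the next word is the target,
# so no human labeling is needed. (Hiding an arbitrary middle word instead
# gives the closely related "masked word" variant.)
sentence = "in the beginning god created the heavens and the earth"
words = sentence.split()

pairs = [(" ".join(words[:i]), words[i]) for i in range(1, len(words))]

for prefix, target in pairs[:3]:
    print(f"input:  {prefix!r}\ntarget: {target!r}\n")
```

A real GPT model is trained on billions of such pairs, but the principle is no more than this: the next word is the “free” label.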
Developments of Weak AI
Weak AI can be divided into three categories: 1) classical machine learning, 2) deep learning, and 3) green learning.5 The first category is traditional machine learning. Its pipeline consists of two parts: feature extraction and decision-making. The feature extraction process relies on ad hoc design by humans, and it is generally difficult to hand-design all the features needed for a specific task. The second category, deep learning, uses a large number of network model parameters to fit the input/output relationship; it dominates modern AI applications. Technologies in the third category have emerged in the last decade.
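As an illustration of the classical two-stage pipeline, the following sketch hand-designs sixteen ad hoc features (row and column means of small digit images, an arbitrary choice of ours) and feeds them to a separately trained classifier via scikit-learn.

```python
# Minimal sketch of the classical two-stage pipeline: hand-designed
# features first, then a separately trained decision module. The feature
# choice below is illustrative and ad hoc - exactly the kind of manual
# design the text describes.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()

def extract_features(images: np.ndarray) -> np.ndarray:
    """Stage 1: handcrafted features - mean intensity of each row and column."""
    return np.concatenate([images.mean(axis=2), images.mean(axis=1)], axis=1)

X = extract_features(digits.images)          # digits.images has shape (n, 8, 8)
X_train, X_test, y_train, y_test = train_test_split(
    X, digits.target, test_size=0.3, random_state=0)

clf = SVC()                                   # Stage 2: decision-making
clf.fit(X_train, y_train)
print(f"accuracy with 16 handcrafted features: {clf.score(X_test, y_test):.2f}")
```

Whether these sixteen features suffice is up to the human designer; deep learning removes that burden by learning the features itself, at the cost of far more parameters.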
Deep Learning vs. Green Learning
We can compare deep learning and green learning as follows. Deep learning is characterized by two design attributes: a network architecture built from a basic unit called the artificial neuron, and a loss function. Once these two attributes are specified, the model parameters can be determined automatically using an end-to-end optimization algorithm called backpropagation (sketched in code at the end of this section). When the number of training samples is smaller than the number of model parameters, it is common to adopt pre-trained networks as building blocks for larger networks to achieve superior performance. The recent trend is the adoption of the transformer architecture,6 which trades higher model complexity for further performance gains. However, the high carbon footprint of ever-larger neural networks has become a sustainability concern, and the deep learning decision mechanism remains somewhat opaque.

Green learning (GL) has been proposed as an alternative paradigm to address these concerns. GL is characterized by a low carbon footprint, lightweight models, low computational complexity, and logical transparency. It offers energy-efficient solutions in cloud centers as well as on mobile/edge devices, and it provides a more transparent, logical decision-making process, which is essential to gaining people’s trust.
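To make the deep-learning side of this comparison concrete, here is the promised minimal sketch of the two design attributes and backpropagation, on the smallest possible “network”: a single artificial neuron. The toy task (the logical AND function) and every name in the code are illustrative assumptions, not part of any production system.

```python
# Minimal sketch of deep learning's two design attributes: one artificial
# neuron (the architecture) trained with a cross-entropy loss (the loss
# function). Backpropagation here reduces to the chain rule; the toy
# target is the logical AND function.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])            # target: logical AND

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0                # the neuron's parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    p = sigmoid(X @ w + b)                    # forward pass
    grad_z = p - y                            # backward pass (chain rule for cross-entropy)
    w -= 0.5 * (X.T @ grad_z) / len(y)        # gradient-descent updates
    b -= 0.5 * grad_z.mean()

print(np.round(sigmoid(X @ w + b), 2))        # approaches [0, 0, 0, 1]
```

Scaling this loop from one neuron to trillions of parameters changes nothing in principle; that is the sense in which deep learning manages complexity with complexity.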
AI as an Enabling Tool
Chatbots can collect information, answer questions, draft letters, and generate images and videos. Routine jobs can be replaced by AI. For example, many online chat services are performed by AI. However, AI will not replace humans in tasks that demand creativity, generalizability, and abstraction. In most scenarios, AI will serve as an enabling tool to assist humans in decision-making.
The behavior of a chatbot is largely determined by its training data. Unfairness and biases in the training data, in turn, affect the answers a chatbot provides. Hallucination is another major concern: a chatbot may give answers without substantiating evidence, or with fabricated evidence, and its misleading advice can have severe consequences. Professionals who provide valuable services in our society, such as accountants, lawyers, and doctors, must pass exams to be certified to serve, and they are liable if they make mistakes. There is no similar mechanism for AI-based services.

Thus, for the foreseeable future, AI will serve as a human assistant. It can enhance a person’s productivity, but the final decision must still come from a human being. AI will also play a critical role in the infrastructure of modern society, such as power systems, transportation, the economy, and weapons, where a small AI mistake could be catastrophic. More research and regulatory work are needed to ensure the proper use of AI.
AI in Christian Ministries and Contemporary Churches
We should leverage AI in Christian ministries in the twenty-first century. There are several directions worth considering.
1. Design of Christian-faith-based Chatbots
The Christian community should develop and deploy chatbots grounded in the Bible and in Christian values and practices. Christians could then find answers that align well with Christian doctrines, and could pray with a chatbot as a companion that offers encouraging or comforting words and verses.
2. AI-Assisted Church Pulpit Ministry
Pastors can leverage chatbots in preparing sermons by inputting keywords, key concepts, or rough outlines. This saves time, and the sermon content can be richer, including supportive verses, stories, figures, tables, and even jokes. However, this does not mean that pastors need make no effort: a pastor must still attend to the specific situation of the local church and tailor the sermon to the needs of the congregation.
3. AI and Theological Education
AI does not have a mind, a soul, emotions, values, or logical reasoning capability7 as humans do, although some tech companies attempt to convince people otherwise. It is essential to know what AI can and cannot do and to put AI in proper perspective in our society. Modern theological education needs to teach this distinction. At the same time, AI will serve as a useful tool to facilitate theological (and non-theological) education.
Conclusion
AI will not challenge the unique role of humans. The potential of AI has been exaggerated in today’s media, leading to confusion and erroneous predictions. Although AI is a powerful tool, humans bear full responsibility for its use. We need to develop new laws, ethics, and teachings to reap AI’s full benefits while limiting the damage caused by misuse. Christian scholars and leaders should play an important role in this effort.
1. “ChatGPT,” Wikipedia, https://en.wikipedia.org/wiki/ChatGPT.
2. Alan M. Turing, “Computing Machinery and Intelligence,” in Parsing the Turing Test, ed. Robert Epstein, Gary Roberts, and Grace Beber (Dordrecht: Springer, 2009), accessed July 5, 2025, https://doi.org/10.1007/978-1-4020-6710-5_3.
3. John R. Searle, “The Chinese Room Revisited,” Behavioral and Brain Sciences 5, no. 2 (1982): 345–348.
4. Will Cukierski, “Dogs vs. Cats,” Kaggle, 2013, accessed July 5, 2025, https://kaggle.com/competitions/dogs-vs-cats.
5. C.-C. Jay Kuo and Azad M. Madni, “Green Learning: Introduction, Examples and Outlook,” Journal of Visual Communication and Image Representation 90 (February 2023): 103685.
6. Ashish Vaswani et al., “Attention Is All You Need,” Advances in Neural Information Processing Systems 30 (2017).
7. Cecily Mauran, “‘The Illusion of Thinking’: Apple Research Finds AI Models Collapse and Give Up with Hard Puzzles,” Mashable, June 9, 2025, accessed July 5, 2025, https://mashable.com/article/apple-research-ai-reasoning-models-collapse-logic-puzzles.