Can we live with AI? Perspective from John Lennox, Emeritus Professor of Mathematics at Oxford
In a thought-provoking lecture, John Lennox explores the pressing question: “Can we live with AI?” He begins by acknowledging that we already live with narrow AI—technology that uses algorithms to process vast databases and simulate intelligence. This form of AI, however, lacks true consciousness or moral understanding. Examples of narrow AI include digital assistants, online shopping algorithms, medical AI, autonomous vehicles, and facial recognition technology.
Lennox highlights the dual nature of AI’s impact. On one hand, AI provides convenience and innovation; on the other, it raises serious ethical concerns. Facial recognition, for example, can identify criminals, yet it is also used for oppressive surveillance, such as the monitoring of the Uyghur Muslim minority in China. This intrusive use of AI, he warns, may spread globally.
A major concern is AI’s potential threat to democracy, particularly through deepfakes. The head of MI5, Ken McCallum, has expressed fear that AI could undermine societal trust by blurring the lines between reality and deception, especially during crucial moments like elections.
The lecture delves into AI’s moral limitations, referencing Yoshua Bengio, who emphasizes that current and foreseeable AI will lack a sense of morality. This echoes the dystopian visions of Orwell’s “1984” and Huxley’s “Brave New World”—where Orwell feared an imposed oppression, while Huxley warned of people embracing technology to their own detriment. The speaker suggests that, in our current era, both dystopias are unfolding simultaneously.
The lecture also touches on the pursuit of Artificial General Intelligence (AGI)—AI that matches or surpasses human intelligence. While some, like Stephen Hawking, have warned of AI’s potential to outsmart humans and override human goals, others, like Jobst Landgrebe and Barry Smith, argue that AGI is mathematically impossible. The speaker sides with the latter view, noting support from the prominent mathematician Sir Roger Penrose.
However, the more immediate concern, Lennox asserts, is not a distant AI apocalypse but the present-day risks AI poses—such as the erosion of critical thinking and human judgment. He references recent articles in “Nature” and “Scientific American” that call for effective AI regulation to prevent societal harm.
Sam Altman, CEO of OpenAI, also acknowledges the need for AI regulation, warning that while current AI is not yet dangerous, more advanced systems could pose significant risks. AI expert Stuart Russell proposes principles for managing AI, such as ensuring AI systems prioritize human goals and remain uncertain about those goals, encouraging continuous learning and adaptation.
The speaker then shifts to a provocative parallel between AI and biblical prophecy, drawing comparisons to the Book of Revelation’s depiction of future authoritarian control. He points out the eerie resemblance between AI-driven surveillance and the “mark of the beast”—suggesting a future where AI enforces economic and societal control.
Ultimately, the lecture presents a balanced view: while AGI may be far off, AI’s current trajectory poses real-world threats that require urgent ethical consideration and robust regulation. The key takeaway is not to succumb to sci-fi fears but to address AI’s tangible risks today while keeping a watchful eye on its future development.
0:00 // Start
1:36 // Being forced to live with AI – what are the ethical implications of AI?
5:53 // Does artificial intelligence have feelings?
9:55 // Prof. John Lennox’s perspective on the future of AI
13:14 // What does the Bible say about the future and AI?
15:25 // How are symbols used to represent reality and the future of AI?
17:12 // AI – A new kind of religion
19:15 // What does the Bible say about deepfakes and deception?
20:09 // What is the central message of the Christian faith?
21:21 // When was the problem of human death solved?
22:16 // What is the Christian response to the future of AI?