Recently ‘AlphaGo’, a Google DeepMind programme, defeated two of the most elite players of the Chinese game ‘Go’. These victories represent, by current understandings of AI, a vast leap forward towards a future that could contain human-like technological entities, technology-like humans, and embodied machines. As corporations like Google invest heavily in technological and theoretical developments leading towards further effective advances – a new ‘AI Summer’ – we can also see that hopes, and fears, about what AI and robotics will bring humanity are gaining pace, leading to new speculations and expectations, even amidst those who would position themselves as non-religious.
Speculations include Transhumanist and Singularitarian teleological and eschatological schemes, assumptions about the theistic inclinations of thinking machines, the impact of the non-human on our conception of the uniqueness of human life and consciousness, representations in popular culture and science fiction, and the moral boundary work of secular technologists in relation to their construct, ‘religion’. Novel religious impulses in the face of advancing technology have been largely ignored by the institutions founded to consider the philosophical, ethical and societal meanings of AI and robotics.
This symposium sought to explore the realities and possibilities of this unprecedented apocalypse in human history.
Dr. Robert Geraci
Dr. Robert M. Geraci is Professor of Religious Studies at Manhattan College and author of Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality (Oxford University Press, 2010) and Virtually Sacred: Myth and Meaning in World of Warcraft and Second Life (Oxford University Press, 2014).
Dr. Beth Singler
Dr Beth Singler is the conference advisor and a keynote speaker for AI and Apocalypse. She is a Research Associate on the Human Identity in an age of Nearly-Human Machines project, working with Professor John Wyatt and Professor Peter Robinson to explore the social and religious implications of technological advances in AI and robotics at the Faraday Institute for Science and Religion. She is also an associate fellow at the Leverhulme Centre for the Future of Intelligence.
Schedule for this two-day conference
- Thursday April 5th
9.00 – 9.30 Registration and coffee
9.30 – 9.40 Welcome
9.40 – 10.40 Keynote Speaker: Robert Geraci, Professor of Religious Studies at Manhattan College.
Bearers of the Apocalypse: Horses, Robots, and the Digital Future of India.
The Iron Horsemen ride digital horses. They ride them in Silicon Valley and Bangalore, though in the latter only a sacrificial horse could inaugurate a digital kingdom. In the West, a Christian devotion to apocalyptic eschatology—the radical inauguration of a glorious new world—has percolated from Revelation to pop culture. Today, that devotion radiates from the scientists and engineers who ride forth to transmute biological life into cyborgs, robots, and immortal online avatars. Whether or not they will succeed is beside the point; they represent a shift in the religious visions of modernity. Our secular age is not without religious faith or practice; often, faith and practice have simply been distributed into other domains. The apocalyptic claims of Hans Moravec, Ray Kurzweil, Kevin Warwick, and Philip Rosedale run rampant through Western culture. Those claims do not map as easily into other cultural contexts, as shown by the functional absence of similar ideas in late 20th and early 21st century India. And yet, just as ancient Indian kings established dominion through the wanderings and subsequent sacrifice of a horse (thereby inaugurating a new political world), there are bearers of Apocalyptic AI in India today. The new religious movements that wed traditional Indian religious ideas with faith in technological transcendence have few champions in India, but their number proceeds apace with the rise of more robust Indian science fiction and speculative fiction and the increasing distribution of the Internet through smartphones. But in India, the horses must give more than their speed and strength to those who wish to reign; they must give their lives too. As Apocalyptic AI catches the imagination of the Indian public, that public must decide whether to sacrifice from their inherited traditions or from those they import. In the latter case, the technological apocalypse will align with but differ from its envisioning in Europe or the United States.
10.40 – 11.00 Coffee break
11.00 – 11.30 Michael Morelli, doctoral candidate in theological ethics at the University of Aberdeen.
Representing the End—The Ethical Implications of Praying to Bots.
Although full-fledged artificial intelligence technologies currently remain in the research and development phase, there have already been modest advances in AI. Chatbots are an example. The New York Times reports:
For celebrities who already use Twitter, Instagram, and Snapchat to lend a personal touch to their interactions with fans, the next frontier of social media is a deliberately impersonal one: chatbots, a low-level form of artificial intelligence that can be harnessed to send news updates, push promotional content, and even test new material.
Fan engagement with such bots has become substantial enough to spark interest, investment, and research from a variety of individuals and groups. Even if fans are aware that bots are not human and that the celebrity being represented is not present, there is enough of a sense of connection to keep fans engaged. This is because fans interact with the bot as if it were the individual being imitated.
One way to view these interactions is to read them as basic pop culture phenomena: fans desiring to connect with a celebrity. Another way to view these interactions is to read them as religious phenomena: individuals praying to their respective god or gods. The frequent crossover between pop culture and religious language is revealing in this respect. Fans worship pop stars who are often referred to as gods, and as a part of that worship, pray to their god(s) in order to satisfy existential desires for connection. Even if what—or whom—is prayed to is a bot.
Thinking of AI chatbots from this perspective reveals an implicit religious narrative in the fundamental architecture of chatbot AI technologies. In fact, there also exist chatbots designed explicitly for prayer. With such developments in AI technology in view, this paper examines the religious narratives present in these AI technologies and considers their ethical implications. Then, it builds upon this examination with a consideration of the implicit and explicit religious connection between prayer and understandings of the apocalypse.
As sociologist-theologian Jacques Ellul argues in Prayer and Modern Man, “Prayer is the ultimate act of hope, otherwise it has no substance. Because it is an act of hope, every prayer is necessarily eschatological [and apocalyptic].” If prayer is a hopeful look to the future and a potentially apocalyptic end, what sort of future and apocalyptic end might be represented by prayer and chatbots?
11.30 – 12.00 Maximillian Dinu-Gavrilciuc, MA in Philosophy of Religion from Heythrop College, University of London.
The Theological Singularity – Are Machines Capable of an Ontological Leap?
Christian theology makes a distinction between reason (logic) and intellect (intuition), under the forms of dianoia and nous. The intellect is the intuitive imago Dei and has a direct perception of reality, while reason is the ability to process sequentially the input that we receive from the senses (and sometimes from the intellect as well). Computers reproduce human reason in an increasingly competent manner, by processing symbols through algorithms and producing output. However, from a religious point of view, could an A.I. system develop an intellect (nous) – and, if so, what would be the implications?
A possible way to reconcile the large consensus on evolution with the concept of man as Imago Dei is the ontological leap. Thus, while man may have descended from a long line of proto-human ancestors, he became a different being, sentient and endowed with an intellect capable of communion with God, in one swift moment. My research focuses on the possibility of such an ontological leap occurring in an A.I. system – from task-oriented machinery to a being capable of will and, potentially, communion with God. While most of the (yet-scarce) research on A.I. and religion focuses on decision-making and data-mining processes that could, in theory, turn a thinking machine towards self-awareness and religious impulses, an ontological leap of machinery towards intellect would be the true theological singularity, with unprecedented implications not just for religion, but also for ethics and psychology. The paper also addresses the two likely types of singularity that are popular in the literature – the “evil” theogenesis-type, in which A.I. sees itself as God-like and is intent on enslaving or exterminating humanity, and the Adamic-type singularity of machines becoming humanly sentient and contemplating existence.
12.00 – 1.00 Lunch
1.00 – 1.30 Victoria Lorrimar, University of Oxford, Faculty of Theology and Religion, DPhil Student.
Mind-Uploading and the Importance of the Human Body for Intelligence.
One of the more radical transhumanist proposals for the future of the human being envisions the uploading of our minds to a digital substrate, trading our dependence on frail, degenerating “meat” bodies for the immortality of software existence. This paper explores these proposals in light of what we know about the embodied dimension of human cognition. Studies in the neuroscience of metaphor indicate that much of our use of metaphor extends beyond language to concept, operating in our bodily inhabiting of the world. Similarly, a phenomenological approach to knowing emphasises a “hybridity” to human being that resists traditional dichotomies between the mind and body. What would jettisoning the body do to our ability to make sense of our world? Can ‘human’ intelligence exist in a silicon substrate?
This paper explores several transhumanist responses to these questions surrounding embodiment, as well as their expression in select pieces of science fiction, and examines how the radical morphological freedom and diversity made possible by enhancing technologies might affect our present ability to comprehend the world and the other. From a theological perspective, the implications of these proposals for theological anthropology and the human future will be briefly considered, including whether a Christian hope of redemption can accommodate the creation of artificially intelligent beings or the ‘uploading’ of human minds to inorganic substrates.
1.30 – 2.00 Syed Mustafa Ali, School of Computing and Communications, The Open University, UK.
‘White Crisis’ and/as ‘Existential Risk’, or The Entangled Apocalypticism of Artificial Intelligence.
In a series of works exploring the mobilization of apocalyptic themes and ideas drawn from the Western religious – more specifically, and significantly, ‘Judeo-Christian’ – tradition in contemporary discourses addressing the alleged convergence of so-called GRIN/NBICS technologies in a ‘singularity’ phenomenon, Geraci has drawn attention to various important ‘entanglements’ of science, technology and religion which need to be engaged when considering the rhetoric and reality of contemporary concerns about ‘existential risk’. Notwithstanding the importance of such explorations, I want to suggest that they are marked by certain shortcomings which become apparent when one shifts the interrogation of the phenomenon of ‘Apocalyptic AI’ from the perspective of religious studies to that of critical religion studies, the latter underpinned by the understanding that ‘religion’ and ‘race’ have been entangled in various ways, at least since the onset of modernity if not earlier.
Building on earlier work, in this paper I want to explore the theme of ‘existential risk’ associated with Apocalyptic AI and other contingently-related existential threats in relation to ‘White Crisis’ as a modern racial phenomenon with pre-modern ‘religious’ origins; more specifically, I want to suggest that Apocalyptic AI, and the attendant discourse of ‘existential risk’, is a strategy, albeit possibly a ‘merely’ rhetorical one, for maintaining white hegemony under non-white contestation. I further suggest that this claim can be shown to be supported by the disclosure of continuity through change in the longue durée entanglement of ‘race’ and ‘religion’ associated with the establishment, maintenance, expansion and refinement of the modern/colonial world system, if and when such changes are understood as ‘iterations’ in a ‘programmatic’ trajectory of domination, the continuity/historical essence of which might usefully be framed as ‘algorithmic racism’.
2.00 – 2.30 Break
2.30 – 3.30 Round table discussion on questions from the audience
3.30 – 3.45 Day 1 closing comments
3.45 – 5.00 Tour of the Panacea Museum and “De/coding the Apocalypse”
- Friday April 6th
9.00 – 9.30 Registration and coffee
9.30 – 9.40 Welcome
9.40 – 10.40 Keynote Speaker: Beth Singler, Research Associate on the Human Identity in an age of Nearly-Human Machines project, Faraday Institute for Science and Religion.
“Mind in the Wrong Place: Thinking About AI Apocalypticism Anthropologically”.
Drawing upon ethnographic fieldwork amongst online communities, technologists, and religious believers carried out as part of the “Human Identity in the Age of Nearly Human Machines” research project at the Faraday Institute for Science and Religion, Cambridge, as well as narrative-based research done in collaboration with the Centre for the Future of Intelligence, this keynote talk will develop the idea of an ‘anthropology of AI anxiety’. Apocalyptic or fearful interpretations of AI will be explored to consider where they come from and what work they are being put to in helping our understanding of a wider cosmology of beings, including the animate and the inanimate, the born and the made.
In exploring a distinction between the prophetic and the predictive, this talk’s consideration of AI apocalypticism will emphasise the moral judgement and fear elements of such narratives. It will also explore how anxiety can result from self-comparison with the made, as well as from a near-instinctive feeling of the ‘wrong’. Employing anthropological concepts and frameworks from figures such as Mary Douglas, Arnold Van Gennep, and Victor Turner, the argument will propose that this feeling of the ‘wrong’, appearing colloquially as the Bukimi no Tani Genshō, or ‘Uncanny Valley’, of Masahiro Mori, can be understood in terms of an apprehension of mind in unexpected or transgressive places. Apocalypticism around AI is an accelerationist’s view of this transgression, and is often expressed in visceral horror forms such as Roko’s Basilisk, or in science fictions such as “BLIT” and “I Have No Mouth, and I Must Scream”.
However, perceptions of mind as being in the wrong place can also perhaps be described as a precautionary survival mechanism, and explorations of existential risk discussions coming from such AI apocalypticism can show us how comparison with the made and the detection of difference can also be a way into understanding the human; that we can see our own humanity through the fractured mirror of AI.
10.40 – 11.00 Coffee break
11.00 – 11.30 David Gamez, Department of Computer Science, Middlesex University, London.
Machine Consciousness and the AI Apocalypse.
AI research is not a single field with a clearly defined objective. Some people are trying to solve narrowly-defined tasks, such as face recognition or Go. Others are attempting to produce the full spectrum of human behaviour. There is also a research area known as machine consciousness, which uses computers to study human consciousness and explores whether consciousness can be embodied in a machine. This talk will discuss the relationship between this work on machine consciousness and the possibility of an AI apocalypse.
In the first part of the talk I will identify four types of machine consciousness: (1) Machines with the same external behaviour as conscious systems. (2) Models of the correlates of consciousness. (3) Models of phenomenal consciousness. (4) Machines that are phenomenally conscious. I will give examples of systems that fit into each of these categories.
I will then consider the relationship between this work on machine consciousness and a possible AI apocalypse. I will argue that the only type of conscious machine that is capable of taking over the world is one that exhibits conscious human behaviour. However, this type of system is unlikely to pose an existential threat because we have made little progress towards robots that behave in similar ways to conscious humans. Computer errors are much more likely to lead to apocalypse than the mad machinations of a conscious machine.
In the last part of the talk I will explore Asimov’s suggestion that conscious computers should be encouraged to take over the world if they can run the world better than us. In the future it is also possible that machine consciousness could become superior to human consciousness. If consciousness is ethically valuable in itself, it can be argued that humans should be replaced by machines with greater quantities of higher quality consciousness.
11.30 – 12.00 Presentation by Leila Johnston, CenSAMM artist in residence
Leila's talk will outline some key discoveries made during her AI and Apocalypse residency with CenSAMM. She will explain how and why she has brought the Panacea archives into conversation with a number of creative technologies, and share highlights from conversations with contemporary AI thinkers. With residency experiments including religious garb co-designed with a learning algorithm, and the resurrection of Octavia's jackdaw as a robot (see whatjacksaw.com), Leila finds contemporary relevance in the Panacean themes of time, waiting, fear and authenticity.
(artist in residence https://censamm.org/artist-in-residence/leila-johnston )
12.00 – 1.00 Lunch
1.00 – 1.30 Aura Schussler, Associate Lecturer, PhD, Babeș-Bolyai University, Department of Philosophy, Faculty of History and Philosophy, Cluj-Napoca, Romania.
Digital Immortality and the Seeking of Death.
Unlike death – the only certainty of human existence, which the human being knows, realizes and experiences as an intrinsic phenomenon of his consciousness – immortality remains, for the moment, a simulacrum of existence. Thus, whether we are talking about the religious, philosophical or scientific paradigm, immortality has represented, and still represents, an unsolved issue. The purpose of this research is to analyze one of the possible solutions that the near future offers us regarding the issue of immortality, namely digital immortality. The overall objective is to analyze the consequences of this digital immortality within the paradigm of transhumanist and singularitarian philosophy, based on the arguments of Ray Kurzweil and Martine Rothblatt on the possibility of obtaining immortality through mind uploading (namely through mindclones, mindfiles and mindware). But this hypothesis also raises a series of issues related to the ontological integrity of Being in the paradigm of digital immortality, which opens a series of questions: What risks and benefits does digital immortality offer? Will death be possible in digital immortality? If so, what kind of death do we speak of from the point of view of the Ontology of Being? If not, will death become a desideratum, as immortality is today? The theoretical objective is to analyze the arguments of Kurzweil and Rothblatt through Martin Heidegger's ontology and his concept of being-towards-death. Along with this paradigm shift in the death–immortality relationship, there is the (possible) situation in which we will come to seek death, not immortality, in this digital context. Hence the question: Is this paradigm shift a new type of apocalypse? If so, how does it affect the ontological dimension of being-towards-death?
1.30 – 2.00 William Barylo, EHESS (Paris, France).
Becoming Muslim in Europe in the age of AI: a metacolonial journey.
In the second decade of the 21st century, the world saw the birth of Instagram and YouTube Muslim influencers. Boasting millions of followers, they have become part of a new privileged elite, securing business deals with major brands and appearing at the top of social media feeds. Paradoxically, they may also signify the end of plural Muslim identities. At times when trending looks, speeches and behaviours are increasingly ‘suggested’ by AI, is there a future for those who do not fit the norm? Fitting the neoliberal ideal of success defined by financial performance, fame, power and material wealth, these virtual role models also fit the orientalist ideal aesthetics of the fair-skinned, slim or athletic, hyper-masculine or hyper-feminine ‘exotic’ and desirable Muslim. At the intersection of neoliberalism and colonialism, conflicting with Islamic ethics and etiquette, this phenomenon poses the question of gender- and ethno-normative conformity. Therefore, by creating a commoditised version of Islam, is the rise of these polished Muslim influencers a play of AI algorithms, hinting at a subtle endeavour to police minorities and perhaps establish a perfect Deleuzian ‘control society’? Psychologist H. Bulhan posits that from an occupation of land, the world has entered times of the occupation of minds, a phenomenon he names ‘metacolonialism’. As Muslim eschatologists observe most prophesied signs of the Final Hour, is AI an avatar of the Dajjal, the Muslim equivalent of the Antichrist? Following 7 years of qualitative sociological fieldwork in France, Poland and the UK, this paper explores how millennial European Muslims venture in search of liberation and safe spaces through the neoliberal market and virtual reality. Employing Morin’s concept of culture as complex systems, this work eventually discusses how the rise of the empire of Artificial Intelligence is perhaps marking the end of plural diasporic identities.
2.00 – 2.30 coffee
2.30 – 3.00 Scott Midson, The University of Manchester, Religions and Theology, Post-Doc.
Love and Liberation from Creation to the Singularity.
Anticipation of the Singularity is polarised between those who envisage a dystopian apocalypse and the end of the world as we know it on the one hand; and those who see promise for humans in the digital awakening of the universe on the other. One way to understand the divide in opinion is to consider perspectives on love: those who fear the Singularity are anxious about a hopeless and loveless world, and they see machines as unloving and thus as unlovable. Those who embrace and celebrate the Singularity challenge these views, and instead look forward to a time when technologies can liberate us and help us to realise a wider spectrum of love. In this paper, I discuss these themes, focusing on positive attitudes to the Singularity and how they relate to themes of love and liberation. I analyse some of the underpinning assumptions behind these positions, linking them to theological discussions of love, and in particular ideas about God’s love for His creation and its flourishing. Relating to this, I highlight the work of Teilhard de Chardin, whose proposal that the universe will awaken and become conscious as part of a transition to a spiritual ‘noosphere’ has been considered as a precursor to the Singularity. Is this vision in accordance with that of the flourishing of creation? What kinds of loves are involved in imagining and realising these visions? Ray Kurzweil and others consider the Singularity as spiritual, whereas I will demonstrate that these views suggest a narrow interpretation of love as liberation that neglects other models of love, such as vulnerability, which is theologically significant. Thus, I will argue that digital love is theologically justifiable, but the Singularity is not necessarily the best vision that should be pursued.
3.00 – 3.30 Ian Huyett, Washington and Lee University School of Law.
Religious Parallels to the Simulation Hypothesis: Gnosticism, Mormonism, and Neoplatonism.
According to Nick Bostrom’s simulation hypothesis, our universe is almost certainly a simulation created by posthuman programmers. Although the hypothesis has since become an object of fascination and debate, little attention has been paid to its implications for religion and naturalism.
Disregarding Bostrom’s broader argument, I assume the simulation hypothesis is true in order to explore those implications. I review startling, and as-yet unexplored, parallels between the simulation hypothesis and certain religious ideas. In his original paper, Bostrom noted that his hypothesis “suggests naturalistic analogies of certain traditional religious conceptions” through its implications for omnipotence, omniscience, and the afterlife. The analogies, however, do not stop there. An often-overlooked aspect of Bostrom’s hypothesis is his suggestion that “the posthumans running our simulation are themselves simulated beings,” and so on. Bostrom himself does not identify this suggestion as having a religious analogue. Yet this feature of his hypothesis has striking parallels with the cosmogonies of Gnosticism, Mormonism, and Neoplatonism.
All three narratives entail emanationism, or the view that our level of reality is the outcome of a descending hierarchy of levels. Gnosticism and Mormonism, moreover, envision descending hierarchies of recurring deities, each presiding over corresponding levels or sets of levels.
Although this latter belief brings Gnosticism and Mormonism broadly closer to the hypothesis, Neoplatonism does uniquely prefigure Bostrom through a principle known as “coinherence”: a parallel that is most evident in the work of the Jewish Neoplatonist Solomon ibn Gabirol.
Briefly summarizing the cosmogonies of Gnosticism, Mormonism, and Neoplatonism, I note each aspect of the hypothesis they prefigure. These parallels mean that the simulation hypothesis, if accepted, effects a greater paradigm shift – in the relationship between religion and naturalism – than Bostrom initially conceived.
3.30 Closing comments.