Ian Huyett, “Religious Parallels to the Simulation Hypothesis: Gnosticism, Mormonism, and Neoplatonism”, CenSAMM conference on AI and Apocalypse (published 12 April 2018)
“According to Nick Bostrom’s simulation hypothesis, our universe is almost certainly a simulation created by posthuman programmers. Although the hypothesis has since become an object of fascination and debate, little attention has been paid to its implications for religion and naturalism. Disregarding Bostrom’s broader argument, I assume the simulation hypothesis is true in order to explore those implications. I review startling, and as-yet unexplored, parallels between the simulation hypothesis and certain religious ideas. In his original paper, Bostrom noted that his hypothesis “suggests naturalistic analogies of certain traditional religious conceptions” through its implications for omnipotence, omniscience, and the afterlife. The analogies, however, do not stop there. An often-overlooked aspect of Bostrom’s hypothesis is his suggestion that “the posthumans running our simulation are themselves simulated beings,” and so on. Bostrom himself does not identify this suggestion as having a religious analogue. Yet this feature of his hypothesis has striking parallels with the cosmogonies of Gnosticism, Mormonism, and Neoplatonism.”
Beth Singler, “Mind in the Wrong Place: Thinking About AI Apocalypticism Anthropologically”, CenSAMM conference on AI and Apocalypse (published 11 April 2018)
“Drawing upon ethnographic fieldwork amongst online communities, technologists, and religious believers that has formed a part of the 'Human Identity in the Age of Nearly Human Machines' research project at the Faraday Institute for Science and Religion, Cambridge, as well as narrative-based research done in collaboration with the Centre for the Future of Intelligence, this keynote talk will develop ideas of an ‘anthropology of AI anxiety’. Apocalyptic or fearful interpretations of AI will be explored to consider where they come from and what work they are being put to in helping our understandings of a wider cosmology of beings, including the animate and the inanimate, the born and the made.
In exploring a distinction between the prophetic and the predictive, this talk’s consideration of AI apocalypticism will emphasise the moral judgement and fear elements of such narratives. It will also explore how anxiety can result from self-comparison with the made, as well as from a near instinctive feeling of the ‘wrong’. Employing anthropological concepts and frameworks from figures such as Mary Douglas, Arnold Van Gennep, and Victor Turner, the argument will propose that this feeling of the ‘wrong’, appearing colloquially as the Bukimi no Tani Genshō, or ‘Uncanny Valley’, of Masahiro Mori, can be understood in terms of an apprehension of mind in unexpected or transgressive places. Apocalypticism around AI is an accelerationist’s view of this transgression, and is often expressed in visceral horror forms such as Roko’s Basilisk, or in science fictions such as ‘BLIT’ and ‘I Have No Mouth, and I Must Scream’.
However, perceptions of mind as being in the wrong place can also perhaps be described as a precautionary survival mechanism, and explorations of existential risk discussions coming from such AI apocalypticism can show us how comparison with the made and the detection of difference can also be a way into understanding the human; that we can see our own humanity through the fractured mirror of AI.”
William Barylo, “Becoming Muslim in Europe in the age of AI: a metacolonial journey”, CenSAMM conference on AI and Apocalypse (published 12 April 2018)
“In the second decade of the 21st century, the world saw the birth of Instagram and YouTube Muslim influencers. Boasting millions of followers, they have become part of a new privileged elite, securing business deals with major brands and appearing at the top of social media feeds. Paradoxically, they may also signify the end of plural Muslim identities. At times when trending looks, speeches and behaviours are increasingly ‘suggested’ by AI, is there a future for those who do not fit the norm? Fitting the neoliberal ideal of success defined by financial performance, fame, power and material wealth, these virtual role models also fit the orientalist ideal aesthetics of the fair-skinned, slim or athletic, hyper-masculine or hyper-feminine ‘exotic’ and desirable Muslim. At the intersection of neoliberalism and colonialism, conflicting with Islamic ethics and etiquette, this phenomenon poses the question of gender- and ethno-normative conformity. Therefore, by creating a commoditised version of Islam, is the rise of these polished Muslim influencers a play of AI algorithms, hinting at a subtle endeavour to police minorities and perhaps, establish a perfect Deleuzian ‘control society’? Psychologist H. Bulhan posits that from an occupation of land, the world has entered times of the occupation of minds, a phenomenon he names ‘metacolonialism’. As Muslim eschatologists observe most prophesied signs of the Final Hour, is AI an avatar of the Dajjal, the Muslim equivalent of the Antichrist? Following 7 years of qualitative sociological fieldwork in France, Poland and the UK, this paper explores how millennial European Muslims venture in search of liberation and safe spaces through the neoliberal market and virtual reality. Employing Morin’s concept of culture as complex systems, this work ultimately discusses how the rise of the empire of Artificial Intelligence is perhaps marking the end of plural diasporic identities.”
David Gamez, “Machine Consciousness and the AI Apocalypse”, CenSAMM conference on AI and Apocalypse (published 11 April 2018)
“AI research is not a single field with a clearly defined objective. Some people are trying to solve narrowly-defined tasks, such as face recognition or Go. Others are attempting to produce the full spectrum of human behaviour. There is also a research area known as machine consciousness, which uses computers to study human consciousness and explores whether consciousness can be embodied in a machine. This talk will discuss the relationship between this work on machine consciousness and the possibility of an AI apocalypse.
In the first part of the talk I will identify four types of machine consciousness: (1) Machines with the same external behaviour as conscious systems. (2) Models of the correlates of consciousness. (3) Models of phenomenal consciousness. (4) Machines that are phenomenally conscious. I will give examples of systems that fit into each of these categories.
I will then consider the relationship between this work on machine consciousness and a possible AI apocalypse. I will argue that the only type of conscious machine that is capable of taking over the world is one that exhibits conscious human behaviour. However, this type of system is unlikely to pose an existential threat because we have made little progress towards robots that behave in similar ways to conscious humans. Computer errors are much more likely to lead to apocalypse than the mad machinations of a conscious machine.
In the last part of the talk I will explore Asimov’s suggestion that conscious computers should be encouraged to take over the world if they can run the world better than us. In the future it is also possible that machine consciousness could become superior to human consciousness. If consciousness is ethically valuable in itself, it can be argued that humans should be replaced by machines with greater quantities of higher quality consciousness.”
Robert Geraci, “Bearers of the Apocalypse: Horses, Robots, and the Digital Future of India”, CenSAMM conference on AI and Apocalypse (published 11 April 2018)
“The Iron Horsemen ride digital horses. They ride them in Silicon Valley and Bangalore, though in the latter only a sacrificial horse could inaugurate a digital kingdom. In the West, a Christian devotion to apocalyptic eschatology, the radical inauguration of a glorious new world, has percolated from Revelation to pop culture. Today, that devotion radiates from the scientists and engineers who ride forth to transmute biological life into cyborgs, robots, and immortal online avatars. Whether or not they will succeed is beside the point; they represent a shift in the religious visions of modernity. Our secular age is not without religious faith or practice; often, religious faith and practice have been distributed into other domains. The apocalyptic claims of Hans Moravec, Ray Kurzweil, Kevin Warwick, and Philip Rosedale run rampant through Western culture. Those claims do not map as easily into other cultural contexts, as shown by the functional absence of similar ideas in late 20th and early 21st century India. And yet, just as ancient Indian kings established dominion through the wanderings and subsequent sacrifice of a horse (which thereby inaugurated a new political world), there are bearers of Apocalyptic AI in India today. The new religious movements that wed traditional Indian religious ideas with faith in technological transcendence have few champions in India, but their number proceeds apace with the rise of more robust Indian science fiction and speculative fiction and the increasing distribution of the Internet through smartphones. But in India, the horses must give more than their speed and strength to those who wish to reign; they must give their lives too. As Apocalyptic AI catches the imagination of the Indian public, that public must decide whether to sacrifice from their inherited traditions or from those they import. In the latter case, the technological apocalypse will align with but differ from its envisioning in Europe or the United States.”
Syed Mustafa Ali, “‘White Crisis’ and/as ‘Existential Risk’, or The Entangled Apocalypticism of Artificial Intelligence”, CenSAMM conference on AI and Apocalypse (published 11 April 2018)
“In a series of works exploring the mobilization of apocalyptic themes and ideas drawn from the Western religious – more specifically, and significantly, ‘Judeo-Christian’ – tradition in contemporary discourses addressing the alleged convergence of so-called GRIN/NBICS technologies in a ‘singularity’ phenomenon, Geraci has drawn attention to various important ‘entanglements’ of science, technology and religion which need to be engaged when considering the rhetoric and reality of contemporary concerns about ‘existential risk’. Notwithstanding the importance of such explorations, I want to suggest that they are marked by certain shortcomings which become apparent when one shifts the interrogation of the phenomenon of ‘Apocalyptic AI’ from the perspective of religious studies to the perspective of critical religion studies, with the latter underpinned by the understanding that ‘religion’ and ‘race’ are entangled in various ways, at least since the onset of modernity if not earlier.
Building on earlier work, in this paper I want to explore the theme of ‘existential risk’ associated with Apocalyptic AI and other contingently-related existential threats in relation to ‘White Crisis’ as a modern racial phenomenon with pre-modern ‘religious’ origins; more specifically, I want to suggest that Apocalyptic AI, and the attendant discourse of ‘existential risk’, is a strategy, albeit possibly a ‘merely’ rhetorical one, for maintaining white hegemony under non-white contestation. I further suggest that this claim can be shown to be supported by the disclosure of continuity through change in the longue durée entanglement of ‘race’ and ‘religion’ associated with the establishment, maintenance, expansion and refinement of the modern/colonial world system if and when such changes are understood as ‘iterations’ in a ‘programmatic’ trajectory of domination, the continuity / historical essence of which might usefully be framed as ‘algorithmic racism.’”
Victoria Lorrimar, “Mind-Uploading and the Importance of the Human Body for Intelligence”, CenSAMM conference on AI and Apocalypse (published 11 April 2018)
“One of the more radical transhumanist proposals for future human being envisions the uploading of our minds to a digital substrate, trading our dependence on frail, degenerating ‘meat’ bodies for the immortality of software existence. This paper explores these proposals in light of what we know about the embodied dimension of human cognition. Studies in the neuroscience of metaphor indicate that much of our use of metaphor extends beyond language to concept, operating in our bodily inhabiting of the world. Similarly, a phenomenological approach to knowing emphasises a ‘hybridity’ to human being that resists traditional dichotomies between the mind and body. What would jettisoning the body do to our ability to make sense of our world? Can ‘human’ intelligence exist in a silicon substrate?
Exploring several transhumanist responses to these questions surrounding embodiment, as well as their expression in select pieces of science fiction, this paper examines how the radical morphological freedom and diversity made possible by enhancement technologies might impact our present ability to comprehend the world and the other. From a theological perspective, the implications of these proposals for theological anthropology and the human future will be briefly considered, including whether a Christian hope of redemption can accommodate the creation of artificially intelligent beings or the ‘uploading’ of human minds to inorganic substrates.”
Maximillian Dinu-Gavrilciuc, “The Theological Singularity – Are Machines Capable of an Ontological Leap?”, CenSAMM conference on AI and Apocalypse (published 11 April 2018)
“Christian theology makes a distinction between reason (logic) and intellect (intuition), under the form of dianoia and nous. The intellect is the intuitive imago Dei and has a direct perception of reality, while reason is the ability to process sequentially the input that we receive from the senses (and sometimes from the intellect as well). Computers reproduce human reason in an increasingly competent manner, by processing symbols through algorithms and producing output. However, from a religious point of view, would an A.I. system be able to develop an intellect (nous) – and, if so, what would be the implications?
A possible way to reconcile the large consensus on evolution with the concept of man as Imago Dei is the ontological leap. Thus, while man could descend from a long line of proto-human ancestors, he became a different being, sentient and endowed with an intellect capable of communion with God, in one swift moment. My research focuses on the possibility of such an ontological leap occurring in an A.I. system – from a task-oriented machinery to a being capable of will and, potentially, communion with God. While most of the (yet-scarce) research on A.I. and religion focuses on decision-making and data-mining processes that could, in theory, turn a thinking machine towards self-awareness and religious impulses, an ontological leap of machinery towards intellect would be the true theological singularity, with unprecedented implications not just for religion, but also for ethics and psychology. The paper also addresses the two likely types of singularity that are popular in literature – the ‘evil’ theogenesis-type, in which A.I. sees itself as God-like, intent on enslaving or exterminating humanity, and the Adamic-type singularity of machines becoming humanly-sentient and contemplating existence.”
Michael Morelli, “Representing the End—The Ethical Implications of Praying to Bots”, CenSAMM conference on AI and Apocalypse (published 11 April 2018)
“Although fully-fledged artificial intelligence technologies currently remain in research and development phases, there have already been modest advances in AI. Chatbots are an example. The New York Times reports:
For celebrities who already use Twitter, Instagram, and Snapchat to lend a personal touch to their interactions with fans, the next frontier of social media is a deliberately impersonal one: chatbots, a low-level form of artificial intelligence that can be harnessed to send news updates, push promotional content, and even test new material.
Fan engagement with such bots has become substantial enough to spark interest, investment, and research from a variety of individuals and groups. Even if fans are aware that bots are not human and the celebrity being represented is not present, there is enough of a sense of connection to keep fans engaged. This is because fans interact with the bot as if the bot is the individual being imitated.
One way to view these interactions is to read them as basic pop culture phenomena: fans desiring to connect with a celebrity. Another way to view these interactions is to read them as religious phenomena: individuals praying to their respective god or gods. The frequent crossover between pop culture and religious language is revealing in this respect. Fans worship pop stars who are often referred to as gods, and as a part of that worship, pray to their god(s) in order to satisfy existential desires for connection. Even if what—or whom—is prayed to is a bot.
Thinking of AI chatbots from this perspective reveals an implicit religious narrative in the fundamental architecture of chatbot AI technologies. In fact, there also exist chatbots designed explicitly for prayer. With such developments in AI technology in view, this paper examines the religious narratives present in these AI technologies and considers their ethical implications. Then, it builds upon this examination with a consideration of the implicit and explicit religious connection between prayer and understandings of the apocalypse.
As sociologist-theologian Jacques Ellul argues in Prayer and Modern Man, ‘Prayer is the ultimate act of hope, otherwise it has no substance. Because it is an act of hope, every prayer is necessarily eschatological [and apocalyptic].’ If prayer is a hopeful look to the future and a potentially apocalyptic end, what sort of future and apocalyptic end might be represented by prayer and chatbots?”
Beth Singler, Robert Geraci, Vicki Lorrimar and Scott Midson, “AI and Apocalypse roundtable discussion”, CenSAMM conference on AI and Apocalypse (published 11 April 2018)