This essay is also available as a podcast on anchor.fm, Spotify, and other platforms
I grew up watching the television show Star Trek: The Next Generation, an optimistic science fiction program depicting the command crew of the USS Enterprise, the space-faring flagship of an interplanetary military alliance several centuries in humanity’s future. Among the characters is Lieutenant Commander Data, an artificially intelligent android who possesses consciousness and will but who lacks the ability to feel emotion. The character of Data—brilliantly written, with a thoughtful and nuanced character arc, and brought vividly to life by the actor Brent Spiner—was one of several ways in which the writers explored one of the show’s core themes: the relationship between humans and their technology.
In one of the show’s best episodes, “The Measure of a Man” (Scheerer, 1989), a hearing is conducted to determine Data’s rights as an individual. Another Starfleet officer, Commander Bruce Maddox, wants to disassemble Data so as to study his inner workings and possibly replicate him. Data is not confident that he would survive the procedure and so refuses to submit to it, but Starfleet rejects his refusal on the basis that he is a piece of equipment owned by Starfleet rather than an autonomous person. Data’s commanding officer, Captain Jean-Luc Picard, challenges this ruling, thus prompting the hearing, which is administered by Judge Advocate General Captain Philippa Louvois. Much of the discussion in the hearing hinges on the philosophical questions discussed in the first episode in this series: is Data truly conscious and sentient, or does he merely present the appearance of these things? Is he capable of receiving and creating meaning? Early in the episode, Commander Maddox quotes to Data from a book of poetry given to Data by Captain Picard, and then asks whether the poetry is “just words” to him or if he can “fathom the meaning.”
Towards the end of the episode, Captain Louvois frames this discussion in a particular way: does Data have a soul? Although this is not explicitly stated, I don’t think Louvois (or the writers of the episode) intended “soul” to mean an immortal spirit which inhabits our bodies and constitutes our true self—the future humans of Star Trek seem to be largely irreligious—but rather something more abstract, a kind of distillation of the essence of personhood, the answer to Commander Maddox’s question as to whether Data can fathom the meaning of poetry. This is the question I’ll be examining in today’s episode: do artificially intelligent programs or entities have souls, in Louvois’ sense of the word? And furthermore, what does AI mean for our own souls?
Being a musician and a casual follower of various music forums, I often see posts discussing the incursion of artificial intelligence into songwriting, composition, and production. This subject also came up in the last essay, when I discussed my own use of artificial intelligence in my music production process: the software that I use for a channel strip, Neutron (produced by the audio technology company iZotope), includes several tools which use machine learning algorithms to analyze audio and recommend initial settings. I’ve found these tools to be immensely useful, not as a replacement for my own musical preferences but as an augmentation of them. The use of artificial intelligence in music, however, extends far beyond these limited applications.
Aiva Technologies, based in Luxembourg, developed the program AIVA (Artificial Intelligence Virtual Artist), for which the company is named. In a TED Talk featured on Aiva’s webpage, Aiva CEO and co-founder Pierre Barreau describes how they taught AIVA to compose classical music by training it on more than 30,000 classical scores; it has apparently since been trained on other genres as well. He mentions that this is “…not the first time in history that technology has augmented human creativity” and proceeds to describe how the technology of music recording was implemented in order to make music more portable for the purpose of accompanying film (About AIVA, n.d.). This reminded me of a line from the Neutron technology literature, which I quoted in the last episode:
…[W]e have always surrounded ourselves with audio processors that exhibit some form of intelligence, but because their level of intelligence was relatively low, their utility was so obvious, and their adoption has become so commonplace, nobody thinks twice about whether or not to use them. For example, compressors are ubiquitous in audio production. One could argue that a compressor is also a form of AI, which intelligently lowers the gain of an audio signal based on its momentary signal strength. As such, we can give our tracks consistency or body without having to spend hours manually writing gain automation.
Nercessian, 2019
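As an aside for the technically curious: the “intelligence” Nercessian ascribes to a compressor can be stated in a few lines of code. What follows is my own minimal sketch in Python, with illustrative parameter values that have nothing to do with Neutron itself; a real compressor would add attack and release smoothing on top of this static rule.

```python
# A minimal sketch of a compressor's decision rule: lower the gain
# whenever the momentary signal level exceeds a threshold.
# Parameter values are illustrative, not taken from any real product.

def compressor_gain_db(level_db: float, threshold_db: float = -18.0,
                       ratio: float = 4.0) -> float:
    """Return the gain (in dB) to apply to a signal at `level_db`.

    Below the threshold the signal passes unchanged (0 dB of gain);
    above it, every `ratio` dB of input yields only 1 dB of output.
    """
    if level_db <= threshold_db:
        return 0.0
    overshoot = level_db - threshold_db
    return -(overshoot - overshoot / ratio)

# A -6 dB peak overshoots a -18 dB threshold by 12 dB; at a 4:1 ratio
# that overshoot is squeezed down to 3 dB, i.e. 9 dB of gain reduction.
print(compressor_gain_db(-6.0))  # -> -9.0
```

Even this trivial rule is a signal-dependent decision made on our behalf, which is exactly the low-level “intelligence” the quote describes.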
In both cases, the implication is that the development and implementation of AI technology in music is a further quantitative progression of music technology in general, rather than the qualitative leap that I believe it to be.
Barreau continues from this line of thinking to his epiphany that “…personalized music would be the next single biggest change in how we consume and create music” (About AIVA, n.d.). He specifies what he means by “personalized music” by invoking Ludwig van Beethoven’s piano composition “Für Elise,” which, given the title, is believed to have been written for a woman named Elise, though her exact identity and her relationship to Beethoven remain unknown. This, then, is a piece of music that we believe Beethoven wrote with a specific person in mind, shaped by her particular personality and taste; this is the sense in which “Für Elise” was personalized, and Barreau suggests that his AIVA technology is capable of creating a personalized song on demand for anyone.
Aiva Technologies has a YouTube channel where they feature compositions created by the AIVA AI. One of these, “Plutonium,” is a rock song vaguely evocative of late-’90s grunge (Aiva, 2019). Absent vocals, it’s a little dull, but it exhibits some interesting features. The harmonic structure is a solid and effective chord progression in D minor, typical of the style in which the song was written. The melody is also typical in style and tone but, curiously, seems at first to be organized around the A minor scale. In fact, the scale used for the melody is A Phrygian, a scale derived from D minor, and when that scale’s characteristic minor 2nd appears (a B♭ in this case), it deviates from the expectations the melody has established while simultaneously making perfect sense in the song’s harmonic context. The musical effect is striking, and in analytical terms it’s really very clever—or at least it would be if the song had been composed by a human.
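For readers who want the modal relationship spelled out, here is a small illustrative sketch of my own (nothing to do with Aiva’s software) showing that A Phrygian is simply the D natural minor scale rotated to begin on its fifth degree:

```python
# A Phrygian contains the same pitches as D natural minor;
# only the tonal center differs.

d_minor = ["D", "E", "F", "G", "A", "Bb", "C"]  # D natural minor

# Build the mode on the fifth degree (A) by rotating the scale.
start = d_minor.index("A")
a_phrygian = d_minor[start:] + d_minor[:start]

print(a_phrygian)  # -> ['A', 'Bb', 'C', 'D', 'E', 'F', 'G']

# The characteristic Phrygian minor 2nd is the Bb immediately above
# the root A: foreign to the A minor scale the melody implies,
# but native to the song's D minor harmony.
```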
If “Plutonium” had been human-composed, that B♭ would communicate something to me on the part of its composer. What exactly it would communicate is hard to put into words; the ineffability of that transmission is precisely the reason we compose music at all: there is something of music that only music can express. But because this song was composed by an AI, I find that the transmission is absent, and I’ll explain why.
Many years ago I started developing a theory that musical experience and appreciation exist along five dimensions: intellectual, technical, emotional, atmospheric, and musical. It’s not something I ever fleshed out—though now that it comes up, I’d be interested in doing so—so don’t take this as anything rigorous or set in stone; but if we agree that there are multiple dimensions to musical experience and that my list constitutes at least some of them, then it’s sufficient as a starting point. Going over each of these dimensions: the intellectual axis is music at its most directly cognitive; music that “makes you think” has strong intellectual qualities. The technical axis concerns the difficulty and complexity of the techniques implemented in the music’s composition and performance; a lead solo by a virtuoso guitarist like Steve Vai would be an example of music with strong technical properties. The emotional axis is, I believe, self-explanatory. “Atmosphere” is, in this context, a sense of place: a piece from Vangelis’s original Blade Runner soundtrack, or from Hans Zimmer and Benjamin Wallfisch’s Blade Runner 2049 soundtrack, that makes us feel present in the films’ dystopian future would have strong atmospheric properties. And finally, the musical axis is that ineffable dimension of music which lies beyond words.
Any given piece of music exhibits all of these properties to varying degrees. A fast, brutish thrash metal song would likely have strong technical and emotional qualities and weaker intellectual and atmospheric qualities, with the purely musical element immediately present for anyone who enjoys fast and aggressive music. I find the serialist compositions of Arnold Schönberg to be strongly intellectual and technical and less emotional and atmospheric, with their purely musical qualities being more abstract and less immediately evident than those of the thrash metal song. But each dimension relies not only on my human experience as a listener but also on the composers’ and performers’ intent in creating the music. An AI-composed work of staggering harmonic complexity that would have been a monumental intellectual task for a human has no intellectual meaning for me because no real intellectual labor was involved in its creation. However difficult it would be for a human to perform such a work, a sufficient AI would be able to toss it off without the least bit of difficulty, and thus the music would have no technical meaning for me. However joyful or sorrowful the music, it would have no emotional meaning for me because I would know that it hadn’t arisen from a human’s personal emotional experience. Sense of place is a similarly human experience, and thus AI-composed music can have no atmospheric meaning for me. The B♭ in “Plutonium” would transmit to me a shared understanding of the ineffable qualities of music if it had been composed by a human; as it is, that song is musically meaningless to me in every possible sense.
For this reason, I do not believe that AIVA is capable of fulfilling its creators’ promise of “personalized songs.” The AI’s attempt to do so would doubtless rely on various metrics and tags, whether given to it by a person or gleaned from collections of data about my life and habits; however, not only is my identity not reducible to this information, no matter how detailed and robust, but said information actually seems quite peripheral to my essential being. I can imagine sitting down with someone whom I’ve never met, spending a brief period of time with them, perhaps as little as ten minutes, and then composing a piece of music based on my impression of the person. In this way, I think I could capture something authentic about the person in my music, despite a relative dearth of information about them. I would not compose this kind of musical portrait in an analytical way, e.g. writing the music in a minor key because the person told me a sad story about their life; I might make a few such decisions, but for the most part, I would be relating my ineffable experience of the person—my experience of their soul, in other words, in Captain Louvois’ sense—to my ineffable experience of music by means of a personal and intuitive process that is likewise beyond language. In short, what I could do after ten minutes with a person is beyond what an AI could do even knowing every detail of the person’s life. Lacking the capacity to receive the ineffable essence of a person’s presence and being—to feel the presence of their soul, in other words—even the most detailed analytical “personalization” is without value.
Our musical experience is dependent not only on our humanity as individuals but on our shared humanity.
Generalizing from this, as a theoretical matter, I don’t find myself much concerned about the incursion of AI into the arts, at least not until we develop strong AI—truly conscious AI—and that presents problems of its own. Much of what makes art meaningful for us arises from its having been created by other humans who are expressing something of their inner experience that cannot be put into the kind of plainly descriptive expository language with which I wrote this essay. Art is a bridge between the private inner worlds of humans; art created by AI would be a bridge to nowhere.
As a practical matter, I find the presence of AI in the arts to be somewhat more concerning because of its potential invisibility. Suppose that at some time in the future, when AI has become more widespread, as it surely will, I come across a random song on the internet that I find to be especially potent, affecting, and meaningful. The song would have these properties for me only so long as I believed that it was created by another person. The kind of AI that I employ in my mixing process would have no effect on this, but if I were to discover that the entire thing had been created by an AI, it would lose all meaning for me. Lacking that information with regard to any particular song I encounter, the knowledge that its having been composed by an AI is even a possibility would necessarily hedge my experience of it until I could determine the matter for certain, which would likely be impossible in most cases. AI thus possesses the potential to mediate and affect the entire universe of musical meaning, even in cases where it is unused.
Informed by this understanding of how AI affects the uniquely human world of music, we can now move on to examine the effect of AI on the similarly characteristic world of human religion.
In a 2019 article on the news website Vox entitled “Robot priests can bless you, advise you, and even perform your funeral,” Sigal Samuel discusses the recent installation of a robot named Mindar at a Buddhist temple in Kyoto, Japan. The robot is programmed to recite a sermon on the Heart of the Perfection of Wisdom sutra, also known as the Heart Sutra, a central scriptural text within the canon of Mahayana Buddhism. It’s a sutra which I recited often at the temple where I used to practice Zen Buddhism. It’s quite beautiful and reflects on the nature of mind, consciousness, self, and reality. I’ll recite a portion of it:
Form is no other than emptiness,
Emptiness no other than form.
Form is only emptiness,
Emptiness only form.
Feeling, thought, and choice,
Consciousness itself,
Are the same as this.
At the moment, Mindar the robot is little more than a playback device, like a CD player or iPod, but its anthropomorphic appearance creates the impression that it is the author of the sermon rather than merely its delivery system. For me, this would have the effect of emptying the Heart Sutra of its meaning. A Zen priest who speaks of the Heart Sutra does so from a place of potent realization concerning the nature of all things, a realization hard-won through decades of study and meditation; this is not and cannot be the case for the robot.
In a CNN article on the robot, Buddhist monk Tensho Goto is quoted as saying, “The big difference between a monk and a robot is that we are going to die” (Hardingham-Gill, 2019). But the Great Question of Life and Death is exactly that which drove the Buddha—and which continues to drive the Buddha’s followers—to seek enlightenment. “Buddhism isn’t a belief in God,” Goto says elsewhere in the article, “it’s pursuing Buddha’s path. It doesn’t matter whether it’s represented by a machine, a piece of scrap metal or a tree” (ibid.). I certainly disagree with this: just as a B♭ in a melody otherwise indicating the A minor scale means something different to me depending on whether the decision to include it was a human one, a religious sermon means something different to me depending on whether it was authored by someone or something that is capable of understanding its meaning. Again, the ultimate source of the sermon is, in this case, a human one, but apparently Mindar’s designers have said that they intend to “give it machine-learning capabilities that’ll enable it to tailor feedback to worshippers’ specific spiritual and ethical problems” (Samuel, 2019), and the monk Goto said that “With AI, we hope [Mindar] will grow in wisdom to help people overcome even the most difficult troubles” (ibid.). But I don’t see AI as being any more capable of this than AIVA is capable of creating songs that are truly personalized to individuals.
In a 2001 article in Zygon, a peer-reviewed journal on religion and science, professor of psychology Matt J. Rossano suggests that we evaluate the religious concerns raised by AI technology in terms of religion’s evolutionary function. Drawing from a 1996 paper by William Irons entitled “Morality, Religion, and Human Evolution,” Rossano describes this evolutionary function of religion as a solution to the commitment problem: the problem of determining to what degree someone is committed to a given society’s values. Because religious adherence is very costly in terms of time and other resources, Irons and Rossano say, it is a good indication that one is indeed so committed. I agree that this is at least a possible evolutionary origin of religion, though not necessarily the only one, and I’m dubious as to whether religion actually does solve the commitment problem, but regardless, I find the questions that this raises for Rossano concerning the ethics and religious implications of AI technology to be good ones:
How will this affect an individual’s ability to establish and maintain significant long-term personal relationships? How will this affect an individual’s ability to contribute constructively to his or her local community? How will this affect an individual’s ability to give of him- or herself to others? How will this affect family and local community stability and cohesion?
Rossano, 2001, p. 70
Mindar the robot has taken a human relationship that is central to the study and practice of religion, and to human life in general, namely the relationship between teacher and student, and replaced it with something one-sided and empty. The student receives something which is not wisdom but rather a simulacrum of wisdom, and the teacher is obviated entirely. And what happens when the “wisdom algorithm” begins to incorporate our human biases, prejudices, and hatreds, as Professor Safiya Noble’s research into Google’s search algorithm indicates is a likely possibility (Noble, 2018)? We’ve seen in recent years how algorithm-driven platforms like Facebook have proliferated and weaponized human hatred and ignorance, and we’ve seen from history the potential of religion to create widespread oppression and suffering. What happens when these systems are combined?
The dystopian science fiction anthology show Black Mirror features an episode, “San Junipero” (Harris, 2016), in which people can upload their conscious selves to a computer program prior to their death and then live indefinitely within the computer program’s simulated reality. As each individual’s conscious reality is being run by the computer program, we can see this as a particular application of artificial intelligence technology, one which is presently far beyond our grasp but which does not seem to be impossible in a practical sense given sufficient computational power and understanding of the human mind. We have every reason to believe that our minds, such as they are, are essentially programs being run on the hardware of our brains, with nothing magical or immaterial taking place, and so we have every reason to believe that the mind-program could be emulated on a sufficiently advanced computer.
Such a development would likely change our lives to a greater degree than any prior technology. Consider the centrality of afterlife beliefs to much of religion. I tend to view such beliefs in non-realist terms: the Christian Heaven is, for me, not a real place to which one’s soul transmigrates after death but rather an allegory for a relationship with the divine; reincarnation and rebirth within the dharmic religions of India mean to me that our existence in this life is intimately connected with what has come before and what will come after. As I’ve described before on the show, I find realist afterlife beliefs problematic because they motivate nihilism, devaluing life in this world because a superior, eternal future world awaits. Indeed, my observation of this phenomenon on my pilgrimage to Nepal was, I believe, a major influence on my adopting the antinihilist stance that ultimately led to my becoming a Satanist.
If mere belief in an afterlife, absent evidence, is sufficient to motivate nihilism, what would be the effect of certain knowledge that an eternal afterlife does indeed await? Our humanity is defined in many ways by our mortality: not only the fact that each of us will certainly die, but the story of our individual journeys from our births until that final moment, our bodies first maturing and growing, then aging and decaying, and all the while changing. To be human is not only to be in process but to be a process. Removing this aspect of our humanity seems to me to be a kind of death, and a worse one than the death we face as mortals. And beyond that, given the current organizational structure of human society, we would have to ask ourselves: who would have access to this afterlife? And how would this technology be used to control people in this life? Christian hegemony is predicated, in part, on the promise of paradise and the fear of Hell: obedience to the hegemony means the former, and disobedience the latter. I personally find this tactic unpersuasive because I do not believe that the promised Heaven or Hell actually exist. If humans were to actually create these realms, I might remain defiant and choose true death over the promise of eternal paradise, though that is far from certain; standing up to a real threat of eternal torture, however, would be beyond my will.
And the afterlife is not the only religious concept that AI could potentially have the power to reify. In the miniseries Devs (Garland, 2020), a powerful quantum computer is used to confer the godlike power of omniscience: total knowledge of the past and future. This knowledge is contingent on the universe being deterministic, but even if that is not the case, there remains the potential for AI technology to possess a degree of knowledge far exceeding that of which humans are capable. If Sir Francis Bacon was correct that knowledge is power—that power arises directly from knowledge—or if Michel Foucault was correct that knowledge and power are actually the same thing, then such an omniscience or near-omniscience would also confer a commensurate omnipotence, and there we have two of the three attributes traditionally ascribed to the Christian God. But while the omniscience of AI is a mere matter of technological advancement, and omnipotence arises invariably from omniscience, the third attribute, omnibenevolence, perfect goodness, is not only causally disconnected from the other two but seems unlikely in the extreme to arise from human efforts and may, if objective goodness does not actually exist (as I believe it doesn’t), be impossible even in theory.
Thus we are faced with the possibility of a human-engineered but fully real god absent any sort of moral compass, which is certainly a terrifying prospect.
Last year I wrote an essay entitled “Satanism and Perfectionism” (Bilsborough, 2020) in which I examined the foundations of the moral theory known as perfectionism, which, in my own words, “posits the development and exercise of our characteristic human capacities—those things which are unique and essential to humans—as being intrinsically good and the foundation for right action.” I see this moral framework as having a great deal of potential within the context of Satanism, and I believe that it can avail us here in the question of how Satanists, and perfectionists in general, should respond to AI.
Simply put, to abdicate our characteristic human capacities to a machine is in direct contradiction to their human development and exercise. However, the matter becomes complicated when we consider that the use of tools is itself a characteristic human capacity: many species use tools, but we’re among the only ones who build tools specific to our purposes, and the centrality of our tools to our lives is something unique to us. Following the criteria set forth in Thomas Hurka’s formulation of perfectionism (1993), our tool usage plays an explanatory role in much of human civilization, and were I to imagine beings who were like us in all respects except that they did not use tools, they would intuitively seem to me to be not fully human.
The boundary between tools which facilitate and extend our characteristic human capacities and tools which subsume those capacities into themselves is not a clear one, and the matter is made murkier still by the fact that these technologies do not come to us all at once but rather appear in our lives incrementally. For us perfectionists, who value our humanity as an intrinsic good, what can we do to delineate between the use of our tools and the abdication of ourselves to our tools? This problem is not limited to AI. In his book Nihilism (2019), philosopher Nolen Gertz presents an argument, based on the work of Martin Heidegger, that in our efforts to create technology to serve our ends, we have made ourselves means in the service of our technology:
Devices frequently require that we organize our activities in accordance with what is necessary to keep them functioning properly. We are even becoming increasingly interrupted in our activities in order to help the device carry out functions. These are functions that often we did not choose for the device to perform, functions that we do not even understand. The device informs us that we need to download an update, and we click download. The device informs us that we need to click accept, and we click accept. The device informs us that we need to restart the device, and we click restart. The device informs us that we need to create a new password, so we enter a password—only the device then informs us that it does not accept our password, and it advises us on how to enter the best password, so we keep trying until we meet the device’s approval.
Gertz, 2019, p. 173
It is not difficult to see the potential for AI technology to amplify this problem significantly. In the Matrix movies, humanity becomes enslaved to a race of artificially intelligent robots as a result of military conquest; it seems more likely that we would, over time, enslave ourselves to our technology without a shot fired.
Viewed from the standpoint of the technology itself, the problem is intractable; it can only be addressed by stepping back to look at technology’s involvement with the structure and values of our societies. Christianity, capitalism, patriarchy, and hegemony in general—as we saw in our discussion of Herbert Marcuse in the last episode, these are pervasive structures from which total extrication is impossible, and AI has the potential to infiltrate and reinforce all of them. Satanists necessarily oppose these systems, but we do not have the option of living outside the world of capitalist hegemony and technology, nor would we want to, as opposition requires engagement and participation with what is being opposed. This, I believe, is the way to address the problems posed by artificial intelligence technology: not by standing in direct opposition to the technology itself, but by opposing the social values that hold technological progress as an end in itself.
I hope you’ve found this piece interesting and informative. If you’ve enjoyed it, I encourage you to look at some of my other essays, and if you find my approach to philosophy and religion at all valuable, I hope that you’ll stop in at my Patreon page, which features bonus content for patrons, and that you’ll stop back by to check on my new content.
Works Cited or Referenced
About AIVA. (n.d.). Retrieved March 20, 2021, from https://www.aiva.ai/about
Aiva. (2019, May 21). Plutonium—Rock Song Composed by AI | AIVA. https://www.youtube.com/watch?v=i2TjTb_Psh8&list=PLv7BOfa4CxsHAMHQj0ScPXSbgBlLglRPo&index=2&ab_channel=Aiva
Arendt, H., Allen, D. S., & Canovan, M. (2018). The human condition (Second edition). The University of Chicago Press.
Bilsborough, T. (2020, September 17). Satanism and Perfectionism. A Satanist Reads the Bible. https://asatanistreadsthebible.com/satanism-and-perfectionism/
Chaudhary, M. Y. (n.d.). The Artificialization of Mind and World.
Für Elise. (2021). In Wikipedia. https://en.wikipedia.org/w/index.php?title=F%C3%BCr_Elise&oldid=1013035159
Garland, A. (2020, March 5). Devs [Drama, Mystery, Sci-Fi, Thriller]. DNA Films, FX Productions, Scott Rudin Productions.
Gertz, N. (2019). Nihilism. MIT Press.
Hardingham-Gill, T. (2019, August 28). The android priest that’s revolutionizing Buddhism. CNN. https://www.cnn.com/travel/article/mindar-android-buddhist-priest-japan/index.html
Harris, O. (2016, October 21). San Junipero [Drama, Sci-Fi, Thriller]. House Of Tomorrow.
Heilweil, R. (2019, March 28). Deus Ex Machina: Religions Use Robots to Connect With the Public. Wall Street Journal. https://www.wsj.com/articles/deus-ex-machina-religions-use-robots-to-connect-with-the-public-11553782825
Hurka, T. (1993). Perfectionism. Oxford University Press.
Nercessian, S. (2019, June 6). Behind the Technology of Mix Assistant. iZotope. https://www.izotope.com/en/learn/behind-the-technology-of-mix-assistant.html
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Roko’s basilisk—RationalWiki. (n.d.). Retrieved February 28, 2021, from https://rationalwiki.org/wiki/Roko’s_basilisk
Rossano, M. J. (2001). Artificial Intelligence, Religion, and Community Concern. Zygon, 36(1), 57–75. https://doi.org/10.1111/0591-2385.00340
Samuel, S. (2019, September 9). Robot priests can bless you, advise you, and even perform your funeral. Vox. https://www.vox.com/future-perfect/2019/9/9/20851753/ai-religion-robot-priest-mindar-buddhism-christianity
Scheerer, R. (1989, February 11). The Measure Of A Man [Action, Adventure, Mystery, Sci-Fi]. Paramount Television.