This essay is also available as a podcast on anchor.fm, Spotify, and other platforms
In the backstory of Frank Herbert’s Dune universe is an event called the Butlerian Jihad, a human revolt against “thinking machines.” Herbert himself provided few details of the Jihad in his books, but the idea of a future without computers is foundational to many of the themes in his stories. The underlying philosophical justification for the Jihad, as later recorded in one of the Dune universe’s core religious texts, the Orange Catholic Bible, reads as follows: “Thou shalt not make a machine in the likeness of a human mind.” Though Herbert provided little other information in the six original Dune novels, it is clear that, in the universe of his stories, artificial intelligence had become a catastrophic problem for humanity. Now, standing at the edge of technological advances that have made artificial intelligence a reality, we have some questions to ask about what that means for us as humans.
Hail and welcome to A Satanist Reads the Bible and this, the first episode in a three-part series on artificial intelligence and its relationship with religion and Satanism. As I mentioned at the end of the last episode, this episode is going to be on the shorter side because of the short turnaround time between the last episode and this one. Having only just started my research, I’m mainly going to be using this episode as an initial exploration of the subject of artificial intelligence. The next episode will get more into the general ethical and practical problems of AI, and then the third episode in the series will focus on the potential relationships between AI, religion in general, and Satanic religion in particular.
Since we’re talking about artificial intelligence, it seems the best way to open the discussion is with a look at what intelligence is in the first place. There are at least two different ways that we speak about intelligence: one is as a faculty possessed by some entities and not others (for example, humans are intelligent and rocks are not); the other is as a quantitative measure of that faculty among humans (for example, Albert Einstein was intelligent and Donald Trump is not, even though Trump, at least arguably, remains intelligent in the first sense of possessing the faculty at all).
The various definitions I’ve examined generally treat intelligence as an umbrella term for various mental faculties, including reasoning, abstraction, imagination, problem-solving, understanding, learning, planning, creativity, self-awareness, and critical thinking. These definitions are neutral as to whether these faculties are discrete and separable, grouped under the general heading of “intelligence” as a matter of convenience, or whether they are all modalities of a singular and unified faculty which we call intelligence. Computational research scientist Alex Wissner-Gross defined intelligence in terms of a mathematical formula, F = T∇S_τ, under which intelligence is a force acting at some level of strength to maximize future freedom of action out to some time horizon (Wissner-Gross, 2013). A 2007 paper by machine learning researchers Shane Legg and Marcus Hutter collected and analyzed over 70 definitions of “intelligence” and synthesized the following definition from their analysis: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”

Well then, what is an agent, and what does it mean for an agent to have or to achieve goals? The word agent derives from the Latin infinitive agere, which means “to do”; the English words act, action, active, and agenda derive from the same root. An agent, then, is something which actively does something. I am presently reading this essay into a microphone connected to my computer and am thus acting as an agent because I am actively doing something. The microphone is only passively translating the sound pressure of my voice into electrical signals and is thus not an agent. How about the computer? There’s a degree of processing involved, but really the computer is only passively carrying out those instructions given to it by myself and others. It’s not recording my voice because it wants to, or for any similar reason that we associate with agency, and it is not capable of choosing to do otherwise.

But then, is my reading this essay truly a result of my own agency, or am I just carrying out my own biological and neurological programming? If I am, it seems to remain the case that I am doing so actively, choosing to act on my various drives whether those drives were chosen by me or for me, believing that I could have chosen otherwise; however, this is not something that I can conclusively demonstrate to myself. Goals, similarly, seem to involve this element of choice and free will: if I am instructed to do something and have no choice but to carry it out in a rote and pre-programmed way, as a computer does, I could hardly refer to the completion of that task as a goal.
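For anyone who wants to unpack that formula, here is a minimal gloss of the notation as I understand it from Wissner-Gross’s talk; the wording of the annotations is mine, not his:

```latex
% Causal entropic force (Wissner-Gross, 2013), as I read it:
%   F       the "intelligent" force driving the system's behavior
%   T       a strength parameter, analogous to temperature
%   S_tau   the entropy of the futures accessible to the system
%           within the time horizon tau
% Intelligence, on this account, pushes in whatever direction
% keeps the greatest diversity of futures open.
F = T \, \nabla S_{\tau}
```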
The simplicity of our definitions belies the difficulty of the subject, which involves unanswered and possibly unanswerable questions about consciousness and free will. But without considering the matter entirely settled, I think we have a sufficient speculative basis for understanding intelligence to proceed with the discussion of its artificial or synthetic manifestation. I don’t see the need to settle on any one definition of intelligence in order to move forward; rather, I’ll be keeping all of the ones I’ve encountered in mind. However we define intelligence, artificial intelligence is then any form of intelligence which we ourselves have created through technological rather than biological means.
My computer, as it is, certainly creates a superficial appearance of intelligence, but I would argue, at least on a preliminary basis, that this is indeed only an appearance and not true intelligence. I can ask my computer to perform complex mathematical calculations, but it will do so in a rote and predictable way without any hint of the invention or imagination that are necessary for the human pursuit of mathematics. If I make a mistake in the script which provides the general algorithm that the computer will use to perform the calculation and get a wrong answer, the computer does not and cannot know this; indeed, it does not even know what it means for a given number to be the correct answer to a mathematical expression. Despite appearances, my computer does not know anything at all. The typical account of knowledge is justified, true belief, and my computer has neither beliefs, nor any understanding of how beliefs are justified, nor any notion of what truth is. Such a self-aware understanding seems to me necessary to any functional notion of intelligence, but am I correct in that assumption? What are the boundaries of this understanding? At what point would we be able to say that a machine has become intelligent?
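To make that concrete, here’s a deliberately trivial sketch (an invented example of my own, not any script I actually rely on): a Python function whose algorithm contains a bug. The interpreter executes it faithfully and reports the wrong answer with exactly the same mechanical indifference as it would the right one.

```python
# A buggy "average" function: the divisor is off by one.
def average(numbers):
    return sum(numbers) / (len(numbers) - 1)  # bug: should be len(numbers)

# The computer dutifully prints 6.0. The correct mean is 4.0, but
# nothing in the machine registers the difference, because nothing
# in the machine has any notion of what a mean is.
print(average([2, 4, 6]))
```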
There are a few philosophical tools and thought experiments that will help us sort things out. The first is the famous notion of the philosophical zombie. A zombie is, in this context, someone who has every appearance of being a normal human but who has no conscious inner experience. There is no procedure for testing whether someone is a zombie; indeed, the entire notion is unfalsifiable, and there is no way to disprove the solipsist hypothesis that you, listening to this podcast, are the only conscious human being in a world of zombies. And if you know someone who adopts that hypothesis, you have no way of proving to them that you are in fact a conscious being and not a zombie. There is, at the same time, no good reason for believing the solipsist hypothesis. As philosopher Julian Baggini mentions in one presentation of this trope, “If they walk like us, talk like us and have brains and bodies like us, then the chances are they are like us in all significant respects, including how things feel to them from the inside” (2005, p. 278). But if such zombies did exist, would they be intelligent? Would they be able to “achieve goals in a wide range of environments”? The question hinges on whether or not they can be said to have goals at all—if so, they’re certainly capable of achieving them to the same degree as a conscious human (otherwise there would be a point of distinction between zombies and conscious humans that would allow them to be differentiated).
The next tool comes to us courtesy of one of the greats in the history of mathematics and computation, Alan Turing, and his 1950 paper “Computing Machinery and Intelligence,” which sought to answer the question of whether machines can think. To this end, Turing developed a process he called the “imitation game,” though it is now more widely known as the Turing test. The idea is that an interrogator asks questions of an entity which may be a human or a computer. The interrogator does not know which, and has no means of determining this except for the test itself, using the answers to the questions to make that determination. Turing’s conclusion was that, if a computer is capable of passing the Turing test—if it can fool the interrogator into thinking that it’s a human—then we have a basis for saying that a computer can think, that it is intelligent. There are numerous objections to the validity of this test—I’ll be examining one of them in detail shortly—and Turing anticipates and responds to many of them in his paper. Among the most significant objections for our purposes is what Turing calls the “Argument from Consciousness,” which states that we cannot know whether machines think unless we know whether they’re conscious. Turing’s argument to the contrary is essentially the same as Baggini’s argument about solipsism: if something behaves as if it thinks, is it not better to assume that it actually does think than to assume that it merely appears to think without actually being able to do so? If we were to make the latter assumption, why wouldn’t we then make it about the humans around us who appear to think, and thus adopt the solipsist hypothesis?
The most famous objection to the Turing test, and indeed one of the most famous and controversial thought experiments in all of philosophy, is the Chinese Room Argument of philosopher John Searle, presented in his 1980 article “Minds, Brains and Programs” (Cole, 2020). Here’s my version of it: imagine yourself in a room with two slots on opposite walls and a very large book. Pieces of paper with written Chinese characters on them enter the room through one of the slots (if you actually read Chinese, pretend that you don’t, or substitute some other language that you don’t know). The book contains instructions for writing down further Chinese characters based on those already on the paper. Unbeknownst to you, the person feeding papers through the slot is writing down various questions in Chinese, and by following the procedures in the book, you are writing down reasonable answers to those questions, also in Chinese. To anyone observing from the outside, it looks like you can write down a question in Chinese, feed it through the slot, and get a reasonable answer in Chinese from the other slot some time later. If you were handing written questions in Chinese to a Chinese person and they wrote down reasonable answers in reply, you would say that this person is able to do this because they know the written form of the Chinese language. You, inside the room, are able to produce exactly the same effect, but can we then say that you (or the system of the room of which you are a part) know Chinese? Intuitively, we would say no, neither you nor the system knows Chinese. This bears on the Turing test because it is essentially what computers are doing when they appear to us to be thinking: following rote instructions for manipulating symbols without knowing or understanding what those symbols mean.
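As a deliberately toy illustration of “rote instructions for manipulating symbols,” here is the room’s rulebook reduced to a lookup table. The phrases are hypothetical stand-ins of my own, and Searle’s rulebook would be unimaginably larger, but the principle is the same:

```python
# The "rulebook": symbol patterns in, symbol patterns out. Nowhere in
# this program is there any representation of what the symbols mean.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm well, thanks."
    "今天天气好吗？": "今天天气很好。",  # "Is the weather nice?" -> "It's lovely."
}

def person_in_room(slip_of_paper: str) -> str:
    """Follow the book's instructions; understand nothing."""
    return RULEBOOK.get(slip_of_paper, "对不起，我不明白。")  # "Sorry, I don't understand."

print(person_in_room("你好吗？"))  # A fluent reply, with zero comprehension.
```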
The Stanford Encyclopedia of Philosophy article on the Chinese Room Argument details several replies that have been offered, and these replies fall into a few broad categories. Some acknowledge that the person in the room does not understand Chinese but assert that something—either the total system itself or some virtual aspect of it distinct from both the person and the total system—does understand Chinese. Others acknowledge the essential conclusion of the Chinese Room Argument but assert that that conclusion is not universally applicable to artificial intelligence. The discourse surrounding the Chinese Room Argument is both extensive and deep—I could probably do nothing with this project from this point forward other than focus on this one thought experiment and would likely have plenty of material. Much of the discussion turns on the difficult problems regarding consciousness that we encountered earlier, and also on the meaning of words like “understanding” and the concept of meaning in general.
I could spend the rest of this essay and the entirety of the next two just exploring these sorts of questions without resolving anything on the matter of artificial intelligence, so I’ll have to hand-wave much of the discussion, hope that I don’t miss anything too critical, and address things in more depth as they arise. The matter that seems most relevant to me as I’ve explored these questions is that I have an intuitive sense that things such as my thoughts, my beliefs, my knowledge, my intelligence, and my understanding are intimately involved with my inward conscious experience of myself, and that anything lacking this sort of inward conscious experience cannot be said to think, believe, know, or understand anything or to be intelligent in any sense other than a metaphorical one. But I’ll reiterate that this is only an intuition; it is not something about which I am at all certain.
This brings us to a key distinction in the discussion of artificial intelligence: strong AI vs. weak AI. Weak AI can have every appearance of thought and intelligence, and all of the functionality derived therefrom, without actually possessing those faculties. The Chinese room, as it is intuitively understood, would be an example of a weak AI. Strong AI, on the other hand, actually understands what it’s doing, actually thinks, actually possesses and utilizes intelligence. Searle’s thought experiment was intended to demonstrate that strong AI is impossible, and while I agree with him that nothing that could properly be called understanding of Chinese is present in the Chinese room, I disagree that this makes strong AI impossible. At present, we have no reason to believe that human consciousness, along with associated faculties like intelligence and understanding, arises from anything other than the complex bio-electrical interactions of neurons in the human brain. There does not seem to be anything about the brain, in other words, that cannot, in principle, be replicated by suitably advanced technology, and such technology would then at least have the potential for consciousness.
A recent episode of the excellent podcast Philosophize This! (West, n.d.) covered the work of 20th-century Canadian philosopher Marshall McLuhan, whose work has some significant bearing on our discussion here. McLuhan was interested in the analysis of media, by which he meant not only those things which we typically understand as “media” (such as news media or television media), but rather all technology. Media, as McLuhan described them, are extensions of our natural human faculties. When we cut food with knives, we’re extending the ability of our hands and teeth to do the same; when we season and heat food, we’re extending the ability of our digestive systems to digest it. Our cars and other systems of transportation are extensions of the ability of our feet and legs to carry us wherever we need to go.
Computers, then, are extensions of our brains’ ability to process and recall information, among many other things. When I was typing up this essay, I was doing the actual writing in my head and then typing what I wrote into my word processor, allowing me to store and manipulate the information more reliably than my brain can. My recording this as a podcast and distributing it on the internet extends the natural faculty of my voice. But my computer is not, and is not capable of being, an extension of my ability to think. Granted, it facilitates my thinking by extending other natural faculties of mine which are involved in thought—it’s often easier to think about things, for example, when I’ve written them down in my research database—but it never does any actual thinking itself. Computers that do think, whether of the weak AI or strong AI varieties, would mark a radical departure in McLuhan’s model, because, excepting artificial intelligence, all of our various technologies—our extensions of our natural faculties, those things which McLuhan called media—are predicated upon and implemented by way of human thought. When we want to go somewhere, we have to think about where we want to go and how to go about getting there. When we want to write something in a word processor, we have to think about what to write. Whatever technology we use, however much these technologies have changed the nature of human life, thought has remained the core of all human activity. Artificial intelligence, in contradistinction to the entire history of human technology, would extend our natural faculty of thinking itself.
This qualitative leap in the relationship between humans and our technology should give us pause, because as McLuhan points out, new technologies change what it means to be human in significant and fundamental ways, often leading to dramatic changes in society. Think of how much of human life is presently structured by written language, which, in terms of the entire history of the species, is a fairly recent innovation. With the invention of writing, humans became capable of transmitting a vastly greater quantity of information from generation to generation than had ever been possible, and we’ve never been the same since.
Though we’ve never before faced the possibility of technology extending our natural faculties of thought, our desire to offload our thinking through some means or another has a great deal of precedent. Take, for example, the concept of a law. Rather than attempting to administer justice on a case-by-case basis without recourse to any sort of general policy—rather than thinking about each individual case in full, in other words—we used our thinking to pre-establish policies for various general cases, and, having done that, we only need to think about which generalization applies to a given specific case: this person stole this amount of money under these circumstances, for example, and it’s already been decided that that warrants a particular punishment.
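As a purely illustrative sketch of that kind of pre-established thinking (every offense name, threshold, and sentence below is invented for the example, not drawn from any real statute), the logic of a sentencing rule reduces to a table lookup: the thinking happened once, when the table was written, and applying it requires none.

```python
# Hypothetical rule table: (offense, minimum dollar amount, years in prison).
# All values are invented for illustration.
SENTENCING_RULES = [
    ("theft", 0, 1),         # any theft: at least 1 year
    ("theft", 1_000, 3),     # $1,000 or more: at least 3 years
    ("theft", 100_000, 10),  # $100,000 or more: at least 10 years
]

def mandatory_sentence(offense: str, amount: int) -> int:
    """Apply the pre-decided rule. Note what's absent: the particular
    circumstances of the case never enter into the computation."""
    years = 0
    for rule_offense, threshold, minimum_years in SENTENCING_RULES:
        if offense == rule_offense and amount >= threshold:
            years = max(years, minimum_years)
    return years

print(mandatory_sentence("theft", 1_500))  # 3, whatever the circumstances
```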
In itself, this is not a bad idea and actually has some significant potential advantages. There’s a reason the statue of Justice is typically depicted blindfolded: justice requires consistency and neutrality, which are not likely to be present in a world absent any laws, where all matters of justice are decided on a case-by-case basis. It makes sense to have a system of well-thought-out laws which are fair, known to all, and applied consistently. Unfortunately, this is not the system we have, and a quick look at the justice system that we do have (in the United States, at least) will do much to inform our discussion here on artificial intelligence, as many of the problems of the American justice system result precisely from our having abdicated our thought to institutional policies.
Consistency is a necessary condition for justice but not a sufficient one. Unfair laws will consistently perpetrate injustice, and many of the nation’s present laws were written by people with implicit or explicit biases and no knowledge of the workings of our contemporary world. And since laws are generalizations, which by definition cannot account for every possible circumstance, they can only be just if some allowance is made for potentially unique and extenuating circumstances. A judge must be able to think about a given case and determine whether there are unique factors involved which might make the generalized law inapplicable, or determine that the applicable law is unfair in general or as applied to that specific case. Obviously, leeway for such thought leaves room for a great deal of personal bias to creep in, but a system in which policy must be strictly enforced without room for discretion has no hope of mitigating the biases already inherent in the system. The American justice system, with its massive racial disparities and mandatory minimum sentences for the so-called “crime” of substance addiction, is a clear example of exactly that scenario. When a mentally ill Black man receives a 42-year prison sentence for selling less than three grams of crack cocaine, exactly as happened to Atiba Parker of Mississippi in 2006 (“Atiba Parker,” n.d.), the problem is not that we’re not smart enough to understand how profoundly unjust that is, but rather that such an understanding is irrelevant to the system that we’ve implemented in order to extend and facilitate our thinking. And as we’ll see in the next episode, the introduction of AI has the potential to exacerbate these specific problems significantly.
As physicist and machine learning researcher Max Tegmark wrote in his essay “Let’s Aspire to More Than Making Ourselves Obsolete,” which was included in the 2019 collection Possible Minds: 25 Ways of Looking at AI, edited by John Brockman, the real threat of AI isn’t malice, but rather competence. Our books, movies, and television shows often depict artificial intelligence rising up against us, but it seems to me, and to Max Tegmark, that the greater threat is that AI will work exactly as it is intended.
I hope you’ve found this piece interesting and informative. If you’ve enjoyed it, I encourage you to look at some of my other essays, and if you find my approach to philosophy and religion at all valuable, I hope that you’ll stop in at my Patreon page, which features bonus content for patrons, and that you’ll stop back by to check on my new content.
Works Cited or Referenced
Atiba Parker. (n.d.). FAMM. Retrieved March 4, 2021, from https://famm.org/stories/atiba-parker/
Baggini, J. (2005). The pig that wants to be eaten: 100 experiments for the armchair philosopher. Penguin Group.
Brockman, J. (Ed.). (2019). Possible Minds: 25 Ways of Looking at AI. Penguin Press.
Butlerian Jihad. (n.d.). Dune Wiki. Retrieved February 28, 2021, from https://dune.fandom.com/wiki/Butlerian_Jihad
Cole, D. (2020). The Chinese Room Argument. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2020). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2020/entries/chinese-room/
Legg, S., & Hutter, M. (2007). A Collection of Definitions of Intelligence. arXiv. https://arxiv.org/abs/0706.3639
Searle, J. R. (1999). Mind, language, and society: Philosophy in the real world (1st paperback ed.). Basic Books.
Turing, A. (1950). Computing Machinery and Intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433
West, S. (n.d.). Episode 149—Transcript. Philosophize This! Retrieved March 1, 2021, from https://www.philosophizethis.org/podcast/episode-149-transcript
Wissner-Gross, A. (2013, November). A new equation for intelligence [Video]. TED. https://www.ted.com/talks/alex_wissner_gross_a_new_equation_for_intelligence