This essay is also available as a podcast on anchor.fm, Spotify, and other platforms
The short story “The Minority Report” by science fiction author Philip K. Dick (“The Minority Report,” 2021) and the movie Minority Report based on it (Spielberg, 2002) depict a society in which future murders are predicted, prevented, and prosecuted based on the visions of three mutant humans. These psychic mutants, called precogs, receive visions of the future, which are transmitted to a police department called Precrime. The detectives employed in the unit analyze the visions to determine the perpetrator and victim of the future murder. They then find and arrest the future perpetrator, thus preventing the murder, and imprison the would-be perpetrator by placing them in stasis.
I have an alternative scenario to consider: a police department or investigation bureau which lacks mutant humans who can see the future but which does possess an inordinate amount of data—on the order of tens of billions of entries in a database—along with algorithms powered by artificial intelligence that can use this information to predict the most likely perpetrators of crimes. This information is in itself insufficient justification for arrest but is sufficient justification, according to the relevant authorities, for increased surveillance of the people in question. At that point, the investigators need only wait for the person to commit a crime, which would be not a chance occurrence but a near-inevitability: according to a 2009 book by attorney and journalist Harvey Silverglate, the extent and ambiguity of the American legal code mean that the average person commits about three felonies every single day. Once the person being surveilled has committed a suitable crime, they can be arrested, bolstering the statistics that the department uses to secure funding and justify its existence, as well as helping to populate the for-profit prison state. And such departments need not use the information and algorithms at their disposal only to track potential criminals—political dissidents and others whom they may deem “undesirable” can be targeted through the same means just as easily.
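To see how little predictive power such a system actually needs, consider a minimal simulation of the scenario above, written in Python. Every number in it is an invented assumption: the population size, the offense probability (standing in for the "three felonies a day" claim), and the fraction of people the hypothetical algorithm flags for surveillance. The point is structural, not empirical: when nearly everyone is technically guilty of something, arrest statistics simply mirror whoever the system chose to watch.

```python
import random

random.seed(42)

# All numbers below are invented for illustration.
POPULATION = 10_000          # size of the toy population
P_OFFENSE = 0.9              # assumed chance anyone commits some technical offense today
SURVEILLED_FRACTION = 0.05   # fraction flagged by the hypothetical algorithm

people = range(POPULATION)
surveilled = set(random.sample(people, int(POPULATION * SURVEILLED_FRACTION)))

# Offenses occur uniformly across the population, but only offenses
# committed by surveilled people are ever observed and prosecuted.
arrests = [p for p in people if p in surveilled and random.random() < P_OFFENSE]

print(f"{len(arrests)} arrests, all drawn from the "
      f"{SURVEILLED_FRACTION:.0%} of people the algorithm chose to watch, "
      f"even though everyone offends at the same rate ({P_OFFENSE:.0%}).")
```

However the watch list is generated, whether by prejudice, by algorithm, or at random, the resulting arrests will appear to vindicate it.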
This scenario is arguably more problematic than the Minority Report scenario—at least, under that premise, each arrest is preventing a real murder that would have otherwise really happened, thus serving the interests of justice in at least a perfunctory way; under the scenario that I described, the only interests being served are those of the police surveillance state. Also unlike Minority Report, the scenario that I described is not only fully within our present technological grasp but is actually being implemented in different forms and to different degrees across the United States.
Artificial intelligence is not something that emerges into the world in a neutral way, or which affects the world in a neutral way. Artificial intelligence is something created by humans within the context of a particular society and period of history; that context determines the nature of the particular artificial intelligence that we create, as well as the environment in which it operates and the scope of its potential influence. This being the case, a proper understanding of how artificial intelligence influences human society and the possibilities for how it may do so in the future begins with a study of human society as it presently exists. This is a very broad topic and a full survey is well beyond the scope of the entire Satanist Reads the Bible project, but we can home in on some key points that are particularly relevant to the subject of artificial intelligence.
We’ll begin with the 20th-century German-American theorist Herbert Marcuse and his book One-Dimensional Man (originally published in 1964). Marcuse was a prominent member of the school of philosophers known as the Frankfurt School, which was primarily concerned with assessing and critiquing the power structures in contemporary society. Though influenced by Karl Marx and critical of capitalism, its members were also critical of Marxism-Leninism and believed that classical Marxism was inadequate to describe the social conditions of the 20th century. Marcuse in particular wanted to know why Marx’s predictions, which in general seemed prescient and even inevitable, had not been fulfilled; in other words, why capitalism had not yet collapsed in on itself (Farr, 2020). His theory on that matter, as described in One-Dimensional Man, was that capitalism had reformed itself in the wake of the threat of Marxist socialism, becoming both more oppressive and more pernicious. The book’s famous opening reads:
A comfortable, smooth, reasonable, democratic unfreedom prevails in advanced industrial civilization, a token of technical progress. Indeed, what could be more rational than the suppression of individuality in the mechanization of socially necessary but painful performances; the concentration of individual enterprises in more effective, more productive corporations; the regulation of free competition among unequally equipped economic subjects; the curtailment of prerogatives and national sovereignties which impede the international organization of resources.
Marcuse, 1991, p. 1
For Marcuse, societies under post-industrial capitalism are totalitarian—this is the exact word that Marcuse uses, meaning totalitarian in the sense of being “totally administered”—but invisibly so. Such a society is structured such that the needs of the individual are both defined and met by the system itself, with our various entertainments and comforts disguising the system’s fundamentally exploitative nature and all potential opposition subsumed into the system itself, such that what were previously irreconcilable qualitative differences become quantitative differences along a single dimension of thought. To understand this, consider the folk and rock music of the 1960s, the era in which One-Dimensional Man was written and published. The music cultures of the time were transitioning from being an actual counterculture, something not fully subsumed into capitalism, to being another product. The rebellious music of the period can now be heard on the PA system at supermarkets, used to create an environment that facilitates purchasing. “The music of the soul is also the music of salesmanship,” Marcuse writes (1991, p. 57). More recently, we have the famously anticapitalist band Rage Against the Machine, whose members had no choice except to participate in the very machine they were raging against in order to promulgate their music and their message. Similarly, those who wish to learn about Marxist socialism and communism—the antitheses of capitalism—by purchasing a copy of Das Kapital will, in doing so, generate revenue for capitalists.
Regular listeners here might note similarities between Marcuse’s theory of one-dimensional society and the theories of cultural hegemony espoused by Antonio Gramsci and Louis Althusser, as I described in the episode “Dogma and Hegemony” (Bilsborough, 2020). There is indeed a great deal of crossover, and I only refrained from bringing up Marcuse in that episode due to time constraints. Marcuse’s take on the matter is distinguished by being particularly influenced by the work of Sigmund Freud: in Marcuse’s account, what Gramsci called cultural hegemony and what Althusser called the ideological state apparatus operate through the manipulation of our unconscious psychological drives, our sexual drives in particular.
Next up is French postmodern theorist Michel Foucault and his 1975 book Discipline and Punish: The Birth of the Prison, which traces the genealogy of punishment as a means of social control, from the torturous public executions of the Ancien Régime of pre-revolutionary France to its use in prisons, schools, and other institutions to standardize behavior. As Foucault documents, punishment was once a public spectacle directed against the criminal’s physical body but, under the influence of various social forces, shifted over time to become “the most hidden part of the penal process” (Foucault, 1995, p. 9), focused not on the physical body but “on the heart, the thoughts, the will, the inclinations” (ibid., p. 16). Guilt under the Ancien Régime was determined not by the intent of the accused but by an assessment of whether the person in question had done the action in question; contemporary justice and criminology are concerned less with whether someone has committed a crime (a bare assessment of objective fact: did the person do the thing or not?) and more with whether someone is a criminal, someone who possesses mens rea, a “guilty mind.” Crimes must be punished, but criminals must be reformed—in other words, normalized—something necessarily accomplished through institutional rather than corporal processes. That the contemporary criminal justice system does little to reform criminals in practice is beside the point; institutions of criminal justice represent the ideas of reform and obedience even if they do not accomplish them, and this representation is directed not towards criminals specifically but towards society in general: in Foucault’s words, “The penalty must have its most intense effects on those who have not committed the crime” (ibid., p. 95). The methods that prisons developed to control their prisoners were adopted by other institutions such as schools and hospitals, as the goals are much the same: discipline, the standardization and control of individuals.
Beyond this history, what is important for our purposes is Foucault’s analysis of knowledge and power. The 16th- and 17th-century English philosopher Sir Francis Bacon is famously credited with the statement that “knowledge is power,” meaning that knowledge is an instrument of power, that one who has knowledge can use it for purposes of power. For Foucault, the relationship between the two was even closer. He described them with the hyphenated term power-knowledge, a unity with dual aspects which imply each other: we can only control what we know, and we can only know what we control. Surveillance thus plays a key role in Foucault’s theory: in general, people act differently if they believe it is possible that they are being watched, and so even mere observation is a means of control.
Keeping these theories in mind, we’ll now move forward to more recent works which can inform us as to how the use of artificial intelligence manifests in and reinforces existing social controls, beginning with the 2018 book Algorithms of Oppression by UCLA professor Safiya Noble. Internet search engines (Google’s in particular), their results, and the artificially intelligent algorithms which produce them are the subjects of Noble’s book, which was inspired by Noble’s discovery that the search phrase “black girls”—an entirely likely and innocent input for someone such as a young Black girl looking for inspiration for a new hairstyle—returned primarily racist and pornographic content.
The internet is, collectively, both the largest and the most accessible repository of information in human history, but such a vast quantity of information is useless without some means of retrieving and prioritizing what is immediately relevant. That’s where search engines come into play, and given their role as both gatekeepers and evaluators of the internet’s wealth of information, they have enormous power in terms of shaping the general knowledge and culture of society. According to Noble’s research, people in general believe that the results they get from search engines are objective and natural: that what Google says is the most important and relevant information regarding a given search term or phrase, by virtue of its appearing on the first page of results, is indeed the most important and relevant information. As internet researcher Alex Halavais describes them, search results are an “object of faith” (quoted in Noble, 2018, ch. 1, para. 28). In fact, internet search results are neither natural nor objective and do not represent what information is most relevant to the searcher but rather what is most valuable—in terms of both financial and cultural value—to the developers of the search engines, which are predominantly (as with Google) advertising companies with vested financial interests in their search results.
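The distinction between relevance and value is easy to make concrete. Here is a toy sketch, with invented documents and invented scores, showing that the same collection yields a different "first page" depending on which objective the ranking function optimizes; nothing here reflects Google's actual ranking system, which is proprietary and vastly more complex.

```python
# Invented documents and scores, purely for illustration.
documents = [
    {"title": "Community hair and style guide", "relevance": 0.9, "ad_value": 0.1},
    {"title": "Stock photo marketplace",        "relevance": 0.4, "ad_value": 0.8},
    {"title": "Monetized clickbait aggregator", "relevance": 0.2, "ad_value": 0.9},
]

def rank(docs, objective):
    """Return documents sorted by the given scoring key, best first."""
    return sorted(docs, key=lambda d: d[objective], reverse=True)

print("By relevance:", [d["title"] for d in rank(documents, "relevance")])
print("By ad value: ", [d["title"] for d in rank(documents, "ad_value")])
```

A searcher who treats the second ranking as an objective measure of relevance has no way of seeing the objective function behind it.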
In December of 2020, Timnit Gebru, a computer scientist at Google, co-authored a paper for the Association for Computing Machinery Conference on Fairness, Accountability, and Transparency critiquing the methods that AI developers use to train their natural language processing algorithms. Natural language processing is a core area of artificial intelligence with numerous applications, one of which is allowing users to interact with software using natural, everyday language, a capability obviously critical to internet search. Natural language processing applications must be trained on large datasets of the language in question (called corpora, or corpus in the singular) so that they can learn how natural language works; the data representing the program’s “understanding” of the language (whether or not this is understanding proper is a philosophical matter that I won’t be taking up here) is referred to as a language model. The paper, entitled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, observed that certain language models were being generated from resources such as Reddit and Wikipedia without controlling for the biases inherent to those platforms, and that other language models used controls that exacerbated the biases in the corpora rather than mitigating them (Bender et al., 2021). Gebru claims that Google demanded she withdraw the paper from consideration at the conference or that she and the other Google employees involved in its authorship remove their names from it, and that when she pushed back against these demands, she was fired. Google disputes this, stating that she had resigned and that it had contested her paper on the basis of its not including recent research that would have contradicted her claims. In response, about 2,700 Google employees, along with about 4,300 others in the academic community, signed a letter condemning Google’s treatment of Gebru, and Google CEO Sundar Pichai issued a letter of apology without formally acknowledging any wrongdoing (Metz, 2021; “Timnit Gebru,” 2021).
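The mechanism the paper describes can be illustrated at toy scale. The following sketch builds the crudest possible language model, a trigram counter, from a tiny corpus I invented to be skewed; production models are vastly more sophisticated, but the underlying dependence on the training text is the same.

```python
from collections import Counter, defaultdict

# A deliberately tiny, deliberately skewed corpus, invented for
# illustration: the model can only learn what the text contains.
corpus = (
    "the doctor said he would help . "
    "the doctor said he was busy . "
    "the doctor said she would help . "
    "the nurse said she would help . "
    "the nurse said she was busy . "
    "the nurse said she was kind ."
).split()

# Count trigram transitions: (word, word) -> next word.
trigrams = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    trigrams[(w1, w2)][w3] += 1

def next_word_probs(context):
    """Probability distribution over the word following a two-word context."""
    counts = trigrams[context]
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

print(next_word_probs(("doctor", "said")))  # roughly {'he': 0.67, 'she': 0.33}
print(next_word_probs(("nurse", "said")))   # {'she': 1.0}
```

Scale the corpus up to a scrape of Reddit and the model up to billions of parameters, and the arithmetic becomes far less legible, but the model still has nothing to learn from except the text it was given.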
The general problem of biased search results highlights one of the key problems I see in artificial intelligence, which I’ll refer to as the presumption of objectivity. We tend to think of computers as neutral and objective agents, free of human biases and prejudices—how could a computer be racist or sexist? we think to ourselves—when in fact they often manifest the biases and prejudices of their designers and of society in general in subtle or invisible ways. An example is the software that I use for mixing the music I produce, Neutron, developed by the company iZotope. Neutron is a channel strip, a bundle of common effects like EQ and compression which are typically applied to each track of a music mix in progress. It includes a feature called Track Assistant, which analyzes the content of the track to which it is assigned and uses artificially intelligent machine learning algorithms (which iZotope collectively refers to as assistive audio technology) to recommend settings for each of the component effects. Although I typically make a few tweaks after applying Track Assistant to suit my preferences (as iZotope recommends), I find that its recommendations almost always serve as an excellent starting point, and with them I consistently achieve better mixes than I think I could manage on my own.
iZotope’s webpage on their assistive audio technology describes it using very neutral language; the general message is that the software does exactly what is needed for that particular piece of music (Nercessian, 2018). But what kind of processing a given piece of music needs is a matter of subjective human values, and iZotope does not explain how it is that they’ve implemented these values in their software; even the page “Behind the Technology of Mix Assistant” (Mix Assistant being another assistive audio technology feature included in Neutron) limits the details to the phrase “machine learning technology.” That page also features some curious paragraphs at the end, under the heading “Should we be worried about the implications of technologies like Mix Assistant?” and I’ll read those to you in full:
While we’re excited about Mix Assistant, we can understand how a feature such as this one can be met with some skepticism. In light of the impending AI takeover around us, it is only natural that these technologies be polarizing and received with some hesitation, and perhaps even anger. Ironically, we have always surrounded ourselves with audio processors that exhibit some form of intelligence, but because their level of intelligence was relatively low, their utility was so obvious, and their adoption has become so commonplace, nobody thinks twice about whether or not to use them. For example, compressors are ubiquitous in audio production. One could argue that a compressor is also a form of AI, which intelligently lowers the gain of an audio signal based on its momentary signal strength. As such, we can give our tracks consistency or body without having to spend hours manually writing gain automation.
We should consider it to be a healthy practice to question the line where utility stops and creativity begins, and be open to adopting new workflows that capitalize on our findings. After we developed Mix Assistant, I think many of us were in shock that a lot of what we thought to be craft could be distilled into objective theory. This is actually a good thing because it enlightens us on where to focus our creative energy and time so that we forge our musical endeavors into new, untapped territories.
Nercessian, 2019
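For readers unfamiliar with compressors, the "intelligence" the quoted passage describes amounts to a simple rule applied to the signal's momentary level. Here is a minimal sketch of the static gain computation only, not iZotope's implementation; real compressors add attack and release smoothing, and the threshold and ratio values below are arbitrary.

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0):
    """Static gain stage of a feed-forward compressor: attenuate any
    sample whose level exceeds the threshold, reducing the overshoot
    above the threshold by the given ratio."""
    out = []
    for s in samples:
        level_db = 20 * math.log10(max(abs(s), 1e-9))  # sample level in dBFS
        if level_db > threshold_db:
            # A 20 dB overshoot at ratio 4 becomes a 5 dB overshoot.
            gain_db = (threshold_db - level_db) * (1 - 1 / ratio)
            s *= 10 ** (gain_db / 20)
        out.append(s)
    return out

# The quiet sample passes untouched; the loud ones are pulled down.
print(compress([0.01, 0.5, 1.0]))
```

The point of the quoted argument is that tools like this already embody decisions made for us; assistive features differ in degree, not in kind.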
I’m not trying to indict iZotope or their software here—again, I find their assistive audio technology very helpful when used in combination with my own musical judgement, and I’ve had nothing but great experiences with the company and their products in general. But while this example may seem innocuous, it is at once a manifestation and a perpetuation of the view that artificially intelligent technology is objective, and that view can manifest in other ways that are not nearly so innocuous. A 2016 article in the investigative journalism publication ProPublica detailed how the software COMPAS, used by courts to assess the risk of recidivism among convicted criminals, led to the overincarceration of Black defendants (Angwin et al., 2016).
We realize the problem is even more pernicious than the foregoing would indicate when we understand that the presumption of objectivity is not a new problem but rather a very old one that continually manifests in new ways, with each new manifestation building on the problems of the last. The 2020 book Digitize and Punish: Racial Criminalization in the Digital Age (the title being a clear play on Foucault’s Discipline and Punish) by University of Illinois professor Brian Jefferson documents how statistical methods were first employed in early 20th-century America to make criminology, policing, and justice more “scientific and objective.” In fact, such methods only served to institutionalize existing racial prejudices, and once those prejudices became embedded in the data, they became less visible due to the presumption of objectivity. As Jefferson documents, an 1896 study by amateur statistician Frederick L. Hoffman was widely adopted by various disciplines as an objective demonstration that the so-called “colored” peoples of the United States were more prone to criminal behavior, despite the study’s very poor methods and controls and its somewhat arbitrary construction of the very categories of race and crime it purported to measure (Jefferson, 2020, pp. 22-24). Studies such as these generate presumptions which may become deeply embedded in the public consciousness, and the parameters of future studies are then set in the context of those presumptions. Any study, no matter how rigorous, must necessarily be predicated upon assumptions and decisions—possibly very arbitrary ones—regarding the categories it studies. A rigorous, well-controlled study may show very conclusively that a certain demographic is more prone to committing crimes, but however accurate the bare numbers may be, they are predicated on the ways that the demographic in question and the concept of crime were defined for the purposes of the study. As Marcuse states in One-Dimensional Man—and this is quoted as well in Digitize and Punish—“…[T]he technological veil conceals the reproduction of inequality and enslavement” (Marcuse, 1991, p. 32).
The general picture we have of the progression of data, computation, police power, and the prison state is as follows. In the early 20th century, new statistical sciences emerge and are applied in ways that reinforce existing racial prejudices. These pseudoscientific statistical studies indicate that nonwhites are more prone to crime because categories such as “crime” and “nonwhite” have been defined along the lines of those preexisting prejudices. With this data as justification, those demographics are monitored and documented more extensively, a process which uncovers yet more crime—remember, the average person commits about three felonies a day, so there is ample opportunity for arrests even among the ostensibly law-abiding—and this exacerbates the problem. Racist bureaucrats interpret the data as implying an existential threat to white people and create new laws and expand old ones to target nonwhite demographics, as President Richard Nixon did during his administration (LoBianco, 2016). Crime among those demographics skyrockets precisely because “crime” has been defined, in part, as the things that those demographics are already known to do. The massive increase in arrests that results puts a heavy workload on the justice system, which, in response, seeks new tools to more efficiently analyze, prosecute, and punish criminal behavior. This demand is answered by technology firms, which have a vested financial interest in their products uncovering more crime and prosecuting and convicting more criminals, since doing so justifies the firms’ existence and demonstrates the effectiveness of, and need for, their products. This is where artificial intelligence enters the picture, with its concomitant presumption of objectivity. We do not see within this narrative any desire or need to explain or even to reduce crime—indeed, the system I’ve described seems geared towards both the proliferation of crime and the obviation of explanations for it—and thus we have no reason to believe that the artificial intelligence technology implemented by institutions of justice will be used for purposes of understanding or eliminating crime itself.
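The feedback loop in this narrative can also be made concrete with a toy simulation. All of its numbers are invented: two neighborhoods with identical underlying offense rates, patrols allocated in proportion to past recorded arrests, and offenses that can only be recorded where patrols are present.

```python
import random

random.seed(0)

# Invented numbers throughout; this is a sketch of the feedback
# structure, not a model of any real police department.
TRUE_RATE = 0.1        # same underlying offense rate in both places
PATROLS_PER_DAY = 100
arrests = {"A": 55, "B": 45}   # a small initial imbalance in the records

for day in range(30):
    total = arrests["A"] + arrests["B"]
    for hood in ("A", "B"):
        # Patrols go where the records say the crime is.
        patrols = round(PATROLS_PER_DAY * arrests[hood] / total)
        # Each patrol observes and records an offense with equal probability.
        arrests[hood] += sum(random.random() < TRUE_RATE for _ in range(patrols))

# The initial imbalance persists and widens in absolute terms, so the
# records keep "confirming" that A is the higher-crime neighborhood.
print(arrests)
```

The records never converge on the truth that the two neighborhoods behave identically; they preserve and entrench whatever imbalance they started with, and each day's data appears to justify the next day's allocation.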
This brings me to one of my overarching concerns regarding artificial intelligence. AI promises us solutions to problems that humans are not intelligent enough to solve on their own, but society’s largest and most pressing concerns—warfare, poverty, inequality, the degradation of the environment, government totalitarianism, and others—arise not because we’re stupid, but because we’re assholes. As we’ve seen, artificial intelligence is not a neutral and objective tool that will solve our problems for us; the problems I’ve mentioned are complex and intractable, and artificial intelligence certainly has the potential to help us find our way through them, but only if designed and implemented by us in an intelligent way. These tools will do us little good and may indeed cause significant harm if we’re not willing to put our human intelligence to work first, which, looking over the past year and the coronavirus pandemic, seems to be something we’re consistently unwilling to do.
In television advertisements played during the most recent Super Bowl, an online automotive dealer promises to liberate us from the ostensible horror of purchasing a car at a dealership. Just as Marcuse described, the system simultaneously creates and fulfills our needs: Western capitalism creates both the need for cars and the system by which they are manufactured and purchased, and when those structures create further needs (such as the need to avoid the bureaucracy of car dealerships), capitalism fulfills those as well. In other Super Bowl advertisements, a financial software company promises that its tax software will find obscure loopholes in the tax code to save us money; an online retailer portrays their AI-powered virtual assistant as an object of sexual desire; a computer peripherals manufacturer ties the use of their products to innovative creativity in defiance of “genres, algorithms, and entire industries” (all examples documented in Walsh, 2021). Western culture is saturated by pervasive messages that technology is both the object of our desires and the solution to the problems that arise not from the natural conditions of human life but from our institutions and our other technologies. Advertisements and popular media portray such things as the capitalist labor market as objective, natural realities whose problems are best answered not through critical examination of their structural foundations but rather with new technologies and institutions which arise from the same structures whose problems they are ostensibly designed to solve.
One might imagine an architect who has designed a building with an unstable foundation. “Well, if the instability is a problem,” the architect says, “then we should add more floors so that we can get further away from it.”
I don’t intend to come across as a Luddite; technology, including artificial intelligence, and the institutions for which it serves as infrastructure have in many ways made my life easier, more comfortable, more productive, and more interesting than the life I would have led 50,000 years ago as a hunter-gatherer. It is, at the same time, often alienating and oppressive, sometimes profoundly so, and is often implemented in the service of institutions which are similarly alienating and oppressive. I don’t advocate for humanity to abandon its technology; such would not be possible in any case. Rather, my recommendation is for an understanding of our technology which acknowledges its arising from and manifesting our human nature—often its most problematic aspects—and the general conditions of society in which it is designed, built, and implemented.
Many times in the past, I have come to believe that some product or other is necessary to accomplish my goals as a musician. I have not always been mistaken in this thinking, but what has always been needed most for my purposes are the musical knowledge and skills that can only come from study and practice. I think that this is true of humanity in general as well: perhaps some new technological innovation will help to make our lives better; perhaps, at times, we even need some new technology to address a particularly intractable problem; but rarely does humanity need any such thing nearly so much as we need to work on ourselves and on the structural foundations of our societies.
Labor, in the sense of work performed to fulfill our basic biological needs, is an objective human necessity. Technology has vastly improved the efficiency and effectiveness with which this work can be performed, and yet we spend an ever-increasing amount of our time doing work that is more alienating and meaningless, less tied to our biological needs, and less lucrative for those performing it. AI has the potential to eliminate the need for humans to perform certain forms of labor, and yet we see this as a threat rather than a potential blessing. This ties into another of Marcuse’s ideas, the performance principle:
The performance principle, which is that of an acquisitive and antagonistic society in the process of constant expansion, presupposes a long development during which domination has been increasingly rationalized: control over social labor now reproduces society on an enlarged scale and under improving conditions. For a long way, the interests of domination and the interests of the whole coincide: the profitable utilization of the productive apparatus fulfills the needs and faculties of the individuals. For the vast majority of the population, the scope and mode of satisfaction are determined by their own labor; but their labor is work for an apparatus which they do not control, which operates as an independent power to which individuals must submit if they want to live. And it becomes the more alien the more specialized the division of labor becomes. Men do not live their own lives but perform pre-established functions. While they work, they do not fulfill their own needs and faculties but work in alienation.
Marcuse, 1998, ch. 2, para. 35
Our dream for artificial intelligence is that of liberation. Remember that quote from the literature on the artificially intelligent mixing software that I use: “[I]t enlightens us on where to focus our creative energy and time so that we forge our musical endeavors into new, untapped territories.” We want AI to take care of the lower-level intellectual drudgery so that we have more time and energy—so that we are more free—for the higher-level thought. But, as Marcuse describes, we have defined the concept of freedom exclusively within the constraints of the existing capitalist infrastructure, whose own purpose is to exploit and enslave us. Artificial intelligence built in order to liberate us in this sense will only further confine us rather than providing us with the kind of freedom about which Marcuse writes:
…freedom from the economy—from being controlled by economic forces and relationships; freedom from the daily struggle for existence, from earning a living… liberation of the individuals from politics over which they have no effective control… intellectual freedom [as] the restoration of individual thought now absorbed by mass communication and indoctrination…. The most effective and enduring form of warfare against liberation is the implanting of material and intellectual needs that perpetuate obsolete forms of the struggle for existence.
1991, p. 4
The next question to address on the subject of artificial intelligence is what it means for our humanity; in other words, how it informs, troubles, and redefines what it means to be human. What does it mean for musicians if AI is able to write music at the level of the greatest composers who have ever lived? What does it mean for art if AI is able to create original paintings that are indistinguishable from the works of the great masters? What does it mean for human religion if AI is able to create a fully real afterlife? What does AI mean for us Satanists? This matter will be the subject of the final essay in my series on artificial intelligence.
I hope you’ve found this piece interesting and informative. If you’ve enjoyed it, I encourage you to look at some of my other essays, and if you find my approach to philosophy and religion at all valuable, I hope that you’ll stop in at my Patreon page, which features bonus content for patrons, and that you’ll stop back by to check on my new content.
Works Cited or Referenced
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Arendt, H., Allen, D. S., & Canovan, M. (2018). The human condition (Second edition). The University of Chicago Press.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
Bilsborough, T. (2020, December 7). Dogma and Hegemony. A Satanist Reads the Bible. https://asatanistreadsthebible.com/dogma-and-hegemony/
Brayne, S. (2021). Predict and surveil: Data, discretion, and the future of policing. Oxford University Press.
Brockman, J. (Ed.). (2019). Possible Minds: 25 Ways of Looking at AI. Penguin Press.
Farr, A. (2020). Herbert Marcuse. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2020). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2020/entries/marcuse/
Foucault, M. (1995). Discipline and punish: The birth of the prison (2nd Vintage Books ed). Vintage Books.
Jefferson, B. J. (2020). Digitize and punish: Racial criminalization in the digital age. University of Minnesota Press.
Marcuse, H. (1991). One-dimensional man: Studies in the ideology of advanced industrial society. Beacon Press.
Metz, R. (2021, March 11). How one employee’s exit shook Google and the AI industry. CNN. https://www.cnn.com/2021/03/11/tech/google-ai-ethics-future/index.html
Nercessian, S. (2018, July 19). iZotope and Assistive Audio Technology. iZotope. https://www.izotope.com/en/learn/izotope-and-assistive-audio-technology.html
Nercessian, S. (2019, June 6). Behind the Technology of Mix Assistant. iZotope. https://www.izotope.com/en/learn/behind-the-technology-of-mix-assistant.html
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Silverglate, H. (2009). Three Felonies A Day: How the Feds Target the Innocent. Encounter Books.
Spielberg, S. (2002, June 20). Minority Report [Film]. Twentieth Century Fox, Dreamworks Pictures, Cruise/Wagner Productions.
The Minority Report. (2021). In Wikipedia. https://en.wikipedia.org/w/index.php?title=The_Minority_Report&oldid=1007338611
Timnit Gebru. (2021). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Timnit_Gebru&oldid=1010438093
Walsh, C. (2021, February 8). All the 2021 Super Bowl Commercials. Vulture. https://www.vulture.com/article/2021-super-bowl-commercials.html