Anyone who would follow a Satanic path authentically would do well to equip themselves with the tools of game theory. Game theory is a powerful mathematical abstraction of strategic thinking; without it, one can only hope to accomplish one's objectives by chance. And if one lacks a deliberate path toward one's objectives, one is not living Satanically.
To begin with, consider your interests: the things in the world that can be used to fulfill your objectives. If we assume that, at a minimum, you want to stay alive, then things like food and shelter are among your interests. Then consider whom you're competing with over every interest that you hold: the people with whom you're competing for status, money, food, attention, space. I could add promotions, recognition, anything at all that one might value. The fact that you are competing with someone does not have to make your relationship with that person antagonistic. It's a simple acknowledgement that you're both participants in a game of some sort. In many cases, perhaps most, all parties can accomplish their highest-valued objectives, and in those instances cooperation is in your best interest. But some games are zero-sum, and you will have to win by causing someone else to lose, or at least benefit at someone else's expense. It is critical to recognize which situation is which and to act accordingly, and to do that you have to understand whom you're playing the game with and what their objectives are.
To know what the players' objectives are, you need to know the payoffs. Of the players, who benefits most from which situations? How much do they benefit? Often we only need to know that one player prefers one outcome to another, not by how much, provided that we've taken all of the other players' interests into account.
Then you need to know structure. What is the game you’re playing? What are the means by which the players accomplish their objectives and get the best possible payoff, given everyone else’s decisions? Is it, for example, sequential or simultaneous? In other words, do people take their “turns” in sequence with the information about each decision available to everyone, or do they take them effectively simultaneously, without knowledge of what decisions everyone else is making?
And once you have the players and the payoffs and the structure, you can think about strategy: what will everyone actually do, and how do those decisions affect each other?
To look forward, you must reason backward.
Lay out the options available in a format appropriate to the game being played: decision trees suit sequential games, and payoff tables suit simultaneous games. Then look at the payoffs and reason about what your opponents will choose given those payoffs. Finally, decide how you will act based on your payoffs, in the context of how you believe everyone else will act based on their own payoffs, and based on their assessment of your predictions, and so on and so forth, ad infinitum.
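The "reason backward" step can be sketched in code. Below is a minimal backward-induction routine over a hypothetical two-player sequential game (the game, its labels, and its payoff numbers are my own illustration, not from the essay): start at the leaves of the decision tree, and at each decision point assume the player to move picks whichever branch maximizes their own payoff.

```python
def backward_induct(node):
    """Return (payoffs, plan), where plan maps each decision point
    to the move the player acting there will choose."""
    if "payoffs" in node:                      # leaf: nothing left to decide
        return node["payoffs"], {}
    player = node["player"]
    best_move, best_payoffs, plan = None, None, {}
    for move, child in node["moves"].items():
        payoffs, sub_plan = backward_induct(child)
        plan.update(sub_plan)
        # the player to move picks the branch maximizing *their own* payoff
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_move, best_payoffs = move, payoffs
    plan[node["label"]] = best_move
    return best_payoffs, plan

# Hypothetical market-entry game: player 0 enters or stays out; if they
# enter, player 1 chooses to fight (a price war) or accommodate.
game = {
    "label": "entry", "player": 0, "moves": {
        "stay out": {"payoffs": (0, 4)},
        "enter": {
            "label": "response", "player": 1, "moves": {
                "fight":       {"payoffs": (-1, -1)},
                "accommodate": {"payoffs": (2, 2)},
            },
        },
    },
}

payoffs, plan = backward_induct(game)
print(plan)     # player 1 would accommodate, so player 0 enters
print(payoffs)  # (2, 2)
```

Note the order of reasoning: player 0 can only decide whether to enter after first working out what player 1 would do in response, which is exactly "to look forward, you must reason backward."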
In nonzero-sum games, games with potential win-win payoffs, communication often yields better payoffs for all parties because it gives each side more information about the others' strategies. What to do in the absence of communication? Find Schelling points: points of intuitively obvious coordination. Say you were dropped off at a random location in New York City with the objective of meeting another team in similar circumstances by the end of the day. This is a metropolitan area of almost 20 million people, but there may be a way to play this game such that the chance of succeeding is significant (meaning, enough greater than random happenstance to be worth implementing). Take a second to think about it before reading on… but you may already have an idea of what to do: post a sign for the other team at the Empire State Building (which has served exactly this role in several movies) at noon. There is no more obvious time of day, and only one more obvious location: Times Square. In an experiment conducted by game theorist Barry J. Nalebuff, six teams given this objective all succeeded: three teams met at the Empire State Building at noon, and three met at Times Square at noon.
Game theory has significant implications for philosophy. For example, I think the main problem with objectivism is not that it isn't viable but that people, Ayn Rand included, haven't taken it far enough. By all means, pursue rational egoism. If you take it to its end while keeping strategic thinking in mind, I think we'll find ourselves on the same page: a strategy of largely cooperative behavior in a socially democratic state that in turn cooperates with other states. Those strategies give us the best payoffs in most games; that they also give other players their best payoffs is secondary, even advantageous, because that fact will induce them to continue cooperating. Here we might distinguish pure versus strategic rational egoism. In pure rational egoism, one simply does what is best for oneself, but (as with the prisoners' dilemma game, which I will explicate in the next paragraph) this leads, as a mathematical certainty, to suboptimal outcomes for everyone. In strategic rational egoism, one does what is best for oneself given that everyone else is also doing the same.
Prisoners' dilemmas are especially difficult to sort out. The classic example: two people have been apprehended on suspicion of having committed a crime. If neither rats on the other, they both get one year in prison. If they rat on each other, they both receive two years. If only one rats, the rat goes free and the other gets three years. The best outcome for the pair is mutual silence; the best outcome for each individual is to rat on the other. But since this is true of both players, it is better for them to cooperate: if they both rat, they both get two years, and the one year they would get from staying silent beats that. (In theory, the zero years one would get from ratting while the other stays silent is better still, but the symmetry of the problem makes it unattainable: whatever reasoning leads one player to rat leads the other to rat too.) But of course they can't communicate and decide on a strategy: it's a simultaneous game, and each player must make their choice without information about the choice the other player is making. If both knew game theory, they would both cooperate and get the best outcome. If either can reasonably believe that the other does not know game theory, their best bet is to rat, even knowing that doing so will result in a suboptimal outcome. Knowing game theory and using it to cooperate when possible results in the best outcomes for all parties.
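The dilemma becomes stark when written out as a payoff table. The sketch below uses the sentence lengths from the paragraph above (negated, so that higher is better); the move names and the `best_response` helper are my own illustration:

```python
payoffs = {
    # (my move, their move): (my payoff, their payoff), in -years of prison
    ("silent", "silent"): (-1, -1),   # mutual silence: one year each
    ("silent", "rat"):    (-3,  0),   # I'm ratted on: three years vs. freedom
    ("rat",    "silent"): ( 0, -3),
    ("rat",    "rat"):    (-2, -2),   # mutual ratting: two years each
}

def best_response(their_move):
    """What should I do, given the other player's (fixed) move?"""
    return max(["silent", "rat"],
               key=lambda mine: payoffs[(mine, their_move)][0])

# Ratting is a dominant strategy: it's the best response to either move...
print(best_response("silent"), best_response("rat"))   # rat rat
# ...yet mutual ratting (-2, -2) is worse for both than mutual silence (-1, -1).
```

This is the mathematical certainty the essay refers to: each player's individually best move, computed in isolation, steers both players into the jointly worse outcome.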
With this in mind, when you play zero-sum games, your objective is to win. Do so decisively, using every advantage available to you. And even in nonzero-sum games, one must seriously consider retaliation. Retaliation matters especially when one is dealing with a particularly intractable problem known as the tragedy of the commons, made famous by biologist Garrett Hardin in relation to the common use of commonly owned land in 15th- and 16th-century England. Some resource is limited but publicly available. The more the resource is exploited, the less of it remains, the more work everyone must do to get what they need, and the worse the problem becomes. If everyone limited themselves, they'd get what they need with less work, and the resource itself would be better preserved. The tragedy of the commons is the prisoners' dilemma writ large: a collective action problem among many players instead of two. In The Art of Strategy (2010), Dixit and Nalebuff recommend the findings of political scientist Elinor Ostrom (Governing the Commons, 1990) in addressing this problem: there must be a clear identification of who the players are, clear rules defining permissible and forbidden actions, a system of penalties for violations, a system to detect cheating, and all of this information must be available to the players while the system is being designed. Without retaliation, there is no incentive for anyone to abide by the system, and everyone returns to suboptimal payoffs (and those still trying to abide by the system in those circumstances get the worst payoffs of all). Ideally, retaliation need never be used, but it must always be available.
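The point about retaliation can be made concrete with a repeated prisoners' dilemma. The sketch below is illustrative (the strategies and payoff values are my own, using the conventional cooperate/defect payoffs rather than the prison sentences above): tit-for-tat cooperates first and then mirrors the opponent's previous move, so retaliation is always available but only used when provoked.

```python
COOPERATE, DEFECT = "C", "D"
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate on the first move, then retaliate in kind
    return opponent_history[-1] if opponent_history else COOPERATE

def always_defect(opponent_history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_b)   # each strategy sees the *other's* history
        b = strategy_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (30, 30): cooperation holds
print(play(tit_for_tat, always_defect))   # (9, 14): exploitation pays once, then stops
```

Against a fellow cooperator, retaliation is never triggered and both players collect the best sustained payoffs; against a pure defector, tit-for-tat concedes only the first round. That is the sense in which retaliation need never be used but must always be available.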
This has been a very terse overview of game theory and its potential implications for Satanic thinking. It's a topic I intend to return to with some regularity; I have a good half a dozen related topics I'd like to explore but didn't get to this week for lack of time. For those interested in the practical applications of game theory, I think one can do no better than The Art of Strategy by Dixit and Nalebuff, mentioned above. Those interested in the mathematical origins of the field might look into Theory of Games and Economic Behavior (1944), by John von Neumann and Oskar Morgenstern.
Thanks much for reading. I hope you’ve found this piece interesting and informative. If you’ve enjoyed it, I encourage you to look at some of my other essays, and to sign up for my mailing list (form on the sidebar) so you can stay current on my latest work. And if you find my approach to philosophy and religion at all valuable, I hope that you’ll stop in at my Patreon page, which features bonus content for patrons, and that you’ll stop back by to check on my new content. I’ll be publishing new work every Friday evening. I also have a reading list, which contains links to the books I used to research this and all of my other stories. Clicking through and buying books is a great, easy way to support my work.