Political Analysis: Choice

Now we’re talking about choice—why do people choose to do things? Why do they take bribes?

If we suppose an individual is faced with a set of actions to choose from, and all of the actions are linked to clear outcomes, then there are two principles of rational choice.

Principle One: The individual has a consistent set of preferences for outcomes. There are two types of preference ordering: strict and weak. Strict ordering is like a total dominance hierarchy: no matter what, between two outcomes the individual will always have a preference. Weak ordering is like a partial dominance hierarchy, and an individual can have outcomes that are tied in preference. Unlike a partial dominance hierarchy, however, the ordering will never be ambiguous; choices will always be tied or ranked, never unknown (as they were in the black male/white female scenario).

Principle Two: The individual chooses an action to achieve the most preferred outcomes. Sometimes the choice is easy. Sometimes the link between action and outcome isn’t clear. Sometimes the outcome depends on chance, or someone else’s choice.

Well this seems pretty obvious, so what use is it? It’s useful because by understanding what a person’s preferences are, it’s possible to predict more complex decisions involving the interaction of multiple preferences.

Now rational choice theory isn't always true, because people don't always make rational choices. Either they don't have the ability to think of things in a cost-vs.-benefit way, or they make decisions based on passion (often sexual rivalry), not logic.

Still, if we have to start somewhere, rational choice theory is a good place. If you begin by assuming someone is rational, their actions may make more sense than if you begin by assuming they’re just an irrational idiot.

One thing rational choice theory helps us understand is bribery.

It’s obvious that to get someone to do something they don’t want to do, you have to compensate them—but the implications of that are more interesting, and less obvious.

These scenarios can be modeled with the following equation:

U(p,m)= -|p-p*| + m

Where U is the payoff, “p” is the action taken, “p*” is the actor’s preferred action, and “m” is the bribe. Note that it’s an absolute value times -1—so whether the action taken is lower or higher than the preferred action, it will always come out as a negative. The only way not to get a negative number is if the action taken is the preferred action. Anything different from that is a negative.
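If you want to play with this, here's a quick Python sketch of that payoff function. The function name and the example numbers are mine, just for illustration:

def payoff(p, p_star, m):
    # Linear payoff: distance from the preferred action hurts, the bribe helps.
    return -abs(p - p_star) + m

print(payoff(2, 2, 0))   # 0  -- the preferred action with no bribe
print(payoff(3, 2, 0))   # -1 -- one mile off the preferred spot, no bribe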

For example, let's say a city planner would prefer that the nuclear power plant be two miles away from the city. She doesn't want it any closer because she doesn't want to deal with citizens complaining about radiation, and doesn't want it any farther because she doesn't want to deal with workers complaining about a long commute. Her p* would equal 2, and if she received no bribe (-|2-2| + 0) her payoff would be 0.

Let's say a citizen wanted the plant to be three miles away. If our city planner took that action with no bribe (-|3-2| + 0), her payoff would be -1. Since -1 is worse than 0, there's no reason for the city planner to do that. In fact, with no bribe the best possible outcome the city planner can get is 0, so the bribe has to bring things back up to 0 for the action to be worth it. We can model that like this:

-|3-2| + m = 0
-1 + m = 0
m = 1

The citizen would have to give a bribe of 1. If the citizen wants it to be four miles away, they'll have to give a bribe of 2, and so on.
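Since the best no-bribe payoff is 0, the smallest acceptable bribe is just the distance from the preferred action. Here's that calculation as a little Python sketch (again, the names are mine):

def min_bribe(p, p_star):
    # Smallest m that brings -|p - p_star| + m back up to 0.
    return abs(p - p_star)

print(min_bribe(3, 2))   # 1 -- three miles out costs a bribe of 1
print(min_bribe(4, 2))   # 2 -- four miles out costs 2, and so on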

1 or 2 what? It doesn’t matter. We could set m to equal $x/$100 or $x/$10,000 depending on how much a mile’s difference is worth to the city planner. What’s interesting is how the equation changes if we model it like this:

U(p,m)= -(p-p*)² + m

Now if the citizen wants the plant to be three miles away (-(3-2)² + m = 0), we get the same bribe value of 1. But if they want it to be four miles away…

-(4-2)² + m = 0
-(2)² + m = 0
-4 + m = 0
m = 4

And you can see how the further the action is from the ideal action, the more expensive each mile is going to be.
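Same sketch as before, but with the squared version, so you can watch each extra mile get more expensive (the numbers are just for illustration):

def min_bribe_squared(p, p_star):
    # Smallest m that brings -(p - p_star)**2 + m back up to 0.
    return (p - p_star) ** 2

for miles in range(2, 7):
    print(miles, min_bribe_squared(miles, 2))
# prints 2 0, 3 1, 4 4, 5 9, 6 16 -- the marginal mile keeps getting pricier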

This model also shows that positive and negative reinforcement are basically the same. Negative reinforcement can be modeled as:

U(p,m)= -|p-p*| - m

So if I want someone to do something other than their ideal action, instead of paying them I can give them a negative bribe, i.e. threaten them with a penalty.

So why is negative reinforcement used so much?

Let's say I want you to bring me a bagel. I might use positive reinforcement and say, "bring me a bagel and I'll give you five bucks." Or, I might use negative reinforcement and say, "bring me a bagel or I'll steal five bucks from you." Well, my desired outcome is that I get the bagel. Unless I think my reinforcement is going to fail, that's the outcome I expect. Now if I use positive reinforcement, I have to give something up if I get the bagel. But if I use negative reinforcement, I don't have to do anything if I get the bagel. Even if I don't get the bagel, stealing five bucks isn't that bad.
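If you want to put rough numbers on the bagel story, here's a hedged sketch of my payoff under each scheme. The five bucks comes from the example; the idea that the bagel is worth some amount b to me is my own assumption:

def my_payoff_positive(b, got_bagel):
    # Positive reinforcement: I promised $5, so I pay up when the bagel arrives.
    return b - 5 if got_bagel else 0

def my_payoff_negative(b, got_bagel):
    # Negative reinforcement: I pay nothing when the bagel arrives,
    # and I pocket $5 when it doesn't.
    return b if got_bagel else 5

b = 3  # hypothetical value of a bagel to me
print(my_payoff_positive(b, True), my_payoff_negative(b, True))    # -2 vs 3
print(my_payoff_positive(b, False), my_payoff_negative(b, False))  # 0 vs 5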

Speaking of negative reinforcements, let’s talk about threats.

The key issue with a threat is, “is it credible?” A threat is credible if, once the condition of the threat is met (“if I don’t get my bagel” in the above scenario), it is still in the actor’s interest to carry out the threat.

To determine the credibility of a threat, we’d use an extensive form game—ooh, does someone smell game theory?
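We haven't gotten to the game theory yet, but the credibility test itself is simple enough to sketch. The payoffs below are completely hypothetical, just to show the comparison:

def threat_is_credible(payoff_carry_out, payoff_back_down):
    # Once the condition is met (no bagel), is it still in my interest
    # to follow through on the threat?
    return payoff_carry_out >= payoff_back_down

print(threat_is_credible(-1, 0))  # False -- following through costs me, so it's cheap talk
print(threat_is_credible(5, 0))   # True  -- stealing five bucks pays, so the threat stands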

“…according to Wikipedia, which is my bible…”
-Professor Dion
