Social animals at work
You start with a preference ordering, based on the expected ‘utility’ each decision brings you. To be rational is to act on that preference ordering and to maximize utility. If your choices violate transitivity (the strong axiom of revealed preference, SARP), you become vulnerable to a money pump: with cyclic preferences (A over B, B over C, but C over A), and a one-euro fee per trade, you would keep trading in a circle and pay indefinitely. Likewise, violating the weak axiom of revealed preference (WARP) marks choices as irrational.
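The money pump can be sketched in a few lines of Python; the goods A, B, C and the one-euro fee are illustrative:

```python
# Money pump: an agent with intransitive (cyclic) preferences can be
# exploited indefinitely. Goods and the 1-euro fee are illustrative.

# Cyclic preferences: prefers B over A, C over B, and A over C.
prefers = {("B", "A"), ("C", "B"), ("A", "C")}  # (preferred, over)

def will_trade(held, offered):
    """The agent trades (paying 1 euro) whenever it prefers the offer."""
    return (offered, held) in prefers

held, money_paid = "A", 0
cycle = ["B", "C", "A"]
for offered in cycle * 3:          # three trips around the cycle
    if will_trade(held, offered):
        held = offered
        money_paid += 1

print(held, money_paid)  # back at A, but 9 euros poorer
```

Each trip around the cycle returns the agent to its starting good while draining money, which is why cyclic preferences are considered irrational.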
Bentham: utility = happiness, satisfaction, pleasure. People are equal in the sense that they all seek happiness and avoid things that hurt.
Pascal: the rationality of gambling via expected value: sum over the outcomes of (chance of outcome × gain of outcome).
Bernoulli: you should base your decision on how much utility you get. This depends on the person (a pauper values the same euro more than a rich man does). You also have diminishing marginal utility: if you have nothing, 1 euro is a lot, but each additional euro means less.
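Bernoulli's own proposal was logarithmic utility; a minimal sketch of diminishing marginal utility under that assumption (the wealth levels are illustrative):

```python
import math

# Bernoulli-style logarithmic utility: one classic concave choice,
# used here only to illustrate diminishing marginal utility.
def utility(wealth):
    return math.log(wealth)

# An extra euro adds far more utility to a pauper than to a rich man.
gain_pauper = utility(11) - utility(10)      # from 10 to 11 euros
gain_rich = utility(1001) - utility(1000)    # from 1000 to 1001 euros
print(gain_pauper > gain_rich)  # True
```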
Expected utility theory: EU = Σ p(x) × U(x). With a concave utility function, the utility of a sure amount exceeds the expected utility of a gamble with the same expected value, which makes people risk-averse.
However, people can be risk-averse (which can make you lose out on favorable gambles), risk-neutral, or risk-seeking (which can make you lose a lot).
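The link between concavity and risk aversion can be made concrete with a square-root utility function (an illustrative choice, like the amounts below):

```python
import math

# Concave utility implies risk aversion: a sure 50 euros beats a
# 50/50 gamble over 0 and 100 euros, even though both have the
# same expected monetary value of 50.
def u(x):
    return math.sqrt(x)  # concave utility (illustrative)

eu_gamble = 0.5 * u(100) + 0.5 * u(0)   # expected utility = 5.0
u_sure = u(50)                          # utility of the sure amount
print(eu_gamble < u_sure)  # True: the risk-averse agent takes the sure 50
```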
In moral decisions, you get the trolley dilemma: who should die? One variant: push the fat person to save three people, but people are often reluctant to do so. Kant thinks humans should never be merely a means to an end, so he is against utilitarianism. Should you sacrifice some people to save many others? A real-world example is 9/11: should passengers attack the hijackers, killing themselves, to prevent the planes from hitting the Twin Towers?
Rawls’ veil of ignorance: what if you were ignorant of your own race, religion, etc.? Then everyone would want an equal society. But fear and risk aversion make for discrimination etc.
The axiom of independence says that if you prefer one option over another, you should keep preferring it when both options are changed in the same way. For instance, if you prefer apples over oranges, this should still be the case when you can choose between apples+grapes and oranges+grapes. This sounds logical, but the Allais paradox shows that it often fails when risk is involved, partly due to framing effects (Kahneman & Tversky).
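One common presentation of the Allais choices (amounts in millions) shows why the popular preference pattern, the sure thing in the first pair but the long shot in the second, is inconsistent with any expected utility function. The specific utility values below are arbitrary; only the probabilities matter:

```python
# Allais paradox sketch: both choice pairs differ by the SAME expected
# utility term, so any consistent agent must choose the same way twice.
def eu(lottery, U):
    return sum(p * U[x] for x, p in lottery)

U = {0: 0.0, 1: 1.0, 5: 2.0}  # one arbitrary increasing utility function

g1a = [(1, 1.00)]                          # 1M for sure
g1b = [(1, 0.89), (5, 0.10), (0, 0.01)]    # mostly 1M, small shot at 5M
g2a = [(1, 0.11), (0, 0.89)]               # 11% chance of 1M
g2b = [(5, 0.10), (0, 0.90)]               # 10% chance of 5M

# Both differences equal 0.11*U(1) - 0.10*U(5), whatever U is.
d1 = eu(g1a, U) - eu(g1b, U)
d2 = eu(g2a, U) - eu(g2b, U)
print(abs(d1 - d2) < 1e-9)  # True
```

Since the differences are identical for any utility function, preferring g1a but also g2b violates the independence axiom, yet that is exactly what most people choose.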
Situation A (loss frame):
- Sure option: 400 people will die
- Risky option: a 33% chance that no one dies, a 66% chance that all 600 die
Situation B (gain frame):
- Sure option: 200 people will be saved
- Risky option: a 33% chance of saving all 600 people, a 66% chance of saving 0
Expected utility theory says that when the outcome is higher, the utility is higher, so both frames should be treated the same. Prospect theory proposes that people are risk-seeking in the loss domain (preferring the gamble in Situation A) but risk-averse in the gain domain (preferring the sure option in Situation B), and that losses loom larger than gains.
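The prospect theory value function can be sketched as follows; the exponent 0.88 and loss-aversion coefficient 2.25 are the commonly cited Tversky & Kahneman estimates, used here for illustration:

```python
# Prospect theory value function: concave for gains, convex and
# steeper for losses, evaluated relative to a reference point of 0.
def value(x, alpha=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha             # diminishing sensitivity to gains
    return -lam * (-x) ** alpha       # losses loom larger than gains

# Losing 100 hurts more than winning 100 feels good (loss aversion).
print(abs(value(-100)) > value(100))  # True
```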
Utility: do we actually compute subjective value? Neuroscience was used to test whether activation patterns in the brain could predict preference, by comparing them to a preference order measured beforehand (Bartra, McGuire & Kable, 2013).
Near misses feel more like wins than full losses do (Clark et al., 2009), which is why participants were more willing to keep playing, even though a near miss says nothing about their chances of winning in a future round.
Game theory: when your decisions impact others and their decisions impact you. To analyze a game:
- identify the actors
- identify the actions of all actors
- define the outcomes for each action/actor pair, either as a list or in a basic table, with Player one’s options as rows and Player two’s options as columns (each cell holding the outcome for that pair of choices)
Ways to be rational in this sense:
- Calculate the Nash equilibrium: a set of strategies where no player can change their own strategy and be better off (they will probably be worse off), or equivalently a state in which all players choose the best response to the strategies of the others.
- Nash example: if players have to choose a number between 1 and 100 with the goal of choosing the number closest to 1/2 of the average, which number would you choose? Answer: 1, because each player expects the others to lower their numbers as well, and iterating this reasoning drives everyone down to 1. Anything above 50 can never win, since half the average is at most 50.
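The iterated reasoning behind this guessing game can be simulated; the 10 players and the naive starting guess of 50 are illustrative assumptions:

```python
# Guessing game ("beauty contest"): each round, every player
# best-responds to last round's average, which drives all guesses
# down toward the Nash equilibrium of 1.
guesses = [50.0] * 10          # 10 players, naive starting guess of 50
for _ in range(100):           # iterate best responses
    target = 0.5 * (sum(guesses) / len(guesses))
    guesses = [max(1.0, target) for _ in guesses]  # guesses stay >= 1

print(guesses[0])  # 1.0, the Nash equilibrium
```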
- Other examples of game theory: Hotelling’s game (the beach example, which is why two competitors, for instance supermarkets, are often located right next to each other) or the prisoner’s dilemma (where mutual cooperation A-A is the Pareto-optimal outcome and mutual defection B-B the Nash equilibrium).
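The prisoner’s dilemma can be sketched with illustrative payoffs (years in prison, so lower is better), checking which outcomes are Nash equilibria:

```python
# Prisoner's dilemma: A = cooperate (stay silent), B = defect.
# Cells hold (player one's sentence, player two's sentence);
# the specific numbers are illustrative.
payoff = {
    ("A", "A"): (1, 1),   # both stay silent: Pareto-optimal
    ("A", "B"): (10, 0),  # one defects, the other takes the fall
    ("B", "A"): (0, 10),
    ("B", "B"): (5, 5),   # both defect: Nash equilibrium
}

def is_nash(a1, a2):
    """No player can reduce their own sentence by deviating alone."""
    best1 = all(payoff[(a1, a2)][0] <= payoff[(d, a2)][0] for d in "AB")
    best2 = all(payoff[(a1, a2)][1] <= payoff[(a1, d)][1] for d in "AB")
    return best1 and best2

print(is_nash("A", "A"), is_nash("B", "B"))  # False True
```

A-A is better for both players than B-B, yet only B-B survives the deviation check: from A-A, each player can cut their own sentence by defecting.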