I received a set of nontransitive dice in the post this week. Transitive is an interesting word. In grammar, as we all know, it refers to verbs which do things to something. What I didn’t learn at school was that a verb which acts on just one thing is called monotransitive, while one with both a direct and an indirect object is ditransitive. A verb like to trade is categorised as tritransitive. If a verb does not play with others it is called intransitive, e.g., appropriately for this story, to die. If a verb swings both ways it is called ambitransitive.
In the mathematical world transitive is a description of a relation on a set. For example, if A = B and B = C, then A = C. So = is transitive. Similarly, if A > B and B > C then A > C.
Or does it? Let’s return to the dice (singular die: cemented in my memory on the occasion a teacher responded to a boy coming into his class and asking to borrow “a dice” by shouting “die, die, die!” at the startled youngster). Mathematicians do not use the word intransitive, perhaps preferring to avoid the ambiguity of pairs like flammable and inflammable, but instead use nontransitive. Nontransitive dice have the property that if die A tends to beat die B on average, and die B tends to beat die C on average, then, rather counter-intuitively, die C tends to beat die A on average. How does this work?
There are many different arrangements of the numbers on the faces of the dice which would achieve this effect. My red die has 4 on all its faces except one, which has a 6. My blue die has half its faces with 2s and the other half with 6s. My green die has 5 on all its faces except one, which is unnumbered (or, in fact, undotted).
If we take the average number we expect to get when throwing each die (the expected value, a concept first introduced by Blaise Pascal of triangle fame; also known as the mean, it tends to be the first thing calculated in any statistical analysis), then red gives us 4⅓, blue gives us 4 and green 4⅙. So from that we would expect to see red beat blue, green beat blue and red beat green.
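Those expected values are easy to check by brute force. A minimal Python sketch, with the face values as described above and the blank green face counted as 0:

```python
from fractions import Fraction

# Face values of the three dice; the blank green face counts as 0.
red   = [4, 4, 4, 4, 4, 6]
blue  = [2, 2, 2, 6, 6, 6]
green = [5, 5, 5, 5, 5, 0]

for name, die in [("red", red), ("blue", blue), ("green", green)]:
    # Exact arithmetic avoids any floating-point rounding in the comparison.
    print(name, Fraction(sum(die), len(die)))
# → red 13/3 (4⅓), blue 4, green 25/6 (4⅙)
```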
When we pitch red against blue, if we throw a 2 with the blue die (probability ½), then we will always lose to red, since all of its faces are greater than 2. If we throw a 6 with blue, we have a 5/6 chance of beating red (since 5 of its 6 faces are 4s) and a 1/6 chance of drawing. So blue has a probability of ½ of losing, a probability of ½ × 5/6 = 5/12 of winning and a probability of ½ × 1/6 = 1/12 of drawing. So, in the long run, red beats blue on average, as we would expect it to.
When we pitch blue against green, blue will always win if we throw a 6 with it, with probability ½. If we throw a 2, also with probability ½, we have a 1/6 chance of winning against green (if green’s single blank face comes up); otherwise we will lose to a 5. So blue has a probability of losing of ½ × 5/6 = 5/12, and (since no draws are possible this time) a probability of winning of 1 − 5/12 = 7/12. So, in the long run, blue beats green, exactly the opposite of what we would expect just going on the expected values.
Finally, when we pitch red against green, the only time green will beat red is when red shows a 4 (probability 5/6) and green shows a 5 (also probability 5/6). So the probability of green beating red is 5/6 × 5/6 = 25/36, and the probability of red winning (since again no draws are possible, as the two dice have no numbers in common) is therefore 1 − 25/36 = 11/36. So, in the long run (when, as Keynes once helpfully pointed out, we are all dead), green beats red, again exactly the opposite of what we would expect just going on the expected values.
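All three head-to-head results above can be verified by enumerating the 36 face combinations for each pairing. A sketch, with a `beats` helper of my own devising (not from any library):

```python
from fractions import Fraction
from itertools import product

red   = [4, 4, 4, 4, 4, 6]
blue  = [2, 2, 2, 6, 6, 6]
green = [5, 5, 5, 5, 5, 0]  # blank face counted as 0

def beats(a, b):
    """Probability that one throw of die a strictly beats one throw of die b."""
    wins = sum(1 for x, y in product(a, b) if x > y)
    return Fraction(wins, len(a) * len(b))

print(beats(red, blue), beats(blue, red))     # → 1/2 5/12: red beats blue
print(beats(blue, green), beats(green, blue)) # → 7/12 5/12: blue beats green
print(beats(green, red), beats(red, green))   # → 25/36 11/36: green beats red
```

Enumerating every pair of faces counts repeated faces the right number of times, which is why listing each die as six entries is simpler than working with distinct values and weights.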
We only had to mess around a little with the six faces of each die to get this counter-intuitive result. Nearly all financial instruments and products are obviously much more complicated than this, with the probabilities of certain outcomes largely unknown, and even more so when they are combined with each other, so counter-intuitive results turn up almost too frequently to be called counter-intuitive any more. In fact the habit of treating financial markets as if they were games obeying rules as fixed and obvious as those you can play with dice is what Nassim Nicholas Taleb refers to as the Ludic Fallacy.
If we double the dice up, throwing two of each colour and adding the totals, we get another surprise. Red still has the highest expected value (8⅔), followed by green again (8⅓) and then blue (8). But this time each pair of dice has three possible totals. Red and green both beat blue, as expected from the expected values, but then green unexpectedly beats red.
This kind of behaviour is called nonlinearity: adding quantities of things together does not just scale up their effects, but can change them altogether. Nonlinearity in this case means that blue beats green when we use one die each, but that green beats blue when we use two. Nonlinearity is also the single biggest threat to the financial system.
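The reversal under doubling can be checked the same way, by enumerating totals of two throws. Again a sketch with a helper of my own, not an established API:

```python
from fractions import Fraction
from itertools import product

red   = [4, 4, 4, 4, 4, 6]
blue  = [2, 2, 2, 6, 6, 6]
green = [5, 5, 5, 5, 5, 0]  # blank face counted as 0

def pair_beats(a, b):
    """Probability that the total of two a-dice strictly beats two b-dice."""
    totals_a = [x + y for x, y in product(a, a)]
    totals_b = [x + y for x, y in product(b, b)]
    wins = sum(1 for s, t in product(totals_a, totals_b) if s > t)
    return Fraction(wins, len(totals_a) * len(totals_b))

# With one die each, blue beat green; with two dice each, the order flips.
print(pair_beats(blue, green), pair_beats(green, blue))  # → 59/144 85/144
print(pair_beats(red, blue))    # → 29/72: red still beats blue
print(pair_beats(green, red))   # → 625/1296: green still beats red (vs 421/1296)
```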
Anyone for darts instead?