Firstly, my previous post covered the right way to determine whether the digits are random, so I'm not going to go over that again. Instead, I'm going to go back over some of the actual figures presented in the article.

So let's begin with this quote:

> But that's not all. Psychologists have also found that humans have trouble generating non-adjacent digits (such as 64 or 17, as opposed to 23) as frequently as one would expect in a sequence of random numbers. To check for deviations of this type, we examined the pairs of last and second-to-last digits in Iran's vote counts. On average, if the results had not been manipulated, 70 percent of these pairs should consist of distinct, non-adjacent digits.
>
> Not so in the data from Iran: Only 62 percent of the pairs contain non-adjacent digits. This may not sound so different from 70 percent, but the probability that a fair election would produce a difference this large is less than 4.2 percent.

And there's a footnote:

> This is a corollary of the fact that last digits should occur with equal frequency. For an arbitrary second-to-last numeral, there are seven out of ten equally likely last digits that will produce a non-adjacent pair. Note that we treat both 09 and 10 as adjacent.

Firstly, I believe they mean to say that they treat 09 and 90 as adjacent (not 09 and 10). That means that for any digit there are two possible adjacent digits out of ten; in other words, 20% of digit pairs are adjacent, so 80% of digit pairs are non-adjacent.

In their article they say 70% 'distinct, non-adjacent'. OK, so their definition of non-adjacent means that repeats must be excluded as well (so 23, 32 and 33 are all to be excluded).
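That 70% figure is easy to verify by brute force. Here's a short check of my own (not the authors' code) that enumerates all 100 possible two-digit pairs and counts those that are distinct and non-adjacent, treating 09 and 90 as adjacent:

```python
# Count, over all 100 two-digit pairs, those that are distinct and
# non-adjacent on the 0-9 circle (09 and 90 both count as adjacent).
ok = sum(1 for a in range(10) for b in range(10)
         if (a - b) % 10 not in (0, 1, 9))
print(ok / 100)  # 0.7
```

For each second-to-last digit, exactly seven of the ten possible last digits qualify, which is where the 70% comes from.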

They then present the argument that a figure of 62% or less would only occur in 4.2% of fair elections. Nowhere do they explain how they derived this figure, so I decided to run a simulation. (Hannah Devlin argues in her article that this number is incorrect; it's worth a read.)

I ran a simulation of 1,000,000 fair elections, each generating 116 vote counts. For each election I looked at the last two digits of every count and worked out the percentage of pairs that were distinct and non-adjacent, then calculated how many fair elections would produce the 62% or less seen in the Iranian election. The figure is 2.66%: 2.66% of fair elections would produce the result (or 'worse') seen in Iran.

The difference, 4.2% vs 2.66%, comes about because the figure that they must have used is not 62% but 62.07% (presumably 72 of the 116 vote counts: 72/116 ≈ 62.07%). That is the actual number, to two decimal places, that comes from analyzing the digit distribution in the Iranian election results.

(Email me if you want my source code)

So, what does that tell you? That in almost 3 in 100 fair elections we would have seen the result observed in Iran, or, using their numbers, 4 in 100. Either way, that's pretty darn often. In the 20th century there were 26 general elections in the UK. Given that their 4/100 figure is 1/25, we shouldn't be at all surprised if one of those general elections looked fraudulent!
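To put a number on that (my own back-of-the-envelope calculation, not from the article): if each fair election independently has a 1-in-25 chance of tripping the test, the chance that at least one of 26 elections trips it is:

```python
p_false_positive = 1 / 25  # the article's 4.2%, rounded to 4/100
n_elections = 26           # UK general elections in the 20th century

# P(at least one false positive) = 1 - P(no false positives)
p_at_least_one = 1 - (1 - p_false_positive) ** n_elections
print(round(p_at_least_one, 2))  # 0.65
```

So under their own figure, there's roughly a two-in-three chance that at least one 20th-century UK general election would have looked fraudulent.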

Now, we expect that the percentage of non-adjacent digits is normally distributed. And, in fact, my simulation shows a nice normal distribution centered on 70 with a standard deviation of 4.27.
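That matches the binomial approximation: each of the 116 pairs is independently distinct and non-adjacent with probability 0.7, so the percentage of such pairs should have standard deviation sqrt(p(1-p)/n). This is my sanity check, not a calculation from the article:

```python
import math

p, n = 0.7, 116  # probability of a distinct, non-adjacent pair; counts per election
sd_percent = math.sqrt(p * (1 - p) / n) * 100
print(round(sd_percent, 2))  # 4.25, close to the simulated 4.27
```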

So, we've got normally distributed data, a mean and a standard deviation and a sample (62.07%). Hey, time for a Z-test!

For this situation the Z value is -1.86, which yields a p-value of 0.063 for a two-tailed test (I'm doing two-tailed here because what I'm interested in is the deviation away from the mean, not the specific direction it went in). That's above the 0.05 value typically used for statistical significance, so from this sample we can't conclude that the 62.07% figure is statistically significant.
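The calculation, using the simulated mean and standard deviation (a sketch of my own, relying on the normality assumption above):

```python
import math

mean, sd = 70.0, 4.27  # from the simulation
sample = 62.07         # the Iranian figure

z = (sample - mean) / sd
# Two-tailed p-value under a normal approximation.
p = math.erfc(abs(z) / math.sqrt(2))
print(round(z, 2), round(p, 3))  # -1.86 0.063
```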

So, based on the figures given, I can't find statistical significance, and I don't learn anything from that about the Iranian election.

So the Z-test on their 'non-adjacent, non-repeated' digits claim doesn't find statistical significance, and my previous piece showed that the chi-squared test on the other claim in their paper (the randomness of the last two digits) didn't find statistical significance either.

You might be scratching your head wondering how the authors made the claim that this was definitely fraud (their words: 'But taken together, they leave very little room for reasonable doubt.').

Well, what they do is take the probability of seeing the 62% or less number in a fair election (4.2%) and multiply it by the probability of seeing the specific variance they see in the digits 7 and 5 in a fair election (4%) to come up with 1.4% likelihood of this happening in a fair election:

> More specifically, the probability is .0014 that a fair election (with 116 vote counts) has the characteristics that (a) 62% or fewer of last and second-to-last digits are non-adjacent, and (b) has at least one numeral occurring in 17% or more of last digits and another numeral occurring in 4% or fewer of last digits.

That's a very specific test. In fact, it's so specific that I'm going to name it the "Iranian Election Detector". It's a test that's been crafted from the data in the Iranian election results; it's not the test that they started with (which was all about randomness of digits and adjacency).

So, let's accept their 1.4% figure and delve into it... that's 1.4 in 100 elections, or roughly 1 in 71. So, they are saying that their test would give a false positive in 1 in 71 fair elections.

How is that 'leaving little room for reasonable doubt'?

## 5 comments:

"we treat both 09 and 10 as adjacent"

I think you misunderstood: they take both 09 and 10 as numbers composed of adjacent digits (0 is adjacent to 9 and 1 is adjacent to 0).

@Tamas

That's entirely possible. Either way I believe I am using the same figures as them (i.e. 70%).

That last test they do is really wrong. It is almost the same as saying: well, last night's lottery was won by ticket 672617, which has a 1-in-a-million chance of winning; hence the lottery must be fraudulent.

And I don't see why we should expect the numbers to show any signs of forgery, even if the election was fraudulent.

@martijn

I agree. In fact, I almost downloaded the UK national lottery's historical figures to run an analysis looking for 'suspicious digits' and prove there's fraud in the lottery.

Alternatively, I think I might start selling a 'lottery winning system' to people.

It's actually 0.14%, not 1.4%, and a random simulation does reproduce their results if you use the same criteria for adjacency (09, 90, 01, and 10 are all adjacent) and exclude repeats. So, in their original article, they meant to say that one in seven hundred fair elections would show this pattern, with the implication that there was only a one in seven hundred chance that this election was fair (actually one in two hundred in the Post, because of a simulation error).

That said, identifying a rare event and then calculating its probability is obviously not kosher for reasons you've addressed before. See my post here - http://alchemytoday.com/2009/06/25/more-on-that-devil/
