Of course, most of these are not sensible English statements, and he goes on to estimate how many of these there are. This analysis is based on Shannon's estimate of the entropy rate for English -- about 1.1 bits per letter. This leads to a revised estimate of 2^(140 × 1.1) English tweets, or about 2 × 10^46. The rest of the post explains just what a hugely big number that is -- it's a very, very big number.
The problem is that this number is also wrong.
It's not that the calculations are wrong. It's that the entropy rate is the wrong basis for the calculation.
Let's start with what the entropy rate is. Basically: given a sequence of characters, how easy is it to predict what the next character will be? Or, how much information (in bits) does the next character carry above and beyond the information you already had?
If the probability of a character being the i-th letter of the alphabet is pᵢ, the entropy of the next character is given by

− Σᵢ pᵢ log₂ pᵢ

If all characters (26 letters plus space) were equally likely, the entropy of the character would be log₂ 27, or about 4.75 bits. If some letters are more likely than others (as they are), it will be less. According to Shannon's original paper, the distribution of letter usage in English gives about 4.14 bits per character. (Note: Shannon's analysis excluded spaces.)
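As a sanity check, the uniform-alphabet figure is easy to reproduce. The helper below is a minimal sketch (not from the original post) that computes the Shannon entropy of any discrete distribution:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: -sum(p * log2(p)) over nonzero probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

# 27 equally likely symbols (26 letters plus space)
uniform = [1 / 27] * 27
print(round(entropy(uniform), 2))  # log2(27) ≈ 4.75 bits

# A skewed toy distribution over 3 symbols has lower entropy than log2(3) ≈ 1.58
print(entropy([0.5, 0.25, 0.25]))  # 1.5 bits
```

Plugging in empirical English letter frequencies instead of the uniform distribution is what gets you down to Shannon's 4.14 bits.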
But, if you condition the probabilities on the preceding character, the entropy goes down. For example, if we know that the preceding character is a b, there are many letters that might follow, but the probability that the next character is a c or a z is less than it otherwise might have been, and the probability that the next character is a vowel goes up. If the preceding letter is a q, it is almost certain that the next character will be a u, and the entropy of that character will be low, close to zero, in fact.
When we go to three characters, the marginal entropy of the third character will go down further still. For example, t can be followed by a lot of letters, including another t. But, once you have two ts in a row, the next letter almost certainly won't be another t.
So, the more characters in the past you condition on, the more constrained the next character is. If I give you the sequence "The quick brown fox jumps over the lazy do_," it is possible that what follows is "cent at the Natural History Museum," but it is much more likely that the next letter is actually "g" (even without invoking the additional constraint that the phrase is a pangram). The idea is that, as you condition on longer and longer sequences, the marginal entropy of the next character asymptotically approaches some value, which has been estimated in various ways by various people at various times. Many of those estimates are in the ballpark of the 1.1 bits per character estimate that gives you 10^46 tweets.
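This shrinking marginal entropy is easy to see on a toy corpus. The sketch below estimates the conditional entropy of the next character given the previous k characters from empirical n-gram counts; the repeated pangram is an assumption chosen only to show the trend, not a real estimate for English:

```python
from collections import Counter
from math import log2

def conditional_entropy(text, k):
    """Average entropy (bits) of the next character given the previous k
    characters, estimated from the empirical n-gram counts of `text`."""
    positions = range(len(text) - k)
    context_counts = Counter(text[i:i + k] for i in positions)
    ngram_counts = Counter(text[i:i + k + 1] for i in positions)
    total = sum(ngram_counts.values())
    h = 0.0
    for ngram, n in ngram_counts.items():
        p_joint = n / total                          # p(context, next)
        p_cond = n / context_counts[ngram[:k]]       # p(next | context)
        h -= p_joint * log2(p_cond)
    return h

text = "the quick brown fox jumps over the lazy dog " * 50
for k in range(4):
    # entropy of the next character drops as the context grows
    print(k, round(conditional_entropy(text, k), 2))
```

With k = 0 this reduces to the plain letter-frequency entropy; each extra character of context squeezes it further.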
So what's the problem?
The problem is that these entropy-rate measures are based on the relative frequencies of use and co-occurrence in some body of English-language text. The fact that some sequences of words occur more frequently than other, equally grammatical sequences of words, reduces the observed entropy rate. Thus, the entropy rate tells you something about the predictability of tweets drawn from natural English word sequences, but tells you less about the set of possible tweets.
That is, the 10^46 number is actually better understood in terms of the probability -- about one in 10^46 -- that two tweets are identical, when both are drawn at random from 140-character sequences of natural English language. This will equal the number of possible tweets only if all possible tweets are equally likely.
Recall that the character following a q has very low entropy, since it is very likely to be a u. However, a quick check of Wikipedia's "List of English words containing Q not followed by U" page reveals that the next character could also be space, a, d, e, f, h, i, r, s, or w. This gives you eleven different characters that could follow q. The entropy rate gives you something like the "effective number of characters that can follow q," which is very close to one.
When we want to answer a question like "How many unique English tweets are possible?" we want to be thinking about the analog of the eleven number, not the analog of the very-close-to-one number.
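The gap between "possible" and "effective" characters can be made concrete with 2^H, sometimes called the perplexity: the number of equally likely outcomes that would give the same entropy. The probabilities below are made up for illustration, not measured from English:

```python
from math import log2

def perplexity(probs):
    """2^H: the 'effective number of equally likely outcomes' of a distribution."""
    h = -sum(p * log2(p) for p in probs if p > 0)
    return 2 ** h

# Hypothetical distribution for the character after 'q': 'u' with
# probability 0.98, the ten rarer possibilities sharing the remaining 0.02.
after_q = [0.98] + [0.02 / 10] * 10

print(len(after_q))                   # 11 characters are possible
print(round(perplexity(after_q), 2))  # but the effective number is close to 1
```

Counting possible tweets is a question about the support of the distribution (the eleven), while the entropy rate answers a question about its perplexity (the close-to-one).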
So, what's the answer then?
Well, one way to approach this would be to move up to the level of the word. The OED has something like 170,000 entries, not counting archaic forms. The average English word is 4.5 characters long (5.5 including the trailing space). Let's be conservative, and say that a word takes up seven characters. This gives us up to twenty words to work with. If we assume that any sequence of English words works, we would have 170,000^20, or about 4 × 10^104, possible tweets.
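The arithmetic is easy to check, mirroring the numbers in the paragraph above:

```python
# Twenty word slots, each filled independently from a 170,000-word vocabulary.
vocabulary = 170_000
slots = 20
total = vocabulary ** slots
print(f"{total:.0e}")  # 4e+104
```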
The xkcd calculation, based on an English entropy rate of 1.1 bits per character, predicts only 10^46 distinct tweets. 10^46 is a big number, but 10^104 is a much, much bigger number -- bigger than 10^46 squared, in fact.
If we impose some sort of grammatical constraints, we might assume that not every word can follow every other word and still make sense. Now, one can argue that the constraint of "making sense" is a weak one in the specific context of Twitter (see, e.g., Horse ebooks), so this will be quite a conservative correction. Let's say the first word can be any of the 170,000, and each of the following words (up to nineteen of them) is constrained to 20% of the total (34,000). This gives us 170,000 × 34,000^19, or about 2 × 10^91, possible tweets.
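Checking this one the same way:

```python
# One unconstrained first word, then nineteen words restricted to 20% of the vocabulary.
first_word = 170_000
later_words = 34_000  # 20% of 170,000
total = first_word * later_words ** 19
print(f"{total:.0e}")  # 2e+91
```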
That's less than 10^46 squared, but just barely.
10^91 is 100 billion times the estimated number of atoms in the observable universe.
By comparison, 10^46 is teeny tiny. 10^46 is only one ten-thousandth of the number of atoms in the Earth.
In fact, for random sequences of six-letter words (seven characters including the trailing space) to total only 10^46 tweets, we would have to restrict ourselves to a vocabulary of just 200 words.
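Running the estimate backwards makes the point: with twenty word slots, a total of 10^46 tweets pins the vocabulary at 10^(46/20) words:

```python
# Solve v ** 20 == 1e46 for v, the vocabulary size.
target_tweets = 1e46
slots = 20
vocab = target_tweets ** (1 / slots)
print(round(vocab))  # 200
```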
So, while 10^46 is a big number, large even in comparison to the expected waiting time for a Cubs World Series win, it actually pales in comparison to the combinatorial potential of Twitter.
One final example. Consider the opening of Endymion by John Keats: "A thing of beauty is a joy for ever: / Its loveliness increases; it will never / Pass into nothingness;" 18 words, 103 characters. Preserving this sentence structure, imagine swapping out various words, Mad-Libs style, introducing alternative nouns for thing, beauty, loveliness, and nothingness, alternative verbs for is, increases, and will / pass, alternative prepositions for of and into, and alternative adverbs for for ever and never.
Given 10,000 nouns, 100 prepositions, 10,000 verbs, and 1,000 adverbs, we can construct 10^38 different tweets without even altering the grammatical structure. Tweets like "A jar of butter eats a button quickly: / Its perspicacity eludes; it can easily / swim through Babylon;"
That's without using any adjectives. Add three adjective slots, with a panel of 1,000 adjectives, and you get to 10^47 -- just riffing on Endymion.
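The Endymion tally, slot by slot:

```python
# Word-class choices from the post: (alternatives per slot) ** (number of slots).
nouns        = 10_000 ** 4   # thing, beauty, loveliness, nothingness
verbs        = 10_000 ** 3   # is, increases, will pass
prepositions = 100 ** 2      # of, into
adverbs      = 1_000 ** 2    # for ever, never
base = nouns * verbs * prepositions * adverbs
print(f"{base:.0e}")  # 1e+38

adjectives = 1_000 ** 3      # three added adjective slots
print(f"{base * adjectives:.0e}")  # 1e+47
```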
So tweet on, my friends.
C. E. Shannon (1951). Prediction and Entropy of Printed English. Bell System Technical Journal, 30, 50-64.