The idea is that entropy is measured in possible words instead of possible characters. It turns out 7 random 7-bit ASCII characters have less entropy than 4 words worth 14 bits each (that is, words drawn from the 16,384 most common ones). And that's the ideal case, where the 7 characters are totally random.
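To make the arithmetic explicit, here's a back-of-the-envelope check (a small Haskell sketch I'm adding for illustration, not anything from the original post):

    -- 7 random 7-bit ASCII characters vs. 4 words drawn uniformly
    -- from a list of the 16,384 (2^14) most common words.
    charBits, wordBits :: Double
    charBits = 7 * logBase 2 128    -- 49 bits
    wordBits = 4 * logBase 2 16384  -- 56 bits

    main :: IO ()
    main = print (charBits, wordBits)  -- prints (49.0,56.0)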
Every attack is technically a dictionary attack here, but that doesn't help enough, because to a computer the password is still 30 characters long. To a human it seems a lot easier than ")f1:.{yJCzNv]@R=S
K$~=", though.
PS. Turning /dev/random output into 7-bit ASCII characters is surprisingly involved in Haskell. C would have been easier. This was the world’s slowest ninja edit. Look at the source to see all the control characters that made it in, and the newline.
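For the curious, here's a minimal sketch of one way to do it (not the actual source; the function name sevenBitChars is made up here). It also shows why control characters and a newline can end up in the output: masking a random byte to 7 bits keeps the whole 0..127 range, unprintables included.

    import qualified Data.ByteString as BS
    import Data.Bits ((.&.))
    import Data.Char (chr)
    import System.IO (IOMode (ReadMode), withFile)

    -- Pull n bytes from /dev/random and mask each down to 7 bits (0..127).
    -- Nothing filters out control characters, so a newline can sneak in.
    sevenBitChars :: Int -> IO String
    sevenBitChars n =
      withFile "/dev/random" ReadMode $ \h -> do
        bytes <- BS.hGet h n
        pure [ chr (fromIntegral (b .&. 0x7f)) | b <- BS.unpack bytes ]

    main :: IO ()
    main = sevenBitChars 7 >>= putStrLn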
Thanks for the explanation. I remembered the scheme from https://xkcd.com/936/ but wasn’t sure how it held up against different attack methods.