So you may be expecting this to be another one of those articles going on at length about how much smarter RPG nerds are than other people. It’s not that. But we do have an article that disagrees with that position in our archives, if you’d like to read it. It doesn’t quite predate Unpopular Opinion Puffin, which is a shame, because it would have made a stellar example of the meme, and that would have been a good excuse for why we didn’t include it.
So, that aside, where the hell am I headed instead? Well, here’s a common rule of thumb for Dungeons & Dragons or any RPG that shares its 3d6 stat generation (or that has a similar range of stats): your character’s IQ is equal to their INT score times ten. In fact, Gary is said to have repeatedly endorsed this interpretation. This came up recently on a discussion board I’m part of, and I had a minor epiphany I’d like to share here (along with some math). While it may be valid to say “a character’s INT score times ten is approximately the real-world IQ equivalent of their mental capacity,” it is absolutely not valid to say “a character’s game-world IQ is equal to their INT times ten.” A minor difference, but to me the fun part is why this is clearly the case.
The explanation starts with something called the Flynn Effect. Very loosely, this effect says that aggregate results on IQ tests change over time, sometimes dramatically. So why do IQ scores stay in the same range and get interpreted the same way year after year? For IQ scores to be useful, they have to be comparable over time, so tests are routinely renormalized so that recorded IQs at any given time follow a normal distribution with a mean of 100 and a standard deviation of 15.
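To make that renormalization concrete, here’s a minimal sketch (the function name and sample scores are my own inventions) that rescales a cohort of raw test results onto the standard IQ scale:

```python
import math

def normalize_to_iq(raw_scores):
    """Rescale raw test scores so the cohort has mean 100 and sd 15."""
    n = len(raw_scores)
    mean = sum(raw_scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in raw_scores) / n)
    return [100 + 15 * (x - mean) / sd for x in raw_scores]

# Whatever scale the raw test uses, the output lands on the 100/15 scale.
iqs = normalize_to_iq([12, 18, 25, 31, 40])
```

However the underlying population shifts, this procedure pins the recorded scores back to mean 100 and standard deviation 15, which is the whole trick behind the argument below.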
Which of course means that after converting a 3d6 stat score, which has mean 10.5 and standard deviation √(35/4)*, to an in-game-world IQ score, INT 10 would not be an IQ of 100. INT 10.5 would be an IQ of 100. Similarly, shifting by a stat point would not change your IQ by 10 points. It would change it by 15/√(35/4), or about 5.1 points.
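The conversion itself is just a change of scale. A quick sketch (the function name is mine) applying it to the 3d6 numbers:

```python
import math

MEAN_3D6 = 10.5
SD_3D6 = math.sqrt(35 / 4)  # about 2.96; see the footnote

def game_world_iq(int_score):
    """Map an INT score onto the IQ scale: mean 100, sd 15."""
    return 100 + 15 * (int_score - MEAN_3D6) / SD_3D6

# INT 10.5, not INT 10, is exactly IQ 100.
average_iq = game_world_iq(10.5)

# One stat point is worth 15/sqrt(35/4), about 5.07 IQ points, not 10.
per_point = game_world_iq(11.5) - game_world_iq(10.5)
```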
Now, of course, different rolling methods will result in different conversions, but surprisingly enough, PC rolling methods aren’t really of interest. In most campaign worlds, counting both assumed and explicit instances, generic NPCs will so outweigh PCs and NPCs of interest that the game world’s “IQ distribution” will be based almost entirely on generic NPC INT scores. This means that for most common 3d6 stat systems, in-world IQ scores would be based off a 3d6 roll. The message board where this came up, though, was a 1e message board, and 1e often used three “averaging dice” for generic NPC scores. These are six-sided dice with faces 2, 3, 3, 4, 4, 5. They produce a narrower curve with the same mean of 10.5 but a standard deviation of √(11/4).** So INT 10.5 would still be an IQ of 100, and shifting by a stat point changes your game-world IQ by 15/√(11/4), or about 9 points. Which is pretty darn close to Gary’s rule of thumb after all.
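You can check the averaging-dice numbers by brute force. This sketch enumerates all 216 equally likely rolls of three averaging dice:

```python
from itertools import product

FACES = (2, 3, 3, 4, 4, 5)  # one "averaging die"

# All 6**3 = 216 equally likely outcomes of rolling three of them.
totals = [sum(roll) for roll in product(FACES, repeat=3)]
mean = sum(totals) / len(totals)
variance = sum((t - mean) ** 2 for t in totals) / len(totals)

# mean comes out to 10.5 (same as 3d6), variance to 11/4 = 2.75,
# so one stat point is worth 15/sqrt(11/4), about 9.05 IQ points.
iq_per_stat_point = 15 / variance ** 0.5
```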
One last note: all adding a bonus or a penalty to a stat does is shift the distribution over by that much. So if you’re looking at a +2 INT race, just add 2 to the mean score and keep the standard deviation the same. For the table below, that means you’d just add 2 to every value in the INT column.
| INT Score | 3d6 game world IQ | 3 averaging dice game world IQ |
|---|---|---|
| 3 | 62 | 32 |
| 4 | 67 | 41 |
| 5 | 72 | 50 |
| 6 | 77 | 59 |
| 7 | 82 | 68 |
| 8 | 87 | 77 |
| 9 | 92 | 86 |
| 10 | 97 | 95 |
| 11 | 103 | 105 |
| 12 | 108 | 114 |
| 13 | 113 | 123 |
| 14 | 118 | 132 |
| 15 | 123 | 141 |
| 16 | 128 | 150 |
| 17 | 133 | 159 |
| 18 | 138 | 168 |
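If you’d rather generate the table than read it, here’s a sketch (the function name is my own) that reproduces both IQ columns, rounded to the nearest point, with the racial bonus handled as the shifted mean described above:

```python
import math

SD_3D6 = math.sqrt(35 / 4)  # 3d6 game world
SD_AVG = math.sqrt(11 / 4)  # three-averaging-dice game world

def game_world_iq(int_score, sd, bonus=0):
    # A flat racial bonus just shifts the population mean by the same amount.
    return round(100 + 15 * (int_score - (10.5 + bonus)) / sd)

for score in range(3, 19):
    print(score, game_world_iq(score, SD_3D6), game_world_iq(score, SD_AVG))
```

Note that `game_world_iq(14, SD_3D6, bonus=2)` equals `game_world_iq(12, SD_3D6)`, which is exactly the "add 2 to every value in the INT column" shortcut.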
* The variance of a single d6 is 35/12. Since the three d6 are independent of one another, the variance of their sum is 3 × 35/12, or 35/4. Standard deviation is, of course, the square root of variance.
** The variance of a single averaging die is 11/12. The dice are again independent, so the variance for three of them is 11/4, and the standard deviation is √(11/4).