
caith |

I was looking at the distribution curves for this scenario on anydice.com, but I suck at stats. Anyone with a statistics background that can explain - preferably in plain English - what the difference is between determining die results in these two different ways?
Example: Roll 16d6 vs. roll 4d6 and multiply by 4.

Atarlost |
The more dice are being rolled the more likely high or low results are compared to average results.
For example 1d3X100 will give you 100, 200, or 300 with equal probability. 100d3 will almost never give you 100 or 300 or anything near them, and will give you 200 or something near 200 much more frequently.
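If you'd rather not take that on faith, here's a quick Python sketch (trial count and seed are arbitrary, purely illustrative) that simulates both versions:

```python
import random
from collections import Counter

random.seed(1)
trials = 20_000

# 1d3 x 100: each of 100, 200, 300 is equally likely.
one_die = Counter(100 * random.randint(1, 3) for _ in range(trials))

# 100d3: the totals pile up around the mean of 200.
many_dice = Counter(sum(random.randint(1, 3) for _ in range(100))
                    for _ in range(trials))

for total in (100, 200, 300):
    print(f"1d3x100 = {total}: {one_die[total] / trials:.1%}")  # each about 33%

near_200 = sum(c for t, c in many_dice.items() if abs(t - 200) <= 10)
print(f"100d3 = 100 exactly: {many_dice[100] / trials:.1%}")  # effectively never
print(f"100d3 within 10 of 200: {near_200 / trials:.1%}")     # the large majority
```

The single scaled die spreads its probability evenly; the hundred-dice sum concentrates almost all of it near 200.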

Darksol the Painbringer |

With a 16D6 roll, you roll each die individually and add the results together. If I averaged 5 per die, multiplied by 16 that would come to about 80 points of damage.
With a 4D6 roll (the result multiplied by 4), a character gets to roll fewer dice, but gives up consistency in the process.
The chance of rolling maximum damage is far lower with 16D6 than with 4D6 (quadrupled); by the same token, the 16D6 roll deals a more "average" or "consistent" amount of damage.
With 4D6 (X4), sure, I can roll four 6's (with considerable luck) and max out at 24 X 4 = 96 damage, but if I roll horribly (two 1's and two 2's, totalling 6), I only end up with 6 X 4 = 24 damage.
The first option is nice for consistency. As a rule of thumb, the total shouldn't stray from the average by much more than the number of dice rolled: with 16D6, where the average is 3.5 per die (3's and 4's coming up evenly, for 56 total), most results land within about 16 points of that average. There are outliers, though.
The methods have their benefits and drawbacks; they are what they are: different rolling methods. One hugs the average and makes the maximum nearly impossible to pull off; the other has better spike damage (and can also be severely weak), and that can work for or against the players or the DM.
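To put some rough numbers on the spike-damage point, here's a small Python simulation (seed and trial count are arbitrary) comparing the two rolls:

```python
import random

random.seed(7)
trials = 100_000

roll_16d6 = [sum(random.randint(1, 6) for _ in range(16)) for _ in range(trials)]
roll_4d6x4 = [4 * sum(random.randint(1, 6) for _ in range(4)) for _ in range(trials)]

# Both methods share the same long-run average of 56...
print(f"mean 16d6:  {sum(roll_16d6) / trials:.1f}")
print(f"mean 4d6x4: {sum(roll_4d6x4) / trials:.1f}")

# ...but only 4d6x4 ever hits the 96 maximum in practice
# (its odds are 1 in 1,296 per roll; 16d6's are 1 in 6^16).
print(f"96s rolled, 16d6:  {roll_16d6.count(96)}")
print(f"96s rolled, 4d6x4: {roll_4d6x4.count(96)}")
```

Over a hundred thousand rolls, the 4d6x4 version tops out dozens of times while the 16d6 version never does.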

Grimmy |

The more dice are being rolled the more likely high or low results are compared to average results.
For example 1d3X100 will give you 100, 200, or 300 with equal probability. 100d3 will almost never give you 100 or 300 or anything near them and will give you 200 or something near 200 much more frequently.
I'm getting a contradiction between the first statement and the example. Am I reading this wrong?

Hangar |

Im not a statistics guy, so I'm just basing this off of observation.
Looking at your example, both have a maximum of 96 and a minimum of 16, and the most likely result is 56.
The difference is that when rolling 16d6, you have a lower chance of rolling that 56, because there are more possible outcomes than with 4d6*4, so the chance of any single number is smaller.
As an example, when rolling 16d6, there is a 5.72% chance to roll a 55. When rolling 4d6*4, there is a 0% chance to roll a 55, because that roll can only produce multiples of 4.
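Those percentages can be checked exactly, with no simulation, by a short convolution in Python (the function name is just illustrative):

```python
from collections import Counter

def exact_totals(n_dice, sides=6):
    """Exact count of ways to reach each total, by repeated convolution."""
    dist = Counter({0: 1})
    for _ in range(n_dice):
        new = Counter()
        for total, ways in dist.items():
            for face in range(1, sides + 1):
                new[total + face] += ways
        dist = new
    return dist

d16 = exact_totals(16)
print(f"P(16d6 = 55) = {d16[55] / 6 ** 16:.2%}")   # about 5.7%

d4x4 = Counter({4 * t: w for t, w in exact_totals(4).items()})
print(f"P(4d6x4 = 55) = {d4x4[55] / 6 ** 4:.2%}")  # 0.00%: 55 isn't a multiple of 4
```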
Ummmm...does that help? 8-/

Darksol the Painbringer |

@ Hangar: Kind of. The thing is that while the multiplied version has fewer possible outcomes (the variables are reduced/condensed), each outcome it can produce is correspondingly more likely.
I would have much better luck rolling all 6's (and hitting the maximum damage for the roll) with the multiplied version than with the full roll, because the chance is greater by an enormous factor.
Again, the full roll gives a more average (and generally more predictable) result, and a good rule of thumb for the typical damage range is to take the average total and add or subtract the number of dice rolled.
The rule of thumb does not take outliers (very good or very bad rolls) into account, but it gives a fair indication of whether the attack is potent or severely ineffective on a regular basis.
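That rule of thumb can be sanity-checked with a quick simulation (seed and trial count arbitrary). The ±16 band around the 56 average catches nearly everything, and a narrower ±7 band (roughly one standard deviation) already catches most rolls:

```python
import random

random.seed(3)
trials = 50_000
totals = [sum(random.randint(1, 6) for _ in range(16)) for _ in range(trials)]

mean = 3.5 * 16  # 56
within_16 = sum(1 for t in totals if abs(t - mean) <= 16) / trials
within_7 = sum(1 for t in totals if abs(t - mean) <= 7) / trials
print(f"within 56 +/- 16: {within_16:.1%}")  # nearly all rolls
print(f"within 56 +/- 7:  {within_7:.1%}")   # roughly 70% of rolls
```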

![]() |
Basically, the more dice you add, the more normal the distribution becomes (assuming the dice are fair), gaining a larger central peak with fewer outlying values. This means that 4d6*4 is not equal to 16d6 except with regard to the mean: the standard deviation of 4d6*4 is much higher than that of 16d6, so you have a much higher chance of getting outlying values.
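The two standard deviations can be worked out in a couple of lines of Python: the variance of one d6 is 35/12, independent variances add, and multiplying the result by 4 multiplies the deviation by 4.

```python
# Variance of a single d6: E[X^2] - E[X]^2 = 91/6 - 3.5^2 = 35/12
var_d6 = sum(face ** 2 for face in range(1, 7)) / 6 - 3.5 ** 2

sd_16d6 = (16 * var_d6) ** 0.5      # about 6.83
sd_4d6x4 = 4 * (4 * var_d6) ** 0.5  # about 13.66 -- exactly double 16d6's

print(f"sd 16d6:  {sd_16d6:.2f}")
print(f"sd 4d6x4: {sd_4d6x4:.2f}")
```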

Atarlost |
Grimmy wrote:
I'm getting a contradiction between the first statement and the example. Am I reading this wrong?
Atarlost wrote:
The more dice are being rolled the more likely high or low results are compared to average results.
For example 1d3X100 will give you 100, 200, or 300 with equal probability. 100d3 will almost never give you 100 or 300 or anything near them and will give you 200 or something near 200 much more frequently.
No. I typed it backwards. It should read:
"The more dice are being rolled the more likely average results are compared to high or low results."

Anomander |

Basically, the more dice you add, the more normal the distribution becomes (assuming the dice are fair), gaining a larger central peak with fewer outlying values. This means that 4d6*4 is not equal to 16d6 except with regard to the mean: the standard deviation of 4d6*4 is much higher than that of 16d6, so you have a much higher chance of getting outlying values.
I like this explanation. Though you could also mention the tails: not only does 4d6*4 have a higher standard deviation around the mean value of 56, it also has a far higher probability of producing extreme values (tail probability). Rolling the maximum of 96 happens about once in 1,296 rolls with 4d6*4, versus about once in 2.8 trillion with 16d6. (The higher moments of the distributions, such as the kurtosis, differ as well.)
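As a concrete illustration of the tail difference, here is a Python sketch computing the exact chance of an "extreme" total of 80 or more under each method (the threshold of 80 is an arbitrary choice):

```python
from collections import Counter

def exact_totals(n_dice, sides=6):
    """Exact count of ways to reach each total of n_dice summed."""
    dist = Counter({0: 1})
    for _ in range(n_dice):
        new = Counter()
        for total, ways in dist.items():
            for face in range(1, sides + 1):
                new[total + face] += ways
        dist = new
    return dist

p16 = sum(w for t, w in exact_totals(16).items() if t >= 80) / 6 ** 16
p4x4 = sum(w for t, w in exact_totals(4).items() if 4 * t >= 80) / 6 ** 4

print(f"P(16d6  >= 80) = {p16:.4%}")   # a small fraction of a percent
print(f"P(4d6x4 >= 80) = {p4x4:.4%}")  # over 5%
```

Same mean, yet the multiplied roll lands in the high tail more than a hundred times as often.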

|

I like this explanation. Though you could also mention the tails: not only does 4d6*4 have a higher standard deviation around the mean value of 56, it also has a far higher probability of producing extreme values (tail probability). Rolling the maximum of 96 happens about once in 1,296 rolls with 4d6*4, versus about once in 2.8 trillion with 16d6. (The higher moments of the distributions, such as the kurtosis, differ as well.)
"Plain English" he said :)
The least you could do is hyperlink to the wikipedia articles explaining the jargon...

Jubal Breakbottle |

Think of it this way:
Every result that you roll has a certain probability of happening. When you roll more dice, the probability of getting a result near the average is greater, because there are more ways to reach the same total. For example, rolling 2d6, you can get a 2 only one way (two 1's), but you can get a 7 six different ways (1+6, 2+5, and 3+4, each in either order). Therefore, the probability of rolling a 7 on 2d6 is 6 times greater than that of rolling a 2 or a 12.
As you roll more dice (4d6, 16d6, and so on), the ratio between the probability of an average result and the probability of an extreme result grows far beyond 6x. When you graph these distributions with the total result on the x-axis and its probability on the y-axis, the shape changes as you add dice: at 2d6 it is a triangle peaking in the middle, and with more dice it becomes a bell curve that concentrates around the mean and makes extreme results vanishingly unlikely.
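The 2d6 counting argument is small enough to verify by brute force; a possible Python one-off:

```python
from itertools import product

# Count the ordered ways to reach each total on 2d6.
ways = {}
for a, b in product(range(1, 7), repeat=2):
    ways[a + b] = ways.get(a + b, 0) + 1

print(ways[2], ways[7], ways[12])  # 1 6 1
print(ways[7] / ways[2])           # 6.0 -- a 7 is six times as likely as a 2
```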
Better?
cheers

Sissyl |

If you multiply by 4, you aren't going to end up with a 17, if that matters: only multiples of 4 are possible.
Otherwise, it is as has been stated previously: more dice mean a higher probability of rolling at or near the mean value; fewer dice mean a higher probability of extreme results. If you want to see this in action more clearly, try drawing the probabilities for 2d6 and making a graph of the outcomes. Then either roll, or write a simple computer program that rolls (taking care to manage randomness properly), a buttload of d6. You will find that the result clusters more and more tightly around the expected average of 3.5 per die the more dice you roll.
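For anyone who would rather not roll a buttload of physical d6, here is a minimal Python version of that experiment (seed and counts are arbitrary; the seed just makes the run repeatable):

```python
import random

random.seed(42)

# The per-die average hugs 3.5 more tightly as the number of dice grows.
for n in (10, 1_000, 100_000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(f"{n:>7} dice: average = {sum(rolls) / n:.3f}")
```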