The new "whiteboarding": why DPR as a metric is broken.



FowlJ wrote:
krazmuze wrote:
d6+1 is in fact not always greater than d6 - simply because you are NOT rolling the average every time, but instead are more likely to see a deviation.

You... do realise that you aren't even talking to anyone here, right?

Literally nobody has claimed the thing that you're disputing here. Nobody has said that through some magic 1d6+1 will always roll better than 1d6, 100% of the time. It has been said, accurately, that 1d6+1 will always be better than 1d6, in that given the choice between the two there is no reason why 1d6 would be a better option.

I'm willing to bet the average min/maxer discounts the effect of dice variance as "just luck". Calculating averages is easy, but most people haven't taken a stats course.

Thinking about dice variance is useful insofar as it can tell you how likely the theoretically higher average of min/maxing is to actually affect your play experience. I know that once I started thinking about dice variance, I became a lot more comfortable playing "sub-optimal" builds.


I kinda like thinking about variance now. It's always awful having to choose something that isn't as flavorful, or to be more constrained, because I feel like I could be doing better.

Acknowledging that variance is an important factor while coming up with builds can help everyone realize that sometimes the "most optimal" choice isn't giving you the edge you thought it would, thus leaving you free to choose something else that's more interesting or fitting for your character.

It definitely creates the very welcome environment of build variety; nobody will be afraid of deviating just a little bit from the standard DPR requirement or stuff like that. Some people, like me, have very unforgiving AND lucky GMs, and also need to pick up the slack of other players in the party so that we manage to stay alive (I can't remember the last time our party had an appropriate-CR encounter; it's always CR+1 at least).

It may be just my personal situation, of course, but I like to have the best of both worlds: realizing my character's vision through feats, traits, and other choices while also making an effective character. If PF2e's math allows me to do that because there are some bonuses I can get away with not picking, then I'm happy.

I've never given much thought to DPR and whatnot before, just kinda accepted that the veterans knew better than me, but I find that this new perspective changes my way of approaching character building even more.


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber

Correction -

I should not say the d6+1 PC will lose half the time fighting the d6 NPC - that assumes the histogram overlap is uniform (it is instead a falling half-Gaussian vs. a rising half-Gaussian). That would be interesting to simulate. It is still correct to say the below-average d6+1 is in the same range as the above-average d6.
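
Since that would indeed be interesting to simulate, here is a minimal sketch in Python. The ten damage rolls per encounter and the decision to ignore attack rolls entirely are my assumptions, so treat the output as a ballpark for the histogram overlap rather than a rules-accurate number:

import random

def encounter_total(bonus, rolls=10):
    # total damage from one encounter's worth of d6 damage rolls
    return sum(random.randint(1, 6) + bonus for _ in range(rolls))

trials = 100_000
upsets = sum(encounter_total(0) >= encounter_total(1) for _ in range(trials))
print(f"plain d6 ties or beats d6+1 in {upsets / trials:.1%} of encounters")

Under those assumptions the plain d6 side ties or wins roughly one encounter in ten - the overlap is real, but far from the uniform 50:50 that this correction walks back.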

It is still a sucker bet to claim you will win every encounter over a level; you need to fight d6+3 vs. d6, or d6+1 vs. d6 at AC10, to win that bet.

By accounting for stats you can find those builds that, for a given AC range, will always be better over a level than the other build. That is why variance is useful. DPR will tell you how you improved your rolls, but it cannot tell you whether that bonus was enough to pull you out of your below-average slump and guarantee you defeat the NPC.

You would hope the game devs considered the stats when they put large bonus choices out there - that they come with some non-damage-related cost. Like the Outwit ranger: yes, its longsword is always bad compared to the Flurry ranger's longsword, but the Outwit ranger has an always-on shield that they do not have to waste an action for. Maybe their goal is to tank an opponent and control the battlefield rather than damage them. If you are dead, your DPR is irrelevant...

You would hope the game devs considered the stats when they put the small bonuses into the game; they allow your character to be differently flavored and play slightly differently without feeling bad about that choice, knowing that the choice is more situational.

Now, what happens when a GM is faced with PCs that all calculated their deviations to ensure they will always beat the moderate encounter? You cannot win at charop, because now the GM will throw that math back at you - and change all the encounters to severe!


Pathfinder Lost Omens Subscriber
krazmuze wrote:

This is not about adding modifiers to your past rolls, it is about comparing the possibility of everyone's future rolls

d6+1 is in fact not always greater than d6 - simply because you are NOT rolling the average every time, but instead are more likely to see a deviation.

sum(randi(6,10,1)+1)  % MATLAB: ten d6+1 rolls; this run totaled 34
sum(randi(6,10,1))    % MATLAB: ten plain d6 rolls; this run totaled 48

Sure, a cherry-picked two-player, 10-roll anecdote...does not tell you anything at all.

But compare many thousands of players over an entire level and look at the average encounter? Surely the tables running d6+1 should never be as bad as the tables running d6, right?

At AC20, for a lvl2 flurry twin-kukri ranger, the average encounter damage for d6 is 15+/-4, while for d6+1 it is 19+/-4.

What that means is that 95% of players using d6 scored <=19, while 50% of players using d6+1 scored <=19. The top 5% of d6 players are averaging >19 damage, while the worst 5% of d6+1 players are averaging <15 damage.

In simpler words, just over half of the d6+1 players did as poorly as the d6 players. That is only about 50:50 odds that a d6+1 player will do better than a d6 player.

Therefore it is not possible to say that d6+1 is statistically better.

Now if we talk about a d6+2, only the worst 5% are being beaten by the best 5% of the d6 players.

I have to run a d6+3 to have perfect odds that an average round always beats the d6.

However, if we look at AC10, then with all those crits doubling that fixed damage from +1 to +2, the +1 is always going to do better. In 5e this would not be the case, because only a nat 20 crits and you do not double modifiers. This is why PF2e is more deadly: the bosses can reliably crit you to death.

you... are not getting it.

DPR calculations are used to determine how you should build your character - what you will get the most out of. d6+1 versus d6: there is no choice to make, d6+1 is the better choice. There is no reason for me, even in play, to choose an option that does 1d6 damage over an option that does 1d6+1.

Short of cheating, no choice I make affects what the dice will come out to, and so nothing about a d6 makes it attractive over 1d6+1. This is nonsense.

Like I said, variance is only important when looking into overkill, and how much this reduces your effective DPR.

Like if 2d6 and 1d12+1 both take about 2 rounds to kill an opponent, you should use 2d6, as it's more consistent and less likely to overkill or underkill. (Likewise, a repeatable 1d6 per action is better than 2d6 for 2 actions, as you have to deal with overkill less.)
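
For what it's worth, that consistency claim is easy to check by exact enumeration - a quick Python sketch:

from itertools import product
from statistics import mean, pstdev

# enumerate every equally likely outcome of each damage expression
two_d6 = [a + b for a, b in product(range(1, 7), repeat=2)]
d12_plus_1 = [d + 1 for d in range(1, 13)]
for name, outcomes in (("2d6", two_d6), ("1d12+1", d12_plus_1)):
    print(name, f"mean={mean(outcomes):.2f}", f"sd={pstdev(outcomes):.2f}")

Nearly the same average (7.0 vs. 7.5), but 1d12+1 has a standard deviation about 40% higher - which is exactly the overkill/underkill exposure being described.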

ALMOST off topic: this is why casters and martials are balanced. Misses do half damage when targeting saves instead of nothing, and both can crit. Casters have a bit worse overall DPR but are more consistent across rounds at dealing damage.


2 people marked this as a favorite.
Bandw2 wrote:


you... are not getting it.

I think you might also not be getting it? If a build choice stands a chance of not actually making an impact in performance, isn't that a valuable metric to consider when building your character?

For instance, if I'm contemplating making a bard with only 16 charisma, don't you think it's helpful for me to know that the difference between a 16 and an 18 charisma will only become statistically significant after X number of saving throws?


1 person marked this as a favorite.
Bandw2 wrote:


DPR calculations are used to determine how you should build your character - what you will get the most out of. d6+1 versus d6: there is no choice to make, d6+1 is the better choice. There is no reason for me, even in play, to choose an option that does 1d6 damage over an option that does 1d6+1.

The reason to choose 1d6 over 1d6+1 is presumably that you'll get to make some other build choice instead (e.g. boosting your Int instead of your Str). The fact that variance may stand a decent chance of making that +1 insignificant (depending on the projected length of your campaign) is just ammunition for justifying whether or not you make that choice.


Pathfinder Lost Omens, Rulebook Subscriber
Bardic Dave wrote:
Bandw2 wrote:


you... are not getting it.

I think you might also not be getting it? If a build choice stands a chance of not actually making an impact in performance, isn't that a valuable metric to consider when building your character?

For instance, if I'm contemplating making a bard with only 16 charisma, don't you think it's helpful for me to know that the difference between a 16 and an 18 charisma will only become statistically significant after X number of saving throws?

Yes, but you don't need a graph showing comparisons between two sets of rolls to show that. What you need to know is: against the DCs you are likely to face, how often would that plus 1 have been the difference in a success category?


Bandw2 wrote:
krazmuze wrote:

This is not about adding modifiers to your past rolls, it is about comparing the possibility of everyones future rolls....

you... are not getting it.

DPR calculations are used to determine how you should build your character - what you will get the most out of. d6+1 versus d6: there is no choice to make, d6+1 is the better choice. There is no reason for me, even in play, to choose an option that does 1d6 damage over an option that does 1d6+1.

short of...

The point of the variance is actually to know that you CAN choose the 1d6 option, because it's an actual choice you're making rather than "gimping" your character for the sake of being more flavorful. This way, you can more easily choose the options you think are better suited for your character and playstyle without worrying that you're supposed to be doing far more if you picked the cookie-cutter feats such as Power Attack, Deadly Aim, Furious Focus, Precise Shot, Point Blank Shot, etc. that you had to pick in PF1e.

To put it simply: in PF1e, if you're a Barbarian without Power Attack, you're basically forfeiting the main reason why you're playing a Barb in the first place - the huge amount of damage that's beyond other classes' reach. Without it, other combat-oriented PCs will manage to do the same or far more damage while doing everything else they do that Barbs can't.

Grand Lodge

1 person marked this as a favorite.
Lightning Raven wrote:
To put it simply: in PF1e, if you're a Barbarian without Power Attack, you're basically forfeiting the main reason why you're playing a Barb in the first place - the huge amount of damage that's beyond other classes' reach. Without it, other combat-oriented PCs will manage to do the same or far more damage while doing everything else they do that Barbs can't.

"I am not of the knowing the problem."


Malk_Content wrote:
Bardic Dave wrote:
Bandw2 wrote:


you... are not getting it.

I think you might also not be getting it? If a build choice stands a chance of not actually making an impact in performance, isn't that a valuable metric to consider when building your character?

For instance, if I'm contemplating making a bard with only 16 charisma, don't you think it's helpful for me to know that the difference between a 16 and an 18 charisma will only become statistically significant after X number of saving throws?

Yes, but you don't need a graph showing comparisons between two sets of rolls to show that. What you need to know is: against the DCs you are likely to face, how often would that plus 1 have been the difference in a success category?

DC (i.e. AC) is already factored into DPR calculations. That's not really what's under discussion. We're talking about dice variance. How a +1 bonus compares to the noise of the d20 across a given number of rolls is what's being discussed.


1 person marked this as a favorite.
Pathfinder Lost Omens, Rulebook Subscriber
Bardic Dave wrote:
Malk_Content wrote:
Bardic Dave wrote:
Bandw2 wrote:


you... are not getting it.

I think you might also not be getting it? If a build choice stands a chance of not actually making an impact in performance, isn't that a valuable metric to consider when building your character?

For instance, if I'm contemplating making a bard with only 16 charisma, don't you think it's helpful for me to know that the difference between a 16 and an 18 charisma will only become statistically significant after X number of saving throws?

Yes, but you don't need a graph showing comparisons between two sets of rolls to show that. What you need to know is: against the DCs you are likely to face, how often would that plus 1 have been the difference in a success category?
DC (i.e. AC) is already factored into DPR calculations. That's not really what's under discussion. We're talking about dice variance. How a +1 bonus compares to the noise of the d20 across a given number of rolls is what's being discussed.

I know; I'm debating whether that adds any appreciable utility to a player's decision making.


Oh hey! We're circling back around to the original topic! :-P

My point in the OP was that 1d6+1 is always better than 1d6. That's not debatable, even if it gets lost in the noise most of the time - but that's only relevant in a vacuum, and PF2 isn't a vacuum.

As @Bardic Dave pointed out, though, that +1 is presumably coming at the cost of something else. That something else might matter more than a small average damage boost, and may actually mean you're doing more damage (not wasting actions healing due to better defense, for example).

In a sense, I think you all are arguing somewhat different things. I never meant to call into question the accuracy of the actual metric, merely its relevance in a game that is much more dynamic than PF1.


1 person marked this as a favorite.
Malk_Content wrote:
Bardic Dave wrote:
Malk_Content wrote:
Bardic Dave wrote:
Bandw2 wrote:


you... are not getting it.

I think you might also not be getting it? If a build choice stands a chance of not actually making an impact in performance, isn't that a valuable metric to consider when building your character?

For instance, if I'm contemplating making a bard with only 16 charisma, don't you think it's helpful for me to know that the difference between a 16 and an 18 charisma will only become statistically significant after X number of saving throws?

Yes, but you don't need a graph showing comparisons between two sets of rolls to show that. What you need to know is: against the DCs you are likely to face, how often would that plus 1 have been the difference in a success category?
DC (i.e. AC) is already factored into DPR calculations. That's not really what's under discussion. We're talking about dice variance. How a +1 bonus compares to the noise of the d20 across a given number of rolls is what's being discussed.
I know; I'm debating whether that adds any appreciable utility to a player's decision making.

It does if you have an idea of how long your campaign will be and how likely a particular kind of roll is to come up. My contention is that recognizing the role of dice variance can help free a player to feel comfortable making "sub-optimal" choices dictated by personal preference. Squeezing that extra +1 out of your build might seem less essential when you realize there's a 63% chance it won't actually make a difference over the next 6 sessions (or whatever).
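
Numbers like that fall out of a one-liner. A sketch under the deliberately conservative assumption that the +1 only matters on one face in twenty (the hit threshold only, ignoring the crit band):

# chance a +1 never changes any outcome across n relevant rolls
for n in (5, 9, 22):
    print(n, f"{0.95 ** n:.0%}")   # e.g. 9 relevant rolls -> ~63%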


Pathfinder Lost Omens Subscriber
Bardic Dave wrote:
Bandw2 wrote:


DPR calculations are used to determine how you should build your character - what you will get the most out of. d6+1 versus d6: there is no choice to make, d6+1 is the better choice. There is no reason for me, even in play, to choose an option that does 1d6 damage over an option that does 1d6+1.
The reason to choose 1d6 over 1d6+1 is presumably that you'll get to make some other build choice instead (e.g. boosting your Int instead of your Str). The fact that variance may stand a decent chance of making that +1 insignificant (depending on the projected length of your campaign) is just ammunition for justifying whether or not you make that choice.

That's a non sequitur. You could just as easily use DPR calculations to figure out if +1 to Deception is worth losing a point in Strength: I lose X DPR on average, or reducing Charisma to gain Strength gives my melee X higher DPR - how does this affect my rounds to kill? (Going from 2.1 rounds to kill to 2.2 rounds is more or less fine for a little boost in side stats.) Variance still isn't the best tool for that >_>


3 people marked this as a favorite.
Bandw2 wrote:
Bardic Dave wrote:
Bandw2 wrote:


DPR calculations are used to determine how you should build your character - what you will get the most out of. d6+1 versus d6: there is no choice to make, d6+1 is the better choice. There is no reason for me, even in play, to choose an option that does 1d6 damage over an option that does 1d6+1.
The reason to choose 1d6 over 1d6+1 is presumably that you'll get to make some other build choice instead (e.g. boosting your Int instead of your Str). The fact that variance may stand a decent chance of making that +1 insignificant (depending on the projected length of your campaign) is just ammunition for justifying whether or not you make that choice.
That's a non sequitur. You could just as easily use DPR calculations to figure out if +1 to Deception is worth losing a point in Strength: I lose X DPR on average, or reducing Charisma to gain Strength gives my melee X higher DPR - how does this affect my rounds to kill? (Going from 2.1 rounds to kill to 2.2 rounds is more or less fine for a little boost in side stats.) Variance still isn't the best tool for that >_>

I think you’re misreading my post; I’m not suggesting that variance can help you determine the relative value of increased DPR vs improved deception. I’m saying that if variance can show that within a projected number of sessions a +1 bonus will not be statistically significant, you might feel more inclined to invest that +1 based solely on personal preference. This isn’t a non sequitur; it’s my entire point.


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber
Malk_Content wrote:

Yes, but you don't need a graph showing comparisons between two sets of rolls to show that. What you need to know is: against the DCs you are likely to face, how often would that plus 1 have been the difference in a success category?

That is exactly how I entered this thread - doing exactly that.

I did indeed do it as % success and not damage, but added damage to it specifically to investigate the increased dominance of crit hits at low AC, because of AC+10 being a crit. Turning that +1 into a +2 more than 5% of the time makes a big difference.

It is why I plot the DMG vs. DC: the significance of that +1 varies. At low DC the +1 will always be better regardless of bad vs. good luck. At high DC it will not always be better; bad luck vs. good luck is more important.

DPR in a vacuum does not tell you this; it will always say the +1 is higher. You have to know DPR +/- deviation to determine the luck bounds.

It is not as simple as just taking the min/max of the die - the entire point of doing stats is that it is not relevant to this game to know the six-sigma bounds where, once in the age of the universe, @wil rolls all ones every time. I am now using 5% outliers because everyone playing this game can relate that to d20 steps. Those outlier players are the ones that always crit every damn roll, even after their cheater dice are taken away.


Pathfinder Lost Omens Subscriber
Bardic Dave wrote:
Bandw2 wrote:
Bardic Dave wrote:
Bandw2 wrote:


DPR calculations are used to determine how you should build your character - what you will get the most out of. d6+1 versus d6: there is no choice to make, d6+1 is the better choice. There is no reason for me, even in play, to choose an option that does 1d6 damage over an option that does 1d6+1.
The reason to choose 1d6 over 1d6+1 is presumably that you'll get to make some other build choice instead (e.g. boosting your Int instead of your Str). The fact that variance may stand a decent chance of making that +1 insignificant (depending on the projected length of your campaign) is just ammunition for justifying whether or not you make that choice.
That's a non sequitur. You could just as easily use DPR calculations to figure out if +1 to Deception is worth losing a point in Strength: I lose X DPR on average, or reducing Charisma to gain Strength gives my melee X higher DPR - how does this affect my rounds to kill? (Going from 2.1 rounds to kill to 2.2 rounds is more or less fine for a little boost in side stats.) Variance still isn't the best tool for that >_>
I think you’re misreading my post; I’m not suggesting that variance can help you determine the relative value of increased DPR vs improved deception. I’m saying that if variance can show that within a projected number of sessions a +1 bonus will not be statistically significant, you might feel more inclined to invest that +1 based solely on personal preference. This isn’t a non sequitur; it’s my entire point.

Well, statistical significance isn't something I think most people should mind when talking about builds (especially when talking about modifiers to damage numbers or saves). A loss of a +1 here is just a +1 added somewhere else. I can't help but think it wouldn't really affect your decision making either way.

With soft failures, however (half damage, etc.), and with DCs and damage modifiers, those bonuses mean a lot more.

With the multiple tiers of failure, a +1 to DC affects 10-15% of possible rolls (the values that would have been 1 tier lower if not for that +1), etc. A +2 potentially doubles that value.

I think a lot of you guys are forgetting that in this variance stuff. A +1 to DC changes not just to-hit but also the crit range and the failure range. So if your enemy has to roll 1-9 for failure, 10-18 for success, and 19-20 for a crit, a +1 changes 3 separate rolls all at once: 1 becomes a bonus critical failure, 2-10 become failures, 11-19 become successes, and 20 stays a critical success. Rolls of 1, 10, and 19 change what they would have done.

To be clear, I think DPR shows this more clearly, in a way that's easier to deal with. DPR only loses effectiveness when dealing with overkill and the like. Knowing exactly where your crit-fail, fail, success, and crit-success ranges are isn't that important, and they show up in DPR well enough. You should just be leery of DPR that comes mainly from only one of to-hit or damage and not from a mixture of both.
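
Those band shifts are easy to enumerate. A minimal Python sketch with a made-up modifier and DC (not any particular monster), listing which d20 faces land in a different outcome tier once the DC goes up by 1:

def degree(roll, mod, dc):
    # PF2e-style degrees of success, with the nat 1 / nat 20 step
    total = roll + mod
    if total >= dc + 10:
        d = 3  # critical success
    elif total >= dc:
        d = 2  # success
    elif total > dc - 10:
        d = 1  # failure
    else:
        d = 0  # critical failure
    if roll == 20:
        d = min(3, d + 1)
    if roll == 1:
        d = max(0, d - 1)
    return d

mod, dc = 9, 14  # hypothetical numbers
changed = [r for r in range(1, 21)
           if degree(r, mod, dc) != degree(r, mod, dc + 1)]
print("d20 faces whose outcome changes under a +1 DC:", changed)

With these particular numbers the answer is faces 5 and 15 - one at the hit boundary and one at the crit boundary, i.e. 10% of the d20, squarely in the 10-15% range quoted above.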


3 people marked this as a favorite.
Bardic Dave wrote:
I think you’re misreading my post; I’m not suggesting that variance can help you determine the relative value of increased DPR vs improved decpetion. I’m saying that if variance can show that within a projected number of sessions a +1 bonus will not be statistically significant, you might feel more inclined to invest that +1 based solely on personal preference. This isn’t a non-sequitur, it’s my entire point.

Very easy to calculate. You need 22 d20 rolls for a +1 to be statistically significant 90% of the time, and 11 for a +2.

If it's an attack roll, and we consider 2 attacks per round and 4 rounds of combat, you have it statistically significant after 3 fights with 90% chance. So, just one session.

All the variance calculations in this thread are just plain wrong. The 5000 rolls are just a maths mistake. A +1 is very quickly significant.
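
A minimal Python check of those two numbers, assuming each +1 matters on 2 of the 20 faces (one at the hit threshold, one at the crit threshold):

# chance of seeing the bonus matter at least once in n d20 rolls
def p_seen(bonus, n):
    return 1 - (1 - 2 * bonus / 20) ** n

print(f"+1 over 22 rolls: {p_seen(1, 22):.1%}")   # ~90.2%
print(f"+2 over 11 rolls: {p_seen(2, 11):.1%}")   # ~91.4%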


SuperBidi wrote:
Bardic Dave wrote:
I think you’re misreading my post; I’m not suggesting that variance can help you determine the relative value of increased DPR vs improved decpetion. I’m saying that if variance can show that within a projected number of sessions a +1 bonus will not be statistically significant, you might feel more inclined to invest that +1 based solely on personal preference. This isn’t a non-sequitur, it’s my entire point.

Very easy to calculate. You need 22 d20 rolls for a +1 to be statistically significant 90% of the time, and 11 for a +2.

If it's an attack roll, and we consider 2 attacks per round and 4 rounds of combat, you have it statistically significant after 3 fights with 90% chance. So, just one session.

All the variance calculations in this thread are just plain wrong. The 5000 rolls are just a maths mistake. A +1 is very quickly significant.

By which you mean a +1 will yield a higher average result than a +0 over 22 rolls 90% of the time, correct? I think that’s pretty conclusive.

Thanks for doing the math.


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber

You are talking about the confidence in your average being higher, which is a different thing.

We are talking about how the worst half of the people with the +1 get the same number range as the better half of the people without it. That is the variation of concern - that variation says you need the +3 before you can say you will always do better regardless of luck.

But even if you want to talk about confidence in the average, then what you are actually saying is that the odds are high you lost the encounter, because it took 22 rolls to stabilize your average. When the best you can do is half that number of rolls in an encounter, that means some of those fights you lose and some you win, because you cannot obtain the average within the span of an encounter.


Pathfinder Lost Omens Subscriber
krazmuze wrote:

You are talking about the confidence in your average being higher, which is a different thing.

We are talking about how the worst half of the people with the +1 get the same number range as the better half of the people without it. That is the variation of concern - that variation says you need the +3 before you can say you will always do better regardless of luck.

But even if you want to talk about confidence in the average, then what you are actually saying is that the odds are high you lost the encounter, because it took 22 rolls to stabilize your average. When the best you can do is half that number of rolls in an encounter, that means some of those fights you lose and some you win, because you cannot obtain the average.

Look, I keep telling you: you can't plan for that, and thus it shouldn't be a concern when building your character.


krazmuze wrote:

This is not about adding modifiers to your past rolls, it is about comparing the possibility of everyone's future rolls....

However, if we look at AC10, then with all those crits doubling that fixed damage from +1 to +2, the +1 is always going to do better. In 5e this would not be the case, because only a nat 20 crits and you do not double modifiers. This is why PF2e is more deadly: the bosses can reliably crit you to death.

I think if you made this same argument, but used it to argue that people overvalue the "Deadly" weapon trait, you'd get a lot more people agreeing with you.


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber
Bandw2 wrote:


Look, I keep telling you: you can't plan for that, and thus it shouldn't be a concern when building your character.

But it most certainly is a concern when someone is telling me I have to rebuild away from the flavor I liked over a +1 - that my character is bad because I have 16 STR rather than 18 STR. When I know very well that the stats say I can leave that table, the charop player can build the new PC to replace me and run it themselves - and yet they are likely to do just as poorly as I was.

You can plan for this in charop by finding those better options that improve the variation such that luck no longer matters.

What I am doing is math that quantizes your odds into encounters, because that is the win/loss breakpoint. It does not matter that you can roll the d20 a thousand times and achieve a stable d20 average over the campaign. What matters is that you rolled it ten times per encounter; thus each encounter cannot achieve a stable average hit/damage. So what we are doing is asking: OK, how bad is that unreliability? Is there an option that is demonstrably reliable enough that my bad luck is still as good as someone else's good luck? Find the option where your bad luck will always win regardless.


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber
Strill wrote:
I think if you made this same argument, but used it to argue that people overvalue the "Deadly" weapon trait, you'd get a lot more people agreeing with you.

Umm, in my other thread, which I broke off from this thread in an attempt to let the OP have his discussion of utility vs. DPR - I did that very thing using histogram analysis.

The conclusion is your GM loves deadly, because you cannot do anything with it to them while they slice you in half because of deadly. It is a very lopsided feature, only of use to PCs in heroic fantasy games where you just beat up endless waves of minions - otherwise you are the minions that your GM can have their way with. Relying on DPR you would never predict this, because in all cases deadly has better DPR.


Bandw2 wrote:
krazmuze wrote:

You are talking about the confidence in your average being higher, which is a different thing....

Look, I keep telling you: you can't plan for that, and thus it shouldn't be a concern when building your character.

You're correct, 1d6+1 is always better than 1d6 for a given individual, but Krazmuze does have a point. He's just terrible at picking examples. Let me play Devil's advocate.

Let's say you have a choice between +3 damage and +1d6 damage. The +1d6 is higher on average (3.5), but it comes with a risk that you'll deal less than 3.5 damage in the short term. Short-term performance is important because some hits are more important than others. For example, if an enemy gets debuffed, you want to make sure that you can capitalize on that moment of vulnerability while it lasts. Those hits are therefore more important than other hits you might make. If you make choices that create variance, rather than consistency, you introduce more random chance into your performance. This means you've created the chance of flubbing an important hit on a debuffed enemy, in exchange for the chance that you'll get a really good hit later, which may or may not be as important, and which you cannot control. When you choose options that provide consistency, you therefore ensure that your important hits have the best chance of landing with good damage.

Beyond the ability to more reliably capitalize on opportunities, consistency also allows you to better plan for future turns. A reliable character can plan combos that chain into one another, with a high expectation that the combo as a whole will work, whereas a more variable character might do exceptionally well on one part of the combo, and fail another part, causing the combo as a whole to fail.

Does that make more sense?
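
To put one number on that consistency trade-off, a quick enumeration (Python) of the single-hit case in the example:

from fractions import Fraction

# one hit: a flat +3 rider vs. a +1d6 rider
worse = Fraction(sum(1 for d in range(1, 7) if d < 3), 6)    # rolled 1-2
equal = Fraction(sum(1 for d in range(1, 7) if d == 3), 6)
better = Fraction(sum(1 for d in range(1, 7) if d > 3), 6)   # rolled 4-6
print(f"1d6 vs. flat +3: worse {worse}, equal {equal}, better {better}")

So despite the higher average, a third of your hits land weaker than the flat bonus would have - and one of those could be the hit on the debuffed enemy.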


Pathfinder Lost Omens Subscriber
krazmuze wrote:
Bandw2 wrote:


Look, I keep telling you: you can't plan for that, and thus it shouldn't be a concern when building your character.

But it most certainly is a concern when someone is telling me I have to rebuild away from the flavor I liked over a +1 - that my character is bad because I have 16 STR rather than 18 STR. When I know very well that the stats say I can leave that table, the charop player can build the new PC to replace me and run it themselves - and yet they are likely to do just as poorly as I was.

You can plan for this in charop by finding those better options that improve the variation such that luck no longer matters.

What I am doing is math that quantizes your odds into encounters, because that is the win/loss breakpoint. It does not matter that you can roll the d20 a thousand times and achieve a stable d20 average over the campaign. What matters is that you rolled it ten times per encounter; thus each encounter cannot achieve a stable average hit/damage. So what we are doing is asking: OK, how bad is that unreliability? Is there an option that is demonstrably reliable enough that my bad luck is still as good as someone else's good luck? Find the option where your bad luck will always win regardless.

No, just literally tell them that the loss of DPR isn't much of a worry to you, and you'd like to be a bit better at doing whatever else you put your stat into...

The rest of this smells like the gambler's fallacy. A +1 is still a +1 to every roll you make; if you want that +1, you're taking it for the +1, not for a 90% chance of it being useful over 22 rolls...


So this entire debate is about choosing the flavorful option...

In which case DPR doesn't matter; it's a guideline (as has been stated), and people already agree that DPR calculations need to be more inclusive of other stats.

Variance wouldn't really help this. A possibly better way (with 0 math) is just saying "Results may vary, these are theoretical average values".

***********
Also, yes, that makes sense, but that doesn't sound like his argument.

His argument sounds more like "add variance measurement to show that the +1 isn't that important". His motive appears to me to be wanting viewers/readers to put more value on flavorful options when the static modifier isn't very noticeable.

Which is not a bad thing to want, just a bad example, as you said.

************
Most of the listed PF1e feats mentioned (not Deadly Aim) aren't cookie-cutter options. They are literally needed to take later feats, and that's because of how PF1e/3.5 is designed. Heck, Power Attack and Deadly Aim aren't even that good for most builds due to the attack roll penalty. Precise Shot and Point Blank Shot are not optional for any ranged build.

The Barbarian happens to be one of the best users of Power Attack, given how rage used to give a Str bonus (+ to hit), and the class had feats to further benefit from it, e.g. Reckless Rage and Raging Brutality.


Pathfinder Lost Omens Subscriber
Strill wrote:
Bandw2 wrote:
krazmuze wrote:

You are talking about the confidence in your average being higher, which is a different thing....

Look, I keep telling you: you can't plan for that, and thus it shouldn't be a concern when building your character.

You're correct, 1d6+1 is always better than 1d6 for a given individual, but Krazmuze does have a point. He's just terrible at picking examples. Let me play Devil's advocate.

Let's say you have a choice between +3 damage and +1d6 damage. The +1d6 is higher on average (3.5), but it comes with a risk that you'll deal less than 3.5 damage in the short term. Short-term performance is important because some hits are more important than others. For example, if an enemy gets debuffed, you want to make sure that you can capitalize on that moment of vulnerability while it lasts. Those hits are therefore more important than other hits you might make. If you make choices that create variance, rather than consistency, you introduce more random chance into your performance. This means you've created the chance of flubbing an important hit on a debuffed enemy, in exchange for the chance that you'll get a really good hit later, which may or may not be as important, and which you cannot control. When you choose options that provide consistency, you therefore ensure that your important hits have the best chance of landing with...

I've been told repeatedly that it isn't the reliability of the average that's being talked about. It's about how, in any given fight, you may roll only 1s on 1d6+1 but could roll a 4 on a 1d6.

I've constantly brought up DPR, rounds to kill, and overkill. Everything you brought up is considered in those calculations. When DPR is close, you have to see which options are consistent and which are prone to overkill to determine the best likely choice, but to get there DPR is a much easier and quicker option than looking over and creating variance graphs.


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber

And how do you know which dice option is consistent over the short term of the fight? The thing to know is that over the fight's few rounds, each of a few actions, your d20 varies and your d6 varies in average results. It is not about calculating the theoretical minimum, median, or maximum damage of your d6 vs. d6+1 or 3d2. It is not about how the average settles in over hundreds of rolls.

Variance considers the fact that it is nearly impossible to consistently hit all strikes, then roll all ones for damage. That is an outlier that is unlikely to occur in your lifetime. It is also impossible to always hit the DPR.

But variance can tell you that a +1 on the strike and a +1 on your damage, over those few rolls, is likely to leave half the people making those rolls not seeing the benefit of the modifier for an encounter unless the AC is easy. It can tell you to go for a +3 if you want to always win.

Your encounter results WILL vary, so simulating over the level across many players gives you an expectation of whether your build is always better, sometimes better, or rarely better at winning encounters - but even that varies depending on the DC, especially because of critical success.

The only way to guarantee performance is to use average damage for fights and never roll attacks. That is why double damage on crits is so brutal - the d20 math says they will always crit low AC, and doubling the damage bonus modifier makes results less variable.


3 people marked this as a favorite.
SuperBidi wrote:

Very easy to calculate. You need 22 d20 rolls for a +1 to be statistically significant 90% of the time, and 11 for a +2.

If it's an attack roll, and we consider 2 attacks per round and 4 rounds of combat, you have it statistically significant after 3 fights with 90% chance. So, just one session.

All the variance calculations in this thread are just plain wrong. The 5000 rolls are just a maths mistake. A +1 is very quickly significant.

There are two different questions being asked, which have different statistical answers.

You're correct in answering the question: after having made X d20 rolls, when will I have seen a +1 bonus make at least one difference 90% of the time?

We assume critical success and critical failure are in play, so a +1 bonus matters on 2 values of the d20 (say, turning what would otherwise be a 9 into a hit, or what would otherwise be a normal hit on a 19 into a critical, against a DC 10). There's an 18/20 = 0.9 chance that you don't roll those two numbers. 0.9^22 = 0.098, so there's a 9.8% chance you rolled no 9's or 19's. 1 - 0.9^22 = 0.902, so there's over a 90% chance you'll have rolled those two numbers once or more in those 22 rolls.

However, the question that other people are asking is: given the number of successes I observe, can I identify what my bonus X is in 1d20+X? How many rolls would I need to make before an outside scientist - knowing nothing of my character sheet, only the GM's descriptions of successes and failures and the DC you're rolling against - would be able to say what X is with 90% confidence?

If I tell you the DC the player rolled against was 15, then had 11 successes on 22 rolls, can you tell me with 90% confidence what X is? The answer is no.
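
A back-of-the-envelope version of that second question in Python, using the binomial mean and standard deviation of the success count (and ignoring degrees of success for simplicity):

from math import sqrt

# can 22 rolls against DC 15 reveal whether the sheet says +5 or +6?
n = 22
for bonus, hit_faces in ((5, 11), (6, 12)):  # d20 faces that succeed vs. DC 15
    p = hit_faces / 20
    mu, sd = n * p, sqrt(n * p * (1 - p))
    print(f"+{bonus}: expect {mu:.1f} successes, sd {sd:.1f}")
# ~12.1 +/- 2.3 vs. ~13.2 +/- 2.3: far too much overlap to call at n=22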

Or in other words, if someone looked back on your session, or level, or entire campaign, and didn't know what was written on your character sheet, could they tell what your bonus +X was with 90% confidence? For some people, if you couldn't be confident whether you had a +5 or +6 bonus in that sense, then perhaps it shouldn't matter significantly to them.

Answering this question requires looking at the probability distributions - or, if they are normal distributions (in the mathematical sense), the means and standard deviations are sufficient.

These are two very different questions to ask of statistics. Neither calculation is wrong, but they answer different things. One answers whether you will be able to tell it happened (I had a +1, since I rolled a 9 and saw the difference), and the other is perhaps more philosophical, asking more along the lines of: does my personal luck matter more than a +1 bonus? In the short term no; in the long term, possibly or probably, but that number is closer to 400 than 22.

Some people are saying of course DPR tells you what is better (which is true), while other people are essentially saying variance tells you whether that "better" is meaningful to them or not, because they wouldn't be able to tell statistically that they had a difference written on their sheet (which is also true).

And that's not even including corner cases like picks with fatal and katanas with deadly.

So I claim everyone is correct in their answer, but asking different questions. :)


1 person marked this as a favorite.
Hiruma Kai wrote:
There are two different questions being asked, which have different statistical answers.

That's it. And that's why I say Krazmuze is not "calculating the proper thing". When you calculate your DPR, you want to assess your own build efficiency. You don't want to compare it to anyone else's.

Also, this story of "someone else looking at your efficiency and determining your DPR" is kind of flawed. Around a table, everyone sees what you roll. If you always roll over 15, people won't think your dagger build is overpowered; everyone will think you're plain lucky. On the other hand, if you never roll over 5, people won't blame your build. It's very easy to assess the efficiency of a character when you see what they roll.


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber

Here is a new simulation method: a duel between a STR +4 vs. a +3 longsword (for both ATK and DMG bonus; standard lvl2, MAP, no feats). Why a duel? The game is a contest; it is you vs. the GM. What matters is: does the +1 improve your odds of winning, or is it buried in the noise of the randomness of uniform dice?

I am doing a 'your turn, my turn' simulation of three-strike turns, fought to the death (within 100 rounds).

Previously I had looked at 52 rounds' worth of total damage, without regard to rounds being exit conditions for encounters - now encounter length is variable, to the death.

This is the STR +4 win:tie:loss record over a million encounters. The tie goes to whoever went first.

AC WIN% TIE% LOSS%
25 62 04 34
20 59 10 31
15 53 23 24
10 41 42 16
5 37 47 15

So at very high AC the +1 approaches 2:1 odds to win; at moderate and low AC it depends on initiative - it comes down to who crits first.

This is highly variable on an individual basis; if I instead sample a baker's dozen of encounters (one level's worth), the trend is obscured.

AC WIN% TIE% LOSS%
25 54 0 46
20 54 0 46
15 54 23 23
10 38 31 31
5 54 23 23

That is barely over a coin flip for the individual result, unless it is low AC and you won initiative.

AC WIN% TIE% LOSS%
25 77 0 23
20 69 15 15
15 23 62 15
10 61 38 0
5 23 77 0

Well, I have no idea who will win in this random sample.

So the variation across a million levels needs to be calculated, but random sampling seems to show it is really high.
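
For anyone who wants to poke at this themselves, here is a minimal sketch of that kind of duel in Python. The 30 HP pool and the skipped initiative roll are my own assumptions (the exact lvl2 numbers aren't spelled out above), so expect the same shape of results rather than identical percentages:

import random

def strike(bonus, ac, map_penalty):
    # one PF2e-style Strike: d20 + bonus vs. AC; beating AC by 10+ (or a
    # nat 20 upgrade) is a crit, which doubles the whole damage roll
    roll = random.randint(1, 20)
    total = roll + bonus - map_penalty
    degree = 2 if total >= ac + 10 else 1 if total >= ac else 0
    if roll == 20:
        degree = min(2, degree + 1)
    if roll == 1:
        degree = max(0, degree - 1)
    return degree * (random.randint(1, 8) + bonus)  # longsword d8 + STR

def duel(bonus_a, bonus_b, ac, hp=30, max_rounds=100):
    hp_a = hp_b = hp
    for _ in range(max_rounds):
        for pen in (0, 5, 10):  # three Strikes at MAP 0/-5/-10
            hp_b -= strike(bonus_a, ac, pen)
        if hp_b <= 0:
            return "win"
        for pen in (0, 5, 10):
            hp_a -= strike(bonus_b, ac, pen)
        if hp_a <= 0:
            return "loss"
    return "tie"

trials = 100_000
results = [duel(4, 3, ac=20) for _ in range(trials)]
for outcome in ("win", "tie", "loss"):
    print(outcome, f"{results.count(outcome) / trials:.1%}")

Note that in this sketch the +4 side always acts first, so "tie" only means neither side dropped within 100 rounds; adding an initiative roll, as discussed below, is a straightforward extension.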


1 person marked this as a favorite.

So a +1 to hit and damage doesn't guarantee you win but it increases your chances of winning.


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber

Well, this really shows that duels are a bad idea; you need the action economy of focus fire at high AC to beat even those equal to you, and at low AC just to increase the odds the NPC goes last.

I was surprised how quickly you can lose because of initiative or crits, as well as how long it drags on at high AC. I was getting ties at high AC and had to increase from 10 rounds to 100 rounds before calling it a draw.

Basically it exposes the gambler's fallacy: an encounter being a short run of dice makes it highly variable luck. It does not matter that your later die rolls will balance out your luck when you have already lost a bunch of encounters. Just like gambling, the best you can do in that case is stem your losses. The short runs of bad luck WILL worsen your win/loss record on an individual basis.

The trend of the +1 to hit giving you an edge at high AC, but initiative winning at low AC, is only observable across many players. Again, the gambler's fallacy - the only one that wins in Vegas is the house, because they play all the players. You cannot play the odds; they can.

So I need to figure out how to do deviation analysis so I can better represent the individual.

I wonder what modifier it takes to tilt the odds. I would suspect with d4+4 vs. d4+3 the modifier carries more weight, and it is very unlikely to overkill in one round - so there should be fewer ties. I would suspect d8+4 vs. d8+0 would really tilt the odds - simply because the +0 has no constant damage on crits. With a +4, as long as all three strikes crit it does not matter what the damage dice do; the constant damage alone killed your opponent.

A Dex ranger may not make sense; it needs to be Dex and Str.


3 people marked this as a favorite.
Pathfinder Lost Omens Subscriber
krazmuze wrote:


Basically it exposes the gambler's fallacy: an encounter being a short run of dice makes it highly variable luck. It does not matter that your later die rolls will balance out your luck when you have already lost a bunch of encounters. Just like gambling, the best you can do in that case is stem your losses. The short runs of bad luck WILL worsen your win/loss record on an individual basis.

Your method is relying on the gambler's fallacy for protection - that statistically you're still safe for X number of rounds - when a +1 could save you on the first round. The gambler's fallacy is relying on statistics to ensure a win.

A +1 on any given roll has a 5% chance to change the outcome on a simple flat check. On anything with basic saves or strikes, it changes what would occur 10-15% of the time. The gambler's fallacy is when you try to predict what will happen over the course of a game. A +1 has a 10-15% chance every roll to make a difference.


Bandw2 wrote:
A +1 on any given roll has a 5% chance to change the outcome on a simple flat check. On anything with basic saves or strikes, it changes what would occur 10-15% of the time. The gambler's fallacy is when you try to predict what will happen over the course of a game. A +1 has a 10-15% chance every roll to make a difference.

That's the thing, though, that I think I've come to understand (correct me if I'm wrong, krazmuze). Their method does not dispute that the +1 makes a difference to the roll's outcome 5-15% of the time, but instead asks: how often does that difference actually matter to the fight's outcome? The point being made is that even in those fights where a +1 to hit made a difference and resulted in a hit where there would have been a miss, it doesn't always make a difference to the overall outcome. The fact that you got that extra hit doesn't matter in those fights where you were already too severely behind to come back from it, for example.

To put it another way, we could say, for example: "the +1 affected 10% of the rolls, and of those, only 30% actually resulted in a win where it would have been a loss. Thus the +1 actually only affects 3% of fights and isn't as useful as it looks on the surface." I've just made up simple numbers as an illustration, but hopefully that helps my point.


1 person marked this as a favorite.
Pathfinder Lost Omens, Rulebook Subscriber
BellyBeard wrote:
Bandw2 wrote:
A +1 on any given roll has a 5% chance to change the outcome on a simple flat check. On anything with basic saves or strikes, it changes what would occur 10-15% of the time. The gambler's fallacy is when you try to predict what will happen over the course of a game. A +1 has a 10-15% chance every roll to make a difference.

That's the thing, though, that I think I've come to understand (correct me if I'm wrong, krazmuze). Their method does not dispute that the +1 makes a difference to the roll's outcome 5-15% of the time, but instead asks: how often does that difference actually matter to the fight's outcome? The point being made is that even in those fights where a +1 to hit made a difference and resulted in a hit where there would have been a miss, it doesn't always make a difference to the overall outcome. The fact that you got that extra hit doesn't matter in those fights where you were already too severely behind to come back from it, for example.

To put it another way, we could say, for example: "the +1 affected 10% of the rolls, and of those, only 30% actually resulted in a win where it would have been a loss. Thus the +1 actually only affects 3% of fights and isn't as useful as it looks on the surface." I've just made up simple numbers as an illustration, but hopefully that helps my point.

That ignores almost every other consequence of combat. Yeah, it might not affect win or loss, but it will affect how many resources are consumed most of the time, whether that's consumables, time, or spell slots.


1 person marked this as a favorite.
Pathfinder Rulebook, Starfinder Roleplaying Game Subscriber

And it's an increased level of abstraction that takes us further and further away from the actual use for DPR calculations: helping make a decision.

Like... at some point you're going to be arguing that instead of buying a +1 sword, you should buy a horse, because it will get you to the objective faster than the time the +1 sword would shave off of combat.

DPR calculations lay out which options are mathematically superior to others, given a set of assumptions. Like all practical mathematics, you have to make assumptions, and keep those in mind when making a decision.

One of the assumptions of DPR comparisons is that you're only concerned about expected damage.

If you want to change that assumption to make it harder to make decisions, you're going to have to start building a complex multi-criteria decision making scheme, with varying weights on separate measurable outcomes.


It's really not as bad as you make it out to be. I expect it will boil down to something like "a change of 10% DPR is not noticeable in the short term, and a change of 3% DPR is not noticeable in the long term." That helps decision making, it doesn't make it harder. You could do a similar analysis for increases to your AC. Now you have more information when you make a short term choice like "should I increase DPR or my AC?" You can use this new metric, the "usefulness threshold" or whatever, to tell you that the increase to your DPR is too small to make a noticeable difference, but the increase to AC is above the threshold and should be counted as more valuable. In this very contrived example, maybe the final result is that you decide shield block is a better third action than a third attack for most characters (which we already know, but extrapolate the method to other comparisons).

The point I'm trying to make is that DPR is useful, but that doesn't preclude people from exploring other measurements either. I do wish the OP had a less combative title for a thread about character metrics, though.


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber
BellyBeard wrote:


To put it another way, we could say, for example: "the +1 affected 10% of the rolls, and of those, only 30% actually resulted in a win where it would have been a loss. Thus the +1 actually only affects 3% of fights and isn't as useful as it looks on the surface." I've just made up simple numbers as an illustration, but hopefully that helps my point.

Basically, the circumstances of the win are more meaningful than the reliability of the die.

Looking at the raw data, I see this a lot with crits - because crits use double damage they have a wide variance - and they often did not make any difference in winning the fight, because they often look like a regular die roll. It only matters where AC is low, so that you crit a lot and have a greater chance of hitting that max double damage - and you are really cooked if minimum double damage alone is enough to one-shot. This happens a lot where the boss can triple-crit you all the time, yet you are just scratching them.

I am going to add initiative simulations now that I realize how badly critting first can change who was winning. It is easier to understand win/loss with a variance than win/tie/loss overlaps.

The more important use of this technique is finding where the options start to tilt things so you do rack up more wins. I do not think it can answer anything for close options, regardless of this or that being fractionally better DPR.


3 people marked this as a favorite.
Hiruma Kai wrote:
There are two different questions being asked, which have different statistical answers.***So I claim everyone is correct in their answer, but asking different questions. :)

That's a nice story. I give an A+ for trying to bring everyone together. I see it differently. I see the DPR people trying to make definitive statements about what is "best" and krazmuze saying, you're not going to roll the dice long enough to actually leverage that benefit.

Your observation here:

HK wrote:
If I tell you the DC the player rolled against was 15, then had 11 successes on 22 rolls, can you tell me with 90% confidence what X is? The answer is no.

...is spot on. If we let the DPR loyalists play a character who had the +1 and then a character who did not have the +1, none of them would be able to tell the difference experientially or in long-term outcomes.

Quote:
In the short term no, in the long term, possibly or probably, but that number is closer to 400 than 22.

But "possibly" is only possibly if you're actually tracking the numbers. The average person would not be aware after 400 rolls (provided they didn't see the dice and know the target numbers).

The entire point of these DPR dives is for people to try and come away with definitive answers. They are looking for something that they can hang their hat on and feel confident that they made the right decision in going 18 STR and 12 CON and not 16 STR and 14 CON.

I've interpreted krazmuze's posts as telling people that if you're talking about a +1, you're not going to notice a difference in the normal course of play. I believe his objective is to counter the gambler's fallacy which dominates the DPR mindset, i.e. that the +1 isn't nearly as valuable as you think it is given the circumstances under which the game is played.

So I have to disagree that they are both right, because the people doing and propounding the outcomes of typical DPR analyses aren't doing it for academic reasons, they want to build the "best" character they can, and krazmuze is pointing out that the conventional logic used to decode DPR spreadsheets is built on a fallacy.

But you know this. Apologies for not helping you build the facade; I just think more people would benefit from understanding what krazmuze is saying.


Pathfinder Lost Omens Subscriber
BellyBeard wrote:

It's really not as bad as you make it out to be. I expect it will boil down to something like "a change of 10% DPR is not noticeable in the short term, and a change of 3% DPR is not noticeable in the long term." That helps decision-making; it doesn't make it harder. You could do a similar analysis for increases to your AC. Now you have more information when you make a short-term choice like "should I increase my DPR or my AC?" You can use this new metric, the "usefulness threshold" or whatever, to tell you that the increase to your DPR is too small to make a noticeable difference, but the increase to AC is above the threshold and should be counted as more valuable. In this very contrived example, maybe the final result is that you decide shield block is a better third action than a third attack for most characters (which we already know, but extrapolate the method to other comparisons).

The point I'm trying to make is that DPR is useful, but that doesn't preclude people from exploring other measurements either. I do wish the OP had chosen a less combative title for a thread about character metrics, though.

This is all I've been saying before: ROUNDS TO KILL. If a +1 only makes your rounds to kill go from 2.2 to 2.1, who cares? A +1 somewhere else is fine. Rounds to kill still require DPR calculations, though.
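
A minimal rounds-to-kill sketch under assumed numbers (one strike per round, d12+4 damage, a hypothetical AC 14 and a 25 HP target; crit on beating the AC by 10, nat 1/20 steps ignored for brevity):

```python
import math

def strike_dpr(bonus, ac, avg_dmg=10.5):
    # Hit on meeting the AC, crit (double damage) on beating it by 10;
    # nat 1/20 adjustments ignored for brevity.
    p_hit  = max(0.0, min(1.0, (21 - (ac - bonus)) / 20))
    p_crit = max(0.0, min(1.0, (21 - (ac + 10 - bonus)) / 20))
    return (p_hit - p_crit) * avg_dmg + p_crit * 2 * avg_dmg

for bonus in (7, 8):
    rtk = 25 / strike_dpr(bonus, ac=14)
    print(f"+{bonus}: {rtk:.1f} average rounds to drop 25 HP "
          f"(ceil: {math.ceil(rtk)})")
```

Both bonuses land on 3 whole rounds here (2.6 vs. 2.4 on average), which is exactly the "who cares" case.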


1 person marked this as a favorite.
N N 959 wrote:
Hiruma Kai wrote:
There are two different questions being asked, which have different statistical answers.***So I claim everyone is correct in their answer, but asking different questions. :)
That's a nice story. I give an A+ for trying to bring everyone together. I see it differently. I see the DPR people trying to make definitive statements about what is "best" and krazmuze saying, you're not going to roll the dice long enough to actually leverage that benefit.

That's not true, and you're misunderstanding what he said. It's possible that you could leverage the benefit of a +1 on the very first roll. He's not saying that the +1 won't grant a benefit. He's saying you won't roll the dice long enough in order for an ignorant outside observer to measure the benefit to a high degree of accuracy. In other words, it won't satisfy a low-attention-span player's need for instant tangible gratification, because random variance is still a very large factor. If you're not one of those low-attention-span players, however, it's irrelevant, because the +1 is still definitively better.

Quote:

But "possibly" is only possibly if you're actually tracking the numbers. The average person would not be aware after 400 rolls (provided they didn't see the dice and know the target numbers).

The entire point of these DPR dives is for people to try and come away with definitive answers. They are looking for something that they can hang their hat on and feel confident that they made the right decision in going 18 STR and 12 CON and not 16 STR and 14 CON.

I've interpreted krazmuze's posts as telling people that if you're talking about a +1, you're not going to notice a difference in the normal course of play. I believe his objective is to counter the gambler's fallacy which dominates the DPR mindset, i.e. that the +1 isn't nearly as valuable as you think it is given the circumstances under which the game is played.

So I have to disagree that they are both right, because the people doing and propounding the outcomes of typical DPR analyses aren't doing it for academic reasons, they want to...

You don't have to observe the numbers for the +1 to improve your performance. Whether you know you were right or wrong doesn't change whether you were right or wrong.

Also, it's not the gambler's fallacy to say that your character's build choices are made on the scale of a campaign, rather than on the scale of a single fight, or a single level.


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber

The problem with DPR is that you do not accumulate damage beyond an encounter in the game, but that is exactly what an average is: you accumulate damage, then divide by N. In the game the accumulation resets every time you start a new encounter, and there are simply not enough rolls in an encounter for you to declare a reliable average within that encounter. To achieve a reliable average you have to accumulate the good rolls in with the bad so that they wash each other out, but that is not what happens in the game. Instead, what happens is that you have a good encounter and a bad encounter; instead of your higher average reliably translating into more wins, it just makes your win/loss 50/50.

Now, the reason the +1's advantage shows up more at high AC is that everybody is badly missing, so the fight lasts 50 rounds. The fact that the +1 has better DPR gets exposed because there ARE enough rolls that you reliably achieve the average: the good rounds wash out the bad rounds and get you the win. But it is not a realistic simulation of the game to go that long...

Now of course, if you were obsessed with tracking your personal DPR, you could log all your rounds into a tracker and average them at the end of the campaign. But what I am saying is: also record your average in each encounter, and you will see that it is the local minima/maxima that cause you to win and lose. More damage in level 7 encounter 2 washing out the bad damage I did in level 6 encounter 4 has absolutely no bearing on how the game plays. What matters is that you lost level 6 encounter 4 because you rolled badly. That is it.
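
A sketch of that per-encounter bookkeeping, with made-up numbers (6 attack rolls per encounter against AC 16, d6+4 damage, 50 encounters): the campaign averages separate cleanly while the encounter-by-encounter record stays close to a coin flip.

```python
import random

def encounter_damage(bonus, swings=6, ac=16):
    # Total damage over one encounter's worth of d20 attack rolls (d6+4 on a hit).
    return sum(random.randint(1, 6) + 4
               for _ in range(swings)
               if random.randint(1, 20) + bonus >= ac)

plus_one = [encounter_damage(8) for _ in range(50)]  # a level's worth of fights
baseline = [encounter_damage(7) for _ in range(50)]

print("campaign averages:", sum(plus_one) / 50, "vs", sum(baseline) / 50)
wins = sum(a > b for a, b in zip(plus_one, baseline))
print(f"encounter-by-encounter record: {wins}-{50 - wins}")
```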


1 person marked this as a favorite.
krazmuze wrote:

The problem with DPR is that you do not accumulate damage beyond an encounter in the game, but that is exactly what an average is: you accumulate damage, then divide by N. In the game the accumulation resets every time you start a new encounter, and there are simply not enough rolls in an encounter for you to declare a reliable average within that encounter. To achieve a reliable average you have to accumulate the good rolls in with the bad so that they wash each other out, but that is not what happens in the game. Instead, what happens is that you have a good encounter and a bad encounter; instead of your higher average reliably translating into more wins, it just makes your win/loss 50/50.

Why does the damage accumulation reset between encounters? Are you talking about overkill?

I could understand an argument that resource conservation is the most important factor in success, and therefore poor rolls early in a fight, which fail to take out enemies, tend to lead to a lot more HP loss than poor rolls later in a fight. In that sense, the contribution of DPR to resource conservation becomes less and less as more enemies are eliminated, and the risk dwindles, until the end of the fight, where DPR no longer matters.

However, if we're analyzing entire encounters, we need to consider the whole party, at which point we're rolling possibly 4x as many dice, which greatly reduces variance.
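
A one-liner on why party-level analysis tightens things up, under the usual independence assumption: relative spread shrinks like 1/sqrt(n), so 4x the dice halves it.

```python
import math

def relative_spread(n_dice, die_var=35 / 12, die_mean=3.5):
    # The sd of a sum of dice grows like sqrt(n) while the mean grows like n,
    # so the spread relative to the mean shrinks like 1/sqrt(n).
    return math.sqrt(n_dice * die_var) / (n_dice * die_mean)

print(f"one PC,  6 d6 rolls: {relative_spread(6):.0%} relative spread")
print(f"party,  24 d6 rolls: {relative_spread(24):.0%} relative spread")
```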


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber

I agree that you should include the party, as the game is won with focus fire; it would increase the number of strikes and thus the reliability. But then you need a party builder to truly do it, because the game is not optimized for four fighters; it assumes you will run the thief, the magic-user, the cleric and the fighter. I could easily simulate 4v1 fighters and see if that is enough to get reliable averages showing the +1 impact, but going in reverse, 1v4 is a bit too complicated, so I cannot really do win/loss.

Damage accumulation resets because you win the encounter when damage exceeds hit points. The game's win/loss is quantized at the encounter level. It matters not that your bad luck corrects itself in the next encounter when you have good luck; all that does is give you a win that is washed out by your loss. The score is now 1 win and 1 loss, not 57 damage then 45 damage, on the +1 player's tally; the +0 player's tally is likewise 1 win and 1 loss, not 48 damage then 37 damage.


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber

OK, I simulated a 1v1 with 4x the HP and actions. This is not the same as a 4v4 party dynamic where each side whittles the other down until it is a 1v1.

But it should test whether 4x the number of actions on each side is enough to observe the +1 overcoming luck. The winner is whichever side's total damage reaches the other side's total HP first.

For a million simulated players, the +4 vs. +3 record:

AC    WIN% : TIE% : LOSS%
20      72 :   07 :    21
10      68 :   19 :    14

With more actions, the ties resolved in favor of the +4 over the +3. That improves the odds toward 3:1 if we assume ties are split.

So do not solo; focus fire for more reliable actions to leverage the modifier and get the kill.

For a million simulated players, the +4 vs. +2 record:

AC    WIN% : TIE% : LOSS%
20      88 :   07 :    05
10      92 :   06 :    03

It is certainly a good bet, and it explains why the weak/elite adjustment is +/-2.

I still need to sim the deviation bounds across levels, and to break ties with initiative.

For a million simulated players, the +4 vs. +4 record (just to make sure the sim works as expected):

AC    WIN% : TIE% : LOSS%
20      47 :   07 :    47
10      36 :   27 :    36
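
A rough re-creation of that experiment (not the poster's actual code; crits and initiative are ignored here, so the percentages will not match exactly):

```python
import random

def side_damage(bonus, ac, strikes=4):
    # One side's total damage for a round: 'strikes' swings at d6+4 each.
    return sum(random.randint(1, 6) + 4
               for _ in range(strikes)
               if random.randint(1, 20) + bonus >= ac)

def duel(bonus_a, bonus_b, ac=20, hp=120):
    # Simultaneous rounds; both pools emptying in the same round is a tie.
    hp_a = hp_b = hp
    while hp_a > 0 and hp_b > 0:
        hp_b -= side_damage(bonus_a, ac)
        hp_a -= side_damage(bonus_b, ac)
    if hp_a <= 0 and hp_b <= 0:
        return "tie"
    return "win" if hp_b <= 0 else "loss"

results = [duel(4, 3) for _ in range(10_000)]
for outcome in ("win", "tie", "loss"):
    print(f"{outcome}: {results.count(outcome) / len(results):.0%}")
```

Swapping in crit rules and an initiative tiebreaker is straightforward from here.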


tivadar27 wrote:

EDIT: Note that I'm good at overstating things. DPR isn't a useless metric; it's just not a useful metric in a vacuum anymore. See the rest of the post...

PF2 got me thinking quite a bit about our assumptions that PF2 "doesn't work" for "theorycrafting" and "whiteboarding". There have been previous comments in a similar vein, and I want to state flat-out that these are wrong. There are a few things coming into play here:
1. Because the numbers are much tighter in 2e, variance plays a much bigger role.
2. Because of the way the action economy works, looking at "full attack every round" for computing efficacy is no longer the way to go about it (I'm looking at you Impossible Flurry!).
3. Because you can no longer one-shot an enemy in a single round, and mobility has improved, DPR is no longer a useful metric by itself. Defenses *really do* matter.

So there's not much to do about the first item on the list here. Previously you could pretty much guarantee a hit on your first attack via attack bonuses and whatnot. Now the best you can do against a decent opponent is probably closer to 70-75% (with buffs/flanking and legendary proficiency). But the rest is manageable, and I think it's worth looking at new approaches to discussing the efficacy of builds. This is still mathfinder, but I think our math just got a lot harder :-P.

Honestly I don't know what makes the most sense, but some thoughts:
1. Look at combinations of 1/2/3 action sets (some of this is happening).
2. Look for action economy "wins" (things like flurry of blows, sudden charge).
3. In evaluating a character numerically, consider the standard deviation against AC, and how damage deviates from the "average". Maybe include saving throws in these numbers...

I would probably focus on 2-action sequences in general to evaluate a character's capabilities. 3-action setups are good to take note of, but they won't come up nearly as much as full-action sequences did in PF1 because of how easy it is for enemies to get out of position. Take special note if your three-action routine gets around this (like how predator's pounce circumvents the need to Stride separately from attacking in many circumstances). Two actions, because that is what we already expect of spellcasters, and it accommodates any one action you spend not doing your main tricks.
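
A sketch of why the MAP -10 strike contributes so little, using assumed numbers (+9 to hit vs. a hypothetical AC 18, 10.5 average damage, crit on beating the AC by 10, nat 1/20 steps ignored):

```python
def strike_dpr(bonus, ac, avg_dmg=10.5):
    # Crit (double damage) on beating the AC by 10; nat 1/20 steps ignored.
    p_hit  = max(0.0, min(1.0, (21 - (ac - bonus)) / 20))
    p_crit = max(0.0, min(1.0, (21 - (ac + 10 - bonus)) / 20))
    return (p_hit - p_crit) * avg_dmg + p_crit * 2 * avg_dmg

bonus, ac = 9, 18
first_two = sum(strike_dpr(bonus - map_penalty, ac) for map_penalty in (0, 5))
third     = strike_dpr(bonus - 10, ac)
print(f"first two strikes: {first_two:.1f} expected damage")
print(f"third strike at MAP -10 adds only {third:.1f}")
```

Roughly 11 expected damage from the first two strikes against about 1 from the third, which is why the third action is usually better spent elsewhere.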


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber

I simulated all ACs of the 4v4 using initiative as the tiebreaker, though I lowered it to 1,000 players, so win percentages are only good to about 5%.

From AC 0-18, wins for +4 vs. +3 are 75-80%; by AC 28 it degrades to chance. At lower ACs, crits help the +3 players a bit; at higher ACs, only a nat 20 hits, which is not often enough for the modifier to overcome luck since there are no crits.

Between 2 strikes and 3 strikes there is no difference; the MAP -10 action is better spent on utility.

Now consider that the +4 STR players can only have +3 WIS, while the +3 STR players can have +4 WIS: now by AC 28 the +4 side degrades to 5% worse than chance.

Suppose the +4 STR player dumped WIS to -1? From AC 0-18, wins ran from 65-80%; by AC 28, wins dropped to 30%, so better initiative wins those fights.
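
A rough sketch of that AC sweep with an initiative tiebreaker (again not the poster's code: a nat 20 auto-hits here so high-AC duels terminate, crits are ignored, and the +4 STR side rolls initiative at +3 WIS against the other side's +4 WIS, so these numbers will differ from his):

```python
import random

def side_damage(bonus, ac, strikes=4):
    dealt = 0
    for _ in range(strikes):
        roll = random.randint(1, 20)
        if roll == 20 or roll + bonus >= ac:  # nat 20 always hits in this sketch
            dealt += random.randint(1, 6) + 4
    return dealt

def duel(atk_a, atk_b, init_a, init_b, ac, hp=120):
    hp_a = hp_b = hp
    while hp_a > 0 and hp_b > 0:
        hp_b -= side_damage(atk_a, ac)
        hp_a -= side_damage(atk_b, ac)
    if hp_a <= 0 and hp_b <= 0:  # simultaneous drop: initiative breaks the tie
        return random.randint(1, 20) + init_a > random.randint(1, 20) + init_b
    return hp_b <= 0

for ac in (10, 16, 22, 28):
    wins = sum(duel(4, 3, 3, 4, ac) for _ in range(10_000))  # +4 STR, +3 WIS side
    print(f"AC {ac}: the +4 side wins {wins / 10_000:.0%}")
```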


Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber

Oops: in the above posts I only did 4x HP, which led to 4x the rounds, not 4x actions per round. It could be thought of as alternating vs. side initiative. Side initiative is more effective focus fire; alternating initiative is fairer, but harder on the GM to go back and forth.
