# krazmuze's page

Pathfinder Adventure, Adventure Path, Lost Omens, Pathfinder Accessories, Pawns Subscriber. 187 posts. No reviews. No lists. No wishlists.

Favorited by Others

 1 person marked this as a favorite.
Megistone wrote:

I don't understand how you can be arguing around an obvious misunderstanding: dpr is average damage. Most (all?) abilities in PF2 do damage in a range, not a static one.

Thus, doing 80 average damage in a round could mean something like 4d12+54, or 23d6 (more or less).
Conversely, 70 average damage could be 4d12+44, or 20d6.

Probabilities of killing a 140 HP enemy in two rounds:
98.31% with 4d12+54/round (avg: 80)
96.84% with 23d6/round (avg: 80.5)
52.00% with 4d12+44/round (avg: 70)
51.84% with 20d6/round (avg: 70)

So yes, doing 80 dpr is much more reliable.

You cannot calculate the probability of killing if you do not also simulate the probability of being killed. Your DPR is irrelevant once you are downed or have downed your enemy - there is no damage tally being kept; the simulation has an exit condition and resets after each kill. All that matters is who got to whom first.

That is where STR stops being the prime optimization variable, and all the defensive options of the other stats have to be considered as tradeoffs.

When you actually do the tradeoff analysis for rounds to kill (or be killed), you find the difference between options is much less than the variance between tables - literally +/-5% for options against +/-50% for table variation.
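Megistone's two-round kill chances are easy to reproduce with a quick Monte Carlo sketch (Python; the helper names here are mine, purely illustrative):

```python
import random

def roll(n_dice, sides, bonus=0):
    """Roll e.g. 4d12+54 as roll(4, 12, 54)."""
    return sum(random.randint(1, sides) for _ in range(n_dice)) + bonus

def p_kill(hp, n_dice, sides, bonus, rounds=2, trials=200_000):
    """Estimated chance that `rounds` rounds of damage total at least `hp`."""
    hits = sum(
        sum(roll(n_dice, sides, bonus) for _ in range(rounds)) >= hp
        for _ in range(trials)
    )
    return hits / trials

# Against a 140 HP enemy in two rounds:
print(p_kill(140, 4, 12, 54))   # ~0.98 (quoted: 98.31%)
print(p_kill(140, 4, 12, 44))   # ~0.52 (quoted: 52.00%)
```

The same harness extends naturally to the who-drops-whom-first question argued here: add an opposing damage stream and an exit condition instead of a fixed two-round window.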

 1 person marked this as a favorite.

Such a choice of damage vs. AC can only be answered by simulation against specific opponents, which is rarely done because calculating DPR odds is easy. I gave up simulating because nobody was interested in the results; they did not fit people's comfort zone of belief (yes, you can indeed drop a point of STR to get a point elsewhere). Even simulation is flawed because it depends on the AI you write (as anyone who has ever run one of the classic D&D videogames that used AI to fight knows!)

Even with simulations there is too much focus on the average result - 'this option is 1% more survivable than that option' - ignoring that the individual table variance is +/-50%....

The only thing that really matters is whether you survive the encounter; it does not matter that you can outdamage your enemy if they go first on initiative and crit your weak AC. Any smart dungeon boss is going to tell his minions 'hit the big naked person first' - it is the job of the GM to metagame to challenge player choices.

Since NPCs are not built like PCs, they do not have to make such tradeoffs. They can have higher damage, attack, and AC than you simply because the designer wanted it that way.

So it is very likely that your min/maxed design will face a situation it is not optimized for, requiring your group to carry you off the battlefield.

 4 people marked this as a favorite.

Actually my advice about +3 STR, if you want to focus on melee, is solid: it is based on actual combat simulations that ignored DPR odds and focused instead on players killed.

Now you could say 'but the magic weapon spell means you can buff instead of worrying about STR' - except that is only good for the limited daily boss fights. Which is all the encounters in Plaguestone are....

Also, the advice was not unsolicited: the group specifically posted that PF2e is spanking their butts and asked what can be done to fix it. So it is not your place to take people to task for annoyingly giving advice when the advice was asked for. The player expressed a desire to hit things rather than heal things, and people are responding with how to better play melee.

Personally, if I were their GM I would let them adjust their stats for free until they find that the PC plays how they want - a lot better than having them rage quit.

 1 person marked this as a favorite.

So it is not really a battle cleric, as you are not using the class's medium armor and physical shield, and you lack the strength to hit hard.

For a stand-and-deliver melee, going from +4 to +3 STR is fine if it buys survivable utility elsewhere; below that, you are essentially putting the NPC weak template on your attacks, and that inverts your survivability.
STR.PNG

Get your party in on the backup healing so you can swap a CHA bonus for the STR bonus; otherwise put on the cloth and go all in on the healbot role.

I think you should compare physical vs. spell shield at each level to see which is better, as that involves a tradeoff with your deity's weapon.

It seems like you are buffing yourself, but you have to nerf the NPC to knock them down to your level - forcing them to lose ATK or AC.

 2 people marked this as a favorite.
Narxiso wrote:
Gaterie wrote:
Hamanu_of_Urik wrote:
The current party is composed of a bard (stays out of combat to buff), melee champion of Cayden Cailen, melee cleric of Gorum (specced into Medicine for Battle Medicine in-combat healing and out of combat treatment), and a Wildshape Druid. The current party level is 2 and we are playing The Fall of Plaguestone module.

3 casters and 1 martial, of course you're having a hard time!

Try with this party composition: champion, fighter/bard, fighter/cleric, fighter/druid.

Note: when people want to show casters are balanced with martials, they cite level 4+ spells (and usually level 6+ uncommon spells). You can allow the PCs to reroll as casters at level 10 - but no one should play a caster before level 10. The more casters you have at low level, the harder the game is.

Or, you know, change up tactics. Everything I’ve heard of Plaguestone measures it as extremely difficult. How the party handles combat is far more important in this game than any other I’ve played...

The final party that survived Plaguestone was a gnome scoundrel, a half-elf alchemist, a human flurry ranger, and a cloth cleric. Not a single hold-the-line martial; those died early in the adventure - especially in the level-two opener and closer, which are designed to take persistent advantage of anyone who thinks 'I hit it again and again with my longsword' is the right thing to do.

The only real problem the party had was that, until the bulk errata came out, they had to keep dropping their loot because nobody had STR.

 1 person marked this as a favorite.
thenobledrake wrote:
And while this is complex, it also feels like the only XP system other than a flat "this much per thing, give or take a few because reasons" style of system that I think I'll eventually just have memorized and never have to reference the charts to use.

It is actually easy to derive once you see the pattern - no need for a cheat sheet or to memorize the table. Big bosses are worth 40XP per level above the party, and that is all you need to know.

+4 is 160XP for extreme campaign boss
+3 is 120XP for severe level boss
+2 is 80XP for moderate encounter boss

Going down from there, every two levels down halves the XP:

+4 is 160XP
+2 is 80XP
0 is 40XP
-2 is 20XP
-4 is 10XP

+3 is 120XP
+1 is 60XP
-1 is 30XP
-3 is 15XP
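The two halving chains above collapse into one small function (a sketch; `creature_xp` is a name of my choosing):

```python
def creature_xp(level_diff):
    """XP for one creature at (creature level - party level) = level_diff.

    Even differences lie on the 40 XP chain (10, 20, 40, 80, 160);
    odd differences lie on the 120 XP chain (15, 30, 60, 120);
    each step of two levels down halves the XP.
    """
    if not -4 <= level_diff <= 4:
        raise ValueError("the encounter table is bounded at +/-4 levels")
    if level_diff % 2 == 0:
        return 160 // 2 ** ((4 - level_diff) // 2)
    return 120 // 2 ** ((3 - level_diff) // 2)

# Boss anchors: +4 extreme (160), +3 severe (120), +2 moderate (80)
print([creature_xp(d) for d in range(-4, 5)])
# → [10, 15, 20, 30, 40, 60, 80, 120, 160]
```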

A party being over/understaffed is almost the same as the party being over/under level, and since that adjustment is only used to budget the encounter and not the award, you might as well treat it as the party being a level up or down - then you never need to look at that table, since the bestiary options will get you close anyway.

 1 person marked this as a favorite.
Nefreet wrote:
Carog the Fat wrote:
yeah but by RAW for Society games I can just wiggle my fingers and heal my self

There is no such thing as "Rules as Written". Reading is an interpretive activity. Indeed, two different people can read the same text and come to different conclusions.

It's much more sensible to look at this with degrees of certainty. Does it make any sense to just "wiggle your fingers" and perform miracle treatments? Or should we use the framework for Treat Wounds?

When I look at the Medicine skill, I see that every action requires the use of Healer's Tools (including Treat Wounds, which Battle Medicine references). When I look at Healer's Tools, I see they require two hands to use.

*Could* it be as you describe? Sure. But I don't think that's the stronger, more sensible conclusion when taking everything into consideration.

But then you do have to read between the lines. If the RAI were that Battle Medicine IS a Treat Wounds action, they could simply have written the sentence that way. Instead you have a sentence that goes out of its way to specifically not say it is a Treat Wounds action - it is a Medicine check using the Treat Wounds DC, but it is NOT Treat Wounds.

Using Battle Medicine hands-free in the thick of things is more than offset by the nerf of its daily immunity - which, again, is another way it is not Treat Wounds, with its hourly immunity.

Game balance does not have to make sense realistically.

 2 people marked this as a favorite.

You do not want to reset to 1 HP with hero points. It is much better to play dead: your initiative comes before your killers', giving your party an entire turn to draw attention away from the dying while the dying spends their hero points if the recovery rolls go poorly. If instead you choose to be a wounded PC with 1 HP, you will be the next target, since you are still an immediate threat - and then it only takes one crit while prone to reach dying 3, and you are dead dead on a bad roll or one more hit once out of hero points.

Part of your problem is that Plaguestone does not fit the encounter balance recommendations. Severe encounters are supposed to be level bosses, not back-to-back opening fights. Moderate encounters should not turn into sequential yard fights, as just two of them add up to an extreme campaign-ending fight.

Really the only yard fight is the pair of orc yard fights, and those should be nerfed in quantity because they exceed the encounter guidelines (although there is a fun reddit post today about strategy for that yard fight).

The other dungeon rooms are written so that bosses lie in wait while their minions stay focused on their job, which is alchemy, not roaming around looking for a fight.

Magical healing does remove wounded, as long as you take the 10-minute break and heal to full. You are much better off having everyone with any WIS buy healer's tools and use them as an unlimited resource rather than burning daily spells. You should be finding excuses for 10-minute healing breaks after rough moderate encounters, because as you have experienced, the math assumes bosses can take you down by themselves - no need to help them by skipping breaks.

The entire reason bosses do so much damage is that the devs knew players will spam heals no matter how much the rules say not to. So the devs wrote heal spamming with healer's tools into the rules, defined the 10-minute break to make it feasible after most fights, then cranked up the boss damage to compensate.

PF2e fights are at their best when you focus on depriving NPCs of action economy or AC rather than hit-hit-hitting them with damage. Because PCs usually outnumber NPCs in actions, it hurts an NPC more to move back into melee than it hurts a PC to move out of it (attack-of-opportunity creatures are rare now). Move out, flank, raise a shield, shove or trip with the right weapons - there are lots of things to do instead of that third attack.

 1 person marked this as a favorite.
Haffrung wrote:

Thanks for all the feedback.

I've never used CR, EL, encounter-building guidelines, or any of that stuff - just eyeballed encounters and let them play out. I'm not a RAW guy, and my players are cool with that, so we'll have to see if any of this stuff presents a problem. I'm not sure where the +6 levels thing came from, as I was just commenting on how our groups have typically handled retreating when a combat goes south.

You really should learn it because, unlike other editions, PF2e is not forgiving of encounter-building mistakes. It is one thing to kill players because they ignored the NPC guide that says run and decided on a last stand when things went badly. It is another thing to throw a +6 at them out of ignorance, thinking they have a lucky chance with good tactics - they simply do not.

But all you really need to know to build encounters is that a LVL+4 is an extreme campaign-ending boss (160XP), a LVL+3 is a severe level-ending boss (120XP), and a LVL+2 is any moderate boss (80XP) and likely needs an encounter break afterwards.

You do not even have to memorize those boss XP values; the only number you need is that big bosses are worth 40XP per level over the party. The encounter table has a pattern where every two NPC levels down from these bosses halves the XP, so it becomes easy to figure out how much XP a lackey is worth. A pair of lackeys is the same as a solo lackey two levels higher. Once you exceed 80XP in serial encounters, the party likely needs the 10m-60m break, especially if anyone is wounded.

If you need an elite/weak monster because something is off-level but fits your theme, then +/-2 on all its numbers is +/-1 threat level.

You do not even need to know the adjustment rule for parties other than four PCs: just treat being down a player as the party being down a level, and being up a player as the party being up a level.

It really is so easy that you can run a sandbox on the fly using relative-monster-level mental math. Unlike other editions you do not have to worry about broken players (PF1e) or homebrewing bosses (5e) to challenge the party - the math just works. At LVL+4 the party starts dying; at LVL+5 it is a TPK.
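Summed budgets can then be classified in one lookup. A sketch: the 80/120/160 thresholds are the ones named above, while the trivial (40) and low (60) budgets are assumed standard values not stated in the post:

```python
# XP budgets per threat level for a four-PC party. 80/120/160 come from
# the post; 40 (trivial) and 60 (low) are assumed standard values.
BUDGETS = [("extreme", 160), ("severe", 120), ("moderate", 80),
           ("low", 60), ("trivial", 40)]

def threat(total_xp):
    """Classify a summed encounter XP budget by the highest threshold met."""
    for label, budget in BUDGETS:
        if total_xp >= budget:
            return label
    return "trivial"

print(threat(80))        # → moderate
print(threat(80 + 80))   # → extreme: two moderates back to back
```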

 1 person marked this as a favorite.
Henro wrote:
Your analysis is very interesting, though it doesn’t appear to me that it proves one should definitely go for a +3 to their attack stat. Very few adventuring days consist of a single moderate encounter after all.

I did not simulate single moderate encounters per day; there is no point, because sleeping is a lousy way to heal damage. I simulated a baker's dozen moderate encounters on thousands of tables with 10m-60m healing breaks between encounters - I had to, because if you do not take breaks, player kills are assured. I did not simulate low or trivial encounters because nobody dies there if you allow breaks, and none of the adventures build any levels that way.

If I had simulated Extreme or Severe it would not be survivable, because I put a time limit on healing; there is a reason these are designated as level and campaign bosses - they are supposed to be fights with a cost so high that you cannot simply continue on from them. It does not matter if a stat changes your survival from 30% to 35% - that just means you last 3 vs. 3.5 encounters and had to make several characters to get through the level.

And you cannot look at stats in isolation. I also did a stat tradeoff study (+1 STR vs. +1 DEX for a flurry ranger), and the death odds were within a percent of each other.

Doing the simulations basically confirmed to me that the encounter difficulties are what the label says they are, and that the designers came up with system math that lets you make stat tradeoffs rather than just chase DPR. Yes, do not dump your key stats - that is stupid, and the +0 STR fighter shows you will not survive your opening fights. But if you want to go MAD, there is a lot of utility in doing so; MAD is viable.

These charts do not show the variance in death rates between tables; they show the average table. I left the variances out because the overlapping area charts became unreadable, washing out any useful information. The +3 vs. +4 strength difference on the average table was 80% vs. 90% alive, and a baker's dozen encounters was nowhere near enough to overcome the much larger variances: the min/max death rate across tables was literally 0% to 100%, with deviations of something like 30%. There is no point in simulation precision when the table experience is not that accurate. This is not a gaussian dice-pool wargame where that type of analysis is useful; this game is founded on a uniform die deciding your fate.

The detailed analysis from the simulation was that if the boss wins initiative and crits with a follow-up hit, you are eventually going to die, because I did not allow healing to full - if that happens while you are wounded and not fully healed, you are dead.

The most important reason not to continue that line of research? Hero Points make player-kill analysis meaningless, and it was pointless to simulate them. Dying is easy in this game; it is designed to happen often. But actual death really only happens if the GM is being truly evil - attacking the character who just spent their hero point, using the metagame knowledge that the attack will finally kill them because they have no hero points left and the odds of recovering from wounds are slim. You cannot simulate that behavior.

 1 person marked this as a favorite.

I already proved that point using combat simulations, but the thread got zero comments because it completely refutes the DPR optimizers, and they do not want their way of min/maxing the game shattered. Of course a melee character with no STR will get killed. But +3 vs. +4 does not matter.

 1 person marked this as a favorite.
Ckorik wrote:
Fumarole wrote:
it is much preferred over the CLW nonsense.
It's almost exactly the same thing. It's almost exactly the same as a 'short rest' in 5e. It's just another mechanic to do the exact same thing with a different flavor - and the 10 min downtime covers many abilities (focus/repair/etc) that would still happen even if someone had a click stick of healing.

No it is not the same thing...

Short rests in 5e draw on a limited pool, of which you can use only half per day. The encounter math is entirely different: monsters are intended to wear down your HP pool over many encounters before you take that hour-long lunch break. People say 5e combat is easy, but only because they spam short/long rests and dodge the intended attrition of HP and healing resources.

CLW wands are essentially unlimited because gold is so easy to farm that they are indeed the equivalent of the MMO clicky you spam at no real cost.

So PF2e Medicine checks are just as unlimited, right? They cost nothing but a bit more than a healing potion plus a skill investment, and healer's tools never break. But the check is very limited in time: thanks to the critical success/failure mechanic you can fail, and then that person is barred from healing for an hour (worse, you can cut them on a fumble).

They are made different by the GM taking advantage of that time constraint. If the party takes all afternoon to heal after every fight, well, guess what - the boss had time to call in reinforcements. Only if you handwave time and just say everybody healed to full can you claim it is the same as the PF1e bunch of clicky wands or the lazy 5e party that spends more time napping.

The PF2e devs saw the PF1e CLW spam, fixed it with a time constraint and a failure risk, and put it into the rules by saying you should heal after every difficult fight. That did make the game easier viewed in isolation, but the devs made combat harder to match: they added critical ranges that double all damage, multiple attacks, and more accurate, harder-hitting NPCs even at the same level, and they made leveled bosses worse by padding their numbers.

You need more healing in PF2e simply because combat is much, much more deadly; more healing does not in any way make combat easier. Without it your party is unlikely to live until lunchtime - it does not matter that in 5e you could.

The realism argument is nonsense; HP has never been a reality-simulation mechanic since day 1. If the notion of a first aid kit bothers you, then call the bandages magical. The rules basically say this when they let you renew your magical focus pool while you Treat Wounds. You are channeling divine, primal, whatever magical sauce you want to say your bandages have. It does not take 10-60 minutes to apply a bandage; it takes 10-60 minutes to make it magical - that long to trickle-charge and leak some magic back in while you are filling up.

 1 person marked this as a favorite.
thenobledrake wrote:
But I will say that the "just as bad as the +15" seems unlikely to be true.

The encounter threat table is bounded at +/-4 levels for a very good reason: a +4 boss can already crit you to death in one round without any help, so how much more overkill do you need?

Young Black Dragon lvl 7 vs. a lvl 1

https://2e.aonprd.com/Monsters.aspx?ID=127

It has ATK +19 with 2d10+9 jaws and +19 initiative. It is very likely to go first and to sequentially destroy your party without even breathing, because your whole HP total is around its average damage. And that is before considering that its first attack is but a coin toss on whether it crits to kill you outright before you even get to move - and those crits recharge the breath weapon.

But let's have it open with its breath, just because it sees your party coming down the hall - you had no clue it was there, with its +16 Stealth. The breath does 8d6 normally, and against a DC 25 Reflex save you are very likely to take double damage. Anyone in that line is very dead.

Its AC is 25 on its 125 HP, while your attack is +7, requiring an 18 to hit.
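Those to-hit numbers can be turned into exact odds under PF2e's degrees of success (crit on beating the AC by 10, natural 20 steps the result up, natural 1 steps it down). A sketch - the AC 16 in the example call is an assumed typical level-1 value, not from the post:

```python
def attack_odds(bonus, ac):
    """Return (P(hit or crit), P(crit)) for a d20 attack roll."""
    hits = crits = 0
    for roll in range(1, 21):
        total = roll + bonus
        degree = 2 if total >= ac + 10 else 1 if total >= ac else 0
        if roll == 20:
            degree = min(degree + 1, 2)   # natural 20 steps up
        elif roll == 1:
            degree = max(degree - 1, 0)   # natural 1 steps down
        hits += degree >= 1
        crits += degree == 2
    return hits / 20, crits / 20

print(attack_odds(19, 16))  # dragon vs. an assumed level-1 AC: (0.95, 0.7)
print(attack_odds(7, 25))   # the level-1 PC swinging back: (0.15, 0.05)
```

So against a typical level-1 AC the dragon crits about 70% of the time, while the level-1 PC swings back at 15% to hit and 5% to crit.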

Do you really think you can survive that, so that you can go on to fight the ancient black dragon...

https://2e.aonprd.com/Monsters.aspx?ID=129

AC 39, 325 HP, +33 jaws doing 3d10+14 plus 2d6 and 2d6 persistent, and a 17d6 breath at DC 38.

So, back on topic: Treat Wounds is not OP - not when the monster math looks like this.

 1 person marked this as a favorite.
Duskreign wrote:
I find too many people theorycraft without actually trying things. PF2 is a great example of people having many ideas before even giving the game a chance.

A lot of that is migration from 5e: people think the game should be balanced the same way as the one they are coming from, without recognizing the flaws that system has. It requires understanding how the math, while simple to 'fix', has a huge impact on encounter balance and the bestiary, changing how the game itself plays.

I don't need to try deleveling myself because I already did that for five years.... I know the cons outweigh the pros for me.

However, if I were porting a 5e adventure, I would for sure use the Gamemastery Guide remove-level variant; otherwise it is too much work to redesign monsters and rebalance encounters. Many adventures there are written assuming a +/-2 PC level variance is fine for encounters - but that swing would run from trivial to deadly in PF2e.

 2 people marked this as a favorite.
Ubertron_X wrote:
SuperBidi wrote:
They are not mutually exclusive: You are expected to start every single battle from full health and you are usually expected to rest for 10min after each encounter.

Of course they are not exclusive but the question is how realistic is such a scenario?

For example we started our last "adventure day (i.e. after resting)" with a severe encounter and my cleric already is out of combat heals after the very first encounter.

So, do we rest another "night", or do we press on with only 10min of medicine after every encounter, or do we rest like 30min to 40min after each encounter, so I can treat everyone at least once?

Because I can not see my group staying at somewhat near full health when I can "only" treat one guy after every encounter, even if we only have easy encounters for the rest of that day (apart from the fact that entering the fray without combat heals is not ideal to begin with).

The rules are pretty clear: even a moderate encounter will likely need a break. Severe encounters are intended for level bosses, which means it is assumed you will be going into downtime afterwards, win or lose.

"
Moderate-threat encounters are a serious challenge to the characters, though unlikely to overpower them completely. Characters usually need to use sound tactics and manage their resources wisely to come out of a moderate-threat encounter ready to continue on and face a harder challenge without resting.

Severe-threat encounters are the hardest encounters most groups of characters can consistently defeat. These encounters are most appropriate for important moments in your story, such as confronting a final boss. Bad luck, poor tactics, or a lack of resources due to prior encounters can easily turn a severe-threat encounter against the characters, and a wise group keeps the option to disengage open.
"

You might be playing Plaguestone, which completely disregards the encounter balance rules... it opens level 2 with two severe encounters back to back.

Every Medicine-trained PC should carry healer's tools for Treat Wounds; relying on the cleric wastes precious break time. Pretty much any class, background, or ancestry with a WIS bonus should take it. And even so, clerics can take feats to improve their ability to heal more, faster. You should have a lot more healing output than one treatment per break.

 1 person marked this as a favorite.
Strill wrote:
My guess is they just wanted a penalty for the sake of creating feats to overcome it.

Wild Order druids with Wild Empathy can make an impression using Diplomacy on animals without using language! It is not a somatic or verbal feature, so it should hold up in animal form.

 1 person marked this as a favorite.

This would make for some fun Paizo Friday streams, BTW.

 1 person marked this as a favorite.

This is only a problem if you think lowering HP means inflicting deadly wounds, which has never been the case even in prior editions. PF2e even adds a condition called wounded as a separate pool for tracking your deadly wounds. If mundane healer's tools bother you, just houserule CLW to be more accessible... or make heal a renewable focus spell.

Healing is heavy after encounters because crits deal double damage to PCs and are far more likely against bosses, which are far more accurate than you. Critical damage is much worse than merely doubling the dice on a nat 20. The devs know you can heal more, so they countered it with a Bestiary that hits harder, more often.

The design of encounters is that the way to hurt the players is to kill them: any boss can easily take down a player, which makes them wounded, which puts the next takedown that much closer to death, so a crit on the dying kills them. And even short of that, proceeding from that fight into the next without healing makes things very deadly.

So they do intend you to heal after every fight, because any boss can take you down in every fight. But since 10 minutes is often not enough - you really need 1-2 hours - they counter the healing spam with time pressure, which the GM can use to put more wounds on the party.

It is very different math from D&D 5e, where encounters are just scratch attrition and you are not expected to heal for many encounters.

Just add up your encounter XP: two moderates without a break are equivalent to an extreme encounter, which is intended as a campaign-ending boss fight (very likely to kill a player, with a high risk of a TPK).

 1 person marked this as a favorite.
thejeff wrote:
But the best part is the way low level PCs have a good chance of taking down much higher level enemies and getting all that experience and loot.

You do realize that also means the GM has a good chance to take down high-level PCs with low-level minions and take away all their experience and loot? I am sure your GM thinks that is awesome from their side of the screen. Some want that reality, but others want the fantasy that at level 20 you are nearly a god.

 1 person marked this as a favorite.

I would think they are armed unless it conflicts with their exploration activity.

 1 person marked this as a favorite.
Lightning Raven wrote:

Hey! Nice to see you here.

The things you said in the other thread really made me re-evaluate how I viewed the data people use here. In fact, I've been noticing for a while now that the guides I use to help make some of my characters miss a lot of the mark regarding some abilities and other things, simply because they were conceived in a different environment (PFS, other tables, etc.) or are outright pure theorycraft with zero playtest.

Winning the encounter is about not dying, which is not just about doing damage!

 1 person marked this as a favorite.

You would think it would be there, but for Plaguestone it only gets a few: no named monsters, and none of the monsters from within the adventure text. The only ones present are those in the adventure toolbox at the end.

That is the only place they could appear, because any other SRD site lacks the Pathfinder license, and the OGL does not allow the use of named things.

I suspect that if you drop them a note it will get fixed; they have been busy updating the site and probably just need to know that not every adventure collects all its monsters in the end bestiary, so they need to pull them from the adventure text as well.

 2 people marked this as a favorite.

This has been a good thread on reddit about how there is much more to do than hit/hit/shield

I discovered from it that the scoundrel can use Deception (CHA) to Feint, and can often crit, making the target flat-footed to sneak attacks into the next round rather than just that round - to the point that CHA can replace STR.

The flurry ranger can just keep on hitting; the animal ranger gets an extra attack in with their pet. Fighters can do more physical things like Trip and Shove with selected weapons. And everyone can Raise a Shield - always nice if you cannot find something else to do, though I rarely use it, as there is always something better.

This is a game changer compared to both D&D 1-5 and PF1, having this much freedom in the action economy - every other edition greatly restricts what action you can take and when. So you will find little agreement that PF2 combat is boring.

As for the lopsidedness? That is a houserule to save the GM time by rolling group or side initiative. The CRB warns against it precisely because it makes things lopsided. The game is tuned so a boss can lopsidedly focus-fire its attacks and take you down; letting all the minions join that dogpile on the same turn is not fun, and it is boring.

If you want some randomness, pick up the crit hit/fumble decks. There is something for spells on every card.

Just these three things (third action, mixed initiative, crit hit/fumble decks) will keep the game from being boring.

The game can be about changing the world, walking amongst gods, and reshaping geopolitical landscapes, but first you have to kill the rats in the cellar (or just start at level 15). There is a reason every CRPG has that trope; it comes from D&D, which started this entire genre many decades ago.

 1 person marked this as a favorite.

Magic Weapon is written for 'you or a willing ally', yet Magic Fang is written only for 'one willing ally'. You are not your own ally. Can you explain the difference, when both are just similar utility buffs?

 1 person marked this as a favorite.

Using Matlab, I simulated a baker's dozen of level-2 moderate encounters across level 2, for 1000 tables, for various stat changes.

average party lvl2 baseline

four PCs STR+3, WIS+2, HP30, AC18, Attack [7 2], Perception 8, melee d8+STR

average lvl2 enemy (medium humanoid)

two NPCs STR+3, WIS+1, HP30, AC18, Attack [10 5], Perception 7, melee d8+STR

Area charts, varied over PC stats, show the average percentage of encounters with zero to four killed players.

Rules: mixed initiative; focus fire only at living targets; death saves for PCs, dead at 0 for NPCs; post-encounter one-hour Treat Wounds for each PC; no combat healing; no hero points.

The AC chart can be used to gauge the risk improvement of shielding rather than moving. You want to match your opponent's AC.

AC.PNG

HP spans a realistic range for all ancestries/classes.

HP.PNG

STR is used for the attack and damage bonus; this is the most important stat because of constant critical damage. Being weaker than your opponent is dangerous.

STR.PNG

WIS is used for initiative and Treat Wounds; dumping WIS is just as bad as dumping HP (33 max damage vs. 30 max HP).

WIS.PNG

Initiative matters even more with side initiative

SIDE.PNG

and with 10m break, side initiative

10m.PNG

and with 10m break, STR4 NPC, side initiative (36 max dmg vs 30 max hp)

10mSTR4.PNG
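For reference, the core loop of such a study (originally Matlab) can be sketched in Python. This is a simplified re-creation of one encounter under roughly the stated rules - side initiative with PCs first, focus fire, crits doubling damage, no death saves or healing - not the original code:

```python
import random

def attack_degree(bonus, ac):
    """PF2e degree of success: 0 miss, 1 hit, 2 crit (beat AC by 10+),
    with the natural 20 / natural 1 one-step adjustment."""
    roll = random.randint(1, 20)
    total = roll + bonus
    degree = 2 if total >= ac + 10 else 1 if total >= ac else 0
    if roll == 20:
        degree = min(degree + 1, 2)
    elif roll == 1:
        degree = max(degree - 1, 0)
    return degree

def encounter(pc_hp=30, npc_hp=30, ac=18, pc_atk=(7, 2), npc_atk=(10, 5)):
    """One fight: four PCs vs. two NPCs, both sides rolling d8+3 damage,
    focus-firing the first living target. Returns how many PCs dropped."""
    pcs, npcs = [pc_hp] * 4, [npc_hp] * 2
    for _ in range(100):  # round cap as a safety net
        for side, foes, attacks in ((pcs, npcs, pc_atk), (npcs, pcs, npc_atk)):
            for hp in side:
                if hp <= 0:
                    continue  # downed combatants do not act
                for bonus in attacks:  # the MAP pair, e.g. [7 2]
                    living = [i for i, t in enumerate(foes) if t > 0]
                    if not living:
                        return sum(h <= 0 for h in pcs)
                    damage = random.randint(1, 8) + 3
                    # degree 0/1/2 -> no damage / damage / double damage
                    foes[living[0]] -= damage * attack_degree(bonus, ac)
    return sum(h <= 0 for h in pcs)

# Average PCs dropped per fight across many simulated tables
trials = 1000
print(sum(encounter() for _ in range(trials)) / trials)
```

Layering the varied stats (AC, HP, STR, WIS), the break rules, and death saves back onto this skeleton reproduces the kind of area charts described above.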

 2 people marked this as a favorite.
FowlJ wrote:

Literally nobody has claimed the thing that you're disputing here. Nobody has said that through some magic 1d6+1 will always roll better than 1d6, 100% of the time. It has been said, accurately, that 1d6+1 will always be better than 1d6, in that given the choice between the two there is no reason why 1d6 would be a better option.

You are saying nobody is saying this:

"1d6+1 will always roll better than 1d6, 100% of the time"

when it is exactly the same thing as saying this:

"that 1d6+1 will always be better than 1d6"

You cannot change the fact that at DC20, d6+1 is only better than d6 for half the players. If you are fine knowing that the +1 improved your crappy rolls by +1, and want to remain ignorant that you are way below average, then great for you.

That does not stop the other thousands of players from coming online and saying that they are not seeing the benefit of the modifier compared to other players - because the fact is that half of them will not see the benefit.

Unlike you, who are happy with a build whose average damage boost you cannot verify relative to other players, the half of the players who are not achieving the average boost are not happy, and they want to know why that is.

So when you see a histogram response, there really is no need for you to comment, because you only care about the median. Well, that information is on the chart as the 50% line, so you have the information you need. Feel free to ignore the 5% and 95% bounds on the chart while others take advantage of them.

Even if you want to ignore all those people, the game is still a contest against all the NPCs your DM plays. Statistics tells you that even with matched AC 20 and ATK +8, you would be a fool to take the bet of using a d6+1 against a boss using a d6 and go in saying you will always win that fight - because half the time the boss WILL win despite the numerical edge you are so proud of having. You should only take that bet if you have a d6+3, or it is an AC 10 fight.

 1 person marked this as a favorite.

d6+1 is in fact not always greater than d6 - simply because you are NOT rolling the average every time; you are more likely to see a deviation from it.

sum(randi(6,10,1)+1)=34
sum(randi(6,10,1))=48

Sure, a cherry-picked two-player, ten-roll anecdote... it does not tell you anything at all.

But compare many thousands of players over an entire level and look at the average encounter. Surely the tables running d6+1 should never be as bad as the tables running d6, right?

At AC 20, for a lvl 2 flurry twin-kukri ranger, the average encounter damage for d6 is 15+/-4, while for d6+1 it is 19+/-4.

What that means is that 95% of players using d6 scored <=19, while 50% of players using d6+1 scored <=19. The top 5% of d6 players are averaging >19 damage, while the worst 5% of d6+1 players are averaging <15 damage.

In simpler words, just over half of the d6+1 players did as poorly as the d6 players. That is barely even 50:50 odds that a d6+1 player will do better than a d6 player.

Therefore it is not possible to say that d6+1 is statistically better.

Now if we talk about a d6+2, only the worst 5% are being beaten by the best 5% of the d6 players.

I would have to run a d6+3 to have perfect odds that an average round always beats the d6.
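To see how much the distributions overlap, here is a rough stdlib-Python sketch (not my Matlab code) that draws per-encounter damage totals for d6 vs d6+1. The 8 swings per encounter and 55% hit rate are made-up round numbers for illustration, not the ranger build above:

```python
import random, statistics

def encounter_damage(flat_bonus, attacks=8, hit=0.55, trials=20_000):
    """Per-encounter damage totals for a d6+bonus weapon.

    Assumes a flat number of swings and hit chance (illustrative only).
    """
    return [sum(random.randint(1, 6) + flat_bonus
                for _ in range(attacks) if random.random() < hit)
            for _ in range(trials)]

d6 = encounter_damage(0)
d61 = encounter_damage(1)
# Fraction of pairings where the plain-d6 encounter ties or beats the d6+1.
worse = sum(a >= b for a, b in zip(d6, d61)) / len(d6)
```

Even with the +1, a sizable fraction of plain-d6 encounters still come out even with or ahead of d6+1 encounters; that is the histogram overlap being described.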

However, if we look at AC 10, all those crits double that flat +1 into +2 - now the +1 is always going to do better. In 5e this would not be the case, because only a natural 20 crits and you do not double modifiers. This is why PF2e is more deadly: the bosses can reliably crit you to death.

 1 person marked this as a favorite.

Just downloaded the Matlab trial - only $149 for a home license. I paid more than that for Office, and I would rather use Matlab than Excel for doing stats.

I will probably start a new thread for build stats that include the two-sigma range, for those who would like that information; I am not interested in debating further why it does not matter...

I am more doing it for my own theorycrafting.

 1 person marked this as a favorite.

@hiruma Yes, I was using 'variance' in the everyday English sense - 'your results will vary' - not as the math term; don't get so hung up on terminology. I could also have said 'deviation' and that would not be entirely correct either, but I am certainly not going to write 'plus or minus twice the standard deviation' every time. Plenty of people understood what I was saying anyway. Would it feel better if it said 'dice variation'?

And I did not want to be bothered with uploading and linking charts, simply because I was not supposed to be wasting time at work. Matlab is what I use at work, so I just knocked out some one-liners to check my guess of what I thought might be a +/-1, only to find out it was a +/-3 (a d20 equivalence of 15%).

Yes, the chart version is easier to understand if you are not familiar with what a normal distribution looks like. I did not integrate the overlap, but what I see fits my statement: only the (un)lucky will see a difference from a +1, most people will not feel it is any different, and some will even feel they are doing worse.

The bottom line is that a +5 or +6 has little overlap, while a +1 or +2 is mostly overlap. So why are the small bonuses in the game if they do not feel that different in play? The point is stacking them up as you level, through training and by combining small bonuses of different types, so that they add up to something that feels like you overcame the dice and made your build feel different in play. Leveling up and gaining system mastery. The histogram analysis gave me more appreciation for PF2e compared to D&D 5e, which takes the same +5 (max) and puts it to work as advantage, but leaves it up to your DM to grant it.

Anyway, I have a quick and dirty two-liner idea for incorporating crits, but it will have to wait until I have free time at work again. All I need to do is multiply the success array elements that are crits by 2, then multiply by the damage (which has its own variation), then take the mean/std (if the histogram is still normal).

The point of making it a Matlab one-liner was to show it is not that hard to do if you have a matrix math program; it is very easy and quick to knock out 40 million rolls.
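The crit idea can be sketched the same way in stdlib Python: classify each roll as miss, hit, or crit, multiply crit damage by 2, and take the mean/std. The attack numbers here (+7 vs AC 15, d6+1 damage) are illustrative assumptions, and the crit rule is simplified to 'beat AC by 10' with nat-20 step shifts ignored:

```python
import random, statistics

def crit_damage_stats(atk=7, ac=15, die=6, bonus=1, rolls=100_000):
    """Mean/std of per-attack damage with crits doubling the damage.

    Simplified crit rule: beat AC by 10 (nat-1/nat-20 shifts ignored).
    """
    dmg = []
    for _ in range(rolls):
        total = random.randint(1, 20) + atk
        # 2 = crit, 1 = hit, 0 = miss -- the 'multiply crits by 2' trick.
        mult = 2 if total >= ac + 10 else (1 if total >= ac else 0)
        dmg.append(mult * (random.randint(1, die) + bonus))
    return statistics.mean(dmg), statistics.stdev(dmg)
```

With these numbers the mean comes out around 3.6 per attack while the standard deviation is comparable to the mean itself, so per-attack damage stays very noisy even before you aggregate over a level.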

 1 person marked this as a favorite.

The 'everybody' I speak of is the million clerics I already simulated, specifically to put the law of large numbers to work and remove the noise from a low number of clerics.

And when you do that analysis, you find it is mathematically the case that averaging many results from a uniform die yields a Gaussian distribution. I already showed that a million results return exactly the mean that the fractional-odds analysis predicts. So the million results can have normal statistics applied to obtain precise results. Thus, rather than showing the histogram, I can calculate the sigma, and that will have to do for illustrating the histogram since this is a text forum.

Since I am interested in the 95% range of the histogram I will take 2*std

The trained WIS+4 cleric has a 65+/-15% chance of succeeding at healing their party at each break.

mean(mean((randi([1,20],40,1e6)+7)>=15))
std(mean((randi([1,20],40,1e6)+7)>=15))*2

The trained WIS+3 cleric has a 60+/-15% chance of succeeding at healing their party at each break.

mean(mean((randi([1,20],40,1e6)+6)>=15))
std(mean((randi([1,20],40,1e6)+6)>=15))*2

Thus the variance of the die means a +1 modifier produces a range of outcomes comparable to what the average-odds calculation would give for anything from a -2 to a +4 modifier. There is significant overlap in the 95% range of the histograms, so only the very (un)lucky will see a difference between these builds.
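The Matlab one-liners above translate directly to stdlib Python, if anyone wants to reproduce the mean and 2-sigma numbers without a Matlab license (fewer simulated clerics than my million, which is plenty for these figures):

```python
import random, statistics

def heal_success_rates(mod, checks=40, clerics=20_000, dc=15):
    """Mean and 2-sigma spread of per-cleric success rates.

    Each simulated cleric makes `checks` d20+mod rolls against `dc`
    (40 Treat Wounds checks over the level, as in the posts above).
    """
    rates = [sum(random.randint(1, 20) + mod >= dc for _ in range(checks)) / checks
             for _ in range(clerics)]
    return statistics.mean(rates), 2 * statistics.stdev(rates)
```

heal_success_rates(7) lands near (0.65, 0.15) and heal_success_rates(6) near (0.60, 0.15), matching the trained WIS+4 and WIS+3 clerics above.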

So let's use histogram analysis to come up with build advice where people will actually see differences. Then they will not have to worry about the lesser cleric build outperforming them at the game store, or get into arguments about DPR advice being wrong because their experience did not match it.

The expert WIS+4 cleric has a 75+/-15% chance of succeeding at healing their party at each break over a level.

mean(mean((randi([1,20],40,1e6)+9)>=15))
std(mean((randi([1,20],40,1e6)+9)>=15))*2

The trained WIS+0 cleric has a 45+/-15% chance of succeeding at healing their party at each break over a level.

mean(mean((randi([1,20],40,1e6)+3)>=15))
std(mean((randi([1,20],40,1e6)+3)>=15))*2

Now the histograms do not overlap except in their few-percent tails. Thus, for these checks, we can conclude it takes a modifier difference of +6 to overcome the d20 and guarantee performance over nearly everyone else.

Now let's take this further and use the same method to look at ATK +7 vs. AC 18. Again we will assume 40 checks: 4 rounds at 10 encounters over the level.

ATK 1 is 50+/-15%
mean(mean((randi([1,20],40,1e6))+7>=18))
std(mean((randi([1,20],40,1e6))+7>=18))*2

ATK 2 is 25+/-15%
mean(mean((randi([1,20],40,1e6)+2)>=18))
std(mean((randi([1,20],40,1e6))+2>=18))*2

ATK 3 is 0% (it is not possible to hit)

mean(mean((randi([1,20],40,1e6)-3)>=18))
std(mean((randi([1,20],40,1e6)-3)>=18))*2

Now you see the design of the system has indeed accounted for the histograms - they used the -5 for additional attacks because they want you to feel that in your build regardless of what your rolls are: the odds for each attack do not overlap. They give you options to narrow this to -2, because then all of your attacks feel like they have a similar chance of hitting, and it was worth taking those feats.

Let's assume someone else made the target flat-footed, and my ranger has an agile weapon plus the flurry and twin feats. Now the histograms are starting to overlap, blurring the odds across the attacks.

ATK 1 is 60+/-15%
mean(mean((randi([1,20],40,1e6))+7>=16))
std(mean((randi([1,20],40,1e6))+7>=16))*2

ATK 2 is 50+/-15%
mean(mean((randi([1,20],40,1e6)+5)>=16))
std(mean((randi([1,20],40,1e6))+5>=16))*2

ATK 3&4 is 40+/-15%
mean(mean((randi([1,20],40,1e6)+3)>=16))
std(mean((randi([1,20],40,1e6)+3)>=16))*2

And this matches the experience I see with the ranger: the worst your first attack can do is 45%, and the best your last attacks can do is 55%. So let someone else take care of the utility and defense - you SHOULD be blowing all your actions on attacks.

So, bottom line: when the variance is greater than the difference in modifiers, it will feel similar in play; when the variance is less than the difference in modifiers, it will feel different in play.

Yes I rounded everything to 5% because this game is not that granular, and it allows us to understand how it relates back to the bonus modifier.

Now, things may not stay Gaussian once you consider critical-effect damage; you would have to study the histograms to know whether they remain normal. But that cannot be done with one-liner Matlab code.

 1 person marked this as a favorite.
Bandw2 wrote:
krazmuze wrote:

Then do the level simulation and prove me wrong. Every simulation I have seen either calculated the fractional odds - which are only exact for an infinite simulation - or simulated 50,000 runs to get a precise average - which is not the reality of any player's level.

The fact is that IF the variance is greater than the differences in average, then the build is not more important than the dice.

This is statistics 101

3.7+/-1 and 3.6+/-1.2

You cannot conclude that A is better than B; you must instead conclude that they are not significantly different, because the ranges of the averages overlap substantially.

3.7+/-0.1 and 3.4+/-0.15

You absolutely can conclude that A is better than B, because the averages do not overlap (to whatever confidence you calculated - usually 95% confidence is used). You cannot, however, conclude by how much, as it could be 3.6 vs. 3.55 or it could be 3.8 vs. 3.25.

It is this very gambler's fallacy - thinking the average odds apply to you - that makes Vegas rich. The house can play the averages (because it makes all the plays); the player cannot (because they cannot play enough).

right so if you're going to be unlucky one way or unlucky another way, wouldn't you still want to know which is a better build if you're being unlucky?

Sure, but my point is you cannot do that unless someone gives you the DPR +/- variance. It turns out for the specific medic example (I will post histograms later) that most people will not see the benefit of the +1. The thing to realize is that an average means half the people do worse and half do better. The half who do worse with the +1 are not doing better than the half who do better without it. That is the gambler's fallacy people fall into when they think that following DPR advice means they will always do better with the bonus option - that is simply not the case!

I will post histograms that show this better; the best options are where the bonus difference is such that the histograms do not significantly overlap. This happens when the bonus is large and/or the spread is narrow because you roll much more often.

With the medic example, you can only say you are doing better if you are very lucky with the +1 and the other cleric without it is very unlucky. That is simply because the variance is greater than the modifier.

 2 people marked this as a favorite.

@Strill

Yes, you clearly do not understand the law of large numbers if you think the precise odds of a uniform die determine your performance in your RPG. The simple fact is that your rolls will sit deep in high-variance territory, because you will always have a low number of rolls within a level. This is why I insist that anyone who wants to talk DPR include the +/- variance, so that I can see which options are not significantly different. I know that 1+/-2 is not significant, and I will pick the option with the flavor I like; but 5+/-0.5 is worth taking unless I really dislike its flavor.

As I said, I did the 40 rolls because someone else said all it takes is a few dozen rolls to overcome the variance of the die. This analysis shows that is clearly not true - the law of large numbers does apply.

The Pathfinder devs do know this - it is why they added level to everything so that you could have a range of outcomes where the dice variance will simply not matter because you will always hit or always fail if you go beyond the threat range. That is because it is not fun that the kobold killed your legendary fighter.

Whereas D&D 5e decided to embrace that variance in the uniformity of the die and did away with the level stepping that 4e had, because random variance makes for more interesting improv story telling. It is fun to remember the time your wizard crit the dragon with their dagger, even though that makes little sense.

Not sure how you can say variance is irrelevant when the two major RPGs have decided to take advantage of it in different directions. If they wanted to remove its significance, they would use the dice pools that wargames use.

 1 person marked this as a favorite.

A fight broke out at your gaming store because some char-op declared they are badly losing because their 'idiot' cleric did not take an 18, and they do not want to play at such a table.

OK, let's look at Treat Wounds: after every encounter the cleric tries to heal everyone once - what fraction are successful? We will simply say this is 40 rolls for the level, since that fits nicely with the claim that variance does not matter after several dozen rolls...

Indeed charop says you improve the odds 5% by going for that 18.

Trained Cleric with WIS +4

65% = median(sum((randi([1,20],40,1e6)+7)>=15)/40)

Trained Cleric with WIS +3

60% = median(sum((randi([1,20],40,1e6)+6)>=15)/40)

But I do not care about the million players, I care about the table next to mine so lets take some random samples that are close to the median.

Trained Cleric with WIS+4

55% = sum((randi([1,20],40,1)+7)>=15)/40

Trained Cleric with WIS+3

70% = sum((randi([1,20],40,1)+6)>=15)/40

(the actual inner sigma range spans about 15% - it did not take many samples to find these examples)

So my +4 behaved like a +2, while my neighbor +3 behaved like a +5! So even a coarse sampling shows the +1 is getting buried by +/-2 variance.

So the char-op gets online and complains to Paizo about their broken game, and Paizo in response asks stores to survey what is going on at their tables: how good/bad does it really get?

25% min success rate for the WIS+4
95% max success rate for the WIS+3

OK that is mind blowing - the worst 'good' clerics failed to heal 30/40 times while the best of the 'bad' clerics healed 38/40 times!

Now, to do this properly I would need the stats package in Matlab that actually calculates the 95% CI, but what you do know is that it will be a slightly smaller range than the min/max, and a much larger range than the anecdotal sample from one store.
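The store survey can be mimicked with a stdlib-Python sketch that records the best and worst table out of many simulated ones (far fewer tables than my million-cleric runs, so the extremes here are milder):

```python
import random

def survey(mod, checks=40, tables=20_000, dc=15):
    """Worst and best per-table success rate across simulated tables.

    Each table makes `checks` d20+mod rolls against `dc` over the level.
    """
    rates = [sum(random.randint(1, 20) + mod >= dc for _ in range(checks)) / checks
             for _ in range(tables)]
    return min(rates), max(rates)
```

For the WIS+4 cleric, survey(7) straddles the 65% average by a wide margin in both directions, which is exactly the kind of spread the imaginary store survey turns up.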

Now lets do a similar analysis for rolling 2d8 40 times

360 = median(sum(randi([1,8],40,1e6)+randi([1,8],40,1e6)))

which is exactly 9*40 as the odds would say

Now lets take a random sample to compare our two tables

max(sum(randi([1,8],40,1)+randi([1,8],40,1)))/360

and all the tables

min(sum(randi([1,8],40,1e6)+randi([1,8],40,1e6)))/360

max(sum(randi([1,8],40,1e6)+randi([1,8],40,1e6)))/360

Roughly, I can say that every cleric can be expected to heal within +/-30% of the expected median, though typical random sample results might be +/-10%.
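For anyone without Matlab, here is the 2d8 check as a stdlib-Python sketch (fewer tables than my million-run version, so the min/max band is a bit narrower):

```python
import random, statistics

def heal_totals(rolls=40, tables=20_000):
    """Total 2d8 healing over a level's worth of rolls, per table."""
    return [sum(random.randint(1, 8) + random.randint(1, 8)
                for _ in range(rolls)) for _ in range(tables)]

totals = heal_totals()
med = statistics.median(totals)                  # the odds say 9 * 40 = 360
spread = (min(totals) / 360, max(totals) / 360)  # best/worst table vs median
```

The median sits right where the fractional odds put it, while the individual tables fan out well above and below it.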

Now compound this with the variance from all those missed heals, which I did not consider in my healing calc, as well as the variance from fumbles and crit successes. Lunch is long over, so I am not going to write that program. But this is enough to convince me that a +1 does not matter when you are using uniform dice with this much variance.

Go play a wargame that uses dice pools that give more predictable results.

 1 person marked this as a favorite.

Fine - prove me wrong and simulate a level's worth of rolls using actual rolls, not odds.

It is a random process - using the fractional odds tells you the perfect result for an infinite of rolls.

A player's results over a level are a finite sample of the infinite results needed to achieve those perfect odds. If you want a better estimate than one player, you have to simulate many, many players over a level. Confidence intervals absolutely do apply.

I have yet to see anyone report on their +/- variance, despite giving a precise fraction that says this option is better than that option.

 1 person marked this as a favorite.

Fine - prove me wrong.

Make a spreadsheet that runs a dozen encounters with a dozen attacks to simulate a level thousands and thousands of times. Report back to me when you have achieved the fractional odds with high enough precision that you trust you can report on the fractional differences between builds.

I will not wait up - because I already know that the people who have done this had to simulate 50,000 rolls to get confidence in the averages backing the build differences they were simulating. And even then people tell them they did it wrong, because their numbers are slightly off from what was calculated using fractional odds. Fractional odds only represent the result of infinite rolls; finite rolls suffer limited statistical precision.

My feel for the numbers is that over the level you certainly can claim the barbarian is better than the wizard at melee. But this fighter option vs. that ranger option? Not buying it until someone shows me the realistic level simulation with the confidence intervals.

 2 people marked this as a favorite.

I haven't run the numbers here, so I'll believe you here. Though I don't know what you mean by ".1 precision" in terms of this metric. Can you elaborate?

Usually what is used to bound an average estimate is called the 95% CI, or confidence interval. What it means is that if someone else repeats your simulation with the same number of rolls, you are 95% confident their results will fall within the +/- bounds you reported for your average. This is necessary because fractional odds are only exact for an infinite number of rolls; any finite number of rolls must be reported with +/- bounds.

So if it takes 5000 rolls to achieve a 95% CI of +/-0.1? That precision is a meaningless number when rolling the dozens of encounters that make up a level; it simply is not achievable.

A QA/MFG engineer would be out of work if they told the boss they sampled a dozen widgets out of a production run of a million and it conforms to the expected norms. They have to use statistics to determine how many units they actually need in order to achieve the confidence-interval bounds they are comfortable with. Hopefully their process is Gaussian, which takes fewer samples than a uniform process to achieve high confidence in the numbers.
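The widget-sampling logic runs in reverse too: given a target CI half-width, you can solve for the number of rolls needed. A small stdlib-Python sketch using the usual normal approximation (z = 1.96 for 95% confidence):

```python
from math import ceil, sqrt

def rolls_for_ci(sd, halfwidth, z=1.96):
    """Rolls needed so the 95% CI on a sample mean is +/- halfwidth.

    Standard normal-approximation sample-size formula: n >= (z*sd/h)^2.
    """
    return ceil((z * sd / halfwidth) ** 2)

d20_sd = sqrt((20 ** 2 - 1) / 12)   # sd of a uniform d20, about 5.77
```

For a d20, pinning the mean down to +/-0.1 takes on the order of 13,000 rolls - far beyond a level's worth of dice.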

I am not saying stop with the DPR - I am saying qualify it with precision bounds using variances across realistic simulations.

 1 person marked this as a favorite.
Squiggit wrote:
krazmuze wrote:
Gamblers fallacy...

Gambler's fallacy refers to the myth of 'maturity of chances'. The idea that if you roll a series of bad rolls your next one is more likely to be a good roll.

That has nothing to do with saying that 10 is larger than 8. That's more like basic arithmetic.

Not if 10 and 8 are your DPR; then they are averages calculated from fractional odds, so statistics absolutely apply here. While you calculate odds using basic arithmetic, that represents the result over an infinity of rolls. If you have a finite number of rolls, then you have to simulate it and report the variances.

Vegas makes money off the gambler's fallacy because the gambler believes short-term results should match the average, so if they are under the odds they expect to shortly get back over them. But only Vegas itself can achieve the odds, since its quantity of rolls is vastly higher across its many gamblers; the individual gambler is always subject to the variance of shorter runs.

Which is why you need to simulate realistic numbers of rolls to find out whether 8 can indeed beat 10 - which can happen, statistically, when the numbers are properly reported with their variance, such as 8+/-6 vs 10+/-2. Now, while I do not think the variances within a level are that bad, I do think that the dozens of encounters with dozens of rolls each that represent a level will not achieve the 0.1 precision needed to claim this option is better than that option.
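The 8+/-6 vs 10+/-2 comparison can be made concrete with a normal-overlap calculation. This stdlib-Python sketch treats the quoted +/- values as one standard deviation (an assumption - elsewhere I quote 2-sigma ranges) and asks how often one random player out-totals another:

```python
from math import erf, sqrt

def p_outrolls(mean_a, sd_a, mean_b, sd_b):
    """P(one random player of build A out-totals one of build B).

    Treats per-level totals as roughly normal; the difference of two
    normals is normal, so this is just a standard-normal CDF lookup.
    """
    z = (mean_a - mean_b) / sqrt(sd_a ** 2 + sd_b ** 2)
    return 0.5 * (1 + erf(z / sqrt(2)))
```

With spreads like those, the nominally better 10-average build wins such a head-to-head well short of always, which is the whole point about reporting the variance alongside the DPR.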

 1 person marked this as a favorite.

Sure you mean to say 'you cannot' rather than 'you can'. The entire point that statistics makes is that you cannot say one is better than the other.

Two encounters' worth of results, just rolled with real dice. Flat checks, because I do not want to write a fancy simulator. I did not even have to cherry-pick to find two runs that prove my point - the first two do.

20 13 4 9 1 3 17 11 17 13 16 20 - AVE 12, two crit success, 1 fumble, 5 > DC15
1 11 5 4 1 14 19 7 14 13 9 12 - AVE 9, no crit success, 2 fumbles, 1 > DC 15

Persist that over a level and I doubt it will average out such that I can say with 95% confidence that this fighter is better than that ranger. In fact, if I took the fighter and got the second run, I would be pissed that my supposedly inferior ranger buddy was doing better than me. Gambler's fallacy... that difference of 3 in average, and of 4 in DCs achieved, is much greater than most build differences.

I want to see people do simulations that report variations on a per-level basis, so that people can conclude when things are not significantly different, instead of saying 'this is 0.1 better, it is a must-take gold option.'

 2 people marked this as a favorite.

Then do the level simulation and prove me wrong. Every simulation I have seen either calculated the fractional odds - which are only exact for an infinite simulation - or simulated 50,000 runs to get a precise average - which is not the reality of any player's level.

The fact is that IF the variance is greater than the differences in average, then the build is not more important than the dice.

This is statistics 101

3.7+/-1 and 3.6+/-1.2

You cannot conclude that A is better than B; you must instead conclude that they are not significantly different, because the ranges of the averages overlap substantially.

3.7+/-0.1 and 3.4+/-0.15

You absolutely can conclude that A is better than B, because the averages do not overlap (to whatever confidence you calculated - usually 95% confidence is used). You cannot, however, conclude by how much, as it could be 3.6 vs. 3.55 or it could be 3.8 vs. 3.25.

It is this very gambler's fallacy - thinking the average odds apply to you - that makes Vegas rich. The house can play the averages (because it makes all the plays); the player cannot (because they cannot play enough).

 1 person marked this as a favorite.

1. Because the numbers are much tighter in 2e, variance plays a much bigger role.

This has always been true for every edition of this game, because it is founded on rolling a few uniform dice; it does not use the dice pools of wargaming, which give a more Gaussian result where the average has meaning.

The tightness of the game just makes it even more true.

I challenge anyone doing these DPR simulations: you need an unrealistic number of trials (tens or hundreds of thousands) to get the average to settle into what fractional odds tell you it should be.

That is the very nature of averaging random trials: the average only settles in after a very large number of them.

Instead, take a realistic amount for a level - say 200 rolls - then simulate many players doing that level and report the sigma or quartile variances.

I suspect player luck will dominate results more than DPR, and show these narrow spreads between build choices are meaningless.
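That 200-roll suggestion looks like this as a stdlib-Python sketch, reporting the quartile spread of per-player hit rates. The ATK +7 vs AC 18 numbers are the 50% case from my earlier posts:

```python
import random

def level_hit_rates(atk=7, ac=18, rolls=200, players=10_000):
    """Sorted per-player hit rates over a level's worth of attack rolls."""
    return sorted(
        sum(random.randint(1, 20) + atk >= ac for _ in range(rolls)) / rolls
        for _ in range(players))

rates = level_hit_rates()
q1 = rates[len(rates) // 4]        # lower-quartile player
q3 = rates[3 * len(rates) // 4]    # upper-quartile player
```

Even at 200 rolls, the middle half of players lands a couple of points either side of the 50% the odds promise, with the tails spread wider still - player luck visibly riding on top of the build.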

 1 person marked this as a favorite.
Zwordsman wrote:

I would probaly also consider some sort of class ability

"Everything in its place"
The alchemist gains 2 bulk that is only applicable to the Alchemist kit, formula book, and daily allotment of items.

(just because this would also make it so an alchemsit basically wasn't restriscted to having to have 12 or 14 str just to operate)

Also consider gifting a "aclhemist tool kit" that only works for the alchemist or something.

2 or 3 bullk+1/3rd starting gold is painful

Adjusting inventory bulk is the best way to get 2 bulk back, and I suspect that is the way the errata will go. Since these are the alchemist's own inventory items, it makes no sense that they are just as bulky for the alchemist as for everyone else.

Alchemist Tools should be 1 bulk, not 2 (the adventurer's pack was errata'd that way).
The Alchemist Formula Book should be L bulk, not 1 (spellbooks already work that way).
Alchemist items in a bandolier should total 1 L (a full pouch of gold and an arrow quiver already work that way).
Alchemy reagents are like any other material component - they have no bulk - so why are the items you make out of them bulky?

 2 people marked this as a favorite.

I was thinking about the next encounter: the vine lashers are not resistant to piercing, so the ranger had fun target practice after the near-TPK. All the GM can do is sit there. They could random-walk and hope to find someone, but as written they are supposed to wait, slapping the party into setting off the gas bushes.

But the blood lashers are weak to fire, and that is indeed the ranged solution that can be given via Recall Knowledge. These are not unique blighted creatures; the sidebar says they are found in hot/temperate areas and need light/moisture. So any Nature check will do, as they are common natural creatures. The treeline has some setbacks that are more than 20' away, while the bushes have a 30' sense - though the text says they wait until someone is within 20' to attack. Their sense is also imprecise.

And the text clearly says the mutant wolves do not attack; they wait for the party to enter their cave. That to me is not back-to-back encounter design. Heal up with 10-minute breaks, camp out with the ranger as needed - as written you are perfectly safe from the wolves. Their senses do not reach outside the cave; they do not even know the party is there.

So I do not agree that they mistakenly wrote severe encounters back to back. They provide means to lessen the severity, as well as spots to take breaks. So those lore checks and break times are very important.

 1 person marked this as a favorite.

Well I cannot speak to the other adventure as I have not read it yet - but for Plaguestone ... that is on the players if they blew all their resources on the first fight of the day. Maybe it does take a TPK for them to learn they have very limited dailies and you will not allow camping out after every fight.

Even so, for this specific encounter they have a ranger guide they can camp overnight with if they need to. While she is written to stay out of combat, she certainly helps with handouts. I am saying that as a GM you should add up the 'sequential' encounters and allow the breaks, and even naps, where they can be justified - when pressing on just does not add up.

The lashers fight is actually only severe if you melee them. At range they can do nothing but die in place. As I said, it took a near-TPK to figure this out. The next lasher encounter was very lopsided (and boring to GM, as there is nothing you can do). So when you see a severe encounter out of place as the first fight of the day, look for the weakness that was put there to make it not so severe. If players are not using Recall Knowledge, give them a freebie to clue them into this game mechanic, whose sole purpose is making fights easier.

'PC Ranger, you would know from having lived in these woods (or from being warned by your ranger guide) to steer clear of lashing vines.' The fight just became trivial... then let players know that next time they need to spend an action and roll for that free advice - or choose the Investigate activity in exploration beforehand.

 5 people marked this as a favorite.

The licensed SRD website is full of links. The fact is, I am finding this free resource much more useful than the PDF that I paid for. I recommend people not buy the PDF and use the SRD site instead. Be sure to send customer service a note so that Paizo knows what they need to do to get you to buy the product. Not sure what their piracy protection of footnote watermarks with your name has to do with not doing links.

 1 person marked this as a favorite.

What about the alchemist needing Strength to carry all their stuff, even though STR is not otherwise useful to them?

While you could leave the tools in a safe spot, I say take multiple bandoliers to hold more items and make them negligible bulk. It does not specifically say this, but if the bandolier itself is negligible to wear while light to store, so should be the items it holds. They should have just written a general rule that containers (backpack, bandolier) increase your bulk capacity.

 1 person marked this as a favorite.

I edited my post to refine the idea, so it is now a countdown d20 which reduces the odds, rather than 10+lull - please requote so it does not confuse!

You could call it the LulZ die, but lulls is less on the nose. Someone early in the thread coined it, I just stole the name for my idea!

The same lull-die idea works for camping overnight in the dungeon. Figure 8-12 hours (camp, eat, sleep, prepare, eat), and that lull die gets rolled roughly halfway down. They have around a 50% chance of the orc guards pounding on the door in the morning.

Of course, rolling the random encounter does not mean a fight. Run the NPCs in their own exploration mode to see if they succeed at seeking out the party. Stumbling onto them in an open room with a fire will be a fight; hiding in a side branch of the sewers with high Stealth, probably not.

 1 person marked this as a favorite.

I think the proposed tension pool mechanic being a pool of die is complicated.

[EDIT better idea to use d20 lull die as the DC].

Use a d20 countdown die for DC itself to track the lulls. The first time they take an hour break, put it on the table at 20.

Let everyone heal on 10-minute breaks (RAI over RAW), but for each player who takes longer than a 10-minute break to heal, push the lull counter down. Succeed at the DC indicated on the lull die and the GM does not roll a random encounter after the longer break. If they push on, reward them by rolling the lull die back up.

You want hard mode - change that die to a d12 or worse.

 1 person marked this as a favorite.

I think of it as making complicated gestures: the MAP represents your ability to keep performing continuously for three two-second actions. You might get the first one off, but combat activity disrupts you and you mess up the second.

 3 people marked this as a favorite.
Thebazilly wrote:

Oh boy is that the truth. 4 sessions in and all the players have been having a streak of bad luck, while the GM has been rolling hot. (Last night, nobody rolled above a 5 for at least an hour, while the GM rolled 3 critical attacks in a row.)

It was so demoralizing we had to stop the session halfway through.

By the book, everyone gets a hero point every session; they can use them to break up the bad streaks.

 2 people marked this as a favorite.