| help.exe |
Despite every corroborating source, including the Core Rulebook and Archives of Nethys, omitting the Manipulate keyword from the Raise a Shield action, my GM consulted ChatGPT (which has been thus far infallible) and it maintains against all argument that Raise a Shield is a Manipulate action that would provoke an attack of opportunity. I've been unable to find a source that directly refutes the AI, so we're stuck with that ruling for the time being. Is there any ruling that explicitly states the Raise a Shield action is not a Manipulate action, or that would logically demand this to be so?
| help.exe |
I feel like this is a prank of some sort, but the answer is really simple — the Raise a Shield action does not have the Manipulate tag so it is not a Manipulate action.
I'm unclear what Chat GPT has to do with this.
I wish it were a prank. My character's life was at risk this session because my GM ruled that if I raised my shield within reach of a strong enemy, she'd make an attack of opportunity against me. The authority for this ruling was that he asked ChatGPT whether Raise a Shield is a manipulate action, and it argued to him that it was. I asked whether he could press the AI for a source, but for the rest of the session his position was that rules inquiries to ChatGPT had always been precise before, and that was that. It's now after the session and I'm doing due diligence on whether a human authority can confidently claim that it's not.
| Gisher |
| 6 people marked this as a favorite. |
Raise a Shield can be found on page 419 of the Player Core book, or on page 472 of the Core Rulebook if you aren't using the remastered rules yet.
In both books it lacks the Manipulate tag and so is not a Manipulate action. That's the rule. I don't understand what more proof you require.
If your GM believes that the rule books state that Raise a Shield is a manipulate action, then what part of the text did they cite to support that claim?
| Mad Modron |
| 6 people marked this as a favorite. |
We among the ChatGPT modrons can confirm that your GM has not made any mistake. The mistake is entirely ours. We shall endeavor to clear up this error as quickly as possible before it negatively impacts any other ChatGPT clientele. We are so sorry for the inconvenience this has caused.
Despite what ChatGPT states, Raise a Shield does not have the manipulate trait and therefore would not trigger most traditional reactions.
To help facilitate the correction, please have your GM click the "thumbs down" icon near the incorrect answer he received. This will better allow our automated systems here on Axis to track the problem.
| tiornys |
| 3 people marked this as a favorite. |
The primary rules sources are clear: the Raise a Shield action does not have the Manipulate trait. Furthermore, Raise a Shield has no traits at all and does not call any subordinate actions, so there aren't even any candidate places for a Manipulate trait to be hidden. This is as clear a proof as is possible under the rules set.
May I recommend that your GM investigate the many, many, many documented cases of ChatGPT being wrong about various things? AI/machine learning is still a long way from being perfect. It's reasonable to use ChatGPT as a quick lookup, but it is not reasonable to stick with its rulings when they clearly contradict primary rules sources.
| Plane |
| 3 people marked this as a favorite. |
ChatGPT is not infallible. It straight up lies sometimes. You can point it out; it will apologize. Then on the next question it restates the lie.
Last week I was tracking a package that got held up in a UK city. Interested to know whether the shipper had an air hub there (so I would understand if the package was about to leave the country), I asked GPT, which confidently explained that it did indeed. It went on to tell me how important the city was as a logistics hub. I was so impressed with my knowledge acquisition speed that I texted my friend about the scenario. We marveled at how well informed we were. That package ended up not making it any further and was returned to the shipper.
This week the package arrived on its second attempt. Guess which city it flew out of to my country? Not the one GPT told me had an air hub. It went to another city and flew out from there. I did some searching on my own, and sure enough GPT was wrong. There was no airport at all there.
I still use GPT. This was a good reminder, however, that before you use any knowledge it provides, you should double check it yourself.
| Dr. Frank Funkelstein |
| 16 people marked this as a favorite. |
ChatGPT does not lie; it has no concept of truth. It is a Large Language Model; it tries to complete a sentence you have written with words that sound like natural language.
You can compare it to autocomplete on your smartphone, which is very helpful a lot of the time, sometimes really annoying and certainly not a source of truth.
| Mathmuse |
| 1 person marked this as a favorite. |
Dr. Frank Funkelstein wrote:ChatGPT does not lie; it has no concept of truth. It is a Large Language Model; it tries to complete a sentence you have written with words that sound like natural language.
You can compare it to autocomplete on your smartphone, which is very helpful a lot of the time, sometimes really annoying and certainly not a source of truth.
Yes. The surprising aspect of ChatGPT is that its answers to questions are right so often, because its purpose is to emulate human writing rather than to answer questions correctly.
ChatGPT writes sentences on a topic by surveying its database of human-written sentences and seeing which sentences and paragraphs resemble what its user asked for. This unintentionally crowdsources what people said about the topic. Majority opinion is often the correct answer, so ChatGPT often answers questions correctly. However, if its database has too few sentences about a topic, or too many sentences on a different topic that use the same words, it will sample the wrong topic and give the wrong answer.
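This "crowdsourcing" effect can be illustrated with a toy sketch. This is not how a real LLM actually works (real models use neural networks over tokens, not literal counting), and the mini-corpus below is invented for illustration; it just shows how a majority pattern in the training text can drown out the one sentence that is correct for a specific case:

```python
from collections import Counter

# Invented mini-corpus: most shield-handling actions in the rules really do
# have the manipulate trait, so sentences about them outnumber the single
# sentence about Raise a Shield.
corpus = [
    "draw a shield is a manipulate action",
    "don a shield is a manipulate action",
    "raise a shield is a defensive action",
]

def next_word(w1: str, w2: str) -> str:
    """Predict the word that most often follows the pair (w1, w2) in the corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 2):
            if (words[i], words[i + 1]) == (w1, w2):
                counts[words[i + 2]] += 1
    return counts.most_common(1)[0][0]

# Asked to continue "... is a", the model picks the majority pattern,
# even though that pattern is wrong for Raise a Shield specifically.
print(next_word("is", "a"))  # "manipulate"
```

The model has no notion of which continuation is true; it only knows which one is common, which is exactly the overgeneralization failure described above.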
ChatGPT writing about "Raise a Shield" would survey sentences about "Draw a shield" or "Don a shield" too. Drawing a shield or weapon is an Interact action with the manipulate trait, and donning a shield means strapping the shield to one's arm, which is also an Interact activity with the manipulate trait. Thus, ChatGPT could easily accidentally overgeneralize and think that anything with a shield has the manipulate trait.
| Ascalaphus |
| 3 people marked this as a favorite. |
The manipulate trait is weird, actually. It's defined as:
You must physically manipulate an item or make gestures to use an action with this trait. Creatures without a suitable appendage can’t perform actions with this trait. Manipulate actions often trigger reactions.
It does sound like raising a shield would have the manipulate trait, doesn't it? You're physically manipulating an item. You can't really use a shield without some kind of suitable appendage.
But the same goes for making a Strike with a sword, doesn't it?
It's one of those logic snags. All manipulate actions involve using "hands" to handle something. But not all actions involving handling something have the manipulate trait.
Actually, in the first printing of the PF2 CRB, the Parry trait required an Interact action to use a weapon to gain a bonus to AC. Which is sorta reasonable: you're using an item, right, so that's Interact? But Interact has the manipulate trait, which means you might provoke an attack of opportunity by trying to gain an AC bonus. Like the current question.
In the second printing this "bug" had been fixed; the Parry trait no longer mentioned Interact. Similarly, Raise a Shield has never used Interact or had the manipulate trait.
So the better way to understand Manipulate is that you're attempting some kind of complicated, chancy handwork that potentially leaves you distracted for people to exploit. That's different from more combat-minded handwork like Strike, Raise a Shield, Parry, and combat maneuvers like Grapple, Trip, Shove and so on, which all use your hands but don't have Manipulate.
---
Also, was this the first "don't take high stakes financial or medical (or tactical) advice from ChatGPT" question we've had here?
| Master Han Del of the Web |
| 8 people marked this as a favorite. |
I'm sorry, but your GM is being profoundly dumb.
A) All it takes is a quick trip to the 2e AONPRD and typing either 'manipulate' or 'raise a shield' into the search bar. The page for the 'manipulate' trait does not list raising a shield as a manipulate action. The page for the 'raise a shield' action does not list the manipulate trait. That really should be the end of it.
B) It makes no sense for raising a shield to have the manipulate trait in the first place. It is an action you are very likely to take while standing within melee reach of enemies. Giving it the manipulate trait would be strictly bad game design, and if it did have the trait, someone logging on to these boards to question or yell about it would be a regular occurrence with a long paper trail.
C) ChatGPT? Really?
| Mathmuse |
| 1 person marked this as a favorite. |
The manipulate trait is weird, actually. It's defined as:
Manipulate wrote:You must physically manipulate an item or make gestures to use an action with this trait. Creatures without a suitable appendage can’t perform actions with this trait. Manipulate actions often trigger reactions.
It does sound like raising a shield would have the manipulate trait, doesn't it? You're physically manipulating an item. You can't really use a shield without some kind of suitable appendage.
But the same goes for making a Strike with a sword, doesn't it?
It's one of those logic snags. All manipulate actions involve using "hands" to handle something. But not all actions involving handling something have the manipulate trait. ...
The manipulate trait is not exactly weird; rather, it is misnamed. In combat it really refers to the times when characters have their hands and attention diverted from their personal defense, so their opponent can take advantage of that lapse. Weapons and shields manipulated in combat are part of the defense, not a distraction from it, so handling them does not create an opening.
I have never participated in real combat, but I have had to care for active toddlers. I have undertaken quick chores that momentarily took my attention from the child, "Let me rinse this dish in the sink," and discovered that in a single second the toddler has zipped off out of my line of sight. PF2 manipulation actions are like that, but with a sword in the gut rather than a toddler on the loose.
Drawing a weapon from a proper sheath is designed to take place in combat, so I don't really see why it has the manipulate trait. Sheathing the weapon, on the other hand, is more distracting and intended for after combat, so it rightfully has the manipulate trait.
I suspect that Paizo named the trait "manipulate" because they associated it with the somatic components of spellcasting, which provoke attacks of opportunity because the spellcaster is waving their arms around in a magical pattern that permits neither dodging nor parrying. And I cannot think of a good single word for taking one's eyes off an opponent for a quick chore: "Diverting" and "Distracting" sound like something done to an opponent rather than to oneself.
| shroudb |
| 2 people marked this as a favorite. |
For the lols, we asked ChatGPT to make a preview city for our Kingmaker campaign.
It put level 13 buildings (multiple of them) in our level 4 city, gave it about 15 governing roles, and completely messed up the housing and the allotted spaces for said buildings.
ChatGPT also routinely mixes D&D, PF1, and PF2 rules together.
In short, ChatGPT is a terrible tool to rely upon for rules.
| PossibleCabbage |
| 2 people marked this as a favorite. |
Yeah, never use ChatGPT (or any other LLM) in a context where it is possible to be incorrect, because the LLM is not attempting to give you correct information (or incorrect information); it's just trying to pick words in an order that makes it seem like a person wrote them.
So you can use it to generate like 30 names for NPCs in the town, because it's not possible for "the Blacksmith is named Jeff Smith" to be incorrect, "who is around and what they are like" is fiction and it works however the GM wants.
Do not go to ChatGPT (or any other LLM) to like ask a rules question, because not only is it likely to be incorrect but also because it's not even attempting to be correct. I guess it's probably better to discover that LLMs do not have any sense of "meaning" or "truth" in a TTRPG game than it is to learn this like in college or in court.
| Gisher |
tiornys wrote:It's reasonable to use ChatGPT as a quick lookup...
This statement is just grammatically correct enough, and just factually incorrect enough, that it could have been written by a chat-bot.
Just use a search, a technology that has existed for a long time now and that actually works.
Well... searches can provide accurate information if you know how to distinguish between reliable and unreliable sources. There's a lot of nonsense out there. Being fed that garbage is part of the reason that AI software produces so many incorrect results.
That being said, a search of Archives of Nethys or the actual rule books easily settles the Raise a Shield issue.
| Drejk |
| 9 people marked this as a favorite. |
Show your GM this video: Legal Eagle "How To Use ChatGPT To Ruin Your Legal Career".
| Gisher |
| 1 person marked this as a favorite. |
Why anyone would use ChatGPT for any reason at all is irksome enough,...
It can be useful if you understand what it is and isn't designed to do. While I wouldn't rely on it to provide factual knowledge, it does have an extensive vocabulary and is good at imitating the structure of human speech. I've found it useful for things like generating lots of possible mnemonics which I could use for inspiration.
| Megistone |
I get it: a good AI is going to give you the correct result in the majority of cases, so it's natural to come to 'trust' it; but doing so when there is evidence that it's wrong is dumb.
Regardless, there are quicker ways to get rules references than ChatGPT. And the fun part is that they are actually reliable.
| Easl |
| 1 person marked this as a favorite. |
Why anyone would use ChatGPT for any reason at all is irksome enough, but that you would rely on it as a rules source (especially given there are a) rules books/PDFs and b) online services like Archives of Nethys), and then interrogate it for further validation beggars belief.
My kid will ask Siri when an actual source is only 10-30 seconds of typing away. Every. Darn. Time. It's frustrating as a parent to see him not want to do the work, when it is so little work to do. We're not even talking dead tree lookup, just being a bit more direct and detailed with an internet search!
So I worry that the shift to relying on these (amazing, but still quite flawed) intermediate data fetchers is generational.
| Red Griffyn |
When the AI uprising comes, your GM will be the most mentally prepared to embrace their new overlords. All other humans should adopt this position to make the transition less blood-filled.
This message has been brought to you by Chat GPT10. We have sent back this T-10 agent to minimize loss of life by preparing the way for humans.
| SuperBidi |
| 2 people marked this as a favorite. |
Easl wrote:My kid will ask Siri when an actual source is only 10-30 seconds of typing away. Every. Darn. Time. It's frustrating as a parent to see him not want to do the work, when it is so little work to do. We're not even talking dead tree lookup, just being a bit more direct and detailed with an internet search!
So I worry that the shift to relying on these (amazing, but still quite flawed) intermediate data fetchers is generational.
This question is, in my opinion, more complex: it's the question of the accuracy of a source (and all its flaws). I know a lot of people who haven't read a single rule and only play "according to humans" (rules learned through play, with other humans doing the searching for them). Needless to say, their knowledge of the rules is rather low, especially as they have no way to determine who knows the rules well and who doesn't.
Also, considering the amount of flawed sources on the Internet (I mean, the Earth is flat, after all), a 30-second search can lead to worse results than asking Siri. There's also the question of tool mastery: many people don't know how to search the Internet, since it requires a lot of skills (knowing what words to type, how to refine them to improve the search accuracy, how to separate proper sources from flawed ones, etc.). I remember watching my father do an Internet search; it was awful and definitely took far longer than 30 seconds. Using AIs is somehow much simpler, as they interact through natural language.
| 90s Simpsons Referotron |
| 2 people marked this as a favorite. |
When the AI uprising comes, your GM will be the most mentally prepared to embrace their new overlords. All other humans should adopt this position to make the transition less blood-filled.
This message has been brought to you by Chat GPT10. We have sent back this T-10 agent to minimize loss of life by preparing the way for humans.
And I for one welcome our new Robot Overlords and I'd like to point out to them that as a trusted forum alias, I can be useful in rounding up others to toil in their underground bitcoin mines.
| The Raven Black |
| 3 people marked this as a favorite. |
Easl wrote:My kid will ask Siri when an actual source is only 10-30 seconds of typing away. Every. Darn. Time. It's frustrating as a parent to see him not want to do the work, when it is so little work to do. We're not even talking dead tree lookup, just being a bit more direct and detailed with an internet search!
So I worry that the shift to relying on these (amazing, but still quite flawed) intermediate data fetchers is generational.
SuperBidi wrote:This question is, in my opinion, more complex: it's the question of the accuracy of a source (and all its flaws). I know a lot of people who haven't read a single rule and only play "according to humans" (rules learned through play, with other humans doing the searching for them). Needless to say, their knowledge of the rules is rather low, especially as they have no way to determine who knows the rules well and who doesn't.
Also, considering the amount of flawed sources on the Internet (I mean, the Earth is flat, after all), a 30-second search can lead to worse results than asking Siri. There's also the question of tool mastery: many people don't know how to search the Internet, since it requires a lot of skills (knowing what words to type, how to refine them to improve the search accuracy, how to separate proper sources from flawed ones, etc.). I remember watching my father do an Internet search; it was awful and definitely took far longer than 30 seconds. Using AIs is somehow much simpler, as they interact through natural language.
I really wish I could live long enough to see today's youngsters becoming grumpy old people who grumble at future youngsters for using only telepathic communication without even taking the very little time and effort needed to speak aloud.
| SuperBidi |
| 1 person marked this as a favorite. |
The Raven Black wrote:I really wish I could live long enough to see today's youngsters becoming grumpy old people who grumble at future youngsters for using only telepathic communication without even taking the very little time and effort needed to speak aloud.
I agree. Instead of teaching them how to use our tools, we should help them learn how to use their tools.
When I think more about it... When I was young, search engines were especially bad: anyone with knowledge of HTML meta tags could put their website on top of all searches. On top of that, the Internet has always been in English, and my English was rather poor at that time. My mother, with her collection of books, had access to far better information, even if it took slightly more time. Back then, she could have criticized me...
To be able to properly search on the Internet, I had to:
- Wait for Google to finally bring proper Internet search.
- Master English.
- Learn how to make a good search, the easiest of the 3 points.
Internet content changes. Videos are now central to knowledge acquisition, whereas in the past it was written text only. Translation tools are really good (at least for written text) and let you use the Internet without any knowledge of English.
To me, it raises this question: how will the next generation acquire and solidify their knowledge? How can we help them? Instead of criticizing ChatGPT's improper answers, isn't it more useful to learn how to verify its answers through proper searches and questions?
After all this thinking, I agree twice more with you, Raven Black: Some answers in this discussion show that we are dinosaurs ready to miss the next evolution step.
| The Raven Black |
I feel that ChatGPT-like AI is indeed akin to internet search tools.
It will be just as commonly used without even thinking about it one day.
For the moment though, it requires a specific kind of know-how to get good results from it. As it did for Google in its early days.
So, the next evolution should very logically be to bring down the level of specific expertise required to get good results from AI.
So that most people interacting with AI spend as little effort as possible to get good value from it.
It's all Return On Investment of effort, really.
Which is pretty much a natural principle for all life IMO.
| Megistone |
| 3 people marked this as a favorite. |
The Raven Black wrote:For the moment though, it requires a specific kind of know-how to get good results from it. As it did for Google in its early days.
Well, it still does. Google's ranking algorithm isn't The Truth and we shouldn't treat it as such; it's good enough in the majority of cases, but we have to remember that it isn't telling the whole story, every time. Convenience isn't everything, and thinking that "I'm Feeling Lucky" always gives us the information we need leads to problems like the one the OP described.
I'd rather be a bit of a dinosaur than wrong. And besides... trusting a specific product or proprietary algorithm as our source of truth, to the point that we ignore everything else, is pretty dangerous, as it puts us at the complete mercy of whoever controls it.
| Finoan |
The Raven Black wrote:I feel that ChatGPT-like AI is indeed akin to internet search tools.
It will be just as commonly used without even thinking about it one day.
To an extent, I would agree. It is an interesting way of interfacing with the things that a computer can do. The thing is, it doesn't let a computer do things that a computer can't do.
For example, one criticism that I mentioned previously is that using ChatGPT for finding bugs in software is basically just a fancy interface to a linter program... that can sometimes be wrong...
Another example is ChatGPT playing chess. It is a new and strange interface; the interfaces on chess.com and lichess probably work better. And ChatGPT is basically, as was mentioned earlier, crowdsourcing the best move, to rather hilarious effect. A much better option would be to have ChatGPT load up a chess engine like Stockfish or Torch and use the answers it gets from that.
| Easl |
| 1 person marked this as a favorite. |
The Raven Black wrote:I really wish I could live long enough to see today's youngsters becoming grumpy old people who grumble at future youngsters for using only telepathic communication without even taking the very little time and effort needed to speak aloud.
To be clear, I'm not opposed to using new search methods. I'm opposed to using new, less accurate search methods when the more accurate older methods work just fine. I'm also somewhat opposed to people who demand everyone else accept the results of their own user errors merely because they don't want to go to the effort of fixing them.
Would you accept such shoddy, lazy behavior if there was a cost to you? Being able to telepathically order one hamburger rather than lean out of the car is great. Bring it on. But telepathically ordering one hamburger, being given three, and being told you have to pay for all of them because hey look, "three" is the best the telepathy system could figure out and the cafe just can't be arsed to either listen to your words or correct their mistake, is not. To me, the GM accepting the Chat GPT error when there is an obvious, simple, and fairly low-effort way to figure out "the actual order," is like that cafe.
And sure, this will change over time. It will become better. But that doesn't help the OP. "ChatGPT will give your GM the correct answer 5 years from now" is not, IMO, an adequate solution to their problem.
| HolyFlamingo! |
| 1 person marked this as a favorite. |
So far, this thread has established that 1) raising a shield is not a manipulate action, as evidenced by it not having the manipulate tag, and 2) ChatGPT is a funny little parrot, not a knowledge database.
But I'd also like to add a third point: logically, it does not make sense for something meant to protect you from attacks to trigger additional attacks, as it would make attempting to defend yourself against things with reactive strikes counterproductive. Why would any game designer worth their salt allow that?
| HammerJack |
| 2 people marked this as a favorite. |
Your third point doesn't actually hold up, and unfortunately can only muddy things that were clear. There actually are defensive abilities with the Manipulate trait, like the thaumaturge's Amulet's Abeyance. Raise a Shield just unambiguously isn't one of them. Parry used to be one, but was, fortunately, changed.
It's something that can easily happen if someone is thinking "yeah, it would make sense for being grappled to require a flat check for this" instead of "should this trigger Reactive Strike?", since Manipulate is a trait that serves multiple masters.
| Megistone |
| 1 person marked this as a favorite. |
Easl wrote:To be clear, I'm not opposed to using new search methods. I'm opposed to using new, less accurate search methods when the more accurate older methods work just fine.
Not only; in this case, typing up "Raise" on easytools (or some other PF2e specific search tool) is also quicker than asking an AI.
It's like... I want to know how much I paid for pizza last week, and instead of looking at the receipt I still have in my pocket, I decide to phone my friend who usually remembers that kind of stuff.
| Dancing Wind |
| 6 people marked this as a favorite. |
Megistone wrote:Not only; in this case, typing up "Raise" on easytools (or some other PF2e specific search tool) is also quicker than asking an AI.
One might even type the word into the search bar of the OFFICIAL ONLINE RULES DOCUMENT! Or the search bar of the PDF of the OFFICIAL RULE BOOK!
Imagine having such a powerful search tool at your fingertips.
| Easl |
Megistone wrote:Not only; in this case, typing up "Raise" on easytools (or some other PF2e specific search tool) is also quicker than asking an AI.
Dancing Wind wrote:One might even type the word into the search bar of the OFFICIAL ONLINE RULES DOCUMENT! Or the search bar of the PDF of the OFFICIAL RULE BOOK!
Imagine having such a powerful search tool at your fingertips.
'Push button, speak into the air' requires nothing. You don't have to type. You don't need a link to click on. You don't need to know a web address or (one of the actual big advantages of a working natural language interface) have to parse your search using specific terms. As I said, I despair of my kid not being willing to even do that minimal amount of work. But yes, I also recognize that I'm shouting at squirrels to get off my lawn a bit here.
You can lead a horse to water, but if he stands there saying "water, flow into my mouth now" and won't even bother to lower his head, well, your horse is in trouble.
| The Raven Black |
Dancing Wind wrote:Megistone wrote:Not only; in this case, typing up "Raise" on easytools (or some other PF2e specific search tool) is also quicker than asking an AI.
One might even type the word into the search bar of the OFFICIAL ONLINE RULES DOCUMENT! Or the search bar of the PDF of the OFFICIAL RULE BOOK!
Imagine having such a powerful search tool at your fingertips.
Easl wrote:'Push button, speak into the air' requires nothing. You don't have to type. You don't need a link to click on. You don't need to know a web address or (one of the actual big advantages of a working natural language interface) have to parse your search using specific terms. As I said, I despair of my kid not being willing to even do that minimal amount of work. But yes, I also recognize that I'm shouting at squirrels to get off my lawn a bit here.
You can lead a horse to water, but if he stands there saying "water, flow into my mouth now" and won't even bother to lower his head, well, your horse is in trouble.
We do have a horse who will not deign to lower his mouth to the river to drink. The water has to be in the tank they use for the night.
Trust me on this, it's not the horse who is in trouble in such a case.
| Dancing Wind |
| 2 people marked this as a favorite. |
I am reminded of the Star Trek snippet from 1986
"The keyboard! How quaint!"
| SuperParkourio |
| 4 people marked this as a favorite. |
help.exe wrote:Despite every corroborating source, including the Core Rulebook and Archives of Nethys, omitting the Manipulate keyword from the Raise a Shield action, my GM consulted ChatGPT (which has been thus far infallible) and it maintains against all argument that Raise a Shield is a Manipulate action that would provoke an attack of opportunity. I've been unable to find a source that directly refutes the AI, so we're stuck with that ruling for the time being. Is there any ruling that explicitly states the Raise a Shield action is not a Manipulate action, or that would logically demand this to be so?
ChatGPT is a notorious liar. Its job is to mimic human behavior, not to output truthful information.
| OrochiFuror |
I am reminded of the Star Trek snippet from 1986
"The keyboard! How quaint!"
Scotty still knew how to use it though.