AI-GMs


Pathfinder Second Edition General Discussion


1 person marked this as a favorite.

Key word is "as we know it".

It's too soon to say whether the hobby will straight-up die because of this, or whether it will grow even bigger than it already is.

A great example is RC cars/planes. There are a whole lot of ways to play with a virtual car or plane, but people still choose to buy and play with RC cars/planes. Those video games didn't destroy that industry; they just gave people even more options.

Same with Art. AI art is not destroying people's desire to make art, nor their desire to buy art. It's just giving people more options for how to make or get said art.


1 person marked this as a favorite.
Temperans wrote:
Same with Art.

Your RC example is nice, but the art example is one I'd be cautious to use. I don't want to get obliterated by a crowd of angry artists. The art industry is in a very weird spot right now and chances are high (because AIs are cheap and governments slow) that it will massively crumble to a fraction of what it is.


Themetricsystem wrote:
Anyone asserting that they can't get anything constructive, creative, functional, or useful out of it simply doesn't know how to use the tool. It certainly isn't perfect by any measure, but those stating that this stuff has no potential are either in full-on denial, ignorant, overconfident of their own talent, or just plain liars.

I would actually agree with that.

With the caveat that as long as the AI is running an algorithm - even a sophisticated one - it is still going to be bound to what an algorithm can do.

So while it will be an awesome tool for a person to use - it will still be a tool that a person has to use.

An algorithm can't tell the difference between a good idea and a bad idea, for example. Rice's Theorem. Seriously, go research it.

Plane wrote:
However, AI will eventually surpass the wildest speculation on this thread.

Only if we can actually create one that is more powerful than an algorithm.

Plane wrote:
Post-Turing test AI will deliver.

A post-Turing Machine AI, absolutely. But a Turing Machine AI is still going to be a Turing Machine AI no matter how believable it is at talking.


5 people marked this as a favorite.
SuperBidi wrote:
Temperans wrote:
Same with Art.
Your RC example is nice, but the art example is one I'd be cautious to use. I don't want to get obliterated by a crowd of angry artists. The art industry is in a very weird spot right now and chances are high (because AIs are cheap and governments slow) that it will massively crumble to a fraction of what it is.

And when photography was popularized, it put art in the hands of the common person. This helped lead to and influence art movements like Impressionism: new, deeper ways to make art and think about the world.

When digital tools were popularized they again democratized art by making the tools more accessible to more people. We have this whole character drawing industry that never could have existed before, and it has greatly enhanced the lives of many people around the world.

We should fret a little less about the technology itself and a lot more about the ultra-capitalist, anti-democratic billionaires who fund this new technology. Take a peek at who's funding OpenAI.


2 people marked this as a favorite.
Ravingdork wrote:

If you can replace the GM with AI, why not the other players as well?

Why spend weeks hunting down and attempting to schedule and organize the bag of cats that is the GM and three other players, when you can have an AI get you started on the action right now?

Tabletop roleplaying, at its core, is a social activity. I see the AI takeover and the inherent laziness of humanity to take the easier path as nothing less than the death of the hobby as we know it.

There are whole roleplaying games where you can play with friends without any of you running the game. AI GMs, if and when they happen, will just expand that list.


I got into a conversation with ChatGPT last night. I asked it to produce a well-ordering of the real numbers. It provided (<) and we ended up in a situation where the Axiom of Choice was equivalent to its negation.

So the future of AI is bright.


PossibleCabbage wrote:

I got into a conversation with ChatGPT last night. I asked it to produce a well-ordering of the real numbers. It provided (<) and we ended up in a situation where the Axiom of Choice was equivalent to its negation.

So the future of AI is bright.

Is ChatGPT designed around mathematical accuracy? It seems to me like its primary goal is more around answering general knowledge questions conversationally. It's also extremely new so we should expect these kinds of issues to improve over time.


S.L.Acker wrote:
It's also extremely new so we should expect these kinds of issues to improve over time.

I don't think that is the problem.

The problem is that algorithms have no understanding. They can't decide if something 'makes sense' or 'is reasonable'. By definition they simply produce whatever answer their instructions dictate, and so have no way of verifying their own work.


4 people marked this as a favorite.
Pathfinder Roleplaying Game Superscriber; Pathfinder Starfinder Roleplaying Game Subscriber

Yeah, ChatGPT does not sound like AI so much as an advanced algorithm.

Too many people are pushing the term "AI" incorrectly as a marketing gimmick.


1 person marked this as a favorite.

Like the fundamental problem with the ChatGPT experience is that it will very confidently state things that are wrong, and will only correct itself when you actually point out that it's wrong. So if you go into a conversation with it about something you really don't already know pretty well, it's very easy to end up with a lot of incorrect ideas!


breithauptclan wrote:
S.L.Acker wrote:
It's also extremely new so we should expect these kinds of issues to improve over time.

I don't think that is the problem.

The problem is that algorithms have no understanding. They can't decide if something 'makes sense' or 'is reasonable'. By definition they simply produce whatever answer their instructions dictate, and so have no way of verifying their own work.

I expect that the baseline algorithm will be updated based on current errors and unwanted behaviors. That or data could be selectively pruned to improve results. Both are a lot of work, but so is building ChatGPT in the first place so I expect it to happen.


PossibleCabbage wrote:
Like the fundamental problem with the ChatGPT experience is that it will very confidently state things that are wrong, and will only correct itself when you actually point out that it's wrong. So if you go into a conversation with it about something you really don't already know pretty well, it's very easy to end up with a lot of incorrect ideas!

Sounds like Clever Hans.

S.L.Acker wrote:
I expect that the baseline algorithm will be updated based on current errors and unwanted behaviors. That or data could be selectively pruned to improve results. Both are a lot of work, but so is building ChatGPT in the first place so I expect it to happen.

Um... To clarify, I am not talking about tweaking the algorithm into a better algorithm. I'm talking about the computer science concept of an algorithm.

Every computer program is an algorithm. At least currently. They all have the computing power of a Turing Machine. That's the Church-Turing Thesis for those who want to look it up.

So unless we can create an AI that isn't an algorithm - or a computer program as we currently know them - they are going to be limited by the limits of computability of an algorithm. There are things that an algorithm can't do.
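To make that concrete, here is a minimal sketch (my own illustration, not something from this thread) of the classic diagonal argument behind those limits. The halts() checker and its signature are hypothetical; the point is that no total, always-correct version of it can exist, and Rice's Theorem extends the same idea to any non-trivial question about what a program does.

```python
# Hypothetical sketch of the halting-problem diagonal argument.
# Assume someone hands us halts(program) that claims to decide, for every
# zero-argument program, whether it eventually finishes.

def make_contrarian(halts):
    """Build a program that any claimed halting-checker must get wrong."""
    def contrarian():
        if halts(contrarian):  # checker predicts "it halts"...
            while True:        # ...so loop forever instead
                pass
        return "done"          # checker predicts "it loops", so halt immediately
    return contrarian

# Whatever halts(contrarian) answers, contrarian's actual behavior contradicts it,
# so no algorithm can be a total, correct halts(). Rice's Theorem generalizes this
# to any non-trivial semantic property of programs, "good idea vs. bad idea" included.
```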


3 people marked this as a favorite.

Idk it being extremely sure of itself despite being wrong sounds very human. We do that all the time even.


3 people marked this as a favorite.
Pathfinder Adventure, Adventure Path, Rulebook Subscriber
Oceanshieldwolf wrote:
VestOfHolding wrote:
Oceanshieldwolf wrote:
Fantasy name generators are an anathema. If you can’t name your own character, why are you playing? Given the choice between certain death and naming your pet, everyone names their pet every time. It isn’t that hard.

Hi, I super struggle with coming up with a name for a character, though I can come up with everything else. Fantasy name generators help me think about possible names to use for every character I've played for years, and I've loved playing all of them.

Please calm down with this unnecessary aggression towards people who don't think the same as you, thanks.

I’m not sure questioning folk as to why they are playing if they can’t come up with a name for the character they are, you know, playing is aggressive. If it is, then yes, we think very differently. But I’m pretty calm about it. Unless you have a cat called Dave.

It comes across as some pretty weird gatekeeping, and ends with a "come on, it's not that hard." Well, for some people it is. Also, the extreme choice between death and naming a pet makes absolutely no sense here. Overall, your response is noticeably more extreme than it should be. It sounds like what you should work on is how your text may come across to other people. You may be calm, but you're implying that people who can't come up with their own character names without assistance shouldn't play. That's wild.


2 people marked this as a favorite.
breithauptclan wrote:

Um... To clarify, I am not talking about tweaking the algorithm into a better algorithm. I'm talking about the computer science concept of an algorithm.

Every computer program is an algorithm. At least currently. They all have the computing power of a Turing Machine. That's the Church-Turing Thesis for those who want to look it up.

So unless we can create an AI that isn't an algorithm - or a computer program as we currently know them - they are going to be limited by the limits of computability of an algorithm. There are things that an algorithm can't do.

We don't need AI to think or to question the answers that it gives to vastly reduce the number of confidently stated wrong outcomes. There's still a ton of room to improve on the systems we have now for even better results.

It's also odd to assume that AI needs to be perfect when we don't expect the same of another human. Just look at those stupid math memes that go around for examples of "intelligent" beings confidently being wrong about math.


Temperans wrote:
Idk it being extremely sure of itself despite being wrong sounds very human. We do that all the time even.

I think there's a big difference here. Like, a person is generally aware of what they do or don't know, and also aware of the stakes of the other party believing them, so they have a model for when to project confidence and when to hedge.

So like if I asked you "can one construct non-measurable sets without Axiom of Choice" and you do not know WTF I am talking about, you would not confidently respond "the existence of non-measurable sets implies the negation of the Axiom of Choice" (which is precisely wrong!)


PossibleCabbage wrote:
Temperans wrote:
Idk it being extremely sure of itself despite being wrong sounds very human. We do that all the time even.

I think there's a big difference here. Like, a person is generally aware of what they do or don't know, and also aware of the stakes of the other party believing them, so they have a model for when to project confidence and when to hedge.

So like if I asked you "can one construct non-measurable sets without Axiom of Choice" and you do not know WTF I am talking about, you would not confidently respond "the existence of non-measurable sets implies the negation of the Axiom of Choice" (which is precisely wrong!)

Yes, I personally would, however there are some people who 100% would say that, assuming you have no idea either. There are a whole lot of people whose sole job is to BS other people into giving them money for bad science.

Case in point: flat earth.


S.L.Acker wrote:

We don't need AI to think or to question the answers that it gives to vastly reduce the number of confidently stated wrong outcomes. There's still a ton of room to improve on the systems we have now for even better results.

It's also odd to assume that AI needs to be perfect when we don't expect the same of another human. Just look at those stupid math memes that go around for examples of "intelligent" beings confidently being wrong about math.

While that is technically correct, it also misses the point.

When we find a bad GM, we stop playing with them. And when we find a good GM we continue playing with them.

The problem with an AI GM is that it will unexpectedly and seemingly randomly do wrong things.

Which means that companies can't trust it to write an entire campaign without proofreading the entire thing afterwards. Otherwise who knows what nonsense it may have thrown in there randomly. Same with when it is running a game. The players will need to be able to overrule the AI GM when it starts doing something crazy - or preventing something reasonable.

Sovereign Court

6 people marked this as a favorite.

I think the limitations of ChatGPT are better explained by looking at how neural networks work and less by fixating on Turing machine problems.

Neural networks work by pattern recognition across space and time. These words are often used together, in a particular order. This word can occur instead of that word in the same context, leading to the same kind of follow-up sentences. It reads lots of text, and that's how it learns grammar, synonyms, and related words and concepts.

One thing that's really hard for neural networks is keeping track of multiple "things" in a story over a longer stretch of time. You can see this when people try to have ChatGPT come up with recipes: it calls for lots of ingredients, but then ends up not using some of them, or using them twice (so you run out of them halfway through the recipe).

It's not good at tracking quantities or states of things. It only mimics doing that by first saying one thing, then another, because in the examples it's trained on, that's what happens. It doesn't actually understand that it's manipulating quantities. It's just producing similar-sounding text.

This is the same reason it gets math wrong a lot. It knows which words are used in math text, but it doesn't actually do math.

For a lot of applications though, this kind of dreamy free association word salad can come awfully close to a quality product. If you have a product in a webshop with a lot of properties drawn from the manufacturer spec, you can turn that into a description of the product. But again there are limitations. Either it's going to be just a textual description of raw information that was already in the attributes. Or it's going to get fanciful and come up with stories about what the product can do, what it's like to use it, how it could enhance your life. But that's all just made up, it hasn't actually done any of those things. It's just read a lot of product descriptions and knows what they sound like. It doesn't really understand this product so it could get things quite wrong.
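As a concrete (and deliberately crude) illustration of "it's just producing similar-sounding text": the toy model below only learns which word tends to follow which in its training text. This is my own sketch, nowhere near a real neural network in scale or method, but it shows the same blind spot described above: the output can read like a recipe while nothing is tracking the actual ingredients or quantities.

```python
# Toy next-word predictor: it learns which word follows which and nothing else.
import random
from collections import defaultdict

def train_bigrams(text):
    table = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, length=12):
    word, out = start, [start]
    for _ in range(length):
        followers = table.get(word)
        if not followers:
            break
        word = random.choice(followers)  # pick a plausible next word; no understanding involved
        out.append(word)
    return " ".join(out)

corpus = "add two eggs then add the flour then add the sugar then bake the cake"
print(generate(train_bigrams(corpus), "add"))
# The output sounds recipe-ish, but nothing tracks how many eggs are left.
```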

---

So what does it mean for RPGs? I think you could do interesting things with it, but it'd work much better as a GM assistant than as a replacement GM. I think it'd have trouble understanding what players are really trying to do. (Although many a GM has wondered "why is this player asking for all these detailed descriptions about a minor thing, what could they be planning?") But good uses of AI (not necessarily limited to this one) could include:

* Delegate decision making for minions. Sometimes it can feel as if a group of NPCs is behaving too much like a hive mind because they're all run by the same GM. Delegating each mook to a separate AI could make them less predictable.

* Preparing loot, location and people descriptions. If as a GM you have to prepare a lot of them, you can fall into a rut. Note that this is not all that different from having random dice tables with prompts like we've done for decades. But the interface could be slicker.

* Coming up with NPCs with motives and plans. "AI, come up with some reasons why the villain is causing trouble. Okay, second reason sounds interesting, elaborate on that."

* Coming up with complications. "AI, I have a fight in the woods between some bandits and the heroes. Suggest some ways to make this fight special."

A lot of this is not that different from using random tables. You could actually use the random table to generate a prompt and then ask the AI to fill in the details. Combine it with a good interface, for example a campaign-planner phone app, so that if you're on the bus you can effectively brainstorm and collect notes.
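A rough sketch of that random-table-plus-AI workflow, assuming you have some assistant to paste the prompt into (the tables and wording here are made-up placeholders, not anything official):

```python
# Roll on ordinary random tables, then turn the results into a prompt for an
# AI assistant to elaborate on. The tables below are illustrative examples only.
import random

COMPLICATIONS = ["sudden fog rolls in", "a third faction arrives", "the footing is unstable"]
MOTIVES = ["revenge on a rival", "paying off a crushing debt", "protecting a family secret"]

def build_prompt():
    return (
        "I have a fight in the woods between some bandits and the heroes. "
        f"Complication: {random.choice(COMPLICATIONS)}. "
        f"The bandit leader's motive: {random.choice(MOTIVES)}. "
        "Suggest three ways to make this fight special."
    )

print(build_prompt())  # paste into whatever AI assistant or campaign-planner app you use
```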


1 person marked this as a favorite.
Ascalaphus wrote:
<snip>

I think it may be possible to solve some of these issues with a multi-layered AI. ChatGPT would be the interface and compiler, as well as the module that chooses what to present, but there could also be a math module, a conversational memory module, etc. This feels like how AI is going to grow in the future.

It's also a bit like how our minds work, with different sections processing different tasks and our consciousness attempting to select the most useful and appropriate outputs from these competing systems.
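A toy sketch of that layered idea, assuming a front end that routes exact arithmetic to a dedicated math module and hands everything else to the language model (the ask_language_model() call is a stand-in, not a real API):

```python
# Route simple arithmetic to an exact calculator; hand everything else to the
# language-model layer. Purely illustrative; real routing would be far richer.
import re

ARITHMETIC = re.compile(r"\s*(-?\d+)\s*([+\-*/])\s*(-?\d+)\s*$")

def math_module(a, op, b):
    a, b = int(a), int(b)
    return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]

def ask_language_model(prompt):
    return f"[language-model answer for: {prompt}]"  # placeholder, not a real API call

def answer(prompt):
    match = ARITHMETIC.match(prompt)
    if match:
        return str(math_module(*match.groups()))  # exact, no guessing
    return ask_language_model(prompt)

print(answer("17 * 23"))                  # 391, from the math module
print(answer("Describe a haunted mill"))  # handed to the language model
```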


5 people marked this as a favorite.

The problem with illusion spells is that if you do them well enough, people will actually believe that you can do the impossible - and will then start demanding it of you.


2 people marked this as a favorite.
Eoran wrote:
The problem with illusion spells is that if you do them well enough, people will actually believe that you can do the impossible - and will then start demanding it of you.

I know this is a side tangent. But this is why I loved the shadow spells. Nothing quite like making illusions so good they become real.


2 people marked this as a favorite.

The value of an AI (or algorithm, or whatever) in TTRPGs is not the replacement of GMs. For one, to completely replace a GM, the AI would have to completely replicate the functionality of a human. I doubt we will ever reach such capabilities, especially not with electronics-based technologies.

The real value is in reducing/eliminating GM prep.

Imagine if all you had to do was read a three-paragraph synopsis of an adventure path... and you were done preparing for the campaign. Some small amount of record keeping and note taking would be necessary throughout, but the AI would track everything else for you.

Having the option to run a campaign without investing multiple hours into it before it even starts sounds great.

Liberty's Edge

2 people marked this as a favorite.
Doug Hahn wrote:
We should fret a little less about the technology itself and a lot more about the ultra-capitalist, anti-democratic billionaires who fund this new technology. Take a peek at who's funding OpenAI.

Yes, perfectly stated, I couldn't agree more.

Liberty's Edge

AIs will not replace human beings. They will just automate time-consuming, repetitive tasks.

And billionaires investing their fortunes in visionary innovations is the fastest way for them to lose their billions.

So, no. Not worried in the least.


4 people marked this as a favorite.

Considering how much I love to GM, I would rather have AI players instead :D

Sczarni

Pathfinder Lost Omens, Rulebook Subscriber
SuperBidi wrote:

I just saw the latest WotC plans for our comrades on the other side of the d20, and one of their plans is to develop AI-GMs.

It got me wondering a few things:
Can it actually be a thing? A lot of adventures are quite streamlined, and even if there's not tons of content to absorb (which is what AIs do best), I'm wondering if an AI can actually be good enough to provide at-will content to players when there's a dearth of games being run. Maybe not for today, but for tomorrow...

How would you feel about AI-GMs? Even considering that they won't be as good as human GMs (and I'm sorry to say it, but I'm pretty sure an AI can be better than some human GMs), would you be willing to play under an AI GM? Will the lack of human interaction with the GM reduce the pleasure, or will it be fine as long as you get along with the players?

Complementary GMing. That's one thing AIs do best: helping us. A big part of our hobby can be fully automated. Combat, for example, especially now that a lot of us are moving to VTTs. Just use an AI for combat and other simple scenes and, as a GM, you can spend your time and energy on what's important (story, roleplay, character development). Is that something that would appeal to you?

Yes. 100%.

AI will even be capable of CHANGING the story around and "documenting" any changes we wish to make.

For example: I want to go east and explore. I don't want to follow this campaign module. AI will be able to adjust and create narratives that allow us to go east.

I wouldn't be surprised if, in 5-10 years, we have fully automated AI GMs with their own distinct voices that help narrate.

Dark Archive

5 people marked this as a favorite.
Pathfinder Starfinder Roleplaying Game Subscriber

I think anyone who gets too hyped about the concept should really check how exactly AI does writing right now <_< I don't think it could really take four players' worth of input at the same time and keep it fun.


1 person marked this as a favorite.
Pathfinder Roleplaying Game Superscriber
Megistone wrote:
Considering how much I love to GM, I would rather have AI players instead :D

Have you tried the Dungeons video game series?

Liberty's Edge

I remember all the hype about medical AIs a few years ago and how they were going to quickly make human doctors obsolete.

Still waiting for the prophecy to come true...


1 person marked this as a favorite.

AI will probably never be useful for something where facts and logic matter, since it will probably always be unable to understand what it does not know and cannot be sure of.

But in terms of fiction, if you need it for "come up with a guy" stuff, that should work fine. It's a tool that will reliably "yes, and" you, so it might have some applications here.


1 person marked this as a favorite.

Anyway, I had a go at the free options out there currently. ChatGPT was able to provide a generic description of a room in a dungeon and suggest some monsters.
It said it knew the difference between D&D and Pathfinder and could provide separate definitions, but it had no actual grasp of the difference in practice. The monsters it suggested were all D&D and were not level appropriate.

It could hold a conversation. It could put together some words. It ignored my spelling mistakes and poor grammar. It 90% knew what I asked. That was really good, and probably about as good as a discussion with a real human.

It was a lot better than I expected. Give it an actual interest in the subject matter, iterate a bit, and I'm sure it could be made to work as a GM.

I did actually take one of the monsters it suggested and inserted it into my current Kingmaker campaign.


Yeah, that sounds about like what I would expect too.

Also,

Gortle wrote:
Give it an actual interest in the subject matter,

LOL. Yeah, that's the trick right there, isn't it.


2 people marked this as a favorite.

I recall a Reddit thread where someone was "showcasing" how helpful ChatGPT could be for generating NPC statblocks. They prompted it to create a backstory for a Tiefling Wizard who is a coward. ChatGPT wrote 3 paragraphs about this idea, but it didn't come up with anything new, just restating the prompt in more words.

Or this example, where there is a human player prompting ChatGPT with some ideas, but it still fills in the blanks with the most generic answers imaginable.

The thing that's fun about tabletop RPGs is that there are limitless options and creativity. A computer on the other side of the screen necessarily curtails it.

Also, I'm on team forever GM and I'm either getting Stockholm Syndrome or finally growing into the role because I enjoy it more than playing now.


1 person marked this as a favorite.
Thebazilly wrote:

I recall a Reddit thread where someone was "showcasing" how helpful ChatGPT could be for generating NPC statblocks. They prompted it to create a backstory for a Tiefling Wizard who is a coward. ChatGPT wrote 3 paragraphs about this idea, but it didn't come up with anything new, just restating the prompt in more words.

Or this example, where there is a human player prompting ChatGPT with some ideas, but it still fills in the blanks with the most generic answers imaginable.

The thing that's fun about tabletop RPGs is that there are limitless options and creativity. A computer on the other side of the screen necessarily curtails it.

Also, I'm on team forever GM and I'm either getting Stockholm Syndrome or finally growing into the role because I enjoy it more than playing now.

Yes I agree. This is what it is like currently. If you want some help putting together some generic responses and descriptions it could be useful.


Sixty Symbols, a physics YouTube channel, ran ChatGPT through some word problems and it failed. On a few it got the math right despite misunderstanding the concepts in question, while in another it got the concept right, but then biffed the math by 10x.
So yeah, pretty straightforward compared to RPGing, yet erroneous.
I wonder if it has something to do with mixing up wording on purpose for original conversation? Some sort of wiggle-factor?


3 people marked this as a favorite.

Yep. It is like a humanities lecturer explaining science. It understands how to put words together well. But it doesn't know what the right answer is. It merely guesses off popular responses.

What it also is, is a good coder.


Pathfinder Roleplaying Game Superscriber
Castilliano wrote:

Sixty Symbols, a physics YouTube channel, ran ChatGPT through some word problems and it failed. On a few it got the math right despite misunderstanding the concepts in question, while in another it got the concept right, but then biffed the math by 10x.

So yeah, pretty straightforward compared to RPGing, yet erroneous.
I wonder if it has something to do with mixing up wording on purpose for original conversation? Some sort of wiggle-factor?

The reason for this is that ChatGPT does not do math. There's a great explanation by Ascalaphus upthread.


nephandys wrote:
Castilliano wrote:

Sixty Symbols, a physics YouTube channel, ran ChatGPT through some word problems and it failed. On a few it got the math right despite misunderstanding the concepts in question, while in another it got the concept right, but then biffed the math by 10x.

So yeah, pretty straightforward compared to RPGing, yet erroneous.
I wonder if it has something to do with mixing up wording on purpose for original conversation? Some sort of wiggle-factor?
The reason for this is that ChatGPT does not do math. There's a great explanation by Ascalaphus upthread.

Except "bad at math" only covers one of the problems answered wrong by ChatGPT, for which yes, a math sub-system could be created.

For at least two of the problems it got the math correct, but was so off on its interpretation of the physics concepts the prof said he'd give that answer a zero despite the bot getting the correct final answer. With RPG, "correct math answer, but wrong understanding" wouldn't help gaming, as what could be built on/extrapolated from such a shoddy foundation?

And one would expect such a GM-bot would think it was correct, even if totally wackbards, maybe even using its correct math as justification. And how would players reason with it? And if it does solicit input, how might we protect it from being manipulated by wordplay?


3 people marked this as a favorite.

The calculation part of roleplaying games is not supposed to be handled by an AI. VTTs like Foundry are able to automate rolls; the AI just has to play the monsters.


1 person marked this as a favorite.

The math doesn't have to be handled by AI.

The basic rule adjudications (how many actions were spent, what feat was used, etc.) can be done by searching a database and cross-matching the various rules, which should be easy enough to do.

Making descriptions has already been shown to be easy.

The issue AI has is remembering what has been stated and responding accordingly, which is what is getting worked on.
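A very rough sketch of that database-lookup style of adjudication. The feat entries below are paraphrased examples rather than verbatim rules text, and the schema is purely hypothetical:

```python
# Check what a player declared against a small rules table: action cost,
# class prerequisite, remaining actions. Entries are illustrative only.
RULES_DB = {
    "sudden charge": {"actions": 2, "class": "fighter"},
    "power attack": {"actions": 2, "class": "fighter"},
}

def adjudicate(feat, actions_remaining, character_class):
    entry = RULES_DB.get(feat.lower())
    if entry is None:
        return f"No rule entry found for '{feat}'."
    if character_class != entry["class"]:
        return f"'{feat}' requires the {entry['class']} class."
    if actions_remaining < entry["actions"]:
        return f"'{feat}' costs {entry['actions']} actions; only {actions_remaining} left."
    return f"'{feat}' is legal this turn ({entry['actions']} actions)."

print(adjudicate("Sudden Charge", actions_remaining=3, character_class="fighter"))
```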


1 person marked this as a favorite.
Castilliano wrote:

Except "bad at math" only covers one of the problems answered wrong by ChatGPT, for which yes, a math sub-system could be created.

For at least two of the problems it got the math correct, but was so off on its interpretation of the physics concepts the prof said he'd give that answer a zero despite the bot getting the correct final answer. With RPG, "correct math answer, but wrong understanding" wouldn't help gaming, as what could be built on/extrapolated from such a shoddy foundation?

And one would expect such a GM-bot would think it was correct, even if totally wackbards, maybe even using its correct math as justification. And how would players reason with it? And if it does solicit input, how might we protect it from being manipulated by wordplay?

ChatGPT is also a new AI trained on a lot of unvetted data with a massively broad scope. If you made a focused AI that only had to understand a relatively tiny subset of ideas I would expect accuracy and perceived comprehension to rise.

The Sixty Symbols video also had a lot of "Wow, this is really impressive" mixed in with explanations of where it has a lot of room to grow.


1 person marked this as a favorite.

There was that one military surveillance AI that was trained to detect movement of humans. So the soldiers testing the system used everything from just jumping to going full Solid Snake to bypass it.

AI are great, but they are not miracle machines.


Temperans wrote:

There was that one military surveillance AI that was trained to detect movement of humans. So the soldiers testing the system used everything from just jumping to going full Solid Snake to bypass it.

AI are great, but they are not miracle machines.

Machine vision is orders of magnitude harder than language fluency and knowledge of a finite set of ordered rules. There's also AI that has been trained in gait detection that is extremely hard to fool by behavior that wouldn't draw unwanted attention from human security, so...


1 person marked this as a favorite.
S.L.Acker wrote:
Temperans wrote:

There was that one military surveillance AI that was trained to detect movement of humans. So the soldiers testing the system used everything from just jumping to going full Solid Snake to bypass it.

AI are great, but they are not miracle machines.

Machine vision is orders of magnitude harder than language fluency and knowledge of a finite set of ordered rules. There's also AI that has been trained in gait detection that is extremely hard to fool by behavior that wouldn't draw unwanted attention from human security, so...

I was agreeing with you that it all depends on what the AI is trained on.

If you train it with something, then it will be good at that one thing.


2 people marked this as a favorite.
Temperans wrote:

I was agreeing with you that it all depends on what the AI is trained on.

If you train it with something, then it will be good at that one thing.

My bad. It can be hard to tell with so many people pointing at the AI equivalent of a toddler and mocking it for not being able to do things a fully functional adult takes for granted.


Fair point


Lawrencelot wrote:
In my opinion, it's only a matter of years before most of the media we consume (books, memes, films, etc.) will be mostly AI-generated. TTRPGs will follow suit, with AI taking a prominent role, just like VTTs have taken an important role since the pandemic.

That is not to say there will not be a place for human GMs. Just like people have continued to draw and paint even though we have cameras, and people still calculate things by hand even though we have calculators, people will keep GMing. But AI will be a huge support, and in many cases I think a human GM might not be needed at all in the future, except for those who want to have that old-school feeling, just like some people right now do not want to use digital dice-rolling tools because they like rolling physical dice.

Already people are generating art and text with AI to support their storytelling. As long as we view AI as a tool this is the way to go in my opinion.

AI can create many things that you cannot imagine, but it will lack the creativity and emotion that humans bring, and that cannot be replaced.


1 person marked this as a favorite.

I just found this: Hidden Door.

But it's not released yet.


1 person marked this as a favorite.
Pathfinder Roleplaying Game Superscriber
Lawrencelot wrote:

In my opinion, it's only a matter of years before most of the media we consume (books, memes, films, etc.) will be mostly AI-generated. TTRPGs will follow suit, with AI taking a prominent role, just like VTTs have taken an important role since the pandemic.

That is not to say there will not be a place for human GMs.

Could an AI run a dungeon for you?

That's basically playing Skyrim or an MMO, so the answer is yes.

But it won't have the social aspect.

As someone who came back to TTRPGs this year after 20 years in MMOs, I can assure you that AI can do better combat tactics and game-ism. But even an AI running a human-made story, as Skyrim and FFXIV both do, is just not the same as sitting down with a clumsy human who fumbles through the plot and forgets the rules for Trip or something, but who is a person you can relate to and get to like.

It might put a dent in paid-GMing.

When you pay a GM the contract isn't 'social', it's financial. So you expect a certain level of product and you're not there to bond with each other. AI will eventually deliver a better game experience than any human GM can, and that will hurt paid GMing.
