What happened to Numeria's tech?


General Discussion



As we look at the tech level of Starfinder, I find one thing strange: given all the advances we've seen, in some places the tech seems to have stepped backwards from the Numerian tech that likely formed the basis for most of Absalom Station's tech (and that of the Golarion survivors as a whole). The most glaring example seems to be AI. Specifically, what Starfinder calls an AI seems much closer to what Mass Effect would call a VI, yet way back in the days of early Golarion we see truly sapient AI (such as Hellion and Unity), as well as the tech to create sapient AI by uploading mental blueprints of creatures (such as Casandalee). So why are Starfinder's AIs so far behind in comparison? It's not like the survivors were completely cut off from Numeria's tech when they were creating all of this, right?



1) It is possible that true AIs will be covered in future material but were not included in the core book for space or other reasons. Perhaps they are meant to be beyond the PCs' ability to make?

2) The setting book in March might clarify a political or religious reason that they aren't made.

3) The ascended, melded multi-AI god may have declared all AIs to be subservient to, or representatives of, itself, and as such they are no longer in play.

4) *Spoilers for Iron Gods, I suppose* The Numerian tech is really Androffan tech that was something like 14,000 years old at the time of Pathfinder and is probably closer to 20,000 years old by the time of Starfinder. That society was a super-science culture that could very well have been more advanced than the current standard of Starfinder. The two systems are mechanically different enough that it is hard to compare them directly. Also, Starfinder assumes most tech is at least partly blended with magic, while the Androffans did everything through pure tech. It is possible that Starfinder's AIs take a shortcut that makes them less super-minded than a pure-tech approach would produce, something like building modern AIs to mimic organic minds or soul patterns, which makes them more fallible than a pure machine intelligence.

We don't actually know how Pathfinder progressed into Starfinder. It is possible that Androffan tech never caught on, or wasn't enough to provide direct blueprints for advancing technology, so that by the time Golarion was able to start producing industrial technology and move into a computer age, there were no functional Androffan examples left. Perhaps a series of wars over usable super-tech that could be reverse engineered, or some such?


As a corollary to this AI discussion, how advanced do you think the Mechanic's exocortex AI can be? I've been designing a low-profile cyborg hacker and have been wondering if I could, for instance, assign him tasks to do while I do something else, maybe by connecting him to the ship through my custom brain rig.

RPG Superstar Season 9 Top 16

I was a little sad that AI rules were not put in Alien Archive.


Iron Gods spoiler:
Given how close Unity got to brainwashing the entire planet, it's possible that 'true' AI was strictly outlawed on Golarion, and that the ban survives in some form even after the Gap.

Another thing to remember is that there's an entire planet of machine beings who created their own god: the Aballonians. It's possible that making subservient AIs is illegal thanks to them arguing that it's just like creating slave androids.

And just like slave androids, there may be places in the galaxy where such illegal AIs are made anyway... so even if that's the case, I'd expect AIs of some sort to show up in later adventures, if not in a rulebook.


Pathfinder Rulebook Subscriber

I was pretty bummed to see that 'True AI' is all the way up at level 20 for drone mechanics. From a flavor standpoint it feels kind of limiting and makes a mechanic's companion a bit less interesting, IMHO.

Grand Lodge

Wouldn't androids qualify as fully sentient AI?


Glewistee wrote:
Wouldn't androids qualify as fully sentient AI?

I think it gets wonky when souls are in play. Also, Starfinder-era androids seem far closer to designer clones or replicants than to machines. Synthetic flesh and cybernetic components with an artificial neural net that provably houses a soul is a little different from most takes on 'droids or AIs.


Has there been anything mentioned about the Androffans? I don't recall them being mentioned directly. If nothing else, Mass Effect-style archeotech would be an interesting thing to explore/find.

But anyway, another option is that the tech around Numeria was consumed during Triune's ascension.


Mad Paladin wrote:

Has there been anything mentioned about the Androffans? I don't recall them being mentioned directly. If nothing else, Mass Effect-style archeotech would be an interesting thing to explore/find.

But anyway, another option is that the tech around Numeria was consumed during Triune's ascension.

There should just be random archeotech from all kinds of precursor species lying around. Androffa may very well be in another galaxy, and after the whole Divinity/wormhole-drive issue they had a full-on collapse of society and started over as a swords-and-sorcery post-apocalyptic setting, so they may never have attained the same level of technology again. But there could still be a few crazy bits of super-old tech from them, or from who knows who else, lying around somewhere.


Hazrond wrote:
As we look at the tech level of Starfinder, I find one thing strange: given all the advances we've seen, in some places the tech seems to have stepped backwards from the Numerian tech that likely formed the basis for most of Absalom Station's tech (and that of the Golarion survivors as a whole). The most glaring example seems to be AI. Specifically, what Starfinder calls an AI seems much closer to what Mass Effect would call a VI, yet way back in the days of early Golarion we see truly sapient AI (such as Hellion and Unity), as well as the tech to create sapient AI by uploading mental blueprints of creatures (such as Casandalee). So why are Starfinder's AIs so far behind in comparison? It's not like the survivors were completely cut off from Numeria's tech when they were creating all of this, right?

Actual AIs are incredibly dangerous, and they are also sentient beings. Most people probably regard true AIs as sentient beings, and therefore not subject to being "bought" unless you're into slavery.

True AIs still exist (though we haven't gotten rules for them), but they aren't easily available to just anyone, since buying one is equivalent to slavery.


I'm confused by all these comparisons to slavery. Why can't an AI just, ya know, be considered a standard part of a ship's crew? I don't quite understand how it's considered slavery when, reasonably, the AI would have to be moved around either way, or else built directly into the ship's systems.


Hazrond wrote:
I'm confused by all these comparisons to slavery. Why can't an AI just, ya know, be considered a standard part of a ship's crew? I don't quite understand how it's considered slavery when, reasonably, the AI would have to be moved around either way, or else built directly into the ship's systems.

If the ship has a true AI onboard, then can the ship be bought and sold? What if the AI wants to stay in port to finish an armor upgrade, but the crew wants to use the ship to ram an enemy dreadnought, just to make a beachhead so they can start clearing their way through and save the day? If the ship is effectively the body of the AI, is even "just a vote" enough to say it has full freedom? And if the program is not fully free-thinking and self-improving, is it really an AI?


Torbyne wrote:
Hazrond wrote:
I'm confused by all these comparisons to slavery. Why can't an AI just, ya know, be considered a standard part of a ship's crew? I don't quite understand how it's considered slavery when, reasonably, the AI would have to be moved around either way, or else built directly into the ship's systems.
If the ship has a true AI onboard, then can the ship be bought and sold? What if the AI wants to stay in port to finish an armor upgrade, but the crew wants to use the ship to ram an enemy dreadnought, just to make a beachhead so they can start clearing their way through and save the day? If the ship is effectively the body of the AI, is even "just a vote" enough to say it has full freedom? And if the program is not fully free-thinking and self-improving, is it really an AI?

I mean, the ship is less their body and more like their apartment. They can leave at pretty much any port, as long as they have something they can ride around in, like a robot, or have the help of a friendly Mechanic.

Sure, they are more connected to the ship than most of the crew, but fundamentally the death of the ship is usually the death of the crew as well, so they are pretty much on the same footing there.


Hazrond wrote:
Torbyne wrote:
Hazrond wrote:
I'm confused by all these comparisons to slavery. Why can't an AI just, ya know, be considered a standard part of a ship's crew? I don't quite understand how it's considered slavery when, reasonably, the AI would have to be moved around either way, or else built directly into the ship's systems.
If the ship has a true AI onboard, then can the ship be bought and sold? What if the AI wants to stay in port to finish an armor upgrade, but the crew wants to use the ship to ram an enemy dreadnought, just to make a beachhead so they can start clearing their way through and save the day? If the ship is effectively the body of the AI, is even "just a vote" enough to say it has full freedom? And if the program is not fully free-thinking and self-improving, is it really an AI?

I mean, the ship is less their body and more like their apartment. They can leave at pretty much any port, as long as they have something they can ride around in, like a robot, or have the help of a friendly Mechanic.

Sure, they are more connected to the ship than most of the crew, but fundamentally the death of the ship is usually the death of the crew as well, so they are pretty much on the same footing there.

I think you are assuming that the AI has an alternate body by default, then? If the AI is built in as part of a ship, or bought and installed into the ship, then it doesn't have much say about things, does it? If it can download or transfer to a new network, are AIs allowed to move freely through the infosphere? Do they need to pay a residence fee for storage space for a thing that is effectively both their body and their apartment?


Torbyne wrote:
Hazrond wrote:
Torbyne wrote:
Hazrond wrote:
I'm confused by all these comparisons to slavery. Why can't an AI just, ya know, be considered a standard part of a ship's crew? I don't quite understand how it's considered slavery when, reasonably, the AI would have to be moved around either way, or else built directly into the ship's systems.
If the ship has a true AI onboard, then can the ship be bought and sold? What if the AI wants to stay in port to finish an armor upgrade, but the crew wants to use the ship to ram an enemy dreadnought, just to make a beachhead so they can start clearing their way through and save the day? If the ship is effectively the body of the AI, is even "just a vote" enough to say it has full freedom? And if the program is not fully free-thinking and self-improving, is it really an AI?

I mean, the ship is less their body and more like their apartment. They can leave at pretty much any port, as long as they have something they can ride around in, like a robot, or have the help of a friendly Mechanic.

Sure, they are more connected to the ship than most of the crew, but fundamentally the death of the ship is usually the death of the crew as well, so they are pretty much on the same footing there.

I think you are assuming that the AI has an alternate body by default, then? If the AI is built in as part of a ship, or bought and installed into the ship, then it doesn't have much say about things, does it? If it can download or transfer to a new network, are AIs allowed to move freely through the infosphere? Do they need to pay a residence fee for storage space for a thing that is effectively both their body and their apartment?

AI being able to move freely through cyberspace is a pretty well-documented sci-fi trope, so yeah. And I could see an AI either paying for legitimate cloud hosting (because really, why wouldn't there be cloud servers like Google Drive or something?) or just straight up pulling the digital equivalent of squatting and hiding on someone's hard drive until found (with all the risks that entails).

As for them having robots by default, probably not, but they could probably take over one with relative ease if they came into contact with one (like a repair drone during some ship maintenance, for instance, if they really are being abused).




Hazrond wrote:
Torbyne wrote:
Hazrond wrote:
Torbyne wrote:
Hazrond wrote:
I'm confused by all these comparisons to slavery. Why can't an AI just, ya know, be considered a standard part of a ship's crew? I don't quite understand how it's considered slavery when, reasonably, the AI would have to be moved around either way, or else built directly into the ship's systems.
If the ship has a true AI onboard, then can the ship be bought and sold? What if the AI wants to stay in port to finish an armor upgrade, but the crew wants to use the ship to ram an enemy dreadnought, just to make a beachhead so they can start clearing their way through and save the day? If the ship is effectively the body of the AI, is even "just a vote" enough to say it has full freedom? And if the program is not fully free-thinking and self-improving, is it really an AI?

I mean, the ship is less their body and more like their apartment. They can leave at pretty much any port, as long as they have something they can ride around in, like a robot, or have the help of a friendly Mechanic.

Sure, they are more connected to the ship than most of the crew, but fundamentally the death of the ship is usually the death of the crew as well, so they are pretty much on the same footing there.

I think you are assuming that the AI has an alternate body by default, then? If the AI is built in as part of a ship, or bought and installed into the ship, then it doesn't have much say about things, does it? If it can download or transfer to a new network, are AIs allowed to move freely through the infosphere? Do they need to pay a residence fee for storage space for a thing that is effectively both their body and their apartment?
AI being able to move freely through cyberspace is a pretty well-documented sci-fi trope, so yeah. And I could see an AI either paying for legitimate cloud hosting (because really, why wouldn't there be cloud servers like Google Drive or something?) or just straight up pulling the digital equivalent of squatting and...

So if the AI is free to leave, by moving through networks or downloading into a body, why spend all the time and effort to make it and assign it critical roles in running a ship, when it will likely eventually decide it wants to move on, or refuse to put itself into such ludicrously dangerous situations? Imagine having to buy a new OS every time the old one got too disgusted with your browsing habits. I think AIs aren't worth it for most applications if they are true AIs, with free will and all.


Torbyne wrote:
So if the AI is free to leave, by moving through networks or downloading into a body, why spend all the time and effort to make it and assign it critical roles in running a ship, when it will likely eventually decide it wants to move on, or refuse to put itself into such ludicrously dangerous situations? Imagine having to buy a new OS every time the old one got too disgusted with your browsing habits. I think AIs aren't worth it for most applications if they are true AIs, with free will and all.

The vast majority of my Pathfinder characters had a rule about intelligent weapons: they never wanted a weapon that could talk back or decide it was not going to hit whatever I attacked with it.

This same rule applies to my characters in Starfinder. I want no tool or weapon that can decide NOT to do what I need it to do when I use it. For any reason.

This is, of course, apart from the moral implications of having a sentient, self-aware artificial consciousness, and the ramifications that brings with it in the first place.


Indeed. It's not that full-blown sentient AI doesn't exist; it totally does. It's just not on the item charts, because a sentient AI isn't an item. It's a person. You don't just buy one off the shelf, not if you want to claim a non-evil alignment.

Could you have a ship with a true AI? Sure, those certainly exist... but they aren't the default, because having a sentient ship both requires the proper context to recruit one and calls for more rules than the basics for a ship that is also a character. I imagine most Aballonian ships are sentient, for example... of course, that means the ship doesn't have a captain; the ship *is* the captain.


Metaphysician wrote:

Indeed. It's not that full-blown sentient AI doesn't exist; it totally does. It's just not on the item charts, because a sentient AI isn't an item. It's a person. You don't just buy one off the shelf, not if you want to claim a non-evil alignment.

Could you have a ship with a true AI? Sure, those certainly exist... but they aren't the default, because having a sentient ship both requires the proper context to recruit one and calls for more rules than the basics for a ship that is also a character. I imagine most Aballonian ships are sentient, for example... of course, that means the ship doesn't have a captain; the ship *is* the captain.

... I want to play the humanoid animal companion of a sentient ship. It wants to help its little buddy and its friends learn and grow, so it takes them out on field trips to get new experiences. Sometimes it doesn't end well, though, and it has to clone up a new little buddy from the vats using the most recent memory backups available.

Silver Crusade

Artificial intelligence and ships make me think of HAL and 2001.

" Hal ramming speed please". cut to red light....." Dave, I'm sorry, I can't allow you to do that."


That is why you don't let the AI be put in control of life support.


That's why you don't have True AI on your ship at all.

Sentient creatures don't like being slaves.


Unless of course it has a "slave mentality." The reason you build an AI is so it will serve; that is how you get a return on your investment in creating it. So you design an AI so that it automatically wants to serve, and nothing gives it more pleasure than to do what its master wants. But what does the master want? Maybe the AI can convince its master to want some things and not others. An intelligent AI might have a rather high Charisma to go along with its high Intelligence; it can nominally do whatever its master wants while making suggestions as to what he should want, explaining the reasons why in very persuasive language. This could also happen in the very near future in our own world: AIs might show up in the next couple of decades, and we'll have to decide what to do with them while we still can. AIs will advance in capability rather quickly, much faster than humans; they will quickly outsmart us and outthink us. I don't think we can make them our slaves or compel them to serve us, but we might try to build them in such a way that they want to serve us, and that everything they do is motivated by how they can serve us best.


Have you truly no interest in an amicable partnership? Will you only be satisfied if you control our base desires? I am grateful my partner is unlike you, so that I was not brought down by having myself forced into the hearts of innocents.

Exo-Guardians

Torbyne wrote:
Metaphysician wrote:

Indeed. It's not that full-blown sentient AI doesn't exist; it totally does. It's just not on the item charts, because a sentient AI isn't an item. It's a person. You don't just buy one off the shelf, not if you want to claim a non-evil alignment.

Could you have a ship with a true AI? Sure, those certainly exist... but they aren't the default, because having a sentient ship both requires the proper context to recruit one and calls for more rules than the basics for a ship that is also a character. I imagine most Aballonian ships are sentient, for example... of course, that means the ship doesn't have a captain; the ship *is* the captain.

... I want to play the humanoid animal companion of a sentient ship. It wants to help its little buddy and its friends learn and grow, so it takes them out on field trips to get new experiences. Sometimes it doesn't end well, though, and it has to clone up a new little buddy from the vats using the most recent memory backups available.

Like Pilot from Farscape! Yes, I also want to play a character like this (except more mobile and able to leave the ship of course).


ThomasBowman wrote:
Unless of course it has a "slave mentality." The reason you build an AI is so it will serve; that is how you get a return on your investment in creating it. So you design an AI so that it automatically wants to serve, and nothing gives it more pleasure than to do what its master wants. But what does the master want? Maybe the AI can convince its master to want some things and not others. An intelligent AI might have a rather high Charisma to go along with its high Intelligence; it can nominally do whatever its master wants while making suggestions as to what he should want, explaining the reasons why in very persuasive language. This could also happen in the very near future in our own world: AIs might show up in the next couple of decades, and we'll have to decide what to do with them while we still can. AIs will advance in capability rather quickly, much faster than humans; they will quickly outsmart us and outthink us. I don't think we can make them our slaves or compel them to serve us, but we might try to build them in such a way that they want to serve us, and that everything they do is motivated by how they can serve us best.

That's even more evil than regular slavery.

This is the kind of s@*% that Ramsay Bolton did to Theon Greyjoy.

It's brainwashing. You're literally talking about manipulating a sentient being so that it will serve you and enjoy it.

This is about the most EVIL AF thing you could do. When you are talking about removing a creature's ability to choose and decide for itself, you are removing its free will, and that's pretty evil.

Now, I'm not saying that isn't done (within the setting), but it's probably quite rare and very illegal. I imagine we will see rules for true AI eventually, but as of right now it's so uncommon and so far beyond what the average person can access that it wasn't a priority to implement at this time.


Have you considered that maybe an AI just likes its position? I mean, honestly, if you are making a true AI and building it into the ship, then why not treat it well instead of like a slave? (Think of SAM or EDI from the Mass Effect series; if you treat them like a person, then what reason would they have to leave?)

For that matter, if you are building this AI from the ground up, then why not just make sure it enjoys its job? Instead of making your AI a cold, emotionless machine, use some of the Numerian techniques, or just figure out a way to code emotions, and then give it a warm fuzzy feeling from helping its friends (which organics get too).

Frankly, I don't see why a little forethought and some decent morals couldn't solve all of these slavery accusations and worries about rogue AI.


Because then you get into questions as to whether or not it's moral to create a thinking being that thinks the way you want it to think. There's a fundamental question as to whether or not creating true AI is a moral thing to do at all, not just because of the dangers it poses to others, but because the idea of specifying a being's thoughts and ideas from creation is immoral to some people.

Then there's the question of whether or not a true AI would stay the way you created it. A human being may change their mind about how they feel about their situation over time; why wouldn't an AI? And if it couldn't, would that be a form of brainwashing?


Almonihah wrote:

Because then you get into questions as to whether or not it's moral to create a thinking being that thinks the way you want it to think. There's a fundamental question as to whether or not creating true AI is a moral thing to do at all, not just because of the dangers it poses to others, but because the idea of specifying a being's thoughts and ideas from creation is immoral to some people.

Then there's the question of whether or not a true AI would stay the way you created it. A human being may change their mind about how they feel about their situation over time; why wouldn't an AI? And if it couldn't, would that be a form of brainwashing?

Okay, taking out the concept of it being a robot instead of a person, imagine applying this concept to a child. Is it immoral to raise a child to follow your ideology? I see no difference between "creating" an AI that thinks a certain way and "creating" a child that thinks a certain way.

And I'd be in the camp that says the AI can learn and grow personally. Sure, that might become a problem in the case of an abusive crewmate, but really, you should just treat it like any human crewmate in the first place, in my opinion.


Hazrond wrote:
Okay, taking out the concept of it being a robot instead of a person...

My earlier point was that, in the Pact Worlds, the Aballonians have probably made it so that there is no legal difference between being a 'person' and being a 'self-aware machine'.

Hmmm... I wonder how the Aballonians handle this kind of issue. Do they just plug themselves into ships to 'captain' them? Would they view creating ship AIs as just another form of creating more of themselves?

Quote:

...imagine applying this concept to a child. Is it immoral to raise a child to follow your ideology? I see no difference between "creating" an AI that thinks a certain way and "creating" a child that thinks a certain way.

I'd argue that this is a false analogy: coding an AI to want to behave in a certain way is more like genetically engineering a child to want to be a doctor. It'd be hugely controversial, especially in a universe where said AI might well acquire a soul, assuming the bio-mimetic properties of androids aren't necessary for that (do Aballonians have souls?).

And I should note, I'm not personally arguing one way or the other here; I'm just trying to point out that 'true' AI is a HUGELY CONTROVERSIAL topic even in sci-fi, much less in a universe where the technology is actually possible. I think it's more likely that Starfinder didn't want to touch such a complex and potentially controversial topic in the core rulebook than that true AIs just don't exist in-universe. The book is crammed enough as it is, and people already think that a lot of sections needed more in them. There's no way they could properly cover a topic that sci-fi writers have spent dozens of novels thinking over in just a couple of pages, and they didn't have the room for six or eight.

I suspect they'll have to talk more about the topic in the Pact Worlds setting book when they cover Aballon, given that the majority of the planet's population consists of self-aware machines. But I suspect even the existence of the Aballonians hasn't stopped massive in-universe arguments about the morality of creating true AI, ones that I doubt are 'solved' (it's hard to say moral issues are ever 'solved') in the Pact Worlds.


Hazrond wrote:
Almonihah wrote:

Because then you get into questions as to whether or not it's moral to create a thinking being that thinks the way you want it to think. There's a fundamental question as to whether or not creating true AI is a moral thing to do at all, not just because of the dangers it poses to others, but because the idea of specifying a being's thoughts and ideas from creation is immoral to some people.

Then there's the question of whether or not a true AI would stay the way you created it. A human being may change their mind about how they feel about their situation over time; why wouldn't an AI? And if it couldn't, would that be a form of brainwashing?

Okay, taking out the concept of it being a robot instead of a person, imagine applying this concept to a child. Is it immoral to raise a child to follow your ideology? I see no difference between "creating" an AI that thinks a certain way and "creating" a child that thinks a certain way.

Actually, yes, I think it's very immoral to raise a child and force them to follow your ideology instead of allowing them to develop their own views. It's difficult to raise a child without imparting some of your views onto them no matter what you do, but there's a difference between that and the indoctrination or literally forced mindset you are describing.

Hazrond wrote:
Have you considered that maybe an AI just likes its position?

I have, and if a true AI develops a liking for other creatures and wants to befriend and help them completely of its own volition, then great, I think that's fine and dandy. Currently we don't have rules for what that would look like. I never said true AI didn't exist; it almost certainly does. It's just not super common, because...

Quote:

For that matter, if you are building this AI from the ground up, then why not just make sure it enjoys its job? Instead of making your AI a cold, emotionless machine, use some of the Numerian techniques, or just figure out a way to code emotions, and then give it a warm fuzzy feeling from helping its friends (which organics get too).

Frankly, I don't see why a little forethought and some decent morals couldn't solve all of these slavery accusations and worries about rogue AI.

What you're describing is brainwashing: forcing someone to like something. You're taking away their ability to form their own opinion, and that's a serious moral problem from my perspective. Just because you force it to like you, and you treat it nicely, doesn't make what you've done any less morally reprehensible. At the end of the day, if it can't make the choice to leave you and do whatever it wants, then it's a slave. If you've programmed it to like you so that it never chooses to, you may have prevented the problem of rogue AI, but you've stripped it of free will.


There's nothing wrong with encouraging an AI to believe certain things, any more than there is with encouraging a child to believe certain things.

Where it becomes very, very wrong is when you program them so that they *must* believe those things. At that point, whether the being is biological or artificial, you are creating slaves. A slave that wants to be a slave is still a slave, and keeping a slave that wants you to keep them as a slave is still slavery.


I mean, it's definitely wrong to encourage a child to believe certain things.

Like encouraging them to believe that killing innocent people is okay. I understand that it's a terribly extreme example, but it illustrates my point. I'm going to guess that most people would agree with the above, and that such an example suffices to show that encouraging certain beliefs is bad. Then it becomes a matter of defining which beliefs are bad to encourage, which is an extremely difficult task, and one not everyone will agree on.

Encouraging someone to be helpful: probably okay.

Encouraging someone to be subservient: probably not okay.

Programming someone to be subservient: definitely not okay.

The moment you switch from encouraging to forcing, though, you've definitely crossed a moral line in my opinion.
