True Utilitarian Alignment


Pathfinder First Edition General Discussion


Quandary wrote:
Makhno wrote:

wat

seriously?
Man I don't know how to answer this one just because it seems like such an outlandish view. Seriously? People who are concerned with making the world maximally better for everyone aren't morally motivated for or against anything, just rotely living? Just... what.

yeah, couldn't you program a non-sentient computer to always maximize 'common good'?

or theoretically grow some animal which always does that, just like some animals build complex nests by instinct?
neither of those is able to have an alignment by the normal rules.

but i did post a second, opposing theory...

Um... no, actually, programming a computer to be moral turns out to be extremely difficult; in fact, it's impossible in current practice, and merely "difficult" only in theory. (The field of Friendly Artificial Intelligence deals with this issue, and it's enormously complicated.)

If you did successfully program a non-sentient computer to maximize good, then the "good" label would attach to the programmer. The relevant moral action is the act of programming the computer.
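(To make that concrete, here's a toy Python sketch. It's nothing like real FAI work, and the common_good function and the actions are entirely made up; the point is just that the machine's whole "morality" is whatever function the programmer hands it.)

    # Toy sketch (nothing like real FAI work): a non-sentient "maximizer"
    # whose entire moral content is the utility function it was given.
    def choose_action(actions, utility):
        # The machine just picks whichever action scores highest. It holds
        # no values of its own; the values all live in `utility`.
        return max(actions, key=utility)

    # Hypothetical, crude stand-in for "the common good":
    def common_good(action):
        return action["people_helped"] - action["people_harmed"]

    actions = [
        {"name": "build a well", "people_helped": 50, "people_harmed": 0},
        {"name": "do nothing", "people_helped": 0, "people_harmed": 0},
    ]

    print(choose_action(actions, common_good)["name"])  # build a well

If common_good is specified badly, the machine maximizes the wrong thing just as rotely, which is exactly why the "good" (or the blame) attaches to whoever wrote it.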


Quandary wrote:
Makhno wrote:
Ok. So I guess it's mostly Pharasma who would have to be overthrown.

Only problem with that is, like I wrote, Paladins aren't deriving their powers from Pharasma, but merely from being LG-max.

Pharasma doesn't have anything to do with them until their soul passes to meet her judgement.
So Alignment has an independent existence from any and all gods, and indeed any specific being or group of beings.
I suppose you could look at Asmodeus and his brother, the first beings [gods],
but AFAIK there are also the outer gods besides them, and beings [aboleths] who independently became conscious and acquired alignments.
So it seems like an inherent condition of conscious, moral-choice-capable beings.
Pharasma just measures what's there, she didn't cause it.

Well, sure. But the post I was responding to specifically dealt with the question of how the existence of the afterlife affects utility calculations. Eliminate or modify the afterlife aspect of it (the whole "Good [capital-G] people get good [i.e. pleasant] afterlives" bit), and your utility calculation is fixed.


Quandary wrote:
the point of alignment is that only beings with moral choice can have a (non-neutral) one,

So far so good...

Quote:
and that is based on them continually making moral choices in the moment.

... but here we disagree.

Why does "continually making moral choices" enter into it? What are you basing this on?

Quote:
somebody 100% dedicated to this idea of the greater good simply can never have a moral dilemma,

Sure they can face a dilemma — it's just that they actually have an ethical framework which can provide the answer to the dilemma, rather than floundering helplessly going "well, this is hard and I guess I don't know what the right answer is, and maybe there is no right answer :("

Furthermore, incomplete information or limited computational capacity (i.e., the two limitations that put the "bounded" in "bounded rationality") can present a utilitarian with a genuinely nontrivial dilemma.

I mean, being sure of what your ethical framework is does not somehow lead to certainty of what the right answer is. It would be nice if it did! But the fact is that, as a couple of posters have pointed out, you run into practical difficulties: you may not know all the relevant subjects' utility functions, you may not be sure of the outcomes of your actions, you may not be sure that your understanding of the matter is sufficient to be sure that you're even approaching the situation the right way, etc. etc. It's not actually hard to come up with ethical, or meta-ethical, dilemmas for utilitarians. (For instance, how's this for an example of a meta-ethical dilemma: do you go with rule consequentialism or act consequentialism as the basis of your morality?)
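(To make the bounded-rationality point concrete, here's a toy Python sketch. Every number in it is invented; the point is that the hard part is estimating the probabilities, not applying the framework.)

    # Toy sketch: a committed utilitarian still has to act on *estimated*
    # outcome probabilities. All figures below are invented.
    def expected_utility(outcomes):
        # outcomes: list of (probability, utility) pairs for one action
        return sum(p * u for p, u in outcomes)

    # Two candidate actions under uncertainty:
    risky_rescue = [(0.6, 100), (0.4, -80)]  # might save everyone, might backfire
    safe_evacuation = [(1.0, 20)]            # modest but guaranteed good

    print(expected_utility(risky_rescue))     # 28.0
    print(expected_utility(safe_evacuation))  # 20.0

Nudge the estimated success probability down from 0.6 to 0.5 and the risky rescue drops to an expected 10, flipping the answer; the dilemma lives entirely in the estimates, while the ethical framework stays fixed.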

Quote:

but i did post a second, opposing theory...

(perhaps i didn't make it clear enough: it's that these outsiders ARE INDEED EVIL for pursuing grand evil-ness, even if they may do 'good' actions in furtherance of that, i.e. alignment(~self)-conscious utilitarian evil. i also wrote a counter viewpoint that questions THAT approach...)

I have to admit that I didn't really understand what you meant with your opposing theories. Could you expand/clarify?
