True Utilitarian Alignment


Pathfinder First Edition General Discussion


So I was wondering what alignment people would give a person with utilitarian morality, someone who believes that what is right/good is whatever helps the most people. Killing innocent plague victims to stop the infection from spreading to countless others would, in their opinion, be a good action because more people were saved than hurt.

Now, such a character would likely commit evil acts, but he would also likely commit good acts.


Would he also commit chaotic or lawful acts according to whichever he sees as more beneficial? Because if that's the case, True Neutral is the way to go.


Sounds pretty neutral to me.


NE


I could see it as almost any alignment.

Ideally it would be True Neutral, but the problem with "true" utilitarian morality is that there is hardly any objective scale. As such, the alignment of the character in question would possibly influence how he or she values different options.

As I see such a character, it would balance toward lawful, believing in a system that can determine right or wrong.

Good might get problematic, as arguing that a terrible act benefits the greater good doesn't necessarily remove the stain of the act itself. Committing evil acts regularly, even in the interest of other people, makes it difficult to maintain a good alignment.

For a character who truly believes (without doubt) that he is the judge of which choices are right or wrong, I could see him being LE or NE. Basically, he allows his system, whether objective or not, to determine the fate of anyone he encounters.


Depends on how you and your GM view the alignments. Some people believe in the absolute morality of murder = evil. But then anyone who kills a bandit becomes evil.

I had a druid that believed in doing what made sense. He proposed culling the herd in a situation similar to the one you gave, because that's the natural and smart thing to do in cases of disease outbreaks. The GM slapped me with Chaotic Neutral and it works. He didn't like laws because they applied constrictions that interfered with the greatest benefit to the most people, and he was neutral because he was not motivated by good or evil, but instead by pragmatism.

LE also works as an "I am the law!" position in which you are enforcing your own view of how things should be regardless of others. Evil doesn't necessarily mean you're kicking kittens for the fun of it, but that you are taking away others' choice and free will, subverting it for your own reasons and gains.


Yes, True Neutral seems about right: he doesn't respect absolute morality or laws that don't maximise happiness/well-being. Though you could also call him lawful insofar as his actions are not random but based on a calculus.


Benefitting the many, willing to sacrifice the few? Sounds lawful neutral to me, but it could be played as lawful evil too. Definitely lawful though.


I think a lot of people will say True Neutral, which could certainly work, but I am of the opinion that such a coldly pragmatic character could also tend toward Lawful Neutral or Lawful Evil (and in fact would be better described by either of those alignments).

He'd have an inability (or at least a severe disinclination) to make exceptions, and he's clearly putting the collective welfare over individual freedom, so any Chaotic alignment is out.

Good is out because he's not really altruistic in his approach to utilitarianism; sure he'll give his meal to a starving orphan if it somehow benefited the group, but if said starving orphan is just a mouth to feed whereas the character in question is about to help fight a battle, the kid's SOL.

That leaves us with NE, LE, TN, and LN.

the PRD wrote:

Neutral Evil: A neutral evil villain does whatever she can get away with. She is out for herself, pure and simple. She sheds no tears for those she kills, whether for profit, sport, or convenience. She has no love of order and holds no illusions that following laws, traditions, or codes would make her any better or more noble. On the other hand, she doesn't have the restless nature or love of conflict that a chaotic evil villain has.

Some neutral evil villains hold up evil as an ideal, committing evil for its own sake. Most often, such villains are devoted to evil deities or secret societies.

Neutral evil represents pure evil without honor and without variation.

The character is not Neutral Evil because he's not out for himself or the dogma of some evil deity; he's out for the collective well-being. He doesn't kill for "profit, sport, or convenience"; he kills because it has to be done.

the PRD wrote:

Neutral: A neutral character does what seems to be a good idea. She doesn't feel strongly one way or the other when it comes to good vs. evil or law vs. chaos (and thus neutral is sometimes called “true neutral”). Most neutral characters exhibit a lack of conviction or bias rather than a commitment to neutrality. Such a character probably thinks of good as better than evil—after all, she would rather have good neighbors and rulers than evil ones. Still, she's not personally committed to upholding good in any abstract or universal way.

Some neutral characters, on the other hand, commit themselves philosophically to neutrality. They see good, evil, law, and chaos as prejudices and dangerous extremes. They advocate the middle way of neutrality as the best, most balanced road in the long run.

Neutral means you act naturally in any situation, without prejudice or compulsion.

True Neutral is iffy because, while certainly not the "balanced road" type, his striving for what most benefits the collective would supersede any bias he might have (toward friends or enemies for example). Then again, the whole "not personally committed to upholding good in any abstract or universal way" is outright contradictory to his behavior.

That leaves us with Lawful Neutral and Lawful Evil. He's not really selfish or cruel (too inefficient), so I'd lean toward LN, but I haven't slept yet, so I can't really form a more thorough argument for that stance.

Just my 2 coppers.



I am flabbergasted at the prevailing attitude in this thread that a utilitarian approach to ethics cannot by its nature be good. The ethical debate between teleology and deontology is by no means finished, and to presume that a consequentialist approach to morality and ethics is by its nature incapable of being righteous or good is both arrogant and narrow-minded.

OP, it could run the gamut, though it depends on how faithfully the character in question follows this approach. If he/she is willing to do whatever is best for the most in every situation, regardless of the effect on themselves, then I'd say likely NG, possibly LG. If his approach is clouded or corrupted by bias or imperfect application, or if he uses it simply as an excuse rather than a motive force, then the other alignments are all possible.

Honorable Goblin wrote:

Good is out because he's not really altruistic in his approach to utilitarianism; sure he'll give his meal to a starving orphan if it somehow benefited the group, but if said starving orphan is just a mouth to feed whereas the character in question is about to help fight a battle, the kid's SOL.

He is certainly altruistic in his approach; he gives himself no more value than anyone else, and is willing to do what he believes will benefit the world the most, regardless of his own reputation, image, or well-being, or that of any individual he may have affection for. He does what he feels is best, denying himself the luxuries of acting on sympathy or emotion, and in doing so arguably sacrifices far more for the sake of his moral code than anyone who simply lets benevolent whimsy direct their actions.

Quote:
He didn't like laws because they applied constrictions that interfered with the greatest benefit to the most people and he was neutral because he was not motivated by good or evil, but instead by pragmatism.

But he WAS motivated by good, that is the entire point, that he worked toward the good of the world. What is "good" and what is "nice" are not and have never been synonymous, just as "terrible" actions and "evil" actions are likewise not synonyms.

Quote:
Now such a character would likely commit evil acts he would also likely commit good acts.

How exactly would operating for the good of everyone be an evil act, unless we are speaking of such things as the arbitrary "always evil" labels on stuff like raising undead?


Sumutherguy wrote:

I am flabbergasted at the prevailing attitude in this thread that a utilitarian approach to ethics cannot by its nature be good. The ethical debate between teleology and deontology is by no means finished, and to presume that a consequentialist approach to morality and ethics is by its nature incapable of being righteous or good is both arrogant and narrow-minded.

OP, it could run the gamut, though it depends on how faithfully the character in question follows this approach. If he/she is willing to do whatever is best for the most in every situation, regardless of the effect on themselves, then I'd say likely NG, possibly LG. If his approach is clouded or corrupted by bias or imperfect application, or if he uses it simply as an excuse rather than a motive force, then the other alignments are all possible.

Honorable Goblin wrote:

Good is out because he's not really altruistic in his approach to utilitarianism; sure he'll give his meal to a starving orphan if it somehow benefited the group, but if said starving orphan is just a mouth to feed whereas the character in question is about to help fight a battle, the kid's SOL.

He is certainly altruistic in his approach; he gives himself no more value than anyone else, and is willing to do what he believes will benefit the world the most, regardless of his own reputation, image, or well-being, or that of any individual he may have affection for. He does what he feels is best, denying himself the luxuries of acting on sympathy or emotion, and in doing so arguably sacrifices far more for the sake of his moral code than anyone who simply lets benevolent whimsy direct their actions.

Quote:
He didn't like laws because they applied constrictions that interfered with the greatest benefit to the most people and he was neutral because he was not motivated by good or evil, but instead by pragmatism.
But he WAS motivated by good, that is the entire point, that he worked toward the good of the world. What is "good" and what is...

I was going with the arbitrary labels and D&D absolutist morality, where certain evil acts are evil regardless of context. Take the example of stopping infection from spreading by killing or ghettoizing plague victims: those victims are innocent, and therefore, in D&D morality, killing them is wrong even if it does stop the spread of plague. Similarly, assassinating warmongering leaders or corrupt officials who were legally selected will always be chaotic.


Wind Chime wrote:
I was going with the arbitrary labels and D&D absolutist morality, where certain evil acts are evil regardless of context. Take the example of stopping infection from spreading by killing or ghettoizing plague victims: those victims are innocent, and therefore, in D&D morality, killing them is wrong even if it does stop the spread of plague. Similarly, assassinating warmongering leaders or corrupt officials who were legally selected will always be chaotic.

Ah, I see. Probably starting LN, then, given the inflexible nature of mechanical ethics in Pathfinder, and probably undergoing numerous alignment changes to any of the possible alignments over the course of a campaign, as he/she acts for the greater good and gets slapped with various good/evil/chaotic/lawful labels for each act.

I am now imagining a utilitarian BBEG rebelling against the gods for creating and enforcing an absolutist system of moral judgement that declares him to be evil simply because he is too consequentialist for them.


A "good" person is willing to put themselves on the line for others. They also innately value the rights of the innocent. As such, given the possibility of isolating individuals to contain a virulent illness, a Good person would choose quarantine or even exile over outright execution. Yeah, it presents the possibility of others being infected, but it's better to risk a possible bad outcome than to definitely and purposefully kill an innocent person. That's what precludes the described actions from being Good.

Remember that in Pathfinder, "Good" and "Evil" aren't culturally subjective. There is no such thing as "It'd be good by the standards of Humans, but Orcs consider it 'good' in their culture to kill the weak." Everything and everyone is judged by "humanish" standards of morality and ethics; Humans are "Good" and whatever passes for commonplace in Orc society is inherently Evil. It's a silly system, but there it is.

So he's, inherently, not "good". One could say that utilitarianism, at least applied to matters of life and death, is "not good", at least by absolutist D&D alignment nonsense.


Anyone who is seeking the most help for the most people is pretty much by definition "good" instead of "neutral" or "evil".

Since "lawful" or "chaotic" would potentially restrict their options for seeking that most help, then they would pretty much have to be "neutral" on the law-chaos axis.

So that pretty much leaves neutral-good as the alignment that would seek to help the most people.


I'd argue chaotic good. Trying to do good, but not being bound by any rules beyond that.
This person will in theory murder innocent people if that's what makes the world a better place, but how often is that situation going to arise? He wouldn't sacrifice plague victims to prevent the spread of the plague unless there was no other alternative.
Refusing to kill one innocent to save the lives of a hundred innocents is lawful behavior - continuing to follow a rule even when doing so is likely to make the world a worse place.


MurphysParadox wrote:

Depends on how you and your GM view the alignments. Some people believe in the absolute morality of murder = evil. But then anyone who kills a bandit becomes evil.

I had a druid that believed in doing what made sense. He proposed culling the herd in a situation similar to the one you gave, because that's the natural and smart thing to do in cases of disease outbreaks. The GM slapped me with Chaotic Neutral and it works. He didn't like laws because they applied constrictions that interfered with the greatest benefit to the most people, and he was neutral because he was not motivated by good or evil, but instead by pragmatism.

LE also works as an "I am the law!" position in which you are enforcing your own view of how things should be regardless of others. Evil doesn't necessarily mean you're kicking kittens for the fun of it, but that you are taking away others' choice and free will, subverting it for your own reasons and gains.

Killing is not always the same thing as murder ...


Lawful "something"; the second part of the alignment (the place on the good-evil axis) would be subject to circumstances.


Not enough info.

Too much hinges on the character's definition of "Greater Good".

What if the character considers governmental stability a greater good than individual freedom?

What if the character considers quantity of life (living to an old age) a greater good than quality of life (being able to enjoy the life you have)?

I could go on, but before I made the call, I would have the player define exactly what they consider "Greatest Good".


True Neutral is the "choosing not to choose" alignment per the PFRPG system. I would say it's Neutral Good: your character's motivations are ultimately for the betterment of all, and that's quintessential good. How you go about it might be lawful (if that means working with the system) or chaotic (if the system would otherwise bog things down and undermine your goal), which nets out to neutral on that axis.


I agree with everyone saying neutral good. The entire point of utilitarianism is achieving the greatest good.

To that end, some responses:

Quote:
What if the character considers governmental stability a greater good than individual freedom?

Utilitarianism aggregates utility over some set of individuals. It is impossible to consider "governmental stability" to be inherently good, as it's not a property or state of an individual. A utilitarian would only consider governmental stability to be good if it led to greater good, for more people, than the alternative.
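That aggregation step can be made concrete with a toy calculation (a hypothetical sketch: the policy names and utility numbers below are invented purely for illustration):

```python
# Toy utilitarian calculus: sum each individual's utility under each policy.
# The policies and the utility numbers are invented for illustration.

def total_utility(utilities):
    """Classical (total) utilitarianism: aggregate by summing over individuals."""
    return sum(utilities)

# One entry per person: how well off they are under each policy.
stable_government = [6, 6, 6, 6, 2]   # most people thrive, one dissident suffers
full_freedom      = [5, 5, 5, 5, 5]   # everyone does moderately well

# "Governmental stability" is only the right choice if it wins this comparison:
best = max([stable_government, full_freedom], key=total_utility)
print(total_utility(stable_government))   # 26
print(total_utility(full_freedom))        # 25
```

The point being that stability has no standing of its own in the calculus; it only matters through the per-person welfare it produces.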

Quote:
What if the character considers quantity of life (living to an old age) a greater good than quality of life (being able to enjoy the life you have)?

This is a question that some professional ethicists do struggle with, but utilitarianism generally solves it by using the utility function of the individual themselves to calculate utility for that individual. Thus what matters isn't whether the utilitarian considers quality or quantity to be superior, but what each person in question prefers.

Quote:
Remember that in Pathfinder, "Good" and "Evil" aren't culturally subjective. There is no such thing as "It'd be good by the standards of Humans, but Orcs consider it 'good' in their culture to kill the weak."

Utilitarianism is in no way dependent on cultural subjectivity.

Quote:
I am now imagining a utilitarian BBEG rebelling against the gods for creating and enforcing an absolutist system of moral judgement that declares him to be evil simply because he is too consequentialist for them.

I've had the same idea, except the person wouldn't be a BBEG, but a hero. I think it's a great one. I think that a really good person (by modern consequentialist standards — i.e. the correct standards) would, if thrust into a Pathfinderesque universe, indeed do something like this. That could be a great campaign — rebel against, and ultimately aim to destroy, the gods, and rearrange the universe to remove absolutist "D&D alignment style" morality.

Ah Love! could thou and I with Fate conspire
To grasp this sorry Scheme of Things entire,
Would not we shatter it to bits -- and then
Re-mould it nearer to the Heart's Desire!

Now THAT would be truly epic.


As an addendum, I'd like to note that the way alignment is described in D&D and Pathfinder has drifted toward the simplistic. Compare, for instance, NG alignment as described in the PF Core Rulebook...

CRB wrote:

Neutral Good: A neutral good character does the best that a good person can do. He is devoted to helping others. He works with kings and magistrates but does not feel beholden to them.

Neutral good means doing what is good and right without bias for or against order.

... with NG as described by Gary Gygax in the 1st edition Dungeon Master's Guide:

1e DMG wrote:
NEUTRAL GOOD: Creatures of this alignment see the cosmos as a place where law and chaos are merely tools to use in bringing life, happiness, and prosperity to all deserving creatures. Order is not good unless it brings this to all; neither is randomness and total freedom desirable if it does not bring such good.

A tad more nuanced, no? And to me, it certainly sounds very much like utilitarianism.

(Then again, all the alignment descriptions in the 1e DMG sound a lot more sensible — and a lot more like real-world philosophies/worldviews — than the simplistic, black-and-white perspective we've got in our modern game.)


Makhno wrote:


To that end, some responses:

Quote:
What if the character considers governmental stability a greater good than individual freedom?

Utilitarianism aggregates utility over some set of individuals. It is impossible to consider "governmental stability" to be inherently good, as it's not a property or state of an individual. A utilitarian would only consider governmental stability to be good if it led to greater good, for more people, than the alternative.

Governmental stability is a positive psychological influence that results in a net good for the entire group. I.e., if a government is stable, people can make long-term plans, they are less likely to hoard resources, etc. If that stability comes at the expense of having to suppress individual freedoms... To get to a finer point, how do you calculate the value of making X people Y% happier vs. having to kill one person?

Makhno wrote:


Quote:
What if the character considers quantity of life (living to an old age) a greater good than quality of life (being able to enjoy the life you have)?

This is a question that some professional ethicists do struggle with, but utilitarianism generally solves it by using the utility function of the individual themselves to calculate utility for that individual. Thus what matters isn't whether the utilitarian considers quality or quantity to be superior, but what each person in question prefers.

That is great if you have "utility for that individual", but that doesn't really answer the question; it's just taking the easy answer. The real conflict arises when you either don't have an individual utility function, or the utility functions of different individuals are at odds with each other.

What typically happens in practice is this.
1. When a specific utility equation is missing, the utilitarian inserts their personal utility equation.
2. When there is a conflict between two competing utility functions, the utilitarian will lean towards the one that is closer to their personal utility function.

This is why the utilitarian's definition of the greatest good is so important. When the greatest good is clearly defined, they will do it. When that definition gets fuzzy, they will go with what they think is best. This opens up a lot of potential to do evil in the name of doing the greatest good.

Makhno wrote:

I agree with everyone saying neutral good. The entire point of utilitarianism is achieving the greatest good.

If the utilitarian were all-knowing and all-powerful, then yes, they would be neutral good. They would know the exact path to bring about the most good for the most people and have the power to make it happen. Practically speaking, we are talking about someone who is neither all-knowing nor all-powerful, and thus they will make mistakes. Their ultimate alignment depends on the details of those mistakes.

Utilitarian: Good news, everybody, we found a new source of food that will feed everyone.
Reality: Um, that source of food happens to be the eggs of a sentient species, and now they are pissed off and want to wipe you out.
Utilitarian: Oops?


Charender, no matter WHAT alignment you are, you can only deal with the universe according to the knowledge you have. A lawful good paladin can be tricked into doing things that bring evil, and a chaotic evil demon can be tricked into doing things that bring good.

Alignment is about motivation and intent, not omniscience.


Adamantine Dragon wrote:

Charender, no matter WHAT alignment you are, you can only deal with the universe according to the knowledge you have. A lawful good paladin can be tricked into doing things that bring evil, and a chaotic evil demon can be tricked into doing things that bring good.

Alignment is about motivation and intent, not omniscience.

Hence why I stated that their personal utility functions (i.e., their motivations and intent) matter. Those are what they will fall back on when their limited knowledge fails.


Makhno wrote:
I've had the same idea, except the person wouldn't be a BBEG, but a hero. I think it's a great one. I think that a really good person (by modern consequentialist standards — i.e. the correct standards) would, if thrust into a Pathfinderesque universe, indeed do something like this. That could be a great campaign — rebel against, and ultimately aim to destroy, the gods, and rearrange the universe to remove absolutist "D&D alignment style" morality.

Cf. Milton's Satan.

You do realize that by declaring modern consequentialist ethics the "correct standard" you are advocating an absolutist morality?

Consequentialism is an approach to ethics, not the only correct approach to ethics.


Adamantine Dragon wrote:

Anyone who is seeking the most help for the most people is pretty much by definition "good" instead of "neutral" or "evil".

Since "lawful" or "chaotic" would potentially restrict their options for seeking that most help, then they would pretty much have to be "neutral" on the law-chaos axis.

So that pretty much leaves neutral-good as the alignment that would seek to help the most people.

The reductio ad absurdum of utilitarianism is killing 49% of the people to make the other 51% happy.

I suppose depending on the scale used for measuring good vs. evil this could be argued. But on a personally moral level, definitely evil.


Funky Badger wrote:
Adamantine Dragon wrote:

Anyone who is seeking the most help for the most people is pretty much by definition "good" instead of "neutral" or "evil".

Since "lawful" or "chaotic" would potentially restrict their options for seeking that most help, then they would pretty much have to be "neutral" on the law-chaos axis.

So that pretty much leaves neutral-good as the alignment that would seek to help the most people.

The reductio ad absurdum of utilitarianism is killing 49% of the people to make the other 51% happy.

I suppose depending on the scale used for measuring good vs. evil this could be argued. But on a personally moral level, definitely evil.

This presupposes that 51% of people would be happy murdering or allowing the murder of the other 49%. The likelihood of such a culture is so vanishingly infinitesimal that it is, as you say, absurd on the face of it.


Charlie Bell wrote:
Cf. Milton's Satan.

Reasoning from fictional evidence is hardly a good way to reach correct conclusions.

Quote:
You do realize that by declaring modern consequentialist ethics the "correct standard" you are advocating an absolutist morality?

Nope.

Ethics and meta-ethics are not the same thing.

I'm definitely advocating an "absolutist" meta-ethics, but as far as my ethics go, consequentialism cannot sensibly be described as "absolutist" — deontology would most closely fit that description.

Quote:
Consequentialism is an approach to ethics, not the only correct approach to ethics.

I disagree, obviously, but I'd like to ask that we not get into that discussion here. I do acknowledge that the statement I made is not universally accepted.


Funky Badger wrote:

The reductio ad absurdum of utilitarianism is killing 49% of the people to make the other 51% happy.

I suppose depending on the scale used for measuring good vs. evil this could be argued. But on a personally moral level, definitely evil.

That reductio only works if the utility functions of the 51% don't have terms in them for the 49%. That is pretty unrealistic. Otherwise, the reductio either fails, or you have to introduce increasingly convoluted and improbable caveats to your hypothetical.

More broadly, one can construct all sorts of reductios if we don't confine ourselves to beings with human moral values/preferences/intuitions, but doing so proves little.
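The cross-term point can be sketched numerically (a toy model: the population size, happiness values, and `sympathy` weight are all invented for illustration):

```python
# Toy model of the 51%/49% reductio. Each survivor's utility contains a
# "sympathy" term for the people killed; all numbers here are invented.

def survivor_utility(own_happiness, victims, sympathy):
    # Each death subtracts sympathy-weighted disutility from every survivor.
    return own_happiness - sympathy * victims

def aggregate(population, victims, own_happiness, sympathy):
    survivors = population - victims
    return survivors * survivor_utility(own_happiness, victims, sympathy)

POP = 100

# With no sympathy terms, the massacre "wins" -- the classic reductio:
print(aggregate(POP, victims=49, own_happiness=20, sympathy=0.0))  # 1020
print(aggregate(POP, victims=0,  own_happiness=8,  sympathy=0.0))  # 800

# With even a modest sympathy weight, the massacre loses badly:
print(aggregate(POP, victims=49, own_happiness=20, sympathy=0.3))  # roughly 270
```

Under these (invented) numbers, the only way to make the massacre come out ahead is to force every survivor's sympathy weight to zero, which is exactly the kind of improbable caveat described above.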


Makhno wrote:
Charlie Bell wrote:
Cf. Milton's Satan.
Reasoning from fictional evidence is hardly a good way to reach correct conclusions.

I provided an existing example of the very kind of anti-hero you described. Since we are talking about a fictional universe, how do you figure fictional references aren't appropriate?


Charender wrote:
Governmental stability is a positive psychological influence that results in a net good for the entire group. I.e., if a government is stable, people can make long-term plans, they are less likely to hoard resources, etc. If that stability comes at the expense of having to suppress individual freedoms...

Yes, and...?

Like, you do realize that this is exactly the tradeoff you have to make in order to have... any government? At all? And that it's the tradeoff that actual human societies do, in fact, all make? Do you consider... pretty much all people... evil? If you're an extreme anarchist-libertarian, your point is justified — otherwise...

Quote:
To get to a finer point, how do you calculate the value of making X people Y% happier vs having to kill one person?

Carefully and with great difficulty.

(Yes, "how do you, in practice, calculate ..." is the problem of utilitarianism. It happens to be a special case of the general problem of bounded rationality. Such problems are solvable, though pretty much never perfectly solvable; you approximate the best solution, knowing that you're doing better than you otherwise would, but not as well as you in theory could be doing.)

Quote:

What typically happens in practice is this.

1. When a specific utility equation is missing, the utilitarian inserts their personal utility equation.
2. When there is a conflict between two competing utility functions, the utilitarian will lean towards the one that is closer to their personal utility function.

This is why the utilitarian's definitions of the greatest good are so important. When the greatest good is clearly defined, they will do it. When that definition gets fuzzy, they will go with what they think is best. This opens up a lot of potential to do a lot of evil in the name of doing the greatest good.

That is a failure mode of utilitarianism that, being known, is not terribly difficult to correct for, if you plan carefully and construct suitable social checks/balances.

Quote:
stuff about lack of perfect knowledge

AD answered this one. Intent is what counts.


Charlie Bell wrote:
Makhno wrote:
Charlie Bell wrote:
Cf. Milton's Satan.
Reasoning from fictional evidence is hardly a good way to reach correct conclusions.
I provided an existing example of the very kind of anti-hero you described. Since we are talking about a fictional universe, how do you figure fictional references aren't appropriate?

Er, sorry. That was me totally misinterpreting that line of your post. Yeah, that definitely is a good example.


Adamantine Dragon wrote:
Funky Badger wrote:
Adamantine Dragon wrote:

Anyone who is seeking the most help for the most people is pretty much by definition "good" instead of "neutral" or "evil".

Since "lawful" or "chaotic" would potentially restrict their options for seeking that most help, then they would pretty much have to be "neutral" on the law-chaos axis.

So that pretty much leaves neutral-good as the alignment that would seek to help the most people.

The reductio ad absurdum of utilitarianism is killing 49% of the people to make the other 51% happy.

I suppose depending on the scale used for measuring good vs. evil this could be argued. But on a personally moral level, definitely evil.

This presupposes that 51% of people would be happy murdering or allowing the murder of the other 49%. The likelihood of such a culture is so vanishingly infinitesimal that it is, as you say, absurd on the face of it.

The point is, utilitarian thinking leads ultimately to this example.

There have been plenty of regimes that were more than happy to exterminate a lower percentage of their population for the greater good (the Khmer Rouge, for example; were they neutral good?).

Judged on a personal moral scale, they were all evil too.


Actually, I just realized that utilitarianism has a bigger problem in PF.

Gods and the afterlife provably exist. What happens to the utility functions when you realize that anyone who faithfully serves a good god is going to something more or less like Heaven? They are quite literally better off dead. How do you calculate the utility of that?


Makhno wrote:
Funky Badger wrote:

The reductio ad absurdum of utilitarianism is killing 49% of the people to make the other 51% happy.

I suppose depending on the scale used for measuring good vs. evil this could be argued. But on a personal moral level, definitely evil.

That reductio only works if the utility functions of the 51% don't have terms in them for the 49%. That is pretty unrealistic. Otherwise, the reductio either fails, or you have to introduce increasingly convoluted and improbable caveats to your hypothetical.

More broadly, one can construct all sorts of reductios if we don't confine ourselves to beings with human moral values/preferences/intuitions, but doing so proves little.

I thought we were discussing utilitarianism as the ethical framework?

In which case, all beings in the example are utilitarian in outlook.

It's an even more obviously evil philosophy if the subjects don't even believe in it...


It seems like the utilitarian's alignment would depend entirely on what the circumstances of their cause and their fight were. If the guy considered the establishment to be against the greater good, he'd fight against it, and he'd probably wind up chaotic. If he saw it as beneficial and fought to uphold it, he'd end up lawful. He'd almost certainly be neutral on the good-evil axis, preferring to do good deeds but willing to do evil in service to the Greater Good.

I'd say the utilitarian viewpoint would almost be defined by not caring about their alignment, or more specifically, not caring what the mysterious objective alignment spirits thought about their actions. Doing whatever one feels needs doing in support of the greater good of the many, and not caring whether they call you a saint or a devil for it.


A thought occurs to me: This question reminds me of Light Yagami from Death Note.

Spoiler:
He finds himself in possession of a shinigami's notebook of death which can cause a person to die just by writing their name with their face in mind. He's a law student and his father is one of the higher-ups in the police force so he sees first-hand how many criminals go free and feels that the justice system is little more than a farce, so he decides to take it upon himself to kill off criminals for the betterment of the world.

You might want to watch the series and/or read the manga to help answer your question.


Funky Badger wrote:
Adamantine Dragon wrote:
Funky Badger wrote:
Adamantine Dragon wrote:

Anyone who is seeking the most help for the most people is pretty much by definition "good" instead of "neutral" or "evil".

Since "lawful" or "chaotic" would potentially restrict their options for seeking that most help, then they would pretty much have to be "neutral" on the law-chaos axis.

So that pretty much leaves neutral-good as the alignment that would seek to help the most people.

The reductio ad absurdum of utilitarianism is killing 49% of the people to make the other 51% happy.

I suppose depending on the scale used for measuring good vs. evil this could be argued. But on a personal moral level, definitely evil.

This presupposes that 51% of people would be happy murdering or allowing the murder of the other 49%. The likelihood of such a culture is so vanishingly infinitesimal that it is, as you say, absurd on the face of it.

The point is, utilitarian thinking leads ultimately to this example.

There have been plenty of regimes that were more than happy to exterminate a lower percentage of their population for the greater good (the Khmer Rouge, for example; were they neutral good?)

Judged on a personal moral scale, they were all evil too.

This is a flawed example, for it assumes that the happiness of a person is equal in value to the life of another in the eyes of a utilitarian. You must remember that different factors possess different values as designated by the utilitarian making the judgement, and it would be nigh-impossible to justify equating making one person, or even hundreds of people, happy with utterly ending the life of another. This is why most utilitarians, when questioned, would proclaim the Khmer Rouge to be evil, as well as any similar historical examples of regimes/dictators exterminating minorities to enrich/"purify"/etc. the remainder.

I do agree with others in this thread however, that it is vitally important to at least have a rough sketch of the hierarchy of said individual's value system, and any "trump" values that they may have.


Kazaan wrote:

A thought occurs to me: This question reminds me of Light Yagami from Death Note.

** spoiler omitted **

You might want to watch the series and/or read the manga to help answer your question.

And in the process turns into what I'd call chaotic evil.


Natch wrote:

It seems like the utilitarian's alignment would depend entirely on what the circumstances of their cause and their fight were. If the guy considered the establishment to be against the greater good, he'd fight against it, and he'd probably wind up chaotic. If he saw it as beneficial and fought to uphold it, he'd end up lawful. He'd almost certainly be neutral on the good-evil axis, preferring to do good deeds but willing to do evil in service to the Greater Good.

I'd say the utilitarian viewpoint would almost be defined by not caring about their alignment, or more specifically, not caring what the mysterious objective alignment spirits thought about their actions. Doing whatever one feels needs doing in support of the greater good of the many, and not caring whether they call you a saint or a devil for it.

Exactly. The idea is that the utilitarian says: "This so-called cosmic force that we, for some reason, have decided to call capital-G 'Good' is not actually synonymous with what is ethically good, and ditto for evil, respectively. I'm going to do the right thing, ethically speaking, regardless of what the cosmic forces think."


Kazaan wrote:

A thought occurs to me: This question reminds me of Light Yagami from Death Note.

** spoiler omitted **

You might want to watch the series and/or read the manga to help answer your question.

You might want to read further: that pretense is never fully followed. At first he murders only those already imprisoned (with one exception), acting for the good of no one but his own ego, and then he abandons such motives entirely and attempts to set himself up as shadow-dictator of the world.

I'll give the whole thing points for trying to portray the dilemma of a utilitarian approach to morality, but it tipped its hand far too early and corrupted the main vessel of its own ethical experiment incredibly quickly.


Funky Badger wrote:
Adamantine Dragon wrote:
This presupposes that 51% of people would be happy murdering or allowing the murder of the other 49%. The likelihood of such a culture is so vanishingly infinitesimal that it is, as you say, absurd on the face of it.

The point is, utilitarian thinking leads ultimately to this example.

There have been plenty of regimes that were more than happy to exterminate a lower percentage of their population for the greater good (the Khmer Rouge, for example; were they neutral good?)

Judged on a personal moral scale, they were all evil too.

But... utilitarian thinking does not, in fact, lead to that example. That's AD's point. I mean, utilitarian thinking can get you to do all manner of bad things... if you're stupid or insane. I hope you don't think that the Khmer Rouge is actually a result of otherwise-sane people altruistically applying utilitarianism.

Funky Badger wrote:
Makhno wrote:
Funky Badger wrote:

The reductio ad absurdum of utilitarianism is killing 49% of the people to make the other 51% happy.

I suppose depending on the scale used for measuring good vs. evil this could be argued. But on a personal moral level, definitely evil.

That reductio only works if the utility functions of the 51% don't have terms in them for the 49%. That is pretty unrealistic. Otherwise, the reductio either fails, or you have to introduce increasingly convoluted and improbable caveats to your hypothetical.

More broadly, one can construct all sorts of reductios if we don't confine ourselves to beings with human moral values/preferences/intuitions, but doing so proves little.

I thought we were discussing utilitarianism as the ethical framework?

In which case, all beings in the example are utilitarian in outlook.

It's an even more obviously evil philosophy if the subjects don't even believe in it...

What?

Why would the "subjects" have to be utilitarian in outlook? What does it matter what they believe?

I think one of us is very confused...
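The cross-terms point can be checked with quick arithmetic. In the sketch below, every weight is invented for illustration; the only claim is structural: the purge "maximizes utility" only if the survivors' utility functions contain no terms for the victims and the victims' own lost utility is ignored.

```python
# Toy arithmetic for the 51%/49% reductio discussed above.
# All weights are invented for illustration only.

MAJORITY = 51   # survivors
MINORITY = 49   # victims

def net_utility(gain_per_survivor: float,
                death_cost_per_victim: float,
                sympathy_weight: float) -> float:
    """Change in total utility if the majority purges the minority.

    sympathy_weight is how heavily each survivor weighs each
    victim's death in their own utility function."""
    gains = MAJORITY * gain_per_survivor
    victim_losses = MINORITY * death_cost_per_victim
    survivor_grief = MAJORITY * MINORITY * sympathy_weight
    return gains - victim_losses - survivor_grief

# With no cross-terms and zero weight on the victims' own lives,
# the purge looks like a net gain -- the reductio's hidden assumption:
print(net_utility(1.0, 0.0, 0.0))    # 51.0

# Give death any real cost and the survivors any sympathy at all,
# and the purge comes out strongly negative:
print(net_utility(1.0, 10.0, 0.1))   # well below zero
```

In other words, the 49%-for-51% trade only goes through under utility functions so impoverished that no recognizably human population holds them.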


Buri wrote:
Kazaan wrote:

A thought occurs to me: This question reminds me of Light Yagami from Death Note.

** spoiler omitted **

You might want to watch the series and/or read the manga to help answer your question.

And in the process turns into what I'd call chaotic evil.

Of course, Light is hardly a sane, rational, mentally balanced person (by the end). The psychology of what happens when a person gains power over other people is well-documented. Light's actual motivation drifts pretty far from utilitarianism very quickly.


I would say the mooted character could be any alignment depending on his actual actions. Being told his general paradigm doesn't tell us the sum total of moral actions he's taken: the same paradigm could lead to different actions in different circumstances, and your alignment is a measurement of actual actions taken (and planned ones/motivations/etc.), so the alignment could vary.

That said, assuming he actually is dedicated to the good of all (and not just his own intellectual concept of it), there is a high likelihood he has committed more good acts than evil ones, possibly making him Good (past the Neutral cut-off). Since I would split the difference on Law/Chaos (he's very pragmatic, after all), I would say it's MOST LIKELY that the character is NG or just N.

That said, he COULD be almost any alignment, especially if you count that self-delusion and dedication to his own distorted concepts (as per the above poster) could be 'seen' by the character (and even some others around him who fall for it) as exemplifying the same moral approach. To distinguish him from such a dedication to narrow ideals (a la the Khmer Rouge), the character would need to be able to allow for/commit Good actions which AREN'T aligned with his personal preferences, in-group, etc., recognizing that his own intellectual concepts have no special "Goodness" but may just be a subset of what is possible within Goodness, as well as consciously reflecting on whether all of his ideals truly are Good or should be replaced with other ideals. [/alignmentdebate]


Charender wrote:

Actually, I just realized that Utillitarism has a bigger problem in PF.

Gods and the afterlife provably exist. What happens to the utility functions when you realize that anyone who faithfully serves a good god is going to something more of less like Heaven? They are quite literally better off dead. How do you calculate the utility of that?

Well, I'll tell you what I would do if I was the utilitarian in question.

I would say:

"This opposition of forces, this "Good" vs. "Evil" (and we really should find less misleading names for them, I would add), is a cosmic tyranny imposed on mortals by gods, who are all evil in the conventional sense. There's two opposing factions of them, roughly, and they all want the power that comes with worship... so they've set up this system of afterlives, to force mortals to behave in a certain way. And what must be done is that this cosmic tyranny must be deposed! All gods (all who participate in this system, anyway) must be slain... and then we mortals can go ahead and do things that are actually good, and useful and beneficial."

Although the postscript to this is that life -> afterlife is not quite a strict unidirectional influence, in D&Desque worlds... petitioners can become celestials, mortals can travel to the afterlife and talk to them, all sorts of strange things can happen... plus, a god's influence depends on the worship of mortals, so how is that affected if all the mortals die? I mean, I'm just thinking out loud in this part, but I suspect it's a bit more complicated than just deciding that people would be better off dead.


in golarion, a god's power does not depend on the worship of mortals.
in golarion, only one god is in charge of herding souls to different 'afterlives' depending on their alignment.
outside of Pharasma, the other gods have pretty much nothing to do with it.
but alignment is not merely the arbitrary measurement standards of Pharasma,
because we have things like Paladins, epitomes of LG-ness, and they don't draw their powers from Pharasma.
as is stated, only beings with moral choice can have an alignment (besides TN),
so it seems that alignment is an inherent part of reality for beings with moral choice.

to the original topic, i think you could argue that if the character really was successfully/honestly pulling off this paradigm, that they should be TN, since they are not really engaging with moral choice in the moment, but are just 'rotely' living a role as an animal would build nests, etc. they are not morally motivated for/against anything, but are mechanically fulfilling some goal; that the goal is overall goodness isn't really any more a moral choice than any other long-term goal. the idea of moral choice is intrinsically faced with the PRESENT moral choice, not utilitarian pursuit of goals.

on the other hand, you have beings like evil outsiders whose goal is evil-ness, and if acting nice (working with paladins, doing good things) is compatible with that, they have no problem. but perhaps you can see that as the beginning of a potential to 'fall' into goodness by such beings, so who knows...

any character who really truly is solely motivated by such grand alignment concerns is just borderline or actually insane, if you are roleplaying sane characters they will have a much more realistic and 'human' scale of motivations which will 'equal out' to some alignment or another, but the 'self conscious' approach to alignment just is not 'natural' in my view. (yes, paladins probably can and do skirt the edges of this ;-))


Been watching Fate/Zero I presume?


Quandary wrote:

in golarion, a god's power does not depend on the worship of mortals.

in golarion, only one god is in charge of herding souls to different 'afterlives' depending on their alignment.
outside of Pharasma, the other gods have pretty much nothing to do with it.
but alignment is not merely the arbitrary measurement standards of Pharasma,
because we have things like Paladins, epitomes of LG-ness, and they don't draw their powers from Pharasma.
as is stated, only beings with moral choice can have an alignment (besides TN),
so it seems that alignment is an inherent part of reality for beings with moral choice.

Ah... yeah, I am less knowledgeable about Golarion cosmology than others. Ok. So I guess it's mostly Pharasma who would have to be overthrown.

Quote:
to the original topic, i think you could argue that if the character really was successfully/honestly pulling off this paradigm, that they should be TN, since they are not really engaging with moral choice in the moment, but are just 'rotely' living a role as an animal would build nests, etc. they are not morally motivated for/against anything, but are mechanically fulfilling some goal; that the goal is overall goodness isn't really any more a moral choice than any other long-term goal. the idea of moral choice is intrinsically faced with the PRESENT moral choice, not utilitarian pursuit of goals.

wat

seriously?

Man I don't know how to answer this one just because it seems like such an outlandish view. Seriously? People who are concerned with making the world maximally better for everyone aren't morally motivated for or against anything, just rotely living? Just... what.


kyrt-ryder wrote:
Been watching Fate/Zero I presume?

I don't know if this was directed at me, but if so, I'm afraid I don't know what that is. :\


Makhno wrote:
Ok. So I guess it's mostly Pharasma who would have to be overthrown.

Only problem with that is like I wrote, Paladins aren't deriving their powers from Pharasma, but merely from dedication to LG-max.

Pharasma doesn't have anything to do with them until their soul passes to meet her judgement.
So Alignment has an independent existence from any and all gods, and indeed any specific being or group of beings.
I suppose you could look at Asmodeus and his brother, the first beings[gods],
but AFAIK there are also the outer gods besides them, and beings [aboleths] who independently became conscious and having alignments.
So it seems like an inherent condition of conscious, moral-choice-capable beings.
Pharasma just measures what's there, she didn't cause it.

Quote:

wat

seriously?
Man I don't know how to answer this one just because it seems like such an outlandish view. Seriously? People who are concerned with making the world maximally better for everyone aren't morally motivated for or against anything, just rotely living? Just... what.

yeah, couldn't you program a non-sentient computer to always maximize 'common good'?

or theoretically grow some animal which always does that, just like some animals build complex nests by instinct?
neither of those are able to have an alignment by the normal rules.
the point of alignment is only beings with moral choice can have it (non-neutral),
and that is based on them continually making moral choices in the moment.
somebody 100% dedicated to this idea of the greater good simply never can have a moral dilemma,
and that is what creatures with moral choice are supposed to face,
if this character really has honestly enacted this philosophy, and can do so indefinitely, then they don't face that.
otherwise, there is the possibility that this goal of theirs is just an intellectual conceit,
and thus what solely matters is the real moral balance of their actions.
that doesn't mean that in the end an action can't be good because it's over-all beneficial for more souls,
but that's also because the game just leaves it up to the GM to determine the overall alignment value of any action.
some spells have [evil]/[good]/etc tags, but the broader context of the action will also affect that action's alignment value.

so like i said, that philosophy could be espoused and followed by characters of differing alignments, depending on what they actually do. they don't necessarily have to do lots of short-term evil actions to support the long term good. but they could...

-----------------------------------------------------------

but i did post a second, opposing theory...
(perhaps i didn't make it clear enough: it's that these outsiders ARE INDEED EVIL for pursuing grand evil-ness, even if they may do 'good' actions in furtherance of that, i.e. alignment(~self)-conscious utilitarian evil. i also wrote a counter viewpoint that questions THAT approach...)
