How ethical reasoning works
The idea of a formal moral system is to consciously understand the rules by which conscience operates. My approach is to specify a set of moral values; an action's *merit* (its effect on how good a person the agent is) then depends on the extent to which it promotes those values and on the amount of temptation overcome to do it. No temptation, or negative temptation (where the good choice is also the selfish choice), means the act is not virtuous at all. Precisely:
- Let us say an action's *value* is the extent to which it promotes moral values minus the extent to which it harms them (an action can have both good and bad consequences).
- If an action has positive value, its merit is the temptation overcome (the extent to which you had to sacrifice selfish interests to do it).
* One might object that this means it's morally better to do a worse but still good action if it's harder. That loophole is plugged by the axiom of necessary motivation: such a thing isn't possible, and if you think it is, it's because you have an emotional incentive to take the less good action, which means it isn't actually the harder one.
Axiom of necessary motivation
- If an action has negative value, its merit is negative: its magnitude is the value's magnitude times the ratio of that magnitude to the temptation. If this formula seems bizarre, consider:
* Raising the magnitude while keeping the ratio the same raises the guilt (causing 2X suffering to an innocent person in exchange for 2X reward is twice as bad as causing X suffering to an innocent person in exchange for X reward);
* Very high temptation (like being threatened with torture) can render a horrible act almost blameless, and conversely, very low temptation makes even a small harm heavily blameworthy.
Obviously, to compare virtue and vice, these formulas require us to establish a "baseline" 1:1 ratio of viciousness to temptation. This ratio is derived from peacefully causing X suffering to another person (by saying something hurtful, for example) to gain X benefit for yourself, when the two of you are so situated in wellbeing that considerations of inequality balance out exactly. As per the virtue formula, any good act that requires a sacrifice of magnitude X balances this exactly, regardless of how much effect it has.
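To make these rules concrete, here is a minimal sketch in Python. It is my own illustration rather than part of the system's definition: the function name `merit` and the treatment of `value` and `temptation` as numbers in the same baseline units (with a bad act's value read as a magnitude when computing guilt, so the result comes out negative) are assumptions I'm adding.

```python
def merit(value, temptation):
    """Sketch of the merit rules above.

    value: net effect on the moral values (positive = promotes them,
           negative = harms them), in baseline units.
    temptation: selfish interest sacrificed (for a good act) or gained
                (for a bad act); a non-negative number.
    """
    if value > 0:
        # Virtue: merit equals the temptation overcome, regardless of
        # how large the good effect is.
        return temptation
    if value < 0:
        harm = -value
        if temptation == 0:
            # Limiting case: harm with nothing at all to gain; the
            # formula's blame grows without bound as temptation shrinks.
            return float("-inf")
        # Vice: guilt is the harm times the harm-to-temptation ratio,
        # so high temptation dilutes blame and low temptation amplifies it.
        return -(harm * harm / temptation)
    # Zero value: neither virtuous nor vicious.
    return 0
```

Under this reading, `merit(-2, 2)` comes out twice as blameworthy as `merit(-1, 1)`, and `merit(-1, 100)` is nearly blameless, matching the two bullet points above; meanwhile `merit(10, 3)` and `merit(1, 3)` both come out to 3, since virtue is measured by the sacrifice rather than by the size of the effect.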
I am a consequentialist, meaning that sometimes the most moral thing to do is something "evil" that will lead to a greater good. However, I suspect that the value of a future consequence is inversely proportional to how far in the future it is. I suspect this because it seems that otherwise, one should devote all of one's effort to accruing power so as to do more good in the future (or pass the power on to someone who will), and never do good in the short term, because the world has a long future and a positive feedback loop (the more power good people have, the easier it is to get more).
Consequentialism
Positive feedback loops
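To picture the inverse-proportional discounting just described, here is a rough Python sketch. The exact falloff curve and the time unit are hypothetical placeholders of mine; the essay only claims that value falls off inversely with temporal distance.

```python
def discounted_value(raw_value, years_away):
    """Hypothetical discounting of a future consequence.

    Strict inverse proportionality (raw_value / years_away) blows up at
    zero, so this sketch adds 1 to keep a present-day consequence at its
    full value.
    """
    return raw_value / (1 + years_away)

# Example: 10 units of good 20 years from now is worth 10 / 21, about
# 0.48 units, which is less than 1 unit of good done immediately, so
# "accrue power now, do good later" plans don't automatically dominate.
```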
And now the value list:
- Happiness - good behavior aims to alleviate suffering.
- Peace - good behavior aims not to affect others without their consent, provided they afford others the same option.
Argument for this being a prime value
Longer exploration
- Agency - good behavior aims to give people the ability to take meaningful actions. (I call this agency instead of freedom because to many people, particularly libertarians, *freedom* means something closer to *peace* than to agency.)
- Truth - good behavior aims to inform people.
Note that all of these values involve a subject - someone whose suffering is alleviated, and so on. The following rules affect the priority of a subject:
- Merit - the admirable are more important than the ignoble.
- Equality - the unfortunate are more important than the fortunate.
- Fairness - benefactors are more important than freeloaders. (It's more ignoble to not help someone who's helped you in the past than to not help someone you've helped in the past.)