either you can write an axiom that says “sentient beings should always consent to anything that is done to them” or you can write an axiom that says “you should always do what will bring about the most happiness or the least distress”
those axioms are in conflict with one another. it’s not that there’s only bad choices. it’s that you’ve given yourself conflicting standards.
Neither of those is an axiom I hold. The axiom “all sentient beings are morally relevant” does not specify how to proceed from there, and I am not convinced that any one ethical framework is “the one”. There are some things that all the frameworks I’m aware of converge on from a sentientist perspective, but there are also weird cases where they don’t converge, like whether to euthanize stray animals.
détente