Is it ethically justifiable to use others solely as a means to an end in pursuit of other moral objectives?
When an action generates social benefits that surpass its social costs, is one morally obligated to undertake that action?
At what point do our duties of beneficence outweigh our duties of non-maleficence? Does such a point exist?
Should you enter the experience machine? Should everyone enter the experience machine?
Is an outcome where everyone is wireheaded optimal?
If we must choose between maintaining the status quo and a lottery with greater expected utility, but one that creates some possibility of ending everything, what choice should we make?
Should animals with higher cognitive abilities receive greater moral consideration than those with less sophisticated mental capacities?
How specific does the categorical imperative get?
Is it warranted to scrutinize individuals for what may be deemed "immoral preferences"?
Should consequentialists employ utilitarian calculations in their everyday decisions?
If respecting the preferences of the unconscious leads to less happiness overall but greater preference satisfaction, should we still respect those preferences?
If someone has mostly good intentions but does mostly bad things, are they a good person?
If someone has mostly bad intentions but does mostly good things, are they a good person?
How should incentives play into the previous two scenarios?
If someone has mostly good intentions and does mostly good things, is that still enough to make them a good person?
How does infinite ethics impact our moral landscape?
Should our moral intuitions guide our ethical theories or should our ethical theories guide our moral intuitions?
If two competing ethical frameworks yield similar decisions the majority of the time, can they be considered substantially distinct in practice?
If moral nihilism were true, what would you do?
Assuming you experience every moment of consciousness from inception to conclusion, what should you do?