> I’m a utilitarian if you couldn’t tell.
oh my. how do you deal with the fact that the future is unknowable, and so the morality of all actions is unknowable too?
I account for that, obviously. Expected value is a good approximation.
to be clear, you acknowledge that you can’t know which actions are moral under your system, but you still rely on it to choose moral actions?
There’s always uncertainty, yes. I suppose other moral systems claim they’re infallible, but those people are just kidding themselves.
a deontological system places the morality in the action itself, so you know before you act whether it’s the right thing to do. consequentialist systems make the morality of the action depend on results that lie in the future.
what if we need trump to be elected in order to escape earth before the sun goes nova? it’s an unknowable proposition, but are you willing to risk all of humanity on voting for biden?
If you can convince me voting for Trump has greater expected value, then I’ll do it, but absurd possibilities like the one you describe usually come with an exact inverse that cancels out their expected value.
Should I let that butterfly flap its wings? What if it causes a tornado somewhere?! Or what if its not flapping causes a tornado somewhere?! Both are equally plausible, so there’s no point in choosing my actions based on them.
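The cancellation claim here is just expected-value arithmetic. A minimal sketch in Python, with made-up numbers for the probability and the stakes (nothing below is a claim about actual election odds):

```python
# Minimal sketch of the cancellation argument, with placeholder numbers.
# A far-fetched scenario and its equally plausible inverse contribute
# +p*V and -p*V to the expected value, which cancel exactly.

p = 1e-12   # probability of the far-fetched scenario (made up)
V = 1e10    # utility of humanity surviving, in arbitrary units (made up)

ev_from_scenario = p * V      # "this vote saves humanity"
ev_from_inverse = p * (-V)    # "this vote dooms humanity", equally plausible

print(ev_from_scenario + ev_from_inverse)  # 0.0 -- the absurd scenarios cancel
```

Any tilt favoring one scenario over its inverse would have to come from evidence, which is exactly what these thought experiments lack.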
I think you understand the problem of the unknowability of our actions’ effects, and consequently how absurd it is to use those effects as the basis of our morality.
I’m not trying to get you to vote for trump, I’m trying to get you to choose a useful moral framework.
This is useful though. Pretending there’s no uncertainty is just kidding yourself.
the uncertainty shifts within the framework from whether my actions will have a good outcome to whether i know which actions are moral. i suppose it’s possible that i might not know, but the categorical imperative is pretty easy to apply, so my confidence is much higher than i imagine is possible within a utilitarian frame, where you are totally dependent on unknowable circumstances to determine the morality of even past actions.