Cross-posted to the EA Forum The future might be very big, and we might be able to do a lot, right now, to shape it. You might have heard of a community of people who take this idea pretty seriously — effective altruists, or ‘EAs’. If you first heard of EA a few years ago, and haven’t really followed it since, then you might be pretty surprised at where we’ve ended up.
I was involved in EA around 2014, and a principle that was mentioned back then seems to have since disappeared from the EA lexicon: one sign of a good intervention is that it *dominates* another currently funded intervention. That is, for every dollar moved from the old intervention to the new one, every goal of that dollar is probabilistically improved. Moving funds from bednets to x-risk cannot dominate, because in at least some worlds the x-risk dollar is literally burned; it does nothing. I don't think this was just an epistemic principle but also a practical one: when trying to convince somebody to move their money from one place to another, assuming they are honest about their reasons for donating, demonstrating dominance is very *convincing*, in a way I've found modern EA has stopped caring so much about, instead pursuing like-minded whales or homegrown earners-to-give (no commentary on the wisdom of that now).
This principle seems way too restrictive to be useful beyond a few narrow applications.
Suppose you can roll a D6 or a D20 and receive the outcome in dollars. The D20 puts less probability mass on each of the outcomes 1 through 6, so it doesn't dominate the D6 outcome by outcome, but surely you'd rather roll the D20!
Even holding fixed the possible outcomes (1 through 6), suppose you can distribute the probability mass however you want in this game. Surely you place it all on 6, even at the cost of a lower probability of rolling 1 through 5!
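To make the arithmetic in the D6 vs. D20 example concrete, here is a quick sanity check in Python (a minimal sketch, not from the original discussion): the D20 assigns strictly less probability to every outcome the two dice share, so it fails the dominance test, yet its expected payout is three times higher.

```python
from fractions import Fraction

# Uniform distributions over die faces, as exact fractions.
d6 = {k: Fraction(1, 6) for k in range(1, 7)}
d20 = {k: Fraction(1, 20) for k in range(1, 21)}

def expected_value(die):
    """Expected payout in dollars when the face value is the payout."""
    return sum(face * prob for face, prob in die.items())

# The D20 puts LESS mass on every shared outcome 1..6,
# so switching from D6 to D20 is not a "dominating" move ...
assert all(d20[k] < d6[k] for k in range(1, 7))

# ... yet the mass it removes goes to strictly better outcomes,
# so its expected value is far higher: 10.5 vs 3.5.
print(expected_value(d6))   # 7/2
print(expected_value(d20))  # 21/2
```

The D20 does stochastically dominate the D6 (it is at least as likely to pay out at least $k, for every k), which is the weaker criterion the example suggests is the more useful one.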
Maybe when we were exploring low-hanging fruit in global health and development it was common to find the dominance you describe, but I struggle to imagine it being achievable at this point.