Discussion about this post

PSJ

I was involved in EA around 2014, and a principle that was mentioned back then seems to have disappeared from the EA lexicon since: one sign of a good intervention is that it dominates another currently funded intervention. That is, for every dollar, moving it from the old intervention to the new one probabilistically improves upon every goal that dollar was serving. Things like moving funds from bednets to x-risk cannot dominate, because in at least some worlds the x-risk dollar is literally burned; it does nothing. I don't think this was just an epistemic principle but also a practical one: when trying to convince somebody to move their money from one place to another, if they are honest about the reasons they are donating, demonstrating dominance is very *convincing* in a way I've found modern EA has stopped caring so much about, instead pursuing like-minded whales or homegrown earners-to-give (no commentary on the wisdom of that now).
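A minimal sketch of the dominance test this comment describes, assuming a toy state-space model; the states, payoff numbers, and interventions are illustrative stand-ins, not figures from the comment:

```python
def dominates(a, b):
    """Return True if intervention `a` is at least as good as `b` in every
    possible world, and strictly better in at least one (state-wise dominance)."""
    assert a.keys() == b.keys()
    at_least_as_good = all(a[s] >= b[s] for s in a)
    strictly_better = any(a[s] > b[s] for s in a)
    return at_least_as_good and strictly_better

# Hypothetical value of one marginal dollar under each intervention, per world.
bednets = {"x-risk never materialises": 1.0, "x-risk materialises": 1.0}
xrisk   = {"x-risk never materialises": 0.0, "x-risk materialises": 5.0}

print(dominates(xrisk, bednets))  # False: in one world the x-risk dollar does nothing
print(dominates(bednets, xrisk))  # False: neither intervention dominates the other
```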

JC
Mar 12 (edited)

Wow, that paper in footnote 5 is amazing! Fanaticism (in the strong sense used in that paper, where you should accept Pascal's-Mugging-like scenarios) seems clearly wrong, but they point out some bizarre consequences of denying it!

What do you make of this? How do you feel about extremely small probabilities of extremely high-value events (like a 10^-100 chance of 3↑↑↑↑3 blissful lives)? Should we put all EA resources towards those types of scenarios?
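A toy illustration of the fanaticism worry, assuming a naive expected-value decision rule and a stand-in finite payoff (3↑↑↑↑3 itself is far too large to represent), with all numbers hypothetical:

```python
from fractions import Fraction

p_tiny   = Fraction(1, 10**100)  # probability of the speculative, astronomically good outcome
v_huge   = Fraction(10**200)     # stand-in for the astronomically large payoff
v_safe   = Fraction(1)           # certain, modest value of the conventional option

ev_speculative = p_tiny * v_huge  # = 10^100
ev_safe        = v_safe           # = 1

# Under pure expected value, the speculative bet wins by a factor of 10^100,
# which is the "fanatical" conclusion the comment finds clearly wrong.
print(ev_speculative > ev_safe)   # True
```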
