One problem with professional altruism is that the altruists take their cut, and of course they can't be expected to do their Very Important Work from some strip mall.
Gotta have that mahogany, too. :)
Would it not save many words to say that EA is about the power to direct charitable giving to recipients selected by those who control the algorithm? Ceding control to our "betters" surrenders our own agency. I will think for myself, thank you.
Sure, but who wants to save words? :) If it were just self-interested charlatans using charity for personal enrichment, it would be more manageable. There are plenty of earnest do-gooders involved, as well, and they're probably harder to rein in. Always remember Hanlon's Razor: "Never attribute to malice that which can be explained by stupidity."
It's been a consistent source of amazement to me how scholasticism has enjoyed such a renaissance among our smartest people. I imagine it stems from the huge shift of their career aspirations from science and engineering (the "cool" career fields of the 40s through the 80s) to computer programming. If your view of reality is framed by experience steeped in computer programming, it seems much more likely that you would find it plausible that one can simply reason one's way to the truth, in any area, and that there is no need to elaborately privilege observation and experience, the way those fuddy-duddy fraidy-cat empiricists of the mid-20th century might've.
I mean, that's what's great about programming a computer: it *is* remorselessly logical, and entirely the work of the human mind, so you absolutely *can* reason your way to the solution of any problem (in computer programming) whatsoever. There is never a point where you have to throw up your hands and just say, welp, we gotta *measure* that; no amount of theorizing will help.
Still...the willingness to think that pure theory can solve (or even mostly solve) social problems on a vast scale is an amazing act of pure faith, fully equivalent to a medieval monk embracing the Nicene Creed.
I have also found that those whose careers are in programming and coding believe they can model their way to the solution of any problem involving humans. And when their solution/model does not work, it seems to be the fault of the messy people and not the model. Frustration then ensues.
Life is an analog experience. It’s messy. That’s the beauty of it.
"Life is an analog experience. It’s messy. That’s the beauty of it." ... ... Two beautiful quotes in a row. (See "Nicene Creed" above)
A good example of excessive faith in computer models is the failure of the various Value at Risk (VaR) models to predict the bursting of the housing bubble in 2008. Faulty assumptions in, faulty predictions out.
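To make the "faulty assumptions in" point concrete, here is a minimal Python sketch with entirely made-up numbers (not any bank's actual model): a parametric 99% VaR calibrated on a calm window looks reassuring right up until volatility regime-shifts.

```python
# Minimal sketch, hypothetical numbers: a parametric VaR whose key assumption
# (the future resembles the calibration window) fails when the regime changes.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Calm period the model is calibrated on (hypothetical daily returns).
calm = rng.normal(loc=0.0003, scale=0.008, size=500)

# Parametric 99% one-day VaR, assuming the future behaves like this sample.
confidence = 0.99
var_99 = -(calm.mean() + calm.std() * norm.ppf(1 - confidence))

# The bubble bursts: volatility triples and the drift turns negative.
crisis = rng.normal(loc=-0.002, scale=0.024, size=500)

breach_rate = np.mean(-crisis > var_99)
print(f"Model expects losses beyond VaR on {1 - confidence:.0%} of days")
print(f"In the new regime they occur on roughly {breach_rate:.0%} of days")
```

The arithmetic itself is fine; what fails is the assumption that tomorrow will look like the window the model was fitted on.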
Yup! Dangers of extrapolation.
"Still...the willingness to think that pure theory can solve (or even mostly solve) social problems on a vast scale is an amazing act of pure faith, fully equivalent to a medieval monk embracing the Nicene Creed." ... ... Fantastic analogy.
There are things we must do cooperatively: defend the border, keep civil peace, enforce contracts, preserve premises of free markets. That is called "government."
Once that core is funded, we may add a humane social safety net for the deserving needy. It is the option price we pay to preserve skills, true diversity, and social cohesion.
For those limited, enumerated tasks, we set a hard cap on what we let "government" take from the economy, all-in. That means cash taxes, PLUS unfunded mandates, borrowing (a tax on the unborn), the costs of all laws, regulations, and rules, the burden of compliance, and all the rest.
Considered correctly, most "EA" must be in private hands, NOT part of the regrettably necessary grab by "government." When politicians can redistribute, they will always abuse it to buy votes and pay off cronies. No exceptions, ever.
Problem is that the borderland between government and private sector gets messy in EA. Ostensibly, they're private, but then you have cases like Sam Bankman-Fried, bankrolling politics while directing other people's charity to his favored causes--which often happen to coincide with the political causes. And then those politicians use the levers of government to favor the charities preferred by those funding the political side. The question becomes whether your "No exceptions, ever" has any substantial political support among those who actually decide such things.
Dealing with totalitarians (or their heedless followers): when the other side starts by screaming "ALL!", the correct opening bid is "Nothing!" Then you carry on.
So too with "no exceptions."
The great folly of normal folks is to assume sincerity on the other side.
Likewise, recall the story of the farmer and the mule.
The mule falling in the well? If so, I had to Google it. Somehow, I hadn't heard it before. Very nice story. It brings to mind some stories from my own family. I shall use it in one of my pieces. Thanks!
I have an adult child who has just taken a job at a leading Effective Altruism organization. Empathy is not the problem. She is so loaded with empathy that when a Bernie Sanders comes along and says "let's just have a program or twenty that helps these poor suffering people," she jumps at the chance to relieve human suffering by acts of political will and central planning, because she cannot bear to witness the suffering and do nothing. Sometimes the central planners get it right: e.g., if the government of a poor small country decides to eradicate persistent polio with universal vaccination, it's likely to accomplish a lot of good at small relative cost.
But she and I have had profound arguments over my view that most human well-being is best accomplished through the decentralized organization of a free economy. The problem (in my opinion) is not her empathy; it's her view that experts can divine what's best for all. I recall one fraught election season when she said people like her (graduate degree in the social sciences) were more qualified to pick leaders than people like me (largely self-taught software engineer).
Some analyst in Oxford may correctly calculate that the most cost-effective way to reduce preventable deaths in some African country is to provide mosquito netting to prevent malaria; but in that country is a village whose residents have to walk an hour for water, and they know that drilling a well would benefit them more than mosquito netting. The EA central planner is not offering them a choice among price signals to optimize their own welfare.
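For what it's worth, the analyst's ranking is easy to sketch. The numbers below are purely hypothetical, but they show where the villagers' own preferences enter the calculation: nowhere.

```python
# Minimal sketch of a central cost-effectiveness ranking, with purely
# hypothetical costs and effect estimates. The metric is cost per death
# averted; the hours spent hauling water appear nowhere in it.
interventions = {
    # name: (total cost in dollars, estimated deaths averted)
    "mosquito nets": (50_000, 20),
    "borehole well": (50_000, 5),
}

ranked = sorted(interventions.items(),
                key=lambda kv: kv[1][0] / kv[1][1])  # cost per death averted

for name, (cost, deaths) in ranked:
    print(f"{name}: ${cost / deaths:,.0f} per death averted")
# The planner funds the top of the list; the village never gets to say it
# would trade some mortality benefit for not walking an hour for water.
```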
Fantastic post, thanks. A couple of points:
That overwhelming desire to relieve suffering sounds to me more like sympathy than empathy--two radically different concepts. Using your example, someone with sympathy might, from the goodness of their heart, say, "These poor villagers are dying of malaria. The PhDs at GiveWell tell me that this is their biggest problem, so let's get them some netting!" Someone with empathy might say, "Malaria is a problem for this village, but I notice that they're also rather weary from hauling water. Perhaps I should ask them what bothers them most." Sympathy entails a fortunate looking down upon an unfortunate. With empathy, the well-meaning individual sees the unfortunate as an equal with agency as valid as his own.
Your conversation with your daughter reminds me of a lunch I once had. My companions were an American bank officer with a graduate degree from Oxford or Cambridge and his British boss, who I'm not sure had a college degree at all. The boss was brilliant and savvy and explosively personable. The American was pompous, devoid of people skills, and a walking disaster area. In the course of our lunch, the American prattled on about someone's utopian proposal to give different quantities of votes to different citizens, according to their level of educational attainment. His boss started smiling and glared at me side-eyed. I said, "Well, I agree with that proposal, but I'm just not sure that all the PhDs will be willing to have zero votes." His boss started laughing uncontrollably, but silently. The comment went completely over the head of the American, whom I was ridiculing.
The wisdom of crowds doesn't obviously apply where every individual guess is expensive. Most individual guesses deviate from the optimum (assuming there is such a thing) to a greater or lesser degree. It's surely at least worth trying to do better when lives are at stake?
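The statistical intuition is easy to show with a toy simulation (made-up numbers): independent errors cancel in the average, so the crowd's mean guess beats the typical individual's. But that benefit accrues to a cheap aggregate estimate; it's no comfort when each "guess" is itself an expensive intervention whose error is the cost.

```python
# Toy illustration of the wisdom-of-crowds effect, hypothetical numbers:
# many independent, noisy guesses at an unknown true value.
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0
guesses = true_value + rng.normal(scale=20.0, size=1_000)

typical_individual_error = np.mean(np.abs(guesses - true_value))
crowd_error = abs(guesses.mean() - true_value)

print(f"Typical individual error: {typical_individual_error:.1f}")
print(f"Error of the crowd's average guess: {crowd_error:.1f}")
```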
Certainly there is value in different groups trying a plurality of approaches rather than central planning. But then I'm glad someone is bothering to pay attention to what worked in retrospect and trying to learn lessons in a somewhat systematic way.
I've never really been on board with the "earn to give" argument, though. It feels more like a post-rationalisation for amoral career paths to me.
I'm one of those socially-stunted software engineers. Not because it's the best way to be a good altruist, but because I enjoy it and have an aptitude for it. I'm probably too selfish to choose my career based on a calculation of maximum altruism. I wouldn't want to do a job I enjoyed less so I could give more money away. On the other extreme, I'm sure there would be lots of opportunities for personal growth if I went and built a school or something. But I'd be a fabulously unproductive school-builder.
I want to do as much good as I can with my giving, and I really don't know how. Being guided by experts examining the evidence is surely a sensible approach? For all the difficulties of measurement and prediction, I strongly suspect there is an opportunity to help more people by gathering and studying the evidence on what works well. The approach saves lives in medicine, and over time I really believe it will save lives in charity and public policy too.
This is a long, superb, thoughtful response. So much so that I intend to quote it in my next essay (probably today). As I said in my piece--perhaps not adamantly or often enough--"Both the goal and the mathematical approach of EA appeal to me as an economist, but I’m always aware of economists’ chronic overconfidence in the ability of mathematical tools, created and operated by a clerisy of 'experts,' to optimize over complex human behavior." My concern is more with the specifics of implementation than it is with the concept itself.