About
This is a blog on artificial intelligence and "Social Science++", with an emphasis on computation and statistics. My website is brenocon.com.
Indicators of a crackpot paper
From Scott Aaronson: Ten Signs a Claimed Mathematical Breakthrough is Wrong.
I’ve often wondered how to decide whether a paper or book is worth my time to read. People who like a certain paper or book can always tell me I shouldn’t judge it until I’ve read it. But I need some estimate of its quality, creativity, and interestingness before I start. I’ve decided to be biased towards shorter things and better-known things. I don’t see any good alternative, unfortunately.
What is experimental philosophy?
Suppose the chairman of a company has to decide whether to adopt a new program. It would increase profits and help the environment too. “I don’t care at all about helping the environment,” the chairman says. “I just want to make as much profit as I can. Let’s start the new program.” Would you say that the chairman intended to help the environment?

O.K., same circumstance. Except this time the program would harm the environment. The chairman, who still couldn’t care less about the environment, authorizes the program in order to get those profits. As expected, the bottom line goes up, the environment goes down. Would you say the chairman harmed the environment intentionally?
In one survey, only 23 percent of people said that the chairman in the first situation had intentionally helped the environment. When they had to think about the second situation, though, fully 82 percent thought that the chairman had intentionally harmed the environment. There’s plenty to be said about these interestingly asymmetrical results.
…
It’s part of a recent movement known as “experimental philosophy.”
This is pretty interesting. I’m wondering why it’s “philosophy,” though. Isn’t this just experimental psychology, applied to topics of intention and theory of mind? And if you want to do it, wouldn’t a psych program be better training for reading fMRI papers and designing experiments? But maybe a philosophy degree makes you smarter. (That’s how I understood Richard Rorty’s great review of Marc Hauser’s book on moral psychology.)
Here’s a good overview of a variety of work in the field; here are some more thoughts on what “x-phi” is. I suspect it’s peculiar to analytic philosophy, which has embroiled itself in all sorts of topics that rely heavily on appeals to intuition but where empiricism might work a bit better. (E.g., any actual improvements in cognitive science should make philosophy of mind less important.)
Data-driven charity
Some ex-hedge-fund analysts recently started a non-profit devoted to evaluating the effectiveness of hundreds of charities, and apparently they have been making waves (NYT). A few interesting reports have been posted on their website, givewell.net — they recommend the charities where donors’ money goes most efficiently toward saving lives or helping the disadvantaged.
(Does anyone else have interesting data on charity effectiveness? I’ve heard that evaluations are the big thing in philanthropy world now, and certainly the Gates Foundation talks a lot about it.)
Obviously this sort of evaluation is tricky, but it has to be the right approach. The NYT article makes them sound a bit arrogant, which is too bad; on the other hand, anyone who claims to have better empirical information than the established wisdom will always end up in that dynamic. (OK, so I love young smart people who come up with better results than a conservative, close-minded establishment. Or at least I’m a sucker for that story.)
This particular methodological criticism (from the article) struck me as odd:
“I think in general it’s a good thing,” said Thomas Tighe, president and chief executive of Direct Relief International, an agency that GiveWell evaluated but did not recommend. Like others in the field, however, Mr. Tighe has reservations about GiveWell’s method, saying it tends to be less a true measure of a charity’s effectiveness than simply a gauge of the charity’s ability to provide data on that effectiveness.
I think it’s fine to penalize an organization for failing to provide data on its effectiveness. Isn’t the burden of proof on them, to show that they’re actually doing something useful? I guess it comes down to whether you believe empirical evaluation is necessary for organizational effectiveness. I believe this wholeheartedly.
The GiveWell people have an interesting argument that altruistic actions have a particularly poor feedback loop, which kills learning/optimization; therefore, you need to undertake explicit evaluative efforts. From their blog:
Now imagine an activity that consists of investing without looking at your results. In other words, you buy a stock, but you never check whether the stock makes money or loses money. You never read the news about whether the company does well or does poorly. How much would you value someone with this sort of experience – buying and selling stocks without ever checking up on how they do? Because that’s what “experience in philanthropy” (or workforce development, or education) comes down to, if unaccompanied by outcomes evaluation.
The peculiar thing about philanthropy is that because you’re trying to help someone else – not yourself – you need the big expensive study, or else you literally have no way of knowing whether what you did worked. And so, no way of learning from experience.
I really like this point — which is easier to notice, that you’re bankrupt or that someone else is? That your own business is doing well/badly, or that your beneficiaries are doing well/badly? Self-regarding actions get automatic evaluation but altruistic actions don’t, presumably because, even if we care enough to give to others, we do not care enough to expend energy evaluating their outcomes down the line. But we really care about our own personal outcomes. Yet another example of human preferences being more selfish than altruistic; well, what’s new?
Race and IQ debate – links
William Saletan, a writer for Slate, recently wrote a loud series of articles on genetic racial differences in IQ in the wake of James Watson’s controversial remarks. It prompted lots of discussion; here is an excellent response from Richard Nisbett, a leading authority in the field on the environmentalist side of the debate.
More academic articles: Rushton and Jensen’s 2005 review of the evidence for genetic differences; and, the most balanced I’ve found so far, the 1995 APA report Intelligence: Knowns and Unknowns, which concludes that, for all the heated claims out there, the scientific evidence is pretty weak.
Blog world: Funny title from Brad DeLong; and another Slate response to Saletan and Rushton/Jensen.
The politics of the race and intelligence question is a huge distraction from trying to find out the actual truth of the matter. But I suppose the political implications are why it attracts so much attention — for good or bad.
The most interesting thing I learned about is the Flynn effect: IQs as measured by standardized tests have been rising consistently, in populations throughout the world, ever since IQ tests were invented. This implies that non-genetic determinants — education and other environmental factors, perhaps — can have a very large effect on intelligence. Here is a good overview from Ulric Neisser, the lead author of the APA report.
How did Freud become a respected humanist?!
Freud Is Widely Taught at Universities, Except in the Psychology Department:
PSYCHOANALYSIS and its ideas about the unconscious mind have spread to every nook and cranny of the culture from Salinger to “South Park,” from Fellini to foreign policy. Yet if you want to learn about psychoanalysis at the nation’s top universities, one of the last places to look may be the psychology department.
A new report by the American Psychoanalytic Association has found that while psychoanalysis — or what purports to be psychoanalysis — is alive and well in literature, film, history and just about every other subject in the humanities, psychology departments and textbooks treat it as “desiccated and dead,” a historical artifact instead of “an ongoing movement and a living, evolving process.”
I’ve been wondering about this for a while, ever since I heard someone describe Freud as “one of the greatest humanists who ever lived.” I’m pretty sure he didn’t think of himself that way. If you’re a crappy scientist but a decent writer, does that mean you get to be reincarnated as a humanist? To my mind this doesn’t bode well for the humanists, or for any would-be new Freuds.
The article duly notes that psychoanalysis as it lives in humanities academia is completely different from clinical psychoanalysis, which is now discredited for its lack of empirical grounding. I guess outside of psych departments that’s not an obstacle; hence psychoanalysis for gender studies and the like.
Some of the sentiments expressed in the article really irritate me though, like:
“Some of the most important things in human life are just not measurable,” he said, like happiness or genuine religious feeling.
Give me a break. There are good measurements of subjective happiness, and the field has gone far enough to start studying their relation to welfare economics and policy implications. Sure, some of the brain work is at a pretty early stage, but measuring these things — and pragmatically using this knowledge in the real world! — can be done.
Freud, though, is particularly useful for gaining insights into questions of human existence. “There will be the discovery of problems that the standard ways don’t address,” he said, and then “there will be a swing back to Freud.”
I’ll be waiting.
Actually that 2008 elections voter fMRI study is batshit insane (and sleazy too)
A much more slashing commentary from Slate:
An op-ed from Sunday’s New York Times, “This Is Your Brain on Politics,” proposes to answer what must be the most vexing question of modern American politics: What’s going on inside the head of a swing voter? The authors—a team of neuroscientists and political consultants—ran 20 of these undecided volunteers through a brain scanner and showed them pictures and video of the major candidates from both parties. The results, laid out both in print and an online slide show, purport to give us some insight as to how the upcoming primaries will play out: “Mitt Romney may have some potential,” the researchers conclude, and Hillary Clinton seems to have an edge at winning over her opponents.

Don’t believe a word of it. To liken these neurological pundits to snake-oil salesmen would be far too generous. Their imaging study has not been published in any science journal, nor has it been vetted by experts in the field; it can’t rightly be called an “experiment,” since the authors weren’t testing any particular hypothesis; and the arbitrary conclusions they draw from the data aren’t even consistent with their own previous research.
And they’re funded by a sleazy neuromarketing consultancy that convinces Fortune 500 companies they need brain-scan focus groups! Its own employee writes glowing New York Times op-eds about its work without disclosing the connection. And the study itself is terrible.
The Slate article is well worth reading. It highlights all the classic mistakes in flaky cognitive neuroscience, like cooking up totally different psychological stories from the same brain data just to fit your desired hypothesis. 21st century phrenology, baby!
Conclusion: Slate 1, Times 0.
Pop cog neuro is so sigh
A good anti-pop-cognitive-neuroscience rant on Language Log:
In closing, there is a larger issue here, beyond the validity of a specific study of voter psychology. A number of different commercial ventures, from neuromarketing to brain-based lie detection, are banking on the scientific aura of brain imaging to bring them customers, in addition to whatever real information the imaging conveys. The fact that the UCLA study involved brain imaging will garner it more attention, and possibly more credibility among the general public, than if it had used only behavioral measures like questionnaires or people’s facial expressions as they watched the candidates. Because brain imaging is a more high tech approach, it also seems more “scientific” and perhaps even more “objective.” Of course, these last two terms do not necessarily apply. Depending on the way the output of UCLA’s multimillion dollar 3-Tesla scanner is interpreted, the result may be objective and scientific, or of no more value than tea leaves.
Fightin’ the good fight. Maybe it’s hopeless. Perhaps “it’s hard to avoid the inexorable rise of cognitive neuroscience as the dominant discourse of the next decade.” Sigh. Doing lots of statistical analysis of human behavior just seems like a better use of time to me.
Authoritarian great power capitalism
Before I forget — a while back I read a terrific Foreign Affairs article, The Return of Authoritarian Great Powers. The argument: just a century or so ago, states based on authoritarian capitalism were very powerful in the world; e.g., imperial Japan and Germany. They got plenty of the economic benefits of capitalism but not so much of the democratizing effects people like to talk about today. (And there are interesting points that the failure of fascism in the Second World War was contingent, not inherent to the ideology.) The author argues this looks like the future: Russia and China are becoming economically strong world powers while keeping solidly non-democratic forms of governance. The period of liberal democracy we live in, with all its overhyped speculation about the inevitable spread of democracy and free-market capitalism — say, an “end of history” — might be just that: a moment caused by the vagaries of 20th-century history.
After I read the article last June, I actually saw Mr. End of History himself, Francis Fukuyama, speak at the good ol’ Long Now seminar series. He pointed out several challenges to liberal democracy, admitting:
China and Russia will be a test of his thesis, Fukuyama said. They are getting wealthier. If they democratize in the next twenty years, he’s right. If they remain authoritarian, he’s wrong.
But that only framed it as a test; it didn’t address the particular point that authoritarian capitalism could be efficient and powerful enough to beat out liberal democracy’s hegemony. Maybe that threat is secondary to clashes of civilizations or environmental catastrophe, but it seems something’s there.
neo-institutional economic fun!
[Like medieval Christian societies,] Islamic societies similarly found ingenious ways to circumvent the usury ban. The primary one was the double sale. In this transaction, the borrower would get, for example, both 100 dinars cash and a small piece of cloth valued at the absurdly high price of 15 dinars. In a year he would have to pay back 100 dinars for the loan of the cash and 15 for the cloth. These debts were upheld by Sharia courts.
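Spelled out (my arithmetic, and it assumes the cloth’s real value is negligible): the borrower nets 100 dinars up front and repays 115 a year later, so the disguised annual interest rate is

$$ r = \frac{115 - 100}{100} = 15\%. $$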
The cognition of rules and ethics sure is complex. I’d love to read the reasoning from those courts.
From Gregory Clark’s rather intense review of Avner Greif’s new-ish book on institutional economics. (Clark’s the one who wrote that interesting but weird evolutionary argument about European economic development.) If you’re into the little worlds of institutional, evolutionary, and behavioral economics, Greif’s work is really interesting. Along with other institutional (or is it neo-institutional?) economists, he works to understand the functioning of markets, contracts, and laws as stable equilibria of games — that is, how individual incentives and behaviors can lead to different organizational efficiencies and therefore economic growth.
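To make the equilibria idea concrete, here’s a minimal sketch in Python (a textbook infinitely repeated prisoner’s dilemma with grim-trigger punishment; my illustration, not anything from Greif’s or Clark’s work) of when cooperation is self-enforcing, i.e., a stable equilibrium:

```python
# A toy "institution as equilibrium": in an infinitely repeated
# prisoner's dilemma, the strategy "cooperate until anyone defects,
# then punish forever" (grim trigger) sustains cooperation only if
# players are patient enough. Payoffs: T (temptation) > R (reward) >
# P (punishment); delta is the per-period discount factor.

def cooperation_is_self_enforcing(T, R, P, delta):
    """Defecting once pays T today but P every period afterward;
    cooperating pays R every period. Comparing discounted sums,
        R / (1 - delta) >= T + delta * P / (1 - delta)
    simplifies to the threshold below."""
    return delta >= (T - R) / (T - P)

# With the classic payoffs T=5, R=3, P=1, the threshold is delta = 0.5.
for delta in (0.3, 0.5, 0.8):
    verdict = "holds" if cooperation_is_self_enforcing(5, 3, 1, delta) else "unravels"
    print(f"delta={delta}: cooperation {verdict}")
```

The point of the toy version: whether the cooperative “institution” holds depends not on any written rule but on whether incentives make honoring it a best response. Greif builds far more realistic versions of exactly that kind of threshold result.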
Greif’s book expands the notion of institutions as systems “of rules, beliefs, norms, and organizations that together generate a regularity of (social) behavior.” He wants to bring in cognition! Not be strictly bound to individualistic rational choice! And he even does nifty mathematical backflips to put an interesting spin on various models of cooperation, contract enforcement, and the like. It’s all very exciting but, as Clark (convincingly) points out, highly theoretical, a bit vague, and difficult to test. I read Greif a few years back, when I was a lot more willing to work through pages of equations just for their own sake — I guess you can’t keep that up forever.
In any case, economists wanting to do something with cognition is great, but failing to actually do so is unfortunately common. (You can view the entire field of behavioral economics that way — they went ahead and modeled lots of interesting systemic effects (the biases of heuristics & biases), but the substantive bases aren’t necessarily there. On the other hand, it’s not clear that anyone has good substantive explanations for any human decision-making or social behavior.) I wonder if there’s any hope of usefully understanding social behavior as the interaction of cognitive agents. Things might just be too complex.