Don’t have time to read much right now, but received word about a neat-looking paper: Uncomputability: The Problem of Induction Internalized by Kevin Kelly.
Kevin Kelly’s website has an awesome statement that mirrors thoughts I’ve been having for the last few years about the incredible importance of applying computational constraints to reasoning and rationality:
Kuhn teaches that a single, deep success suffices to keep a competing paradigm on the table. Not surprisingly, computational learning theory shows its superiority over ideal theories of rationality when we trade in our ideal agents for more realistic, computable agents. The foundation of the deep success is a strong structural analogy between the halting problem and the problem of inductive generalization, allowing for a unified treatment of both, from the ground up. One consequence of the approach is that one can often show that computable agents are forced to choose between ideal rationality and finding the right answer. I say “so much the worse for ideal rationality”. Another is that there are learning problems that cannot be solved by computational means unless the Humean barrier between theorem proving and the external, empirical data is torn down.
Right on.
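For the curious, here’s a minimal Python sketch of the analogy Kelly points to, in my own words rather than his formalism: refuting a universal generalization “in the limit” has the same shape as detecting whether a program halts. In both cases a conjecture that is allowed to change its mind finitely often will converge to the right answer, but no finite observation ever certifies it. The function names and the toy hypotheses are mine, purely for illustration.

```python
# A minimal sketch (my own illustration, not Kelly's formalism) of the
# structural analogy between inductive generalization and the halting
# problem: both can be decided "in the limit" -- by a conjecture that may
# change its mind finitely often -- but never with certainty at any
# finite stage.

from typing import Iterable, Iterator


def limiting_generalizer(observations: Iterable[int]) -> Iterator[bool]:
    """Conjecture whether 'every observation is non-negative' holds.

    Emits one guess per observation. If the hypothesis is false, a
    counterexample eventually flips the guess to False forever; if it is
    true, the guess stays True forever. The guesses converge either way,
    but no single guess comes with a guarantee.
    """
    refuted = False
    for x in observations:
        if x < 0:
            refuted = True
        yield not refuted


def limiting_halt_detector(program_trace: Iterable[str]) -> Iterator[bool]:
    """Conjecture whether a simulated program runs forever.

    program_trace yields 'running' events and, if the program halts, a
    final 'halted' event. The conjecture 'it never halts' is held until a
    halting event refutes it -- structurally the same move as the
    generalizer above.
    """
    halted = False
    for event in program_trace:
        if event == "halted":
            halted = True
        yield not halted


if __name__ == "__main__":
    # 'All observations are non-negative' is refuted at the fourth datum;
    # the conjectures converge to False.
    print(list(limiting_generalizer([3, 1, 4, -1, 5])))
    # -> [True, True, True, False, False]

    # A program that halts after three steps refutes 'it never halts'.
    print(list(limiting_halt_detector(["running", "running", "halted"])))
    # -> [True, True, False]
```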
To make this post worthwhile, here is an insightful Simpsons clip.
(thanks to Shawn)