Like everyone else, I’ve just started looking at the new, tentative proof that P != NP from Vinay Deolalikar. After reading the intro, what’s most striking is that probabilistic graphical models and mathematical logic are at the core of the proof. This feels like a machine learning and artificial intelligence-centric approach to me, very different from what you usually see in mainstream CS theory. (Maybe I should feel good that in my undergrad I basically stopped studying normal math and spent all my time with this weird stuff instead!)

He devotes several chapters to an introduction to graphical models — Ising models, conditional independence, MRFs, Hammersley-Clifford, and all that other stuff you see in Koller and Friedman or something — and then logic and model theory! I’m impressed.
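To give a flavor of the machinery he reviews, here is a toy sketch of a 1-D Ising model with Gibbs sampling. This is purely my own illustration (the function names, the 1-D chain, and the parameters `J`, `h`, `beta` are not from the paper); the point is just that, by the Markov property underlying Hammersley-Clifford, each spin’s conditional distribution depends only on its neighbors:

```python
import math
import random

def ising_energy(spins, J=1.0, h=0.0):
    """Energy of a 1-D Ising chain: E = -J * sum_i s_i s_{i+1} - h * sum_i s_i.
    Lower energy when neighboring spins agree (for J > 0)."""
    n = len(spins)
    interaction = sum(spins[i] * spins[i + 1] for i in range(n - 1))
    field = sum(spins)
    return -J * interaction - h * field

def gibbs_step(spins, i, J=1.0, h=0.0, beta=1.0, rng=random):
    """Resample spin i from its conditional distribution given the rest.
    The conditional only involves the adjacent spins -- this locality is
    exactly what the Markov random field structure buys you."""
    neighbor_sum = 0.0
    if i > 0:
        neighbor_sum += spins[i - 1]
    if i < len(spins) - 1:
        neighbor_sum += spins[i + 1]
    # P(s_i = +1 | neighbors) is a logistic function of the local field.
    delta = 2.0 * beta * (J * neighbor_sum + h)
    p_up = 1.0 / (1.0 + math.exp(-delta))
    spins[i] = 1 if rng.random() < p_up else -1
    return spins
```

Sweeping `gibbs_step` over all sites repeatedly gives samples from the Gibbs distribution proportional to `exp(-beta * E)`, which is the kind of statistical-physics object the proof’s graphical-model chapters are about.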

Updated version:

http://www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/pnp_updated.pdf

Thanks, I’ve updated the post. Fast-paced news…

I would actually phrase it as a statistical physics-centric approach — at least the part that uses probabilistic graphical models. ML/AI borrowed much of that (very useful) machinery from the physics community.

Right, statistical physics is the root of everything. I personally learned the material starting from machine learning and artificial intelligence courses. The borrowing happened 10–20 years ago, as far as I can tell, though that depends on what you count as a significant borrowing. I guess Hinton and Sejnowski’s early-1980s work on Boltzmann machines was already adopting this sort of thing, and that then became part of the neural networks literature, which was cutting-edge machine learning at the time.

http://rjlipton.wordpress.com/2010/08/09/issues-in-the-proof-that-p%E2%89%A0np/

http://michaelnielsen.org/polymath1/index.php?title=Deolalikar%27s_P!%3DNP_paper

I found the attempt close in spirit to Mulmuley’s programme of attacking P vs. NP via algebraic geometry (an approach summarized last year by Lance Fortnow in CACM).

Graphical models find a natural generalization in certain toric algebraic varieties (e.g. Sturmfels et al.’s work on “algebraic statistics”), and sheaves are a natural setting for model theory.

Frankly, I can’t see how ML per se fits in here, but the general direction seems more plausible now that two independent perspectives, phrased in different languages, end up resembling each other.