<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: The $60,000 cat: deep belief networks make less sense for language than vision</title>
	<atom:link href="http://brenocon.com/blog/2012/07/the-60000-cat-deep-belief-networks-make-less-sense-for-language-than-vision/feed/" rel="self" type="application/rss+xml" />
	<link>http://brenocon.com/blog/2012/07/the-60000-cat-deep-belief-networks-make-less-sense-for-language-than-vision/</link>
	<description>cognition, language, social systems; statistics, visualization, computation</description>
	<lastBuildDate>Tue, 25 Nov 2025 13:11:20 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
	<item>
		<title>By: Dawen Liang</title>
		<link>http://brenocon.com/blog/2012/07/the-60000-cat-deep-belief-networks-make-less-sense-for-language-than-vision/#comment-186137</link>
		<dc:creator>Dawen Liang</dc:creator>
		<pubDate>Sat, 06 Oct 2012 05:11:38 +0000</pubDate>
		<guid isPermaLink="false">http://brenocon.com/blog/?p=1336#comment-186137</guid>
		<description><![CDATA[Just got the chance to read your blog. A lot of interesting stuff :) so I subscribed to your blog in my Google Reader -- under the category of ML, not Stat though (Andrew Gelman is there). If you object, I could possibly change that XD]]></description>
		<content:encoded><![CDATA[<p>Just got the chance to read your blog. A lot of interesting stuff :) so I subscribed to your blog in my Google Reader &#8212; under the category of ML, not Stat though (Andrew Gelman is there). If you object, I could possibly change that XD</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Xinfan Meng</title>
		<link>http://brenocon.com/blog/2012/07/the-60000-cat-deep-belief-networks-make-less-sense-for-language-than-vision/#comment-164148</link>
		<dc:creator>Xinfan Meng</dc:creator>
		<pubDate>Fri, 06 Jul 2012 01:17:18 +0000</pubDate>
		<guid isPermaLink="false">http://brenocon.com/blog/?p=1336#comment-164148</guid>
		<description><![CDATA[I can also recognize the frame in {cat, tree, rescue, fireman} immediately; I guess such things are pretty common now. I agree that directly drawing an analogy between CV and NLP might be dangerous; but you are drawing your own analogy (character ≈ pixel, etc.), right?]]></description>
		<content:encoded><![CDATA[<p>I can also recognize the frame in {cat, tree, rescue, fireman} immediately; I guess such things are pretty common now. I agree that directly drawing an analogy between CV and NLP might be dangerous; but you are drawing your own analogy (character ≈ pixel, etc.), right?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: nope</title>
		<link>http://brenocon.com/blog/2012/07/the-60000-cat-deep-belief-networks-make-less-sense-for-language-than-vision/#comment-164036</link>
		<dc:creator>nope</dc:creator>
		<pubDate>Thu, 05 Jul 2012 07:20:56 +0000</pubDate>
		<guid isPermaLink="false">http://brenocon.com/blog/?p=1336#comment-164036</guid>
		<description><![CDATA[The space of words is absurdly high-dimensional.  This paper deals with a mere 120,000 dimensions (200x200-pixel RGB images).  The number of words in English isn&#039;t really well defined, but it&#039;s certainly more than 120k.  And that&#039;s just to represent one word!  The space of sentences is exponentially larger.

Words aren&#039;t dimensionality reduction; they&#039;re just a particularly interpretable &quot;basis&quot; (really a frame, but that&#039;s not important).]]></description>
		<content:encoded><![CDATA[<p>The space of words is absurdly high-dimensional.  This paper deals with a mere 120,000 dimensions (200&#215;200-pixel RGB images).  The number of words in English isn&#8217;t really well defined, but it&#8217;s certainly more than 120k.  And that&#8217;s just to represent one word!  The space of sentences is exponentially larger.</p>
<p>Words aren&#8217;t dimensionality reduction; they&#8217;re just a particularly interpretable &#8220;basis&#8221; (really a frame, but that&#8217;s not important).</p>
]]></content:encoded>
	</item>
</channel>
</rss>
