<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: quick note: cer et al 2010</title>
	<atom:link href="https://brenocon.com/blog/2010/04/quick-note-cer-et-al-2010/feed/" rel="self" type="application/rss+xml" />
	<link>https://brenocon.com/blog/2010/04/quick-note-cer-et-al-2010/</link>
	<description>cognition, language, social systems; statistics, visualization, computation</description>
	<lastBuildDate>Tue, 25 Nov 2025 13:11:20 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
	<item>
		<title>By: brendano</title>
		<link>https://brenocon.com/blog/2010/04/quick-note-cer-et-al-2010/#comment-25010</link>
		<dc:creator>brendano</dc:creator>
		<pubDate>Thu, 15 Apr 2010 14:58:26 +0000</pubDate>
		<guid isPermaLink="false">http://anyall.org/blog/?p=812#comment-25010</guid>
		<description><![CDATA[But then you need a phrase-&gt;dependency extractor for every language.  They&#039;re always rule-based.  Maybe that&#039;s not too hard?  Comparing on CoNLL starts becoming more problematic, but maybe that&#039;s not the point.]]></description>
		<content:encoded><![CDATA[<p>But then you need a phrase->dependency extractor for every language.  They&#8217;re always rule-based.  Maybe that&#8217;s not too hard?  Comparing on CoNLL starts becoming more problematic, but maybe that&#8217;s not the point.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Daniel Cer</title>
		<link>https://brenocon.com/blog/2010/04/quick-note-cer-et-al-2010/#comment-24996</link>
		<dc:creator>Daniel Cer</dc:creator>
		<pubDate>Thu, 15 Apr 2010 06:02:52 +0000</pubDate>
		<guid isPermaLink="false">http://anyall.org/blog/?p=812#comment-24996</guid>
		<description><![CDATA[I suspect the results may reflect just how good we are at producing phrase structure parses for English. Rather than just focusing on other dependency formalisms, I think the more interesting question might be whether the results generalize to other languages.]]></description>
		<content:encoded><![CDATA[<p>I suspect the results may reflect just how good we are at producing phrase structure parses for English. Rather than just focusing on other dependency formalisms, I think the more interesting question might be whether the results generalize to other languages.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mihai</title>
		<link>https://brenocon.com/blog/2010/04/quick-note-cer-et-al-2010/#comment-24993</link>
		<dc:creator>Mihai</dc:creator>
		<pubDate>Thu, 15 Apr 2010 05:41:43 +0000</pubDate>
		<guid isPermaLink="false">http://anyall.org/blog/?p=812#comment-24993</guid>
		<description><![CDATA[Yes. Hence the gold POS tags in the experiment.]]></description>
		<content:encoded><![CDATA[<p>Yes. Hence the gold POS tags in the experiment.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: brendano</title>
		<link>https://brenocon.com/blog/2010/04/quick-note-cer-et-al-2010/#comment-24992</link>
		<dc:creator>brendano</dc:creator>
		<pubDate>Thu, 15 Apr 2010 05:07:55 +0000</pubDate>
		<guid isPermaLink="false">http://anyall.org/blog/?p=812#comment-24992</guid>
		<description><![CDATA[Cool.

Your paper only talks about the CoNLL experiment.  For the experiment shown on the webpage, where do the Stanford Dependencies data come from?  The SD extractor run on gold penn treebank parses?]]></description>
		<content:encoded><![CDATA[<p>Cool.</p>
<p>Your paper only talks about the CoNLL experiment.  For the experiment shown on the webpage, where do the Stanford Dependencies data come from?  The SD extractor run on gold penn treebank parses?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mihai</title>
		<link>https://brenocon.com/blog/2010/04/quick-note-cer-et-al-2010/#comment-24990</link>
		<dc:creator>Mihai</dc:creator>
		<pubDate>Thu, 15 Apr 2010 04:41:40 +0000</pubDate>
		<guid isPermaLink="false">http://anyall.org/blog/?p=812#comment-24990</guid>
		<description><![CDATA[This is a partial answer to your critique (and also a shameless pitch). Here (http://www.surdeanu.name/mihai/ensemble/) I compared dependency parser performance for Stanford dependencies and CoNLL-2008. The context is slightly different: I used only linear SVMs and I&#039;m interpolating several shift-reduce models. But I think the main observation holds: the performance of dependency parsers is very close on CoNLL-2008 and Stanford dependencies. 

I can&#039;t easily evaluate constituent parsers on CoNLL-2008 dependencies because the tokenization is different and the generation of syntactic dependencies for this corpus was not fully automated, but, extrapolating a bit, I don&#039;t think we would see numbers too different from those in the Cer et al. paper on CoNLL-2008.]]></description>
		<content:encoded><![CDATA[<p>This is a partial answer to your critique (and also a shameless pitch). Here (<a href="http://www.surdeanu.name/mihai/ensemble/" rel="nofollow">http://www.surdeanu.name/mihai/ensemble/</a>) I compared dependency parser performance for Stanford dependencies and CoNLL-2008. The context is slightly different: I used only linear SVMs and I&#8217;m interpolating several shift-reduce models. But I think the main observation holds: the performance of dependency parsers is very close on CoNLL-2008 and Stanford dependencies.</p>
<p>I can&#8217;t easily evaluate constituent parsers on CoNLL-2008 dependencies because the tokenization is different and the generation of syntactic dependencies for this corpus was not fully automated, but, extrapolating a bit, I don&#8217;t think we would see numbers too different from those in the Cer et al. paper on CoNLL-2008.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
