<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Log-normal and logistic-normal terminology</title>
	<atom:link href="http://brenocon.com/blog/2011/05/log-normal-and-logistic-normal-terminology/feed/" rel="self" type="application/rss+xml" />
	<link>http://brenocon.com/blog/2011/05/log-normal-and-logistic-normal-terminology/</link>
	<description>cognition, language, social systems; statistics, visualization, computation</description>
	<lastBuildDate>Tue, 25 Nov 2025 13:11:20 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
	<item>
		<title>By: Logistic Normal Distribution &#124; Homepage of Chao Jiang</title>
		<link>http://brenocon.com/blog/2011/05/log-normal-and-logistic-normal-terminology/#comment-1956536</link>
		<dc:creator>Logistic Normal Distribution &#124; Homepage of Chao Jiang</dc:creator>
		<pubDate>Mon, 16 Mar 2015 18:04:46 +0000</pubDate>
		<guid isPermaLink="false">http://brenocon.com/blog/?p=933#comment-1956536</guid>
		<description><![CDATA[[...] Following is a diagram from Brendan O&#8217;Connor. [...]]]></description>
		<content:encoded><![CDATA[<p>[...] Following is a diagram from Brendan O&#8217;Connor. [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Brendan O'Connor</title>
		<link>http://brenocon.com/blog/2011/05/log-normal-and-logistic-normal-terminology/#comment-258335</link>
		<dc:creator>Brendan O'Connor</dc:creator>
		<pubDate>Wed, 16 Jan 2013 16:49:38 +0000</pubDate>
		<guid isPermaLink="false">http://brenocon.com/blog/?p=933#comment-258335</guid>
		<description><![CDATA[Kai, great find.  I had no idea.  Thanks a lot!]]></description>
		<content:encoded><![CDATA[<p>Kai, great find.  I had no idea.  Thanks a lot!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Kai Brügge</title>
		<link>http://brenocon.com/blog/2011/05/log-normal-and-logistic-normal-terminology/#comment-258226</link>
		<dc:creator>Kai Brügge</dc:creator>
		<pubDate>Wed, 16 Jan 2013 11:48:40 +0000</pubDate>
		<guid isPermaLink="false">http://brenocon.com/blog/?p=933#comment-258226</guid>
		<description><![CDATA[Actually the logistic-normal distribution has a wikipedia page under the more intuitive name logit-normal distribution:
http://en.wikipedia.org/wiki/Logit-normal_distribution]]></description>
		<content:encoded><![CDATA[<p>Actually the logistic-normal distribution has a wikipedia page under the more intuitive name logit-normal distribution:<br />
<a href="http://en.wikipedia.org/wiki/Logit-normal_distribution" rel="nofollow">http://en.wikipedia.org/wiki/Logit-normal_distribution</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Bob Carpenter</title>
		<link>http://brenocon.com/blog/2011/05/log-normal-and-logistic-normal-terminology/#comment-65116</link>
		<dc:creator>Bob Carpenter</dc:creator>
		<pubDate>Thu, 26 May 2011 18:54:28 +0000</pubDate>
		<guid isPermaLink="false">http://brenocon.com/blog/?p=933#comment-65116</guid>
		<description><![CDATA[The real pain about these transforms is that you need to multiply the densities by the absolute value of the determinant of the Jacobian to properly normalize.   And the Jacobian needs to be full rank.  Luckily, in one dimension, that&#039;s just the absolute value of the derivative of the inverse transform (e.g., if it&#039;s a log transform, exp(x) is the inverse and so the absolute derivative is just exp(x)).  

This is killing me in the use of Hamiltonian Monte Carlo, in which every parameter needs to be unbounded to eliminate overstepping boundaries in the leapfrog steps (this is much easier in something like slice sampling for Gibbs sampling).   We need distributions like Beta(theta&#124;alpha,beta) transformed with the simplex parameter theta going to a K-1-dimensional unbounded basis (basically the inverse softmax with one value pegged to 0.0) and alpha and beta log transformed.]]></description>
		<content:encoded><![CDATA[<p>The real pain about these transforms is that you need to multiply the densities by the absolute value of the determinant of the Jacobian to properly normalize.   And the Jacobian needs to be full rank.  Luckily, in one dimension, that&#8217;s just the absolute value of the derivative of the inverse transform (e.g., if it&#8217;s a log transform, exp(x) is the inverse and so the absolute derivative is just exp(x)).  </p>
<p>This is killing me in the use of Hamiltonian Monte Carlo, in which every parameter needs to be unbounded to eliminate overstepping boundaries in the leapfrog steps (this is much easier in something like slice sampling for Gibbs sampling).   We need distributions like Beta(theta|alpha,beta) transformed with the simplex parameter theta going to a K-1-dimensional unbounded basis (basically the inverse softmax with one value pegged to 0.0) and alpha and beta log transformed.</p>
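<p>[Editor's note: a minimal sketch of the one-dimensional case above. The Exponential(1) density is an assumed example chosen for simplicity, not from the comment; the point is that the log-transformed density picks up the Jacobian factor exp(x) and still integrates to 1.]</p>
```python
import math

# Change of variables in one dimension: take sigma > 0 with density
# p(sigma) = exp(-sigma), i.e. Exponential(1), and reparameterize to the
# unbounded scale x = log(sigma), so sigma = exp(x).  The transformed
# density is the original density times |d sigma / d x| = exp(x):
#   q(x) = p(exp(x)) * exp(x)

def q(x):
    sigma = math.exp(x)
    return math.exp(-sigma) * sigma  # p(sigma) times the Jacobian exp(x)

# Riemann-sum check that q integrates to ~1 over the unbounded x axis,
# confirming the Jacobian factor is what keeps the density normalized.
dx = 0.001
total = sum(q(-15 + i * dx) for i in range(int(30 / dx))) * dx
print(round(total, 3))  # → 1.0
```
<p>Dropping the exp(x) factor would leave q unnormalized, which is exactly the "real pain" when transforms are done by hand.</p>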
]]></content:encoded>
	</item>
</channel>
</rss>
