scipy.stats.Normal.logentropy

Normal.logentropy(*, method=None)

Logarithm of the differential entropy

In terms of the probability density function \(f(x)\) and support \(\chi\), the differential entropy (or simply “entropy”) of a random variable \(X\) is [1]:

\[h(X) = - \int_{\chi} f(x) \log f(x) dx\]

logentropy computes the logarithm of the differential entropy (“log-entropy”), \(\log(h(X))\), which may be numerically favorable compared to the naive implementation (computing \(h(X)\), then taking the logarithm).
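As a quick illustration of the relationship between the two routes (not of the numerical advantage itself), the following is a minimal doctest-style sketch. It assumes the standard mu/sigma parameterization of stats.Normal; the entropy here is positive, so a plain real logarithm suffices for the naive route.

>>> import numpy as np
>>> from scipy import stats
>>> X = stats.Normal(mu=0., sigma=1.)
>>> # naive route: evaluate the entropy, then take its logarithm
>>> np.allclose(X.logentropy().real, np.log(X.entropy()))
True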

Parameters:
method : {None, 'formula', 'logexp', 'quadrature'}

The strategy used to evaluate the log-entropy. By default (None), the infrastructure chooses between the following options, listed in order of precedence.

  • 'formula': use a formula for the log-entropy itself

  • 'logexp': evaluate the entropy and take the logarithm

  • 'quadrature': numerically log-integrate the logarithm of the entropy integrand

Not all method options are available for all distributions. If the selected method is not available, a NotImplementedError will be raised.

Returns:
out : array

The log-entropy.

See also

entropy
logpdf

Notes

If the entropy of a distribution is negative, then the log-entropy is complex with imaginary part \(\pi\). For consistency, the result of this function always has complex dtype, regardless of the value of the imaginary part.

References

[1] Differential entropy, Wikipedia, https://en.wikipedia.org/wiki/Differential_entropy

Examples

Instantiate a distribution with the desired parameters:

>>> import numpy as np
>>> from scipy import stats
>>> X = stats.Uniform(a=-1., b=1.)

Evaluate the log-entropy:

>>> X.logentropy()
(-0.3665129205816642+0j)
>>> np.allclose(np.exp(X.logentropy()), X.entropy())
True
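The method argument described under Parameters can also be passed explicitly. A minimal sketch, assuming the generic 'logexp' and 'quadrature' strategies are both implemented for this distribution (if a requested method is unavailable, a NotImplementedError is raised):

>>> np.allclose(X.logentropy(method='logexp'),
...             X.logentropy(method='quadrature'))
True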

For a random variable with negative entropy, the log-entropy has an imaginary part equal to np.pi.

>>> X = stats.Uniform(a=-.1, b=.1)
>>> X.entropy(), X.logentropy()
(-1.6094379124341007, (0.4758849953271105+3.141592653589793j))
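As a cross-check (a sketch that simply re-evaluates the entropy and takes a complex logarithm with NumPy), the same value is obtained by passing the negative entropy to np.log as a complex number; a plain real-valued np.log would return nan here:

>>> np.allclose(X.logentropy(), np.log(complex(X.entropy())))
True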