stats.kde

Module Contents

Classes

gaussian_kde(dataset, bw_method=None): Representation of a kernel-density estimate using Gaussian kernels.
class gaussian_kde(dataset, bw_method=None)

Representation of a kernel-density estimate using Gaussian kernels.

Kernel density estimation is a way to estimate the probability density function (PDF) of a random variable in a non-parametric way. gaussian_kde works for both uni-variate and multi-variate data. It includes automatic bandwidth determination. The estimation works best for a unimodal distribution; bimodal or multi-modal distributions tend to be oversmoothed.

Parameters

dataset : array_like
Datapoints to estimate from. In case of univariate data this is a 1-D array, otherwise a 2-D array with shape (# of dims, # of data).
bw_method : str, scalar or callable, optional
The method used to calculate the estimator bandwidth. This can be ‘scott’, ‘silverman’, a scalar constant or a callable. If a scalar, this will be used directly as kde.factor. If a callable, it should take a gaussian_kde instance as its only parameter and return a scalar. If None (default), ‘scott’ is used. See Notes for more details.
Attributes

dataset : ndarray
The dataset with which gaussian_kde was initialized.
d : int
Number of dimensions.
n : int
Number of datapoints.
factor : float
The bandwidth factor, obtained from kde.covariance_factor, with which the covariance matrix is multiplied.
covariance : ndarray
The covariance matrix of dataset, scaled by the calculated bandwidth (kde.factor).
inv_cov : ndarray
The inverse of covariance.

Methods

evaluate, __call__, integrate_gaussian, integrate_box_1d, integrate_box, integrate_kde, pdf, logpdf, resample, set_bandwidth, covariance_factor

Notes

Bandwidth selection strongly influences the estimate obtained from the KDE (much more so than the actual shape of the kernel). Bandwidth selection can be done by a “rule of thumb”, by cross-validation, by “plug-in methods” or by other means; see [3], [4] for reviews. gaussian_kde uses a rule of thumb; the default is Scott’s Rule.

Scott’s Rule [1], implemented as scotts_factor, is:

n**(-1./(d+4)),

with n the number of data points and d the number of dimensions. Silverman’s Rule [2], implemented as silverman_factor, is:

(n * (d + 2) / 4.)**(-1. / (d + 4)).
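
Both factors can be checked directly against these formulas. A minimal sketch, using an arbitrary univariate sample (n = 6, d = 1):

>>> import numpy as np
>>> from scipy import stats
>>> x = np.array([-2., -1., 0., 1., 2., 3.])   # arbitrary sample
>>> kde = stats.gaussian_kde(x)
>>> n, d = kde.n, kde.d
>>> np.allclose(kde.scotts_factor(), n**(-1./(d + 4)))
True
>>> np.allclose(kde.silverman_factor(), (n * (d + 2) / 4.)**(-1. / (d + 4)))
True
>>> np.allclose(kde.factor, kde.scotts_factor())   # Scott's Rule is the default
True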

Good general descriptions of kernel density estimation can be found in [1] and [2]; the mathematics for this multi-dimensional implementation can be found in [1].

References

[1] D.W. Scott, “Multivariate Density Estimation: Theory, Practice, and Visualization”, John Wiley & Sons, New York, Chichester, 1992.
[2] B.W. Silverman, “Density Estimation for Statistics and Data Analysis”, Vol. 26, Monographs on Statistics and Applied Probability, Chapman and Hall, London, 1986.
[3] B.A. Turlach, “Bandwidth Selection in Kernel Density Estimation: A Review”, CORE and Institut de Statistique, Vol. 19, pp. 1-33, 1993.
[4] D.M. Bashtannyk and R.J. Hyndman, “Bandwidth selection for kernel conditional density estimation”, Computational Statistics & Data Analysis, Vol. 36, pp. 279-298, 2001.

Examples

Generate some random two-dimensional data:

>>> import numpy as np
>>> from scipy import stats
>>> def measure(n):
...     "Measurement model, return two coupled measurements."
...     m1 = np.random.normal(size=n)
...     m2 = np.random.normal(scale=0.5, size=n)
...     return m1+m2, m1-m2
>>> m1, m2 = measure(2000)
>>> xmin = m1.min()
>>> xmax = m1.max()
>>> ymin = m2.min()
>>> ymax = m2.max()

Perform a kernel density estimate on the data:

>>> X, Y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
>>> positions = np.vstack([X.ravel(), Y.ravel()])
>>> values = np.vstack([m1, m2])
>>> kernel = stats.gaussian_kde(values)
>>> Z = np.reshape(kernel(positions).T, X.shape)

Plot the results:

>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> ax.imshow(np.rot90(Z), cmap=plt.cm.gist_earth_r,
...           extent=[xmin, xmax, ymin, ymax])
>>> ax.plot(m1, m2, 'k.', markersize=2)
>>> ax.set_xlim([xmin, xmax])
>>> ax.set_ylim([ymin, ymax])
>>> plt.show()
__init__(dataset, bw_method=None)
evaluate(points)

Evaluate the estimated pdf on a set of points.

points : (# of dimensions, # of points)-array
Alternatively, a (# of dimensions,) vector can be passed in and treated as a single point.
values : (# of points,)-array
The values at each point.
ValueError : if the dimensionality of the input points is different from the dimensionality of the KDE.
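
For illustration, a short sketch with an arbitrary univariate sample; evaluating on a grid returns one density value per input point:

>>> import numpy as np
>>> from scipy import stats
>>> kde = stats.gaussian_kde(np.array([0., 1., 2., 3., 4.]))   # arbitrary sample
>>> pts = np.linspace(-1, 5, 7)
>>> densities = kde.evaluate(pts)   # equivalent to kde(pts)
>>> densities.shape
(7,)
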
integrate_gaussian(mean, cov)

Multiply estimated density by a multivariate Gaussian and integrate over the whole space.

mean : array_like
A 1-D array, specifying the mean of the Gaussian.
cov : array_like
A 2-D array, specifying the covariance matrix of the Gaussian.
result : scalar
The value of the integral.
ValueError
If the mean or covariance of the input Gaussian differs from the KDE’s dimensionality.
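
A minimal sketch, with an arbitrary sample, mean and covariance; the result is a single scalar:

>>> import numpy as np
>>> from scipy import stats
>>> kde = stats.gaussian_kde(np.array([0., 0.5, 1.0, 1.5, 2.0]))   # arbitrary sample
>>> val = kde.integrate_gaussian(mean=1.0, cov=0.5)   # weight the pdf by N(1.0, 0.5)
>>> float(val) > 0
True
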
integrate_box_1d(low, high)

Computes the integral of a 1D pdf between two bounds.

low : scalar
Lower bound of integration.
high : scalar
Upper bound of integration.
value : scalar
The result of the integral.
ValueError
If the KDE is over more than one dimension.
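
For example, integrating over an interval far wider than the data returns essentially the total probability mass, i.e. a value very close to 1 (sample values are arbitrary):

>>> import numpy as np
>>> from scipy import stats
>>> kde = stats.gaussian_kde(np.array([0., 1., 2., 3., 4.]))   # arbitrary sample
>>> p = kde.integrate_box_1d(-10, 14)   # interval far wider than the data
>>> round(float(p), 3)
1.0
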
integrate_box(low_bounds, high_bounds, maxpts=None)

Computes the integral of a pdf over a rectangular interval.

low_bounds : array_like
A 1-D array containing the lower bounds of integration.
high_bounds : array_like
A 1-D array containing the upper bounds of integration.
maxpts : int, optional
The maximum number of points to use for integration.
value : scalar
The result of the integral.
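
A sketch for the two-dimensional case (seed and sample are arbitrary); a box much wider than the data captures nearly all of the probability mass:

>>> import numpy as np
>>> from scipy import stats
>>> np.random.seed(12345)
>>> data = np.random.normal(size=(2, 200))   # arbitrary 2-D dataset
>>> kde = stats.gaussian_kde(data)
>>> p = kde.integrate_box(low_bounds=[-10, -10], high_bounds=[10, 10])
>>> bool(0.99 < p < 1.01)
True
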
integrate_kde(other)

Computes the integral of the product of this kernel density estimate with another.

other : gaussian_kde instance
The other kde.
value : scalar
The result of the integral.
ValueError
If the KDEs have different dimensionality.
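
Since this computes the integral of kde1(x) * kde2(x) over x, the result does not depend on which estimate the method is called on. A small sketch with two arbitrary univariate samples:

>>> import numpy as np
>>> from scipy import stats
>>> kde1 = stats.gaussian_kde(np.array([0., 1., 2.]))        # arbitrary samples
>>> kde2 = stats.gaussian_kde(np.array([1., 2., 3., 4.]))
>>> np.allclose(kde1.integrate_kde(kde2), kde2.integrate_kde(kde1))
True
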
resample(size=None)

Randomly sample a dataset from the estimated pdf.

size : int, optional
The number of samples to draw. If not provided, then the size is the same as the underlying dataset.
resample : (self.d, size) ndarray
The sampled dataset.
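
For instance (arbitrary sample; the draw itself is random), the returned array always has shape (d, size):

>>> import numpy as np
>>> from scipy import stats
>>> kde = stats.gaussian_kde(np.array([0., 1., 2., 3., 4.]))   # arbitrary sample
>>> samples = kde.resample(size=1000)
>>> samples.shape
(1, 1000)
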
scotts_factor()

Compute Scott’s factor (see Notes).

silverman_factor()

Compute Silverman’s factor (see Notes).
set_bandwidth(bw_method=None)

Compute the estimator bandwidth with given method.

The new bandwidth calculated after a call to set_bandwidth is used for subsequent evaluations of the estimated density.

bw_method : str, scalar or callable, optional
The method used to calculate the estimator bandwidth. This can be ‘scott’, ‘silverman’, a scalar constant or a callable. If a scalar, this will be used directly as kde.factor. If a callable, it should take a gaussian_kde instance as its only parameter and return a scalar. If None (default), nothing happens; the current kde.covariance_factor method is kept.

New in version 0.11.

>>> import numpy as np
>>> import scipy.stats as stats
>>> x1 = np.array([-7, -5, 1, 4, 5.])
>>> kde = stats.gaussian_kde(x1)
>>> xs = np.linspace(-10, 10, num=50)
>>> y1 = kde(xs)
>>> kde.set_bandwidth(bw_method='silverman')
>>> y2 = kde(xs)
>>> kde.set_bandwidth(bw_method=kde.factor / 3.)
>>> y3 = kde(xs)
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> ax.plot(x1, np.ones(x1.shape) / (4. * x1.size), 'bo',
...         label='Data points (rescaled)')
>>> ax.plot(xs, y1, label='Scott (default)')
>>> ax.plot(xs, y2, label='Silverman')
>>> ax.plot(xs, y3, label='Const (1/3 * Silverman)')
>>> ax.legend()
>>> plt.show()
_compute_covariance()

Computes the covariance matrix for each Gaussian kernel using covariance_factor().

pdf(x)

Evaluate the estimated pdf on a provided set of points.

This is an alias for gaussian_kde.evaluate. See the evaluate docstring for more details.

logpdf(x)

Evaluate the log of the estimated pdf on a provided set of points.
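
Since the density estimate is a mixture of Gaussians, logpdf agrees with the logarithm of pdf wherever the density does not underflow. A brief consistency sketch with an arbitrary sample:

>>> import numpy as np
>>> from scipy import stats
>>> kde = stats.gaussian_kde(np.array([-1., 0., 1., 2.]))   # arbitrary sample
>>> x = np.linspace(-3, 4, 9)
>>> np.allclose(kde.logpdf(x), np.log(kde.pdf(x)))
True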