Inference#
- mellon.inference.compute_conditional_mean(x, landmarks, y, mu, cov_func, sigma=0, jitter=1e-06)#
Builds the mean function of the Gaussian process, conditioned on the function values (e.g., log-density) at the points x. Returns a function defined on the whole domain of x.
- Parameters:
x (array-like) – The training instances.
landmarks (array-like) – The landmark points for fast sparse computation. Pass None to skip the landmark approximation.
y (array-like) – The function values at each point in x.
mu (float) – The original Gaussian process mean.
cov_func (function) – The Gaussian process covariance function.
sigma (float) – White noise variance. Defaults to 0.
jitter (float) – A small amount to add to the diagonal for stability. Defaults to 1e-6.
- Returns:
conditional_mean - The conditioned Gaussian process mean function.
- Return type:
function
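A minimal usage sketch; the toy data and the inline squared-exponential covariance below are illustrative assumptions, not part of the API:
```python
import numpy as np
from jax import numpy as jnp
from mellon.inference import compute_conditional_mean

def cov_func(xa, xb):
    # Toy squared-exponential kernel; any mellon covariance function works here.
    sq_dist = ((xa[:, None, :] - xb[None, :, :]) ** 2).sum(-1)
    return jnp.exp(-0.5 * sq_dist)

x = np.random.rand(100, 2)   # training instances
y = np.random.rand(100)      # function values at x, e.g. log-density
mu = float(y.mean())         # original GP mean

# landmarks=None selects the dense (non-sparse) computation.
mean_fn = compute_conditional_mean(x, None, y, mu, cov_func, sigma=0, jitter=1e-6)

x_new = np.random.rand(10, 2)
y_new = mean_fn(x_new)       # conditional GP mean evaluated at new points
```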
- mellon.inference.compute_log_density_x(pre_transformation, transform)#
Computes the log density at the training points.
- Parameters:
pre_transformation (array-like) – \(z \sim \text{Normal}(0, I)\)
transform (function) – A function mapping \(z \sim \text{Normal}(0, I)\) to \(f \sim \text{Normal}(\mu, K')\), where \(I\) is the identity matrix and \(K \approx K' = L L^\top\) with \(K\) the covariance matrix.
- Returns:
log_density_x - The log density at the training points.
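A short sketch, assuming the transform is built with compute_transform (documented below) from a toy factor \(L\); the values here are placeholders:
```python
import numpy as np
from mellon.inference import compute_transform, compute_log_density_x

n = 50
L = np.eye(n)        # toy factor with L @ L.T = K (here the identity)
mu = -1.0            # toy GP mean
transform = compute_transform(mu, L)

z = np.zeros(n)      # in practice: the optimized pre_transformation parameters
log_density_x = compute_log_density_x(z, transform)   # log density at the training points
```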
- mellon.inference.compute_loss_func(nn_distances, d, transform, k)#
Computes the Bayesian loss function -(prior(\(z\)) + likelihood(transform(\(z\)))).
- Parameters:
nn_distances (array-like) – The observed nearest neighbor distances.
d (int) – The dimensionality of the data.
transform (function) – Maps \(z \sim \text{Normal}(0, I)\) to \(f \sim \text{Normal}(\mu, K')\), where \(I\) is the identity matrix and \(K \approx K' = L L^\top\) with \(K\) the covariance matrix.
k (int) – The dimension of the transform input \(z\).
- Returns:
loss_func - The Bayesian loss function.
- Return type:
function
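A sketch of constructing the loss from toy inputs; the nearest-neighbor distances and the factor L are placeholders for the quantities mellon computes from real data:
```python
import numpy as np
from mellon.inference import compute_transform, compute_loss_func

n, d = 50, 2
nn_distances = 0.01 + 0.1 * np.random.rand(n)   # toy observed nearest-neighbor distances
L = np.eye(n)                                   # toy factor with L @ L.T approximating K
transform = compute_transform(0.0, L)
k = L.shape[1]                                  # dimension of the transform input z

loss_func = compute_loss_func(nn_distances, d, transform, k)
initial_loss = loss_func(np.zeros(k))           # evaluate the loss at z = 0
```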
- mellon.inference.compute_transform(mu, L)#
Computes a function transform that maps \(z \sim \text{Normal}(0, I)\) to \(f \sim \text{Normal}(\mu, K')\), where \(I\) is the identity matrix and \(K \approx K' = L L^\top\) with \(K\) the covariance matrix.
- Parameters:
mu (float) – The Gaussian process mean.
L (array-like) – A matrix such that \(L L^\top \approx K\), where \(K\) is the covariance matrix.
- Returns:
transform - The transform function \(z \rightarrow f\).
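A small sketch; under the usual reparameterization such a transform corresponds to \(f = \mu + L z\), which is stated here as an assumption for illustration rather than a guarantee of the implementation:
```python
import numpy as np
from mellon.inference import compute_transform

K = np.eye(3) + 0.1              # toy covariance matrix
L = np.linalg.cholesky(K)        # factor with L @ L.T = K
mu = 2.0
transform = compute_transform(mu, L)

z = np.random.randn(3)           # z ~ Normal(0, I)
f = transform(z)                 # a draw with f ~ Normal(mu, K), assuming f = mu + L z
```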
- mellon.inference.minimize_adam(loss_func, initial_value, n_iter=100, init_learn_rate=1, jit=False)#
Minimizes the loss function, starting from the initial guess initial_value, using Adam with an exponentially decaying learning rate.
- Parameters:
loss_func (function) – The loss function to minimize.
initial_value (array-like) – The initial guess.
n_iter (int) – The number of optimization iterations. Defaults to 100.
init_learn_rate (float) – The initial learning rate. Defaults to 1.
- Returns:
Results - A named tuple containing pre_transformation, opt_state, and losses: the optimized parameters, the final state of the optimizer, and the history of loss values.
- Return type:
array-like, array-like, Object
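A usage sketch with a stand-in quadratic loss (in practice loss_func comes from compute_loss_func above); this assumes the loss is JAX-differentiable:
```python
import numpy as np
from mellon.inference import minimize_adam

def loss_func(z):
    # Stand-in loss, minimized at z = 3.
    return ((z - 3.0) ** 2).sum()

results = minimize_adam(loss_func, np.zeros(10), n_iter=200, init_learn_rate=1e-1)
z_opt = results.pre_transformation   # optimized parameters
history = results.losses             # loss value per iteration
```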
- mellon.inference.minimize_lbfgsb(loss_func, initial_value, jit=False)#
Minimizes the loss function, starting from the initial guess initial_value, using the L-BFGS-B algorithm.
- Parameters:
loss_func (function) – Loss function to minimize.
initial_value (array-like) – Initial guess.
- Returns:
Results - A named tuple containing pre_transformation, opt_state, and loss: the optimized parameters, the final state of the optimizer, and the final loss value.
- Return type:
array-like, array-like, Object
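An analogous sketch for the L-BFGS-B path, again with a stand-in loss assumed to be JAX-differentiable:
```python
import numpy as np
from mellon.inference import minimize_lbfgsb

def loss_func(z):
    # Stand-in loss, minimized at z = 3.
    return ((z - 3.0) ** 2).sum()

results = minimize_lbfgsb(loss_func, np.zeros(10))
z_opt = results.pre_transformation   # optimized parameters
final_loss = results.loss            # final loss value
```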