# `optimize._lsq.common`

Functions used by least-squares algorithms.

## Module Contents

### Functions

| Function | Description |
| --- | --- |
| `intersect_trust_region(x, s, Delta)` | Find the intersection of a line with the boundary of a trust region. |
| `solve_lsq_trust_region(n, m, uf, s, V, Delta, initial_alpha=None, rtol=0.01, max_iter=10)` | Solve a trust-region problem arising in least-squares minimization. |
| `solve_trust_region_2d(B, g, Delta)` | Solve a general trust-region problem in 2 dimensions. |
| `update_tr_radius(Delta, actual_reduction, predicted_reduction, step_norm, bound_hit)` | Update the radius of a trust region based on the cost reduction. |
| `build_quadratic_1d(J, g, s, diag=None, s0=None)` | Parameterize a multivariate quadratic function along a line. |
| `minimize_quadratic_1d(a, b, lb, ub, c=0)` | Minimize a 1-D quadratic function subject to bounds. |
| `evaluate_quadratic(J, g, s, diag=None)` | Compute values of a quadratic function arising in least squares. |
| `in_bounds(x, lb, ub)` | Check if a point lies within bounds. |
| `step_size_to_bound(x, s, lb, ub)` | Compute the smallest step size required to reach a bound. |
| `find_active_constraints(x, lb, ub, rtol=1e-10)` | Determine which constraints are active at a given point. |
| `make_strictly_feasible(x, lb, ub, rstep=1e-10)` | Shift a point to the interior of a feasible region. |
| `CL_scaling_vector(x, g, lb, ub)` | Compute the Coleman-Li scaling vector and its derivatives. |
| `reflective_transformation(y, lb, ub)` | Compute the reflective transformation and its gradient. |
| `print_header_nonlinear()` | |
| `print_iteration_nonlinear(iteration, nfev, cost, cost_reduction, step_norm, optimality)` | |
| `print_header_linear()` | |
| `print_iteration_linear(iteration, cost, cost_reduction, step_norm, optimality)` | |
| `compute_grad(J, f)` | Compute the gradient of the least-squares cost function. |
| `compute_jac_scale(J, scale_inv_old=None)` | Compute variable scales based on the Jacobian matrix. |
| `left_multiplied_operator(J, d)` | Return `diag(d) J` as a `LinearOperator`. |
| `right_multiplied_operator(J, d)` | Return `J diag(d)` as a `LinearOperator`. |
| `regularized_lsq_operator(J, diag)` | Return a matrix arising in regularized least squares as a `LinearOperator`. |
| `right_multiply(J, d, copy=True)` | Compute `J diag(d)`. |
| `left_multiply(J, d, copy=True)` | Compute `diag(d) J`. |
| `check_termination(dF, F, dx_norm, x_norm, ratio, ftol, xtol)` | Check termination conditions for nonlinear least squares. |
| `scale_for_robust_loss_function(J, f, rho)` | Scale the Jacobian and residuals for a robust loss function. |
`intersect_trust_region`(x, s, Delta)

Find the intersection of a line with the boundary of a trust region.

This function solves the quadratic equation in t: `||x + s*t||**2 = Delta**2`.

Returns

t_neg, t_pos : tuple of float
Negative and positive roots.

Raises

ValueError
If `s` is zero or `x` is not within the trust region.
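
Expanding the norm turns this into a scalar quadratic `a*t**2 + 2*b*t + c = 0` with `a = s.s`, `b = x.s`, `c = x.x - Delta**2`. A minimal numpy sketch of that computation (the helper name is hypothetical, not this module's implementation):

```python
import numpy as np

def intersect_trust_region_sketch(x, s, Delta):
    # Roots of a*t**2 + 2*b*t + c = 0 with a = s.s, b = x.s, c = x.x - Delta**2.
    a = np.dot(s, s)
    if a == 0:
        raise ValueError("`s` is zero.")
    b = np.dot(x, s)
    c = np.dot(x, x) - Delta**2
    if c > 0:
        raise ValueError("`x` is not within the trust region.")
    d = np.sqrt(b**2 - a * c)  # half-discriminant; real since c <= 0
    # Numerically stable root pair (avoids subtracting nearly equal numbers).
    q = -(b + np.copysign(d, b))
    t1 = q / a
    t2 = c / q if q != 0 else 0.0
    return min(t1, t2), max(t1, t2)
```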
`solve_lsq_trust_region`(n, m, uf, s, V, Delta, initial_alpha=None, rtol=0.01, max_iter=10)

Solve a trust-region problem arising in least-squares minimization.

This function implements a method described by J. J. More [1]_ and used in MINPACK, but it relies on a single SVD of the Jacobian instead of a series of Cholesky decompositions. Before running this function, compute: `U, s, VT = svd(J, full_matrices=False)`.

Parameters

n : int
Number of variables.
m : int
Number of residuals.
uf : ndarray
Computed as U.T.dot(f).
s : ndarray
Singular values of J.
V : ndarray
Transpose of VT.
Delta : float
Radius of a trust region.
initial_alpha : float, optional
Initial guess for alpha, which might be available from a previous iteration. If None, determined automatically.
rtol : float, optional
Stopping tolerance for the root-finding procedure. Namely, the solution `p` will satisfy `abs(norm(p) - Delta) < rtol * Delta`.
max_iter : int, optional
Maximum allowed number of iterations for the root-finding procedure.
Returns

p : ndarray, shape (n,)
Found solution of a trust-region problem.
alpha : float
Positive value such that (J.T*J + alpha*I)*p = -J.T*f. Sometimes called the Levenberg-Marquardt parameter.
n_iter : int
Number of iterations made by the root-finding procedure. Zero means that the Gauss-Newton step was selected as the solution.
References

[1] More, J. J., “The Levenberg-Marquardt Algorithm: Implementation and Theory,” Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics 630, Springer Verlag, pp. 105-116, 1977.
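
With the SVD in hand, the regularized step is cheap to evaluate for any trial alpha, since p(alpha) = -V * diag(s / (s**2 + alpha)) * uf. A numpy sketch of this relation (`p_of_alpha` is a hypothetical helper, not this module's API):

```python
import numpy as np

def p_of_alpha(s, V, uf, alpha):
    # Step p(alpha) = -(J.T J + alpha*I)^{-1} J.T f, evaluated via the SVD of J.
    return -V @ (s * uf / (s**2 + alpha))

rng = np.random.default_rng(0)
J = rng.standard_normal((5, 3))
f = rng.standard_normal(5)
U, s, VT = np.linalg.svd(J, full_matrices=False)
V = VT.T
uf = U.T @ f

# alpha = 0 recovers the Gauss-Newton step -pinv(J) @ f.
p_gn = p_of_alpha(s, V, uf, 0.0)

# ||p(alpha)|| decreases as alpha grows, which is what the root-finding
# procedure exploits to match the trust-region radius Delta.
norms = [np.linalg.norm(p_of_alpha(s, V, uf, a)) for a in (0.0, 1.0, 10.0)]
```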
`solve_trust_region_2d`(B, g, Delta)

Solve a general trust-region problem in 2 dimensions.

The problem is reformulated as a fourth-order algebraic equation, the roots of which are found by `numpy.roots`.

Parameters

B : ndarray, shape (2, 2)
Symmetric matrix, defines the quadratic term of the function.
g : ndarray, shape (2,)
Defines a linear term of the function.
Delta : float
Radius of a trust region.
Returns

p : ndarray, shape (2,)
Found solution.
newton_step : bool
Whether the returned solution is the Newton step, which lies within the trust region.
`update_tr_radius`(Delta, actual_reduction, predicted_reduction, step_norm, bound_hit)

Update the radius of a trust region based on the cost reduction.

Returns

Delta : float
Updated radius of the trust region.
ratio : float
Ratio between actual and predicted reductions. Zero if the predicted reduction is zero.
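
A sketch of a typical trust-region update rule of this kind (the exact thresholds and factors below are illustrative assumptions, not necessarily the ones used by this function):

```python
def update_tr_radius_sketch(Delta, actual_reduction, predicted_reduction,
                            step_norm, bound_hit):
    # Ratio of actual to predicted cost reduction; zero if nothing was predicted.
    if predicted_reduction > 0:
        ratio = actual_reduction / predicted_reduction
    else:
        ratio = 0.0

    if ratio < 0.25:
        Delta = 0.25 * step_norm   # poor model agreement: shrink the region
    elif ratio > 0.75 and bound_hit:
        Delta *= 2.0               # good agreement and the step was limited
                                   # by the boundary: expand the region
    return Delta, ratio
```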
`build_quadratic_1d`(J, g, s, diag=None, s0=None)

Parameterize a multivariate quadratic function along a line.

The resulting univariate quadratic function is given as follows:

```
f(t) = 0.5 * (s0 + s*t).T * (J.T*J + diag) * (s0 + s*t) +
       g.T * (s0 + s*t)
```

Parameters

J : ndarray, sparse matrix or LinearOperator, shape (m, n)
Jacobian matrix, affects the quadratic term.
g : ndarray, shape (n,)
Gradient, defines the linear term.
s : ndarray, shape (n,)
Direction vector of a line.
diag : None or ndarray with shape (n,), optional
Additional diagonal part, affects the quadratic term. If None, assumed to be 0.
s0 : None or ndarray with shape (n,), optional
Initial point. If None, assumed to be 0.

Returns

a : float
Coefficient of t**2.
b : float
Coefficient of t.
c : float
Free term. Returned only if `s0` is provided.
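
The coefficients follow by expanding the expression above in powers of t. A dense-only numpy sketch (hypothetical helper name; the real function also accepts sparse matrices and LinearOperator):

```python
import numpy as np

def build_quadratic_1d_sketch(J, g, s, diag=None, s0=None):
    # Coefficients a, b (and c if s0 is given) of f(t) = a*t**2 + b*t + c.
    v = J @ s
    a = 0.5 * np.dot(v, v)
    if diag is not None:
        a += 0.5 * np.dot(s * diag, s)
    b = np.dot(g, s)
    if s0 is None:
        return a, b
    u = J @ s0
    b += np.dot(v, u)
    c = 0.5 * np.dot(u, u) + np.dot(g, s0)
    if diag is not None:
        b += np.dot(s0 * diag, s)
        c += 0.5 * np.dot(s0 * diag, s0)
    return a, b, c
```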
`minimize_quadratic_1d`(a, b, lb, ub, c=0)

Minimize a 1-d quadratic function subject to bounds.

The free term c is 0 by default. Bounds must be finite.

Returns

t : float
Minimum point.
y : float
Minimum value.
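
Since the objective a*t**2 + b*t + c is quadratic, the minimum over [lb, ub] is attained either at a bound or at the unconstrained minimizer -b/(2a) when a > 0 and that point is interior. A plain-Python sketch (hypothetical helper name):

```python
def minimize_quadratic_1d_sketch(a, b, lb, ub, c=0):
    # Candidate minimizers: the two bounds, plus the vertex -b/(2a)
    # when the parabola opens upward and the vertex is interior.
    candidates = [lb, ub]
    if a > 0:
        vertex = -0.5 * b / a
        if lb < vertex < ub:
            candidates.append(vertex)

    def value(t):
        return a * t**2 + b * t + c

    t = min(candidates, key=value)
    return t, value(t)
```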
`evaluate_quadratic`(J, g, s, diag=None)

Compute values of a quadratic function arising in least squares.

The function is 0.5 * s.T * (J.T * J + diag) * s + g.T * s.

Parameters

J : ndarray, sparse matrix or LinearOperator, shape (m, n)
Jacobian matrix, affects the quadratic term.
g : ndarray, shape (n,)
Gradient, defines the linear term.
s : ndarray, shape (k, n) or (n,)
Array containing steps as rows.
diag : ndarray, shape (n,), optional
Additional diagonal part, affects the quadratic term. If None, assumed to be 0.

Returns

values : ndarray with shape (k,) or float
Values of the function. If `s` was 2-dimensional, an ndarray is returned, otherwise a float is returned.
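
A dense-only numpy sketch of this evaluation (hypothetical helper name; the real function also handles sparse J and LinearOperator):

```python
import numpy as np

def evaluate_quadratic_sketch(J, g, s, diag=None):
    # 0.5 * s.T (J.T J + diag) s + g.T s, for one step or for rows of steps.
    if s.ndim == 1:
        Js = J @ s
        q = np.dot(Js, Js)
        if diag is not None:
            q += np.dot(s * diag, s)
        lin = np.dot(s, g)
    else:
        Js = J @ s.T                      # each column is J @ step
        q = np.sum(Js**2, axis=0)
        if diag is not None:
            q += np.sum(diag * s**2, axis=1)
        lin = s @ g
    return 0.5 * q + lin
```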
`in_bounds`(x, lb, ub)

Check if a point lies within bounds.

`step_size_to_bound`(x, s, lb, ub)

Compute the smallest step size required to reach a bound.

The function computes a positive scalar t, such that x + s * t is on the bound.

Returns

step : float
Computed step. Non-negative value.
hits : ndarray of int with shape of x

Each element indicates whether a corresponding variable reaches the bound:

• 0 - the bound was not hit.
• -1 - the lower bound was hit.
• 1 - the upper bound was hit.
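
A numpy sketch of how such a step and hit pattern can be computed (hypothetical helper name): for each component with s[i] != 0, the distance to a bound along s is max((lb[i] - x[i]) / s[i], (ub[i] - x[i]) / s[i]), and the smallest such distance wins.

```python
import numpy as np

def step_size_to_bound_sketch(x, s, lb, ub):
    # Assumes x lies within [lb, ub]. Components with s[i] == 0 can
    # never reach a bound, so their distance is infinite.
    steps = np.full_like(x, np.inf)
    non_zero = s != 0
    steps[non_zero] = np.maximum((lb - x)[non_zero] / s[non_zero],
                                 (ub - x)[non_zero] / s[non_zero])
    min_step = np.min(steps)
    # sign(s) marks which bound each limiting component hits: -1 lower, 1 upper.
    hits = np.where(steps == min_step, np.sign(s).astype(int), 0)
    return min_step, hits
```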
`find_active_constraints`(x, lb, ub, rtol=1e-10)

Determine which constraints are active at a given point.

The threshold is computed using rtol and the absolute value of the closest bound.

Returns

active : ndarray of int with shape of x

Each component shows whether the corresponding constraint is active:

• 0 - a constraint is not active.
• -1 - a lower bound is active.
• 1 - an upper bound is active.
`make_strictly_feasible`(x, lb, ub, rstep=1e-10)

Shift a point to the interior of a feasible region.

Each element of the returned vector is at least at a relative distance rstep from the closest bound. If `rstep=0` then np.nextafter is used.

`CL_scaling_vector`(x, g, lb, ub)

Compute Coleman-Li scaling vector and its derivatives.

Components of a vector v are defined as follows:

```
       | ub[i] - x[i], if g[i] < 0 and ub[i] < np.inf
v[i] = | x[i] - lb[i], if g[i] > 0 and lb[i] > -np.inf
       | 1,            otherwise
```

According to this definition v[i] >= 0 for all i. It differs from the definition in paper [1]_ (eq. (2.2)), where the absolute value of v is used. Both definitions are equivalent down the line. Derivatives of v with respect to x take the value 1, -1, or 0 depending on the case.

Returns

v : ndarray with shape of x
Scaling vector.
dv : ndarray with shape of x
Derivatives of v[i] with respect to x[i], diagonal elements of v's Jacobian.
References

[1] M. A. Branch, T. F. Coleman, and Y. Li, “A Subspace, Interior, and Conjugate Gradient Method for Large-Scale Bound-Constrained Minimization Problems,” SIAM Journal on Scientific Computing, Vol. 21, Number 1, pp. 1-23, 1999.
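
The piecewise definition above translates directly into masked numpy operations. A sketch (hypothetical helper name):

```python
import numpy as np

def CL_scaling_vector_sketch(x, g, lb, ub):
    v = np.ones_like(x)
    dv = np.zeros_like(x)

    # Gradient pushes toward a finite upper bound: v is the gap to that bound.
    mask = (g < 0) & np.isfinite(ub)
    v[mask] = ub[mask] - x[mask]
    dv[mask] = -1

    # Gradient pushes toward a finite lower bound.
    mask = (g > 0) & np.isfinite(lb)
    v[mask] = x[mask] - lb[mask]
    dv[mask] = 1

    return v, dv
```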
`reflective_transformation`(y, lb, ub)

Compute reflective transformation and its gradient.

`print_header_nonlinear`()
`print_iteration_nonlinear`(iteration, nfev, cost, cost_reduction, step_norm, optimality)
`print_header_linear`()
`print_iteration_linear`(iteration, cost, cost_reduction, step_norm, optimality)
`compute_grad`(J, f)

Compute gradient of the least-squares cost function.
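
For the cost function 0.5 * ||f||**2, the gradient with respect to x is J.T f. A minimal dense example:

```python
import numpy as np

# Cost C(x) = 0.5 * ||f(x)||**2 has gradient J.T @ f,
# where J is the Jacobian of the residuals f.
J = np.array([[1.0, 2.0],
              [3.0, 4.0]])
f = np.array([1.0, 1.0])
grad = J.T @ f
```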

`compute_jac_scale`(J, scale_inv_old=None)

Compute variable scales based on the Jacobian matrix.

`left_multiplied_operator`(J, d)

Return diag(d) J as LinearOperator.

`right_multiplied_operator`(J, d)

Return J diag(d) as LinearOperator.

`regularized_lsq_operator`(J, diag)

Return a matrix arising in regularized least squares as LinearOperator.

The matrix is

    [ J ]
    [ D ]

where D is a diagonal matrix with elements from `diag`.
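
Solving the augmented least-squares problem with this stacked matrix and right-hand side [f; 0] is equivalent to Tikhonov-regularized least squares, min ||J p - f||**2 + ||D p||**2. A dense numpy sketch of that equivalence (the real function returns a matrix-free LinearOperator instead):

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((4, 3))
f = rng.standard_normal(4)
diag = np.array([0.5, 1.0, 2.0])

# Stacked matrix [J; D] with D = diag(diag), built densely here.
A = np.vstack([J, np.diag(diag)])
rhs = np.hstack([f, np.zeros(3)])
p = np.linalg.lstsq(A, rhs, rcond=None)[0]

# The same solution from the Tikhonov normal equations (J.T J + D**2) p = J.T f.
p_normal = np.linalg.solve(J.T @ J + np.diag(diag**2), J.T @ f)
```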

`right_multiply`(J, d, copy=True)

Compute J diag(d).

If copy is False, J is modified in place (unless J is a LinearOperator).

`left_multiply`(J, d, copy=True)

Compute diag(d) J.

If copy is False, J is modified in place (unless J is a LinearOperator).

`check_termination`(dF, F, dx_norm, x_norm, ratio, ftol, xtol)

Check termination conditions for nonlinear least squares.

`scale_for_robust_loss_function`(J, f, rho)

Scale Jacobian and residuals for a robust loss function.

Arrays are modified in place.