# `optimize.nonlin`¶

## Nonlinear solvers¶

This is a collection of general-purpose nonlinear multidimensional solvers. These solvers find x for which F(x) = 0. Both x and F can be multidimensional.

### Routines¶

Large-scale nonlinear solvers:

- `newton_krylov`
- `anderson`

General nonlinear solvers:

- `broyden1`
- `broyden2`

Simple iterations:

- `excitingmixing`
- `linearmixing`
- `diagbroyden`

### Examples¶

Small problem

```
>>> import numpy as np
>>> import scipy.optimize
>>> def F(x):
...     return np.cos(x) + x[::-1] - [1, 2, 3, 4]
>>> x = scipy.optimize.broyden1(F, [1, 1, 1, 1], f_tol=1e-14)
>>> x
array([ 4.04674914,  3.91158389,  2.71791677,  1.61756251])
>>> np.cos(x) + x[::-1]
array([ 1.,  2.,  3.,  4.])
```

Large problem

Suppose that we needed to solve the following integrodifferential equation on the square $[0, 1] \times [0, 1]$:

$$\nabla^2 P = 10 \left(\int_0^1 \int_0^1 \cosh(P)\,dx\,dy\right)^2$$

with $P(x, 1) = 1$ and $P = 0$ elsewhere on the boundary of the square.

The solution can be found using the `newton_krylov` solver:
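A runnable sketch of such a solve, following the structure of the SciPy documentation's example for this problem: the Laplacian is discretized with second differences and the integral is approximated by the grid mean. The grid size and discretization choices here are illustrative, not prescribed by the module.

```python
import numpy as np
from scipy.optimize import newton_krylov

# grid on the unit square; spacing hx, hy in each direction
nx, ny = 75, 75
hx, hy = 1.0 / (nx - 1), 1.0 / (ny - 1)

# boundary values: P = 1 on the top edge (y = 1), P = 0 elsewhere
P_left, P_right, P_top, P_bottom = 0, 0, 1, 0

def residual(P):
    # second differences approximating d2P/dx2 and d2P/dy2,
    # with the boundary values folded into the edge rows/columns
    d2x = np.zeros_like(P)
    d2y = np.zeros_like(P)

    d2x[1:-1] = (P[2:] - 2 * P[1:-1] + P[:-2]) / hx / hx
    d2x[0] = (P[1] - 2 * P[0] + P_left) / hx / hx
    d2x[-1] = (P_right - 2 * P[-1] + P[-2]) / hx / hx

    d2y[:, 1:-1] = (P[:, 2:] - 2 * P[:, 1:-1] + P[:, :-2]) / hy / hy
    d2y[:, 0] = (P[:, 1] - 2 * P[:, 0] + P_bottom) / hy / hy
    d2y[:, -1] = (P_top - 2 * P[:, -1] + P[:, -2]) / hy / hy

    # the grid mean approximates the integral of cosh(P) over the square
    return d2x + d2y - 10 * np.cosh(P).mean() ** 2

guess = np.zeros((nx, ny), float)
sol = newton_krylov(residual, guess, method='lgmres')
```

The residual of the returned solution should be below the solver's default `f_tol`.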

## Module Contents¶

### Classes¶

- `NoConvergence`()
- `TerminationCondition`(self, f_tol=None, f_rtol=None, x_tol=None, x_rtol=None, iter=None, norm=maxnorm): Termination condition for an iteration.
- `Jacobian`(self, **kw): Common interface for Jacobians or Jacobian approximations.
- `InverseJacobian`(self, jacobian)
- `GenericBroyden`()
- `LowRankMatrix`(self, alpha, n, dtype)
- `BroydenFirst`(self, alpha=None, reduction_method="restart", max_rank=None): Find a root of a function, using Broyden’s first Jacobian approximation.
- `BroydenSecond`(): Find a root of a function, using Broyden’s second Jacobian approximation.
- `Anderson`(self, alpha=None, w0=0.01, M=5): Find a root of a function, using (extended) Anderson mixing.
- `DiagBroyden`(self, alpha=None): Find a root of a function, using diagonal Broyden Jacobian approximation.
- `LinearMixing`(self, alpha=None): Find a root of a function, using a scalar Jacobian approximation.
- `ExcitingMixing`(self, alpha=None, alphamax=1.0): Find a root of a function, using a tuned diagonal Jacobian approximation.
- `KrylovJacobian`(self, rdiff=None, method="lgmres", inner_maxiter=20, inner_M=None, outer_k=10, **kw): Find a root of a function, using Krylov approximation for inverse Jacobian.

### Functions¶

- `maxnorm`(x)
- `_as_inexact`(x): Return x as an array of either floats or complex floats.
- `_array_like`(x, x0): Return ndarray x with the same array subclass and shape as x0.
- `_safe_norm`(v)
- `_set_doc`(obj)
- `nonlin_solve`(F, x0, jacobian="krylov", iter=None, verbose=False, maxiter=None, f_tol=None, f_rtol=None, x_tol=None, x_rtol=None, tol_norm=None, line_search="armijo", callback=None, full_output=False, raise_exception=True): Find a root of a function, in a way suitable for large-scale problems.
- `_nonlin_line_search`(func, x, Fx, dx, search_type="armijo", rdiff=1e-08, smin=0.01)
- `asjacobian`(J): Convert a given object to one suitable for use as a Jacobian.
- `_nonlin_wrapper`(name, jac): Construct a solver wrapper with a given name and Jacobian approximation.
class `NoConvergence`
`maxnorm`(x)
`_as_inexact`(x)

Return x as an array of either floats or complex floats.

`_array_like`(x, x0)

Return ndarray x with the same array subclass and shape as x0.

`_safe_norm`(v)
`_set_doc`(obj)
`nonlin_solve`(F, x0, jacobian="krylov", iter=None, verbose=False, maxiter=None, f_tol=None, f_rtol=None, x_tol=None, x_rtol=None, tol_norm=None, line_search="armijo", callback=None, full_output=False, raise_exception=True)

Find a root of a function, in a way suitable for large-scale problems.

%(params_basic)s
jacobian : Jacobian
A Jacobian approximation: a Jacobian object, or something that `asjacobian` can transform into one. Alternatively, a string specifying which of the built-in Jacobian approximations to use:

krylov, broyden1, broyden2, anderson, diagbroyden, linearmixing, excitingmixing

%(params_extra)s
full_output : bool
If true, returns a dictionary `info` containing convergence information.
raise_exception : bool
If True, a NoConvergence exception is raised if no solution is found.

See also: `asjacobian`, `Jacobian`

This algorithm implements the inexact Newton method, with backtracking or full line searches. Several Jacobian approximations are available, including Krylov and Quasi-Newton methods.

 [KIM] C. T. Kelley, “Iterative Methods for Linear and Nonlinear Equations”. Society for Industrial and Applied Mathematics. (1995) http://www.siam.org/books/kelley/fr16/index.php
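The same machinery is also reachable through the public `scipy.optimize.root` interface, where the `fatol` option plays the role of `f_tol`. A sketch, reusing the small problem from the examples above:

```python
import numpy as np
from scipy.optimize import root

def F(x):
    return np.cos(x) + x[::-1] - [1, 2, 3, 4]

# method='broyden1' dispatches to the Broyden solver built on nonlin_solve;
# fatol is the absolute tolerance on the residual max-norm
sol = root(F, [1, 1, 1, 1], method='broyden1', options={'fatol': 1e-12})
```

On success, `sol.x` holds the root and `sol.success` is True.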
class `TerminationCondition`(f_tol=None, f_rtol=None, x_tol=None, x_rtol=None, iter=None, norm=maxnorm)

Termination condition for an iteration. It is terminated if

• |F| < f_rtol*|F_0|, AND
• |F| < f_tol

AND

• |dx| < x_rtol*|x|, AND
• |dx| < x_tol

`__init__`(f_tol=None, f_rtol=None, x_tol=None, x_rtol=None, iter=None, norm=maxnorm)
`check`(f, x, dx)
class `Jacobian`(**kw)

Common interface for Jacobians or Jacobian approximations.

The optional methods are useful when implementing trust region and similar algorithms that often require evaluating transposes of the Jacobian.

solve
Returns J^-1 * v
update
Updates Jacobian to point x (where the function has residual Fx)
matvec : optional
Returns J * v
rmatvec : optional
Returns J^H * v
rsolve : optional
Returns J^-H * v
matmat : optional
Returns J * V, where V is a dense matrix with dimensions (N, K).
todense : optional
Form the dense Jacobian matrix. Necessary for dense trust region algorithms, and useful for testing.
shape
Matrix dimensions (M, N)
dtype
Data type of the matrix.
func : callable, optional
Function the Jacobian corresponds to
`__init__`(**kw)
`aspreconditioner`()
`solve`(v, tol=0)
`update`(x, F)
`setup`(x, F, func)
class `InverseJacobian`(jacobian)
`__init__`(jacobian)
`shape`()
`dtype`()
`asjacobian`(J)

Convert given object to one suitable for use as a Jacobian.

class `GenericBroyden`
`setup`(x0, f0, func)
`_update`(x, f, dx, df, dx_norm, df_norm)
`update`(x, f)
class `LowRankMatrix`(alpha, n, dtype)

A matrix represented as

$$\alpha I + \sum_{n=0}^{M} c_n d_n^\dagger$$

However, if the rank of the matrix reaches the dimension of the vectors, full matrix representation will be used thereon.
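The action of this low-rank representation on a vector can be sketched in plain NumPy: alpha times the vector plus each c vector scaled by the inner product of the corresponding d vector with the input. The helper name and the random test data are illustrative, not part of the module.

```python
import numpy as np

def lowrank_matvec(alpha, C, D, v):
    """Apply M = alpha*I + sum_n c_n d_n^H, with c_n, d_n as columns of C, D."""
    return alpha * v + C @ (D.conj().T @ v)

rng = np.random.default_rng(0)
C = rng.standard_normal((6, 2))   # two stored c vectors of dimension 6
D = rng.standard_normal((6, 2))   # two stored d vectors of dimension 6
v = rng.standard_normal(6)

# dense equivalent of the same matrix, for comparison
dense = 0.5 * np.eye(6) + C @ D.conj().T
w = lowrank_matvec(0.5, C, D, v)
```

The point of the class is that `matvec`, `solve`, and the rank-reduction methods never form `dense` explicitly.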

`__init__`(alpha, n, dtype)
`_matvec`(alpha, cs, ds)
`_solve`(alpha, cs, ds)

Evaluate w = M^-1 v

`matvec`(v)

Evaluate w = M v

`rmatvec`(v)

Evaluate w = M^H v

`solve`(v, tol=0)

Evaluate w = M^-1 v

`rsolve`(v, tol=0)

Evaluate w = M^-H v

`append`(c, d)
`__array__`()
`collapse`()

Collapse the low-rank matrix to a full-rank one.

`restart_reduce`(rank)

Reduce the rank of the matrix by dropping all vectors.

`simple_reduce`(rank)

Reduce the rank of the matrix by dropping oldest vectors.

`svd_reduce`(max_rank, to_retain=None)

Reduce the rank of the matrix by retaining some SVD components.

This corresponds to the “Broyden Rank Reduction Inverse” algorithm described in [1]_.

Note that the SVD decomposition can be done by solving only a problem whose size is the effective rank of this matrix, which is viable even for large problems.

max_rank : int
Maximum rank of this matrix after reduction.
to_retain : int, optional
Number of SVD components to retain when reduction is done (i.e., when rank > max_rank). Default is `max_rank - 2`.
 [1] B.A. van der Rotten, PhD thesis, “A limited memory Broyden method to solve high-dimensional systems of nonlinear equations”. Mathematisch Instituut, Universiteit Leiden, The Netherlands (2003). http://www.math.leidenuniv.nl/scripties/Rotten.pdf
class `BroydenFirst`(alpha=None, reduction_method="restart", max_rank=None)

Find a root of a function, using Broyden’s first Jacobian approximation.

This method is also known as “Broyden’s good method”.

%(params_basic)s %(broyden_params)s %(params_extra)s

This algorithm implements the inverse Jacobian Quasi-Newton update

$$H_+ = H + (dx - H\,df)\,dx^\dagger H / (dx^\dagger H\,df)$$

which corresponds to Broyden’s first Jacobian update

$$J_+ = J + (df - J\,dx)\,dx^\dagger / (dx^\dagger dx)$$

 [1] B.A. van der Rotten, PhD thesis, “A limited memory Broyden method to solve high-dimensional systems of nonlinear equations”. Mathematisch Instituut, Universiteit Leiden, The Netherlands (2003). http://www.math.leidenuniv.nl/scripties/Rotten.pdf
`__init__`(alpha=None, reduction_method="restart", max_rank=None)
`setup`(x, F, func)
`todense`()
`solve`(f, tol=0)
`matvec`(f)
`rsolve`(f, tol=0)
`rmatvec`(f)
`_update`(x, f, dx, df, dx_norm, df_norm)
class `BroydenSecond`

Find a root of a function, using Broyden’s second Jacobian approximation.

This method is also known as “Broyden’s bad method”.

%(params_basic)s %(broyden_params)s %(params_extra)s

This algorithm implements the inverse Jacobian Quasi-Newton update

$$H_+ = H + (dx - H\,df)\,df^\dagger / (df^\dagger df)$$

corresponding to Broyden’s second method.

 [1] B.A. van der Rotten, PhD thesis, “A limited memory Broyden method to solve high-dimensional systems of nonlinear equations”. Mathematisch Instituut, Universiteit Leiden, The Netherlands (2003). http://www.math.leidenuniv.nl/scripties/Rotten.pdf
`_update`(x, f, dx, df, dx_norm, df_norm)
class `Anderson`(alpha=None, w0=0.01, M=5)

Find a root of a function, using (extended) Anderson mixing.

The Jacobian is formed by looking for the ‘best’ solution in the space spanned by the last M vectors. As a result, only an MxM matrix inversion and MxN multiplications are required. [Ey]

%(params_basic)s alpha : float, optional

Initial guess for the Jacobian is (-1/alpha).
M : int, optional
Number of previous vectors to retain. Defaults to 5.
w0 : float, optional
Regularization parameter for numerical stability. Good values are of the order of 0.01, compared to unity.

%(params_extra)s

 [Ey] Eyert, J. Comp. Phys., 124, 271 (1996).
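As an illustration, a small system where mixing-type methods behave well: a dominant diagonal linear part plus a weak cubic nonlinearity, with its root at zero. The test function is a hypothetical example, not part of the module.

```python
import numpy as np
from scipy.optimize import anderson

def F(x):
    # dominant linear part -d*x plus a weak cubic term; root at x = 0
    d = np.array([3.0, 2.0, 1.5, 1.0, 0.5])
    return -d * x - 0.01 * (x @ x) * x

x = anderson(F, np.ones(5), M=5, w0=0.01)
```

Here `M` and `w0` are simply the defaults made explicit.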
`__init__`(alpha=None, w0=0.01, M=5)
`solve`(f, tol=0)
`matvec`(f)
`_update`(x, f, dx, df, dx_norm, df_norm)
class `DiagBroyden`(alpha=None)

Find a root of a function, using diagonal Broyden Jacobian approximation.

The Jacobian approximation is derived from previous iterations, by retaining only the diagonal of Broyden matrices.

Warning

This algorithm may be useful for specific problems, but whether it will work may depend strongly on the problem.

%(params_basic)s alpha : float, optional

Initial guess for the Jacobian is (-1/alpha).

%(params_extra)s

`__init__`(alpha=None)
`setup`(x, F, func)
`solve`(f, tol=0)
`matvec`(f)
`rsolve`(f, tol=0)
`rmatvec`(f)
`todense`()
`_update`(x, f, dx, df, dx_norm, df_norm)
class `LinearMixing`(alpha=None)

Find a root of a function, using a scalar Jacobian approximation.

Warning

This algorithm may be useful for specific problems, but whether it will work may depend strongly on the problem.

%(params_basic)s alpha : float, optional

The Jacobian approximation is (-1/alpha).

%(params_extra)s

`__init__`(alpha=None)
`solve`(f, tol=0)
`matvec`(f)
`rsolve`(f, tol=0)
`rmatvec`(f)
`todense`()
`_update`(x, f, dx, df, dx_norm, df_norm)
class `ExcitingMixing`(alpha=None, alphamax=1.0)

Find a root of a function, using a tuned diagonal Jacobian approximation.

The Jacobian matrix is diagonal and is tuned on each iteration.

Warning

This algorithm may be useful for specific problems, but whether it will work may depend strongly on the problem.

%(params_basic)s alpha : float, optional

Initial Jacobian approximation is (-1/alpha).
alphamax : float, optional
The entries of the diagonal Jacobian are kept in the range `[alpha, alphamax]`.

%(params_extra)s

`__init__`(alpha=None, alphamax=1.0)
`setup`(x, F, func)
`solve`(f, tol=0)
`matvec`(f)
`rsolve`(f, tol=0)
`rmatvec`(f)
`todense`()
`_update`(x, f, dx, df, dx_norm, df_norm)
class `KrylovJacobian`(rdiff=None, method="lgmres", inner_maxiter=20, inner_M=None, outer_k=10, **kw)

Find a root of a function, using Krylov approximation for inverse Jacobian.

This method is suitable for solving large-scale problems.

%(params_basic)s rdiff : float, optional

Relative step size to use in numerical differentiation.
method : {‘lgmres’, ‘gmres’, ‘bicgstab’, ‘cgs’, ‘minres’} or function

Krylov method to use to approximate the Jacobian. Can be a string, or a function implementing the same interface as the iterative solvers in scipy.sparse.linalg.

The default is scipy.sparse.linalg.lgmres.

inner_M : LinearOperator or InverseJacobian

Preconditioner for the inner Krylov iteration. Note that you can use also inverse Jacobians as (adaptive) preconditioners. For example,

```
>>> from scipy.optimize.nonlin import BroydenFirst, KrylovJacobian
>>> from scipy.optimize.nonlin import InverseJacobian
>>> jac = BroydenFirst()
>>> kjac = KrylovJacobian(inner_M=InverseJacobian(jac))
```

If the preconditioner has a method named ‘update’, it will be called as `update(x, f)` after each nonlinear step, with `x` giving the current point, and `f` the current function value.

inner_tol, inner_maxiter, …
Parameters to pass on to the “inner” Krylov solver. See scipy.sparse.linalg.gmres for details.
outer_k : int, optional
Size of the subspace kept across LGMRES nonlinear iterations. See scipy.sparse.linalg.lgmres for details.

%(params_extra)s

See also: `scipy.sparse.linalg.gmres`, `scipy.sparse.linalg.lgmres`

This function implements a Newton-Krylov solver. The basic idea is to compute the inverse of the Jacobian with an iterative Krylov method. These methods require only evaluating Jacobian-vector products, which are conveniently approximated by a finite difference:

$$J v \approx (f(x + \omega\,v/|v|) - f(x)) / \omega$$

Due to the use of iterative matrix inverses, these methods can deal with large nonlinear problems.
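The finite-difference Jacobian-vector product can be sketched as follows; the system `F` and the helper `jac_vec` are hypothetical illustrations, not names from the module.

```python
import numpy as np

def F(x):
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] - x[1] ** 2 + 1.0])

def jac_vec(F, x, v, omega=1e-7):
    # forward difference along v/|v|, rescaled to approximate J(x) @ v
    sc = omega / np.linalg.norm(v)
    return (F(x + sc * v) - F(x)) / sc

x = np.array([1.0, 2.0])
v = np.array([1.0, 1.0])
J = np.array([[2 * x[0], 1.0], [1.0, -2 * x[1]]])  # exact Jacobian at x
approx = jac_vec(F, x, v)
```

For this smooth system the finite-difference product agrees with the exact `J @ v` to within the truncation error of the difference step.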

SciPy’s scipy.sparse.linalg module offers a selection of Krylov solvers to choose from. The default here is lgmres, a variant of restarted GMRES iteration that reuses some of the information obtained in previous Newton steps to invert Jacobians in subsequent steps.

For a review on Newton-Krylov methods, see for example [1]_, and for the LGMRES sparse inverse method, see [2]_.

 [1] D.A. Knoll and D.E. Keyes, J. Comp. Phys. 193, 357 (2004). :doi:`10.1016/j.jcp.2003.08.010`
 [2] A.H. Baker and E.R. Jessup and T. Manteuffel, SIAM J. Matrix Anal. Appl. 26, 962 (2005). :doi:`10.1137/S0895479803422014`
`__init__`(rdiff=None, method="lgmres", inner_maxiter=20, inner_M=None, outer_k=10, **kw)
`_update_diff_step`()
`matvec`(v)
`solve`(rhs, tol=0)
`update`(x, f)
`setup`(x, f, func)
`_nonlin_wrapper`(name, jac)

Construct a solver wrapper with given name and jacobian approx.

It inspects the keyword arguments of `jac.__init__` and allows the same arguments to be used in the wrapper function, in addition to the keyword arguments of `nonlin_solve`.