# `optimize._root`¶

Unified interfaces to root finding algorithms.

## Functions¶

• root : find a root of a vector function.

## Module Contents¶

### Functions¶

`root`(fun, x0, args=(), method="hybr", jac=None, tol=None, callback=None, options=None)
    Find a root of a vector function.
`_warn_jac_unused`(jac, method)
`_root_leastsq`(func, x0, args=(), jac=None, col_deriv=0, xtol=1.49012e-08, ftol=1.49012e-08, gtol=0.0, maxiter=0, eps=0.0, factor=100, diag=None, **unknown_options)
    Solve the system in a least-squares sense with Levenberg-Marquardt.
`_root_nonlin_solve`(func, x0, args=(), jac=None, _callback=None, _method=None, nit=None, disp=False, maxiter=None, ftol=None, fatol=None, xtol=None, xatol=None, tol_norm=None, line_search="armijo", jac_options=None, **unknown_options)
`_root_broyden1_doc`()
`_root_broyden2_doc`()
`_root_anderson_doc`()
`_root_linearmixing_doc`()
`_root_diagbroyden_doc`()
`_root_excitingmixing_doc`()
`_root_krylov_doc`()
`root`(fun, x0, args=(), method="hybr", jac=None, tol=None, callback=None, options=None)

Find a root of a vector function.

fun : callable
A vector function to find a root of.
x0 : ndarray
Initial guess.
args : tuple, optional
Extra arguments passed to the objective function and its Jacobian.
method : str, optional

Type of solver. Should be one of

• ‘hybr’
• ‘lm’
• ‘broyden1’
• ‘broyden2’
• ‘anderson’
• ‘linearmixing’
• ‘diagbroyden’
• ‘excitingmixing’
• ‘krylov’
• ‘df-sane’
jac : bool or callable, optional
If jac is a Boolean and is True, fun is assumed to return the value of the Jacobian along with the objective function. If False, the Jacobian will be estimated numerically. jac can also be a callable returning the Jacobian of fun. In this case, it must accept the same arguments as fun.
tol : float, optional
Tolerance for termination. For detailed control, use solver-specific options.
callback : function, optional
Optional callback function. It is called on every iteration as `callback(x, f)`, where x is the current solution and f the corresponding residual. Available for all methods except ‘hybr’ and ‘lm’.
options : dict, optional
A dictionary of solver options. E.g. xtol or maxiter, see `show_options()` for details.
Returns

sol : OptimizeResult
The solution represented as an `OptimizeResult` object. Important attributes are: `x`, the solution array; `success`, a Boolean flag indicating whether the algorithm exited successfully; and `message`, which describes the cause of termination. See OptimizeResult for a description of other attributes.

See also

show_options : Additional options accepted by the solvers.

This section describes the available solvers that can be selected by the ‘method’ parameter. The default method is hybr.

Method hybr uses a modification of the Powell hybrid method as implemented in MINPACK [1].

Method lm solves the system of nonlinear equations in a least-squares sense using a modification of the Levenberg-Marquardt algorithm as implemented in MINPACK [1].

Method df-sane is a derivative-free spectral method [3].

Methods broyden1, broyden2, anderson, linearmixing, diagbroyden, excitingmixing, and krylov are inexact Newton methods with backtracking or full line searches [2]. Each method corresponds to a particular Jacobian approximation. See nonlin for details.

• Method broyden1 uses Broyden’s first Jacobian approximation; it is known as Broyden’s good method.
• Method broyden2 uses Broyden’s second Jacobian approximation; it is known as Broyden’s bad method.
• Method anderson uses (extended) Anderson mixing.
• Method krylov uses a Krylov approximation of the inverse Jacobian. It is suitable for large-scale problems.
• Method diagbroyden uses a diagonal Broyden Jacobian approximation.
• Method linearmixing uses a scalar Jacobian approximation.
• Method excitingmixing uses a tuned diagonal Jacobian approximation.
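As a sketch of how one of these methods is selected in practice, the two-variable system from the Examples section on this page can also be solved with `broyden1`, with no analytic Jacobian supplied:

```python
import numpy as np
from scipy import optimize

# The same two-variable system as in the Examples section, solved with
# the derivative-free 'broyden1' method (no Jacobian supplied).
def fun(x):
    return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
            0.5 * (x[1] - x[0])**3 + x[1]]

sol = optimize.root(fun, [0, 0], method='broyden1')
print(sol.x)  # close to [0.8411639, 0.1588361]
```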

Warning

The algorithms implemented for methods diagbroyden, linearmixing and excitingmixing may be useful for specific problems, but whether they will work may depend strongly on the problem.

New in version 0.11.0.

[1] More, Jorge J., Burton S. Garbow, and Kenneth E. Hillstrom. 1980. User Guide for MINPACK-1.
[2] C. T. Kelley. 1995. Iterative Methods for Linear and Nonlinear Equations. Society for Industrial and Applied Mathematics.
[3] W. La Cruz, J. M. Martinez, and M. Raydan. 2006. "Spectral residual method without gradient information for solving large-scale nonlinear systems of equations." Math. Comp. 75, 1429-1448.

The following functions define a system of nonlinear equations and its Jacobian.

```>>> import numpy as np
>>> def fun(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
```
```>>> def jac(x):
...     return np.array([[1 + 1.5 * (x[0] - x[1])**2,
...                       -1.5 * (x[0] - x[1])**2],
...                      [-1.5 * (x[1] - x[0])**2,
...                       1 + 1.5 * (x[1] - x[0])**2]])
```

A solution can be obtained as follows.

```>>> from scipy import optimize
>>> sol = optimize.root(fun, [0, 0], jac=jac, method='hybr')
>>> sol.x
array([ 0.8411639,  0.1588361])
```
`_warn_jac_unused`(jac, method)
`_root_leastsq`(func, x0, args=(), jac=None, col_deriv=0, xtol=1.49012e-08, ftol=1.49012e-08, gtol=0.0, maxiter=0, eps=0.0, factor=100, diag=None, **unknown_options)

Solve the system in a least-squares sense with Levenberg-Marquardt.

col_deriv : bool
Non-zero to specify that the Jacobian function computes derivatives down the columns (faster, because there is no transpose operation).
ftol : float
Relative error desired in the sum of squares.
xtol : float
Relative error desired in the approximate solution.
gtol : float
Orthogonality desired between the function vector and the columns of the Jacobian.
maxiter : int
The maximum number of calls to the function. If zero, then 100*(N+1) is the maximum where N is the number of elements in x0.
eps : float
A suitable step length for the forward-difference approximation of the Jacobian (for jac=None). If eps is less than the machine precision, it is assumed that the relative errors in the functions are of the order of the machine precision.
factor : float
A parameter determining the initial step bound (`factor * ||diag * x||`). Should be in the interval `(0.1, 100)`.
diag : sequence
N positive entries that serve as scale factors for the variables.
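A minimal sketch of forwarding these `lm`-specific options through `root` via the `options` dict (the system below is illustrative, not taken from this page):

```python
from scipy import optimize

# Illustrative system: a line intersecting an ellipse.
def fun(x):
    return [x[0] + 2.0 * x[1] - 2.0,
            x[0]**2 + 4.0 * x[1]**2 - 4.0]

# Solver-specific options such as xtol, ftol, and maxiter are
# forwarded to the underlying Levenberg-Marquardt code.
sol = optimize.root(fun, [1.0, 1.0], method='lm',
                    options={'xtol': 1e-10, 'ftol': 1e-10, 'maxiter': 200})
print(sol.x, sol.success)
```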
`_root_nonlin_solve`(func, x0, args=(), jac=None, _callback=None, _method=None, nit=None, disp=False, maxiter=None, ftol=None, fatol=None, xtol=None, xatol=None, tol_norm=None, line_search="armijo", jac_options=None, **unknown_options)
`_root_broyden1_doc`()
nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional
Options for the respective Jacobian approximation.
alpha : float, optional
Initial guess for the Jacobian is (-1/alpha).
reduction_method : str or tuple, optional

Method used in ensuring that the rank of the Broyden matrix stays low. Can either be a string giving the name of the method, or a tuple of the form `(method, param1, param2, ...)` that gives the name of the method and values for additional parameters.

Methods available:

• `restart`: drop all matrix columns. Has no extra parameters.
• `simple`: drop oldest matrix column. Has no extra parameters.
• `svd`: keep only the most significant SVD components. Extra parameters:
  • `to_retain`: number of SVD components to retain when rank reduction is done. Default is `max_rank - 2`.
max_rank : int, optional
Maximum rank for the Broyden matrix. Default is infinity (i.e., no rank reduction).
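These `jac_options` are nested inside the `options` dict passed to `root`. A minimal sketch on a hypothetical 2x2 linear system (rank reduction only matters for larger problems, but this shows where the options are supplied):

```python
from scipy import optimize

# Hypothetical linear system with solution (0.8, 0.4).
def fun(x):
    return [x[0] + 0.5 * x[1] - 1.0,
            x[1] - 0.5 * x[0]]

# jac_options is itself a key of the options dict.
sol = optimize.root(fun, [0.0, 0.0], method='broyden1',
                    options={'jac_options': {'reduction_method': 'simple',
                                             'max_rank': 2}})
print(sol.x)
```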
`_root_broyden2_doc`()
nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional

Options for the respective Jacobian approximation.

alpha : float, optional
Initial guess for the Jacobian is (-1/alpha).
reduction_method : str or tuple, optional

Method used in ensuring that the rank of the Broyden matrix stays low. Can either be a string giving the name of the method, or a tuple of the form `(method, param1, param2, ...)` that gives the name of the method and values for additional parameters.

Methods available:

• `restart`: drop all matrix columns. Has no extra parameters.
• `simple`: drop oldest matrix column. Has no extra parameters.
• `svd`: keep only the most significant SVD components. Extra parameters:
  • `to_retain`: number of SVD components to retain when rank reduction is done. Default is `max_rank - 2`.
max_rank : int, optional
Maximum rank for the Broyden matrix. Default is infinity (i.e., no rank reduction).
`_root_anderson_doc`()
nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional

Options for the respective Jacobian approximation.

alpha : float, optional
Initial guess for the Jacobian is (-1/alpha).
M : float, optional
Number of previous vectors to retain. Defaults to 5.
w0 : float, optional
Regularization parameter for numerical stability. Good values are of the order of 0.01 (compared to unity).
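A sketch showing where `M` and `w0` are supplied when selecting the anderson method (the system is a hypothetical one, chosen only for illustration):

```python
from scipy import optimize

# Hypothetical linear system; M and w0 are forwarded via jac_options.
def fun(x):
    return [x[0] + 0.5 * x[1] - 1.0,
            x[1] - 0.5 * x[0]]

sol = optimize.root(fun, [0.0, 0.0], method='anderson',
                    options={'jac_options': {'M': 5, 'w0': 0.01}})
print(sol.x)
```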
`_root_linearmixing_doc`()
nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, `NoConvergence` is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional

Options for the respective Jacobian approximation.

alpha : float, optional
Initial guess for the Jacobian is (-1/alpha).
`_root_diagbroyden_doc`()
nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional

Options for the respective Jacobian approximation.

alpha : float, optional
Initial guess for the Jacobian is (-1/alpha).
`_root_excitingmixing_doc`()
nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional

Options for the respective Jacobian approximation.

alpha : float, optional
Initial Jacobian approximation is (-1/alpha).
alphamax : float, optional
The entries of the diagonal Jacobian are kept in the range `[alpha, alphamax]`.
`_root_krylov_doc`()
nit : int, optional
Number of iterations to make. If omitted (default), make as many as required to meet tolerances.
disp : bool, optional
Print status to stdout on every iteration.
maxiter : int, optional
Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.
ftol : float, optional
Relative tolerance for the residual. If omitted, not used.
fatol : float, optional
Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.
xtol : float, optional
Relative minimum step size. If omitted, not used.
xatol : float, optional
Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.
tol_norm : function(vector) -> scalar, optional
Norm to use in convergence check. Default is the maximum norm.
line_search : {None, ‘armijo’ (default), ‘wolfe’}, optional
Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to ‘armijo’.
jac_options : dict, optional

Options for the respective Jacobian approximation.

rdiff : float, optional
Relative step size to use in numerical differentiation.
method : {‘lgmres’, ‘gmres’, ‘bicgstab’, ‘cgs’, ‘minres’} or function

Krylov method to use to approximate the Jacobian. Can be a string, or a function implementing the same interface as the iterative solvers in scipy.sparse.linalg.

The default is scipy.sparse.linalg.lgmres.

inner_M : LinearOperator or InverseJacobian

Preconditioner for the inner Krylov iteration. Note that you can use also inverse Jacobians as (adaptive) preconditioners. For example,

```>>> from scipy.optimize import BroydenFirst, KrylovJacobian
>>> jac = BroydenFirst()
>>> kjac = KrylovJacobian(inner_M=jac.inverse)
```

If the preconditioner has a method named ‘update’, it will be called as `update(x, f)` after each nonlinear step, with `x` giving the current point, and `f` the current function value.

inner_tol, inner_maxiter, …
Parameters to pass on to the “inner” Krylov solver. See scipy.sparse.linalg.gmres for details.
outer_k : int, optional

Size of the subspace kept across LGMRES nonlinear iterations.

See scipy.sparse.linalg.lgmres for details.
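Putting these pieces together, here is a sketch of the krylov method on a larger system (the classic Broyden tridiagonal test problem, chosen here purely for illustration), with inner-solver options forwarded through `jac_options`:

```python
import numpy as np
from scipy import optimize

# Broyden tridiagonal test problem: a 100-dimensional system whose
# Jacobian is never formed explicitly by the 'krylov' method; only
# Jacobian-vector products are approximated by finite differences.
def fun(x):
    f = (3.0 - 2.0 * x) * x + 1.0
    f[1:] -= x[:-1]
    f[:-1] -= 2.0 * x[1:]
    return f

x0 = np.zeros(100)
sol = optimize.root(fun, x0, method='krylov',
                    options={'jac_options': {'method': 'lgmres',
                                             'inner_maxiter': 20}})
print(sol.success, np.linalg.norm(fun(sol.x), np.inf))
```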