<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<!-- Created by GNU Texinfo 6.5, http://www.gnu.org/software/texinfo/ -->
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Iterative Techniques (GNU Octave (version 5.1.0))</title>
<meta name="description" content="Iterative Techniques (GNU Octave (version 5.1.0))">
<meta name="keywords" content="Iterative Techniques (GNU Octave (version 5.1.0))">
<meta name="resource-type" content="document">
<meta name="distribution" content="global">
<meta name="Generator" content="makeinfo">
<link href="index.html#Top" rel="start" title="Top">
<link href="Concept-Index.html#Concept-Index" rel="index" title="Concept Index">
<link href="index.html#SEC_Contents" rel="contents" title="Table of Contents">
<link href="Sparse-Matrices.html#Sparse-Matrices" rel="up" title="Sparse Matrices">
<link href="Real-Life-Example.html#Real-Life-Example" rel="next" title="Real Life Example">
<link href="Sparse-Linear-Algebra.html#Sparse-Linear-Algebra" rel="prev" title="Sparse Linear Algebra">
<style type="text/css">
<!--
a.summary-letter {text-decoration: none}
blockquote.indentedblock {margin-right: 0em}
blockquote.smallindentedblock {margin-right: 0em; font-size: smaller}
blockquote.smallquotation {font-size: smaller}
div.display {margin-left: 3.2em}
div.example {margin-left: 3.2em}
div.lisp {margin-left: 3.2em}
div.smalldisplay {margin-left: 3.2em}
div.smallexample {margin-left: 3.2em}
div.smalllisp {margin-left: 3.2em}
kbd {font-style: oblique}
pre.display {font-family: inherit}
pre.format {font-family: inherit}
pre.menu-comment {font-family: serif}
pre.menu-preformatted {font-family: serif}
pre.smalldisplay {font-family: inherit; font-size: smaller}
pre.smallexample {font-size: smaller}
pre.smallformat {font-family: inherit; font-size: smaller}
pre.smalllisp {font-size: smaller}
span.nolinebreak {white-space: nowrap}
span.roman {font-family: initial; font-weight: normal}
span.sansserif {font-family: sans-serif; font-weight: normal}
ul.no-bullet {list-style: none}
-->
</style>
<link rel="stylesheet" type="text/css" href="octave.css">
</head>
<body lang="en">
<a name="Iterative-Techniques"></a>
<div class="header">
<p>
Next: <a href="Real-Life-Example.html#Real-Life-Example" accesskey="n" rel="next">Real Life Example</a>, Previous: <a href="Sparse-Linear-Algebra.html#Sparse-Linear-Algebra" accesskey="p" rel="prev">Sparse Linear Algebra</a>, Up: <a href="Sparse-Matrices.html#Sparse-Matrices" accesskey="u" rel="up">Sparse Matrices</a> [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Concept-Index.html#Concept-Index" title="Index" rel="index">Index</a>]</p>
</div>
<hr>
<a name="Iterative-Techniques-Applied-to-Sparse-Matrices"></a>
<h3 class="section">22.3 Iterative Techniques Applied to Sparse Matrices</h3>

<p>The left division <code>\</code> and right division <code>/</code> operators,
discussed in the previous section, use direct solvers to solve a linear
equation of the form <code><var>x</var> = <var>A</var> \ <var>b</var></code> or
<code><var>x</var> = <var>b</var> / <var>A</var></code>.  Octave also includes a number of
functions to solve sparse linear equations using iterative techniques.
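</p>

<p>As a minimal sketch of the difference (using the same symmetric
positive definite tridiagonal test matrix as the <code>pcg</code> examples
below; the tolerance and iteration limit are illustrative choices, not
recommendations), the same system can be solved either way:
</p>
<div class="example">
<pre class="example">n = 100;
A = toeplitz (sparse ([1, 1], [1, 2], [2, 1], 1, n)); # SPD tridiagonal
b = A * ones (n, 1);            # right-hand side with known solution
x_direct = A \ b;               # direct sparse solver
x_iter = pcg (A, b, 1e-8, 200); # iterative solver
norm (x_direct - x_iter)        # the two answers agree closely
</pre></div>
<p>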
</p> <a name="XREFpcg"></a><dl> <dt><a name="index-pcg"></a><em><var>x</var> =</em> <strong>pcg</strong> <em>(<var>A</var>, <var>b</var>, <var>tol</var>, <var>maxit</var>, <var>m1</var>, <var>m2</var>, <var>x0</var>, …)</em></dt> <dt><a name="index-pcg-1"></a><em><var>x</var> =</em> <strong>pcg</strong> <em>(<var>A</var>, <var>b</var>, <var>tol</var>, <var>maxit</var>, <var>M</var>, [], <var>x0</var>, …)</em></dt> <dt><a name="index-pcg-2"></a><em>[<var>x</var>, <var>flag</var>, <var>relres</var>, <var>iter</var>, <var>resvec</var>, <var>eigest</var>] =</em> <strong>pcg</strong> <em>(<var>A</var>, <var>b</var>, …)</em></dt> <dd> <p>Solve the linear system of equations <code><var>A</var> * <var>x</var> = <var>b</var></code><!-- /@w --> by means of the Preconditioned Conjugate Gradient iterative method. </p> <p>The input arguments are: </p> <ul> <li> <var>A</var> is the matrix of the linear system and it must be square. <var>A</var> can be passed as a matrix, function handle, or inline function <code>Afun</code> such that <code>Afun(x) = A * x</code>. Additional parameters to <code>Afun</code> may be passed after <var>x0</var>. <p><var>A</var> has to be Hermitian and Positive Definite (HPD). If <code>pcg</code> detects <var>A</var> not to be positive definite, a warning is printed and the <var>flag</var> output is set. </p> </li><li> <var>b</var> is the right-hand side vector. </li><li> <var>tol</var> is the required relative tolerance for the residual error, <code><var>b</var> <span class="nolinebreak">-</span> <var>A</var> * <var>x</var></code><!-- /@w -->. The iteration stops if <code>norm (<var>b</var> <span class="nolinebreak">-</span> <var>A</var> * <var>x</var>)</code> ≤ <code><var>tol</var> * norm (<var>b</var>)</code><!-- /@w --><!-- /@w -->. If <var>tol</var> is omitted or empty, then a tolerance of 1e-6 is used. </li><li> <var>maxit</var> is the maximum allowed number of iterations; if <var>maxit</var> is omitted or empty then a value of 20 is used. </li><li> <var>m</var> is a HPD preconditioning matrix. For any decomposition <code><var>m</var> = <var>p1</var> * <var>p2</var></code> such that <code>inv (<var>p1</var>) * <var>A</var> * inv (<var>p2</var>)</code><!-- /@w --> is HPD, the conjugate gradient method is formally applied to the linear system <code>inv (<var>p1</var>) * <var>A</var> * inv (<var>p2</var>) * <var>y</var> = inv (<var>p1</var>) * <var>b</var></code><!-- /@w -->, with <code><var>x</var> = inv (<var>p2</var>) * <var>y</var></code> (split preconditioning). In practice, at each iteration of the conjugate gradient method a linear system with matrix <var>m</var> is solved with <code>mldivide</code>. If a particular factorization <code><var>m</var> = <var>m1</var> * <var>m2</var></code> is available (for instance, an incomplete Cholesky factorization of <var>a</var>), the two matrices <var>m1</var> and <var>m2</var> can be passed and the relative linear systems are solved with the <code>mldivide</code> operator. Note that a proper choice of the preconditioner may dramatically improve the overall performance of the method. Instead of matrices <var>m1</var> and <var>m2</var>, the user may pass two functions which return the results of applying the inverse of <var>m1</var> and <var>m2</var> to a vector. If <var>m1</var> is omitted or empty <code>[]</code>, then no preconditioning is applied. If no factorization of <var>m</var> is available, <var>m2</var> can be omitted or left [], and the input variable <var>m1</var> can be used to pass the preconditioner <var>m</var>. 
</li><li> <var>x0</var> is the initial guess.  If <var>x0</var> is omitted or
empty then the function sets <var>x0</var> to a zero vector by default.
</li></ul>

<p>The arguments which follow <var>x0</var> are treated as parameters, and
passed in an appropriate manner to any of the functions (<var>A</var> or
<var>m1</var> or <var>m2</var>) that have been given to <code>pcg</code>.  See the
examples below for further details.
</p>
<p>The output arguments are:
</p>
<ul>
<li> <var>x</var> is the computed approximation to the solution of
<code><var>A</var> * <var>x</var> = <var>b</var></code><!-- /@w -->.  If the algorithm did not
converge, then <var>x</var> is the iterate with the minimum residual.

</li><li> <var>flag</var> reports on the convergence:

<ul>
<li> 0: The algorithm converged to within the prescribed tolerance.

</li><li> 1: The algorithm did not converge and it reached the maximum
number of iterations.

</li><li> 2: The preconditioner matrix is singular.

</li><li> 3: The algorithm stagnated, i.e., the absolute value of the
difference between the current iterate <var>x</var> and the previous one is
less than <code><var>eps</var> * norm (<var>x</var>,2)</code>.

</li><li> 4: The algorithm detects that the input (preconditioned) matrix
is not HPD.
</li></ul>

</li><li> <var>relres</var> is the ratio of the final residual to its initial
value, measured in the Euclidean norm.

</li><li> <var>iter</var> indicates the iteration at which the returned
<var>x</var> was computed.  Since the output <var>x</var> corresponds to the
minimal residual solution, the total number of iterations that the
method performed is given by <code>length(resvec) - 1</code>.

</li><li> <var>resvec</var> describes the convergence history of the method.
<code><var>resvec</var> (<var>i</var>, 1)</code> is the Euclidean norm of the residual,
and <code><var>resvec</var> (<var>i</var>, 2)</code> is the preconditioned residual
norm, after the (<var>i</var>-1)-th iteration,
<code><var>i</var> = 1, 2, …, <var>iter</var>+1</code>.  The preconditioned residual
norm is defined as <code><var>r</var>' * (<var>m</var> \ <var>r</var>)</code> where
<code><var>r</var> = <var>b</var> - <var>A</var> * <var>x</var></code>, see also the description
of <var>m</var>.  If <var>eigest</var> is not required, only
<code><var>resvec</var> (:, 1)</code> is returned.

</li><li> <var>eigest</var> returns the estimate for the smallest
<code><var>eigest</var>(1)</code> and largest <code><var>eigest</var>(2)</code> eigenvalues
of the preconditioned matrix <code><var>P</var> = <var>m</var> \ <var>A</var></code><!-- /@w -->.
In particular, if no preconditioning is used, the estimates for the
extreme eigenvalues of <var>A</var> are returned.  <code><var>eigest</var>(1)</code> is
an overestimate and <code><var>eigest</var>(2)</code> is an underestimate, so that
<code><var>eigest</var>(2) / <var>eigest</var>(1)</code> is a lower bound for
<code>cond (<var>P</var>, 2)</code> which, in the limit, should theoretically equal
the actual condition number.
</li></ul>

<p>Let us consider a trivial problem with a tridiagonal matrix
</p>
<div class="example">
<pre class="example">n = 10;
A = toeplitz (sparse ([1, 1], [1, 2], [2, 1], 1, n));
b = A * ones (n, 1);
M1 = ichol (A); # in this tridiagonal case it corresponds to chol (A)'
M2 = M1';
M = M1 * M2;
Afun = @(x) A * x;
Mfun = @(x) M \ x;
M1fun = @(x) M1 \ x;
M2fun = @(x) M2 \ x;
</pre></div>

<p><small>EXAMPLE 1:</small> Simplest use of <code>pcg</code>
</p>
<div class="example">
<pre class="example">x = pcg (A, b)
</pre></div>

<p><small>EXAMPLE 2:</small> <code>pcg</code> with a function which computes
<code><var>A</var> * <var>x</var></code>
</p>
<div class="example">
<pre class="example">x = pcg (Afun, b)
</pre></div>

<p><small>EXAMPLE 3:</small> <code>pcg</code> with a preconditioner matrix <var>M</var>
</p>
<div class="example">
<pre class="example">x = pcg (A, b, 1e-06, 100, M)
</pre></div>

<p><small>EXAMPLE 4:</small> <code>pcg</code> with a function as preconditioner
</p>
<div class="example">
<pre class="example">x = pcg (Afun, b, 1e-6, 100, Mfun)
</pre></div>

<p><small>EXAMPLE 5:</small> <code>pcg</code> with preconditioner matrices <var>M1</var>
and <var>M2</var>
</p>
<div class="example">
<pre class="example">x = pcg (A, b, 1e-6, 100, M1, M2)
</pre></div>

<p><small>EXAMPLE 6:</small> <code>pcg</code> with functions as preconditioners
</p>
<div class="example">
<pre class="example">x = pcg (Afun, b, 1e-6, 100, M1fun, M2fun)
</pre></div>

<p><small>EXAMPLE 7:</small> <code>pcg</code> with a function input that requires an
extra argument
</p>
<div class="example">
<pre class="example">function y = Ap (A, x, p) # compute A^p * x
  y = x;
  for i = 1:p
    y = A * y;
  endfor
endfunction
Apfun = @(x, p) Ap (A, x, p);
x = pcg (Apfun, b, [], [], [], [], [], 2);
</pre></div>

<p><small>EXAMPLE 8:</small> explicit example to show that <code>pcg</code> uses a
split preconditioner
</p>
<div class="example">
<pre class="example">M1 = ichol (A + 0.1 * eye (n)); # factorization of a perturbed A
M2 = M1';
M = M1 * M2;

## reference solution computed by pcg after two iterations
[x_ref, fl] = pcg (A, b, [], 2, M)

## split preconditioning
[y, fl] = pcg ((M1 \ A) / M2, M1 \ b, [], 2)
x = M2 \ y # compare x and x_ref
</pre></div>

<p>References:
</p>
<ol>
<li> C.T. Kelley, <cite>Iterative Methods for Linear and Nonlinear
Equations</cite>, SIAM, 1995. (the base PCG algorithm)

</li><li> Y. Saad, <cite>Iterative Methods for Sparse Linear
Systems</cite>, PWS, 1996. (condition number estimate from PCG)  A revised
version of this book is available online at
<a href="https://www-users.cs.umn.edu/~saad/books.html">https://www-users.cs.umn.edu/~saad/books.html</a>
</li></ol>

<p><strong>See also:</strong> <a href="Creating-Sparse-Matrices.html#XREFsparse">sparse</a>, <a href="#XREFpcr">pcr</a>, <a href="Specialized-Solvers.html#XREFgmres">gmres</a>, <a href="Specialized-Solvers.html#XREFbicg">bicg</a>, <a href="Specialized-Solvers.html#XREFbicgstab">bicgstab</a>, <a href="Specialized-Solvers.html#XREFcgs">cgs</a>.
</p></dd></dl> <a name="XREFpcr"></a><dl> <dt><a name="index-pcr"></a><em><var>x</var> =</em> <strong>pcr</strong> <em>(<var>A</var>, <var>b</var>, <var>tol</var>, <var>maxit</var>, <var>m</var>, <var>x0</var>, …)</em></dt> <dt><a name="index-pcr-1"></a><em>[<var>x</var>, <var>flag</var>, <var>relres</var>, <var>iter</var>, <var>resvec</var>] =</em> <strong>pcr</strong> <em>(…)</em></dt> <dd> <p>Solve the linear system of equations <code><var>A</var> * <var>x</var> = <var>b</var></code> by means of the Preconditioned Conjugate Residuals iterative method. </p> <p>The input arguments are </p> <ul> <li> <var>A</var> can be either a square (preferably sparse) matrix or a function handle, inline function or string containing the name of a function which computes <code><var>A</var> * <var>x</var></code>. In principle <var>A</var> should be symmetric and non-singular; if <code>pcr</code> finds <var>A</var> to be numerically singular, you will get a warning message and the <var>flag</var> output parameter will be set. </li><li> <var>b</var> is the right hand side vector. </li><li> <var>tol</var> is the required relative tolerance for the residual error, <code><var>b</var> - <var>A</var> * <var>x</var></code>. The iteration stops if <code>norm (<var>b</var> - <var>A</var> * <var>x</var>) <= <var>tol</var> * norm (<var>b</var> - <var>A</var> * <var>x0</var>)</code>. If <var>tol</var> is empty or is omitted, the function sets <code><var>tol</var> = 1e-6</code> by default. </li><li> <var>maxit</var> is the maximum allowable number of iterations; if <code>[]</code> is supplied for <var>maxit</var>, or <code>pcr</code> has less arguments, a default value equal to 20 is used. </li><li> <var>m</var> is the (left) preconditioning matrix, so that the iteration is (theoretically) equivalent to solving by <code>pcr</code> <code><var>P</var> * <var>x</var> = <var>m</var> \ <var>b</var></code>, with <code><var>P</var> = <var>m</var> \ <var>A</var></code>. Note that a proper choice of the preconditioner may dramatically improve the overall performance of the method. Instead of matrix <var>m</var>, the user may pass a function which returns the results of applying the inverse of <var>m</var> to a vector (usually this is the preferred way of using the preconditioner). If <code>[]</code> is supplied for <var>m</var>, or <var>m</var> is omitted, no preconditioning is applied. </li><li> <var>x0</var> is the initial guess. If <var>x0</var> is empty or omitted, the function sets <var>x0</var> to a zero vector by default. </li></ul> <p>The arguments which follow <var>x0</var> are treated as parameters, and passed in a proper way to any of the functions (<var>A</var> or <var>m</var>) which are passed to <code>pcr</code>. See the examples below for further details. </p> <p>The output arguments are </p> <ul> <li> <var>x</var> is the computed approximation to the solution of <code><var>A</var> * <var>x</var> = <var>b</var></code>. </li><li> <var>flag</var> reports on the convergence. <code><var>flag</var> = 0</code> means the solution converged and the tolerance criterion given by <var>tol</var> is satisfied. <code><var>flag</var> = 1</code> means that the <var>maxit</var> limit for the iteration count was reached. <code><var>flag</var> = 3</code> reports a <code>pcr</code> breakdown, see [1] for details. </li><li> <var>relres</var> is the ratio of the final residual to its initial value, measured in the Euclidean norm. </li><li> <var>iter</var> is the actual number of iterations performed. 
</li><li> <var>resvec</var> describes the convergence history of the method,
so that <code><var>resvec</var> (i)</code> contains the Euclidean norm of the
residual after the (<var>i</var>-1)-th iteration,
<code><var>i</var> = 1, 2, …, <var>iter</var>+1</code>.
</li></ul>

<p>Let us consider a trivial problem with a diagonal matrix (we exploit
the sparsity of <var>A</var>)
</p>
<div class="example">
<pre class="example">n = 10;
A = sparse (diag (1:n));
b = rand (n, 1);
</pre></div>

<p><small>EXAMPLE 1:</small> Simplest use of <code>pcr</code>
</p>
<div class="example">
<pre class="example">x = pcr (A, b)
</pre></div>

<p><small>EXAMPLE 2:</small> <code>pcr</code> with a function which computes
<code><var>A</var> * <var>x</var></code>.
</p>
<div class="example">
<pre class="example">function y = apply_a (x)
  y = [1:10]' .* x;
endfunction

x = pcr ("apply_a", b)
</pre></div>

<p><small>EXAMPLE 3:</small> Preconditioned iteration, with full diagnostics.
The preconditioner (quite strange, because even the original matrix
<var>A</var> is trivial) is defined as a function
</p>
<div class="example">
<pre class="example">function y = apply_m (x)
  k = floor (length (x) - 2);
  y = x;
  y(1:k) = x(1:k) ./ [1:k]';
endfunction

[x, flag, relres, iter, resvec] = ...
                   pcr (A, b, [], [], "apply_m")
semilogy ([1:iter+1], resvec);
</pre></div>

<p><small>EXAMPLE 4:</small> Finally, a preconditioner which depends on a
parameter <var>k</var>.
</p>
<div class="example">
<pre class="example">function y = apply_m (x, varargin)
  k = varargin{1};
  y = x;
  y(1:k) = x(1:k) ./ [1:k]';
endfunction

[x, flag, relres, iter, resvec] = ...
                   pcr (A, b, [], [], "apply_m", [], 3)
</pre></div>

<p>References:
</p>
<p>[1] W. Hackbusch, <cite>Iterative Solution of Large Sparse Systems of
Equations</cite>, section 9.5.4; Springer, 1994
</p>
<p><strong>See also:</strong> <a href="Creating-Sparse-Matrices.html#XREFsparse">sparse</a>, <a href="#XREFpcg">pcg</a>.
</p></dd></dl>

<p>The speed with which an iterative solver converges to a solution can
be accelerated by the use of a preconditioning matrix <var>M</var>.  In this
case the preconditioned system
<code><var>M</var>^-1 * <var>A</var> * <var>x</var> = <var>M</var>^-1 * <var>b</var></code> is
solved instead, which ideally has a much better conditioned matrix than
<var>A</var> itself.  Typical preconditioning matrices are partial
factorizations of the original matrix, such as those computed by
<code>ichol</code> and <code>ilu</code> below (a short sketch of using
<code>ichol</code> as a preconditioner for <code>pcg</code> appears at the end of
the <code>ichol</code> entry).
</p>

<a name="XREFichol"></a><dl>
<dt><a name="index-ichol"></a><em><var>L</var> =</em> <strong>ichol</strong> <em>(<var>A</var>)</em></dt>
<dt><a name="index-ichol-1"></a><em><var>L</var> =</em> <strong>ichol</strong> <em>(<var>A</var>, <var>opts</var>)</em></dt>
<dd>
<p>Compute the incomplete Cholesky factorization of the sparse square
matrix <var>A</var>.
</p>
<p>By default, <code>ichol</code> uses only the lower triangle of <var>A</var> and
produces a lower triangular factor <var>L</var> such that <code>L*L'</code>
approximates <var>A</var>.
</p>
<p>The factor given by this routine may be useful as a preconditioner
for a system of linear equations being solved by iterative methods such
as PCG (Preconditioned Conjugate Gradient).
</p>
<p>The factorization may be modified by passing options in a structure
<var>opts</var>.  The option name is a field of the structure and the
setting is the value of that field.  Names and specifiers are case
sensitive.
</p>
<dl compact="compact">
<dt>type</dt>
<dd><p>Type of factorization.
</p>
<dl compact="compact">
<dt><code>"nofill"</code> (default)</dt>
<dd><p>Incomplete Cholesky factorization with no fill-in (IC(0)).
</p>
</dd>
<dt><code>"ict"</code></dt>
<dd><p>Incomplete Cholesky factorization with threshold dropping (ICT).
</p></dd>
</dl>

</dd>
<dt>diagcomp</dt>
<dd><p>A non-negative scalar <var>alpha</var> for incomplete Cholesky
factorization of <code><var>A</var> + <var>alpha</var> * diag (diag (<var>A</var>))</code>
instead of <var>A</var>.  This can be useful when <var>A</var> is not positive
definite.  The default value is 0.
</p>
</dd>
<dt>droptol</dt>
<dd><p>A non-negative scalar specifying the drop tolerance for the
factorization if performing ICT.  The default value is 0, which
produces the complete Cholesky factorization.
</p>
<p>Non-diagonal entries of <var>L</var> are set to 0 unless
</p>
<p><code>abs (<var>L</var>(i,j)) >= droptol * norm (<var>A</var>(j:end, j), 1)</code>.
</p>
</dd>
<dt>michol</dt>
<dd><p>Modified incomplete Cholesky factorization:
</p>
<dl compact="compact">
<dt><code>"off"</code> (default)</dt>
<dd><p>Row and column sums are not necessarily preserved.
</p>
</dd>
<dt><code>"on"</code></dt>
<dd><p>The diagonal of <var>L</var> is modified so that row (and column) sums
are preserved even when elements have been dropped during the
factorization.  The relationship preserved is:
<code><var>A</var> * e = <var>L</var> * <var>L</var>' * e</code>, where e is a vector of
ones.
</p></dd>
</dl>

</dd>
<dt>shape</dt>
<dd>
<dl compact="compact">
<dt><code>"lower"</code> (default)</dt>
<dd><p>Use only the lower triangle of <var>A</var> and return a lower
triangular factor <var>L</var> such that <code>L*L'</code> approximates <var>A</var>.
</p>
</dd>
<dt><code>"upper"</code></dt>
<dd><p>Use only the upper triangle of <var>A</var> and return an upper
triangular factor <var>U</var> such that <code>U'*U</code> approximates <var>A</var>.
</p></dd>
</dl>
</dd>
</dl>

<p>EXAMPLES
</p>
<p>The following example demonstrates how to factorize a sample
symmetric positive definite matrix with the full Cholesky decomposition
and with the incomplete one.
</p>
<div class="example">
<pre class="example">A = [ 0.37, -0.05,  -0.05,  -0.07;
     -0.05,  0.116,  0.0,   -0.05;
     -0.05,  0.0,    0.116, -0.05;
     -0.07, -0.05,  -0.05,   0.202];
A = sparse (A);
nnz (tril (A))
ans = 9
L = chol (A, "lower");
nnz (L)
ans = 10
norm (A - L * L', "fro") / norm (A, "fro")
ans = 1.1993e-16
opts.type = "nofill";
L = ichol (A, opts);
nnz (L)
ans = 9
norm (A - L * L', "fro") / norm (A, "fro")
ans = 0.019736
</pre></div>

<p>Another example factorizes a finite difference matrix used to solve
a boundary value problem on the unit square.
</p>
<div class="example">
<pre class="example">nx = 400; ny = 200;
hx = 1 / (nx + 1); hy = 1 / (ny + 1);
Dxx = spdiags ([ones(nx, 1), -2*ones(nx, 1), ones(nx, 1)],
               [-1 0 1], nx, nx) / (hx ^ 2);
Dyy = spdiags ([ones(ny, 1), -2*ones(ny, 1), ones(ny, 1)],
               [-1 0 1], ny, ny) / (hy ^ 2);
A = -kron (Dxx, speye (ny)) - kron (speye (nx), Dyy);
nnz (tril (A))
ans = 239400
opts.type = "nofill";
L = ichol (A, opts);
nnz (L)
ans = 239400
norm (A - L * L', "fro") / norm (A, "fro")
ans = 0.062327
</pre></div>

<p>References for implemented algorithms:
</p>
<p>[1] Y. Saad.  "Preconditioning Techniques."  <cite>Iterative Methods
for Sparse Linear Systems</cite>, PWS Publishing Company, 1996.
</p>
<p>[2] M. Jones, P. Plassmann: <cite>An Improved Incomplete Cholesky
Factorization</cite>, 1992.
</p>
<p><strong>See also:</strong> <a href="Matrix-Factorizations.html#XREFchol">chol</a>, <a href="#XREFilu">ilu</a>, <a href="#XREFpcg">pcg</a>.
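</p>

<p>As a minimal sketch of the intended usage (on a smaller instance of
the finite difference matrix above; the grid size, tolerance, and
iteration limit are illustrative choices, not recommendations), the
incomplete factor can be passed to <code>pcg</code> as a split preconditioner:
</p>
<div class="example">
<pre class="example">n = 100;
h = 1 / (n + 1);
D = spdiags ([ones(n, 1), -2*ones(n, 1), ones(n, 1)],
             [-1 0 1], n, n) / (h ^ 2);
A = -kron (D, speye (n)) - kron (speye (n), D);
b = A * ones (n^2, 1);  # right-hand side with known solution
opts.type = "nofill";
L = ichol (A, opts);    # L * L' approximates A
[x0, fl0, rr0, it0] = pcg (A, b, 1e-8, 1000);        # no preconditioner
[x1, fl1, rr1, it1] = pcg (A, b, 1e-8, 1000, L, L'); # IC(0) preconditioner
[it0, it1]   # the preconditioned run needs markedly fewer iterations
</pre></div>
<p>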
</p></dd></dl> <a name="XREFilu"></a><dl> <dt><a name="index-ilu"></a><em></em> <strong>ilu</strong> <em>(<var>A</var>)</em></dt> <dt><a name="index-ilu-1"></a><em></em> <strong>ilu</strong> <em>(<var>A</var>, <var>opts</var>)</em></dt> <dt><a name="index-ilu-2"></a><em>[<var>L</var>, <var>U</var>] =</em> <strong>ilu</strong> <em>(…)</em></dt> <dt><a name="index-ilu-3"></a><em>[<var>L</var>, <var>U</var>, <var>P</var>] =</em> <strong>ilu</strong> <em>(…)</em></dt> <dd> <p>Compute the incomplete LU factorization of the sparse square matrix <var>A</var>. </p> <p><code>ilu</code> returns a unit lower triangular matrix <var>L</var>, an upper triangular matrix <var>U</var>, and optionally a permutation matrix <var>P</var>, such that <code><var>L</var>*<var>U</var></code> approximates <code><var>P</var>*<var>A</var></code>. </p> <p>The factors given by this routine may be useful as preconditioners for a system of linear equations being solved by iterative methods such as BICG (BiConjugate Gradients) or GMRES (Generalized Minimum Residual Method). </p> <p>The factorization may be modified by passing options in a structure <var>opts</var>. The option name is a field of the structure and the setting is the value of field. Names and specifiers are case sensitive. </p> <dl compact="compact"> <dt><code>type</code></dt> <dd><p>Type of factorization. </p> <dl compact="compact"> <dt><code>"nofill"</code> (default)</dt> <dd><p>ILU factorization with no fill-in (ILU(0)). </p> <p>Additional supported options: <code>milu</code>. </p> </dd> <dt><code>"crout"</code></dt> <dd><p>Crout version of ILU factorization (ILUC). </p> <p>Additional supported options: <code>milu</code>, <code>droptol</code>. </p> </dd> <dt><code>"ilutp"</code></dt> <dd><p>ILU factorization with threshold and pivoting. </p> <p>Additional supported options: <code>milu</code>, <code>droptol</code>, <code>udiag</code>, <code>thresh</code>. </p></dd> </dl> </dd> <dt><code>droptol</code></dt> <dd><p>A non-negative scalar specifying the drop tolerance for factorization. The default value is 0 which produces the complete LU factorization. </p> <p>Non-diagonal entries of <var>U</var> are set to 0 unless </p> <p><code>abs (<var>U</var>(i,j)) >= droptol * norm (<var>A</var>(:,j))</code>. </p> <p>Non-diagonal entries of <var>L</var> are set to 0 unless </p> <p><code>abs (<var>L</var>(i,j)) >= droptol * norm (<var>A</var>(:,j))/<var>U</var>(j,j)</code>. </p> </dd> <dt><code>milu</code></dt> <dd><p>Modified incomplete LU factorization: </p> <dl compact="compact"> <dt><code>"row"</code></dt> <dd><p>Row-sum modified incomplete LU factorization. The factorization preserves row sums: <code><var>A</var> * e = <var>L</var> * <var>U</var> * e</code>, where e is a vector of ones. </p> </dd> <dt><code>"col"</code></dt> <dd><p>Column-sum modified incomplete LU factorization. The factorization preserves column sums: <code>e' * <var>A</var> = e' * <var>L</var> * <var>U</var></code>. </p> </dd> <dt><code>"off"</code> (default)</dt> <dd><p>Row and column sums are not necessarily preserved. </p></dd> </dl> </dd> <dt><code>udiag</code></dt> <dd><p>If true, any zeros on the diagonal of the upper triangular factor are replaced by the local drop tolerance <code>droptol * norm (<var>A</var>(:,j))/<var>U</var>(j,j)</code>. The default is false. </p> </dd> <dt><code>thresh</code></dt> <dd><p>Pivot threshold for factorization. It can range between 0 (diagonal pivoting) and 1 (default), where the maximum magnitude entry in the column is chosen to be the pivot. 
</p></dd>
</dl>

<p>If <code>ilu</code> is called with just one output, the returned matrix is
<code><var>L</var> + <var>U</var> - speye (size (<var>A</var>))</code>, where <var>L</var> is
unit lower triangular and <var>U</var> is upper triangular.
</p>
<p>With two outputs, <code>ilu</code> returns a unit lower triangular matrix
<var>L</var> and an upper triangular matrix <var>U</var>.  For
<var>opts</var>.type == <code>"ilutp"</code>, one of the factors is permuted
based on the value of <var>opts</var>.milu.  When
<var>opts</var>.milu == <code>"row"</code>, <var>U</var> is a column-permuted upper
triangular factor.  Otherwise, <var>L</var> is a row-permuted unit lower
triangular factor.
</p>
<p>If there are three named outputs and <var>opts</var>.milu != <code>"row"</code>,
<var>P</var> is returned such that <var>L</var> and <var>U</var> are incomplete
factors of <code><var>P</var>*<var>A</var></code>.  When
<var>opts</var>.milu == <code>"row"</code>, <var>P</var> is returned such that
<var>L</var> and <var>U</var> are incomplete factors of <code><var>A</var>*<var>P</var></code>.
</p>
<p>EXAMPLES
</p>
<div class="example">
<pre class="example">A = gallery ("neumann", 1600) + speye (1600);
opts.type = "nofill";
nnz (A)
ans = 7840

nnz (lu (A))
ans = 126478

nnz (ilu (A, opts))
ans = 7840
</pre></div>

<p>This shows that <var>A</var> has 7,840 nonzeros, the complete LU
factorization has 126,478 nonzeros, and the incomplete LU factorization,
with 0 level of fill-in, has 7,840 nonzeros, the same amount as
<var>A</var>.  Taken from:
<a href="http://www.mathworks.com/help/matlab/ref/ilu.html">http://www.mathworks.com/help/matlab/ref/ilu.html</a>
</p>
<div class="example">
<pre class="example">A = gallery ("wathen", 10, 10);
b = sum (A, 2);
tol = 1e-8;
maxit = 50;
opts.type = "crout";
opts.droptol = 1e-4;
[L, U] = ilu (A, opts);
x = bicg (A, b, tol, maxit, L, U);
norm (A * x - b, inf)
</pre></div>

<p>This example uses ILU as a preconditioner for a random finite
element matrix, which has a large condition number.  Without <var>L</var>
and <var>U</var>, BICG would not converge.
</p>
<p><strong>See also:</strong> <a href="Matrix-Factorizations.html#XREFlu">lu</a>, <a href="#XREFichol">ichol</a>, <a href="Specialized-Solvers.html#XREFbicg">bicg</a>, <a href="Specialized-Solvers.html#XREFgmres">gmres</a>.
</p></dd></dl>

<hr>
<div class="header">
<p>
Next: <a href="Real-Life-Example.html#Real-Life-Example" accesskey="n" rel="next">Real Life Example</a>, Previous: <a href="Sparse-Linear-Algebra.html#Sparse-Linear-Algebra" accesskey="p" rel="prev">Sparse Linear Algebra</a>, Up: <a href="Sparse-Matrices.html#Sparse-Matrices" accesskey="u" rel="up">Sparse Matrices</a> [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Concept-Index.html#Concept-Index" title="Index" rel="index">Index</a>]</p>
</div>

</body>
</html>