comparison src/DLD-FUNCTIONS/lu.cc @ 11553:01f703952eff

Improve docstrings for functions in DLD-FUNCTIONS directory. Use same variable names in error() strings and in documentation.
author Rik <octave@nomad.inbox5.com>
date Sun, 16 Jan 2011 22:13:23 -0800
parents fd0a3ac60b0e
children 7d6d8c1e471f
comparison: 11552:6b6e9051ecb8 -> 11553:01f703952eff
61 return U; 61 return U;
62 } 62 }
63 63
64 DEFUN_DLD (lu, args, nargout, 64 DEFUN_DLD (lu, args, nargout,
65 "-*- texinfo -*-\n\ 65 "-*- texinfo -*-\n\
66 @deftypefn {Loadable Function} {[@var{l}, @var{u}] =} lu (@var{a})\n\ 66 @deftypefn {Loadable Function} {[@var{L}, @var{U}] =} lu (@var{A})\n\
67 @deftypefnx {Loadable Function} {[@var{l}, @var{u}, @var{p}] =} lu (@var{a})\n\ 67 @deftypefnx {Loadable Function} {[@var{L}, @var{U}, @var{P}] =} lu (@var{A})\n\
68 @deftypefnx {Loadable Function} {[@var{l}, @var{u}, @var{p}, @var{q}] =} lu (@var{s})\n\ 68 @deftypefnx {Loadable Function} {[@var{L}, @var{U}, @var{P}, @var{Q}] =} lu (@var{S})\n\
69 @deftypefnx {Loadable Function} {[@var{l}, @var{u}, @var{p}, @var{q}, @var{r}] =} lu (@var{s})\n\ 69 @deftypefnx {Loadable Function} {[@var{L}, @var{U}, @var{P}, @var{Q}, @var{R}] =} lu (@var{S})\n\
70 @deftypefnx {Loadable Function} {[@dots{}] =} lu (@var{s}, @var{thres})\n\ 70 @deftypefnx {Loadable Function} {[@dots{}] =} lu (@var{S}, @var{thres})\n\
71 @deftypefnx {Loadable Function} {@var{y} =} lu (@dots{})\n\ 71 @deftypefnx {Loadable Function} {@var{y} =} lu (@dots{})\n\
72 @deftypefnx {Loadable Function} {[@dots{}] =} lu (@dots{}, 'vector')\n\ 72 @deftypefnx {Loadable Function} {[@dots{}] =} lu (@dots{}, 'vector')\n\
73 @cindex LU decomposition\n\ 73 @cindex LU decomposition\n\
74 Compute the LU decomposition of @var{a}. If @var{a} is full subroutines from\n\ 74 Compute the LU@tie{}decomposition of @var{A}. If @var{A} is full,\n\
75 @sc{lapack} are used and if @var{a} is sparse then @sc{umfpack} is used. The\n\ 75 subroutines from\n\
76 @sc{lapack} are used and if @var{A} is sparse then @sc{umfpack} is used. The\n\
76 result is returned in a permuted form, according to the optional return\n\ 77 result is returned in a permuted form, according to the optional return\n\
77 value @var{p}. For example, given the matrix @code{a = [1, 2; 3, 4]},\n\ 78 value @var{P}. For example, given the matrix @code{a = [1, 2; 3, 4]},\n\
78 \n\ 79 \n\
79 @example\n\ 80 @example\n\
80 [l, u, p] = lu (a)\n\ 81 [l, u, p] = lu (a)\n\
81 @end example\n\ 82 @end example\n\
82 \n\ 83 \n\
83 @noindent\n\ 84 @noindent\n\
84 returns\n\ 85 returns\n\
85 \n\ 86 \n\
102 @end group\n\ 103 @end group\n\
103 @end example\n\ 104 @end example\n\
104 \n\ 105 \n\
105 The matrix is not required to be square.\n\ 106 The matrix is not required to be square.\n\
106 \n\ 107 \n\
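
A minimal sketch of the three-output form described above (values chosen only for illustration):

    a = [1, 2; 3, 4];
    [l, u, p] = lu (a);
    norm (p * a - l * u)    # negligible: p*a equals l*u up to round-off
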
107 Called with two or three output arguments and a sparse input matrix,\n\ 108 When called with two or three output arguments and a sparse input matrix,\n\
108 then @dfn{lu} does not attempt to perform sparsity preserving column\n\ 109 @code{lu} does not attempt to perform sparsity preserving column\n\
109 permutations. Called with a fourth output argument, the sparsity\n\ 110 permutations. Called with a fourth output argument, the sparsity\n\
110 preserving column transformation @var{Q} is returned, such that\n\ 111 preserving column transformation @var{Q} is returned, such that\n\
111 @code{@var{p} * @var{a} * @var{q} = @var{l} * @var{u}}.\n\ 112 @code{@var{P} * @var{A} * @var{Q} = @var{L} * @var{U}}.\n\
112 \n\ 113 \n\
113 Called with a fifth output argument and a sparse input matrix, then\n\ 114 Called with a fifth output argument and a sparse input matrix,\n\
114 @dfn{lu} attempts to use a scaling factor @var{r} on the input matrix\n\ 115 @code{lu} attempts to use a scaling factor @var{R} on the input matrix\n\
115 such that\n\ 116 such that\n\
116 @code{@var{p} * (@var{r} \\ @var{a}) * @var{q} = @var{l} * @var{u}}.\n\ 117 @code{@var{P} * (@var{R} \\ @var{A}) * @var{Q} = @var{L} * @var{U}}.\n\
117 This typically leads to a sparser and more stable factorization.\n\ 118 This typically leads to a sparser and more stable factorization.\n\
118 \n\ 119 \n\
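
A sketch of the sparse four- and five-output forms (the test matrix is made up; sprand and speye are standard Octave functions):

    s = speye (100) + sprand (100, 100, 0.05);
    [l, u, p, q] = lu (s);
    norm (p * s * q - l * u, "fro")         # column permutation q preserves sparsity
    [l, u, p, q, r] = lu (s);
    norm (p * (r \ s) * q - l * u, "fro")   # with the row scaling r
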
119 An additional input argument @var{thres}, which defines the pivoting\n\ 120 An additional input argument @var{thres}, which defines the pivoting\n\
120 threshold, can be given. @var{thres} can be a scalar, in which case\n\ 121 threshold, can be given. @var{thres} can be a scalar, in which case\n\
121 it defines @sc{umfpack} pivoting tolerance for both symmetric and unsymmetric\n\ 122 it defines the @sc{umfpack} pivoting tolerance for both symmetric and\n\
122 cases. If @var{thres} is a two element vector, then the first element\n\ 123 unsymmetric cases. If @var{thres} is a 2-element vector, then the first\n\
123 defines the pivoting tolerance for the unsymmetric @sc{umfpack} pivoting\n\ 124 element defines the pivoting tolerance for the unsymmetric @sc{umfpack}\n\
124 strategy and the second the symmetric strategy. By default, the values\n\ 125 pivoting strategy and the second for the symmetric strategy. By default,\n\
125 defined by @code{spparms} are used and are by default @code{[0.1, 0.001]}.\n\ 126 the values defined by @code{spparms} are used (@code{[0.1, 0.001]}).\n\
126 \n\ 127 \n\
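
A sketch of passing the pivoting threshold thres to a sparse factorization (the tolerance values below are arbitrary and only show the two calling forms):

    s = speye (100) + sprand (100, 100, 0.05);
    [l, u, p, q] = lu (s, 0.5);            # scalar: one tolerance for both strategies
    [l, u, p, q] = lu (s, [0.5, 0.01]);    # [unsymmetric, symmetric] tolerances
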
127 Given the string argument 'vector', @dfn{lu} returns the values of @var{p}\n\ 128 Given the string argument 'vector', @code{lu} returns the values of @var{P}\n\
128 @var{q} as vector values, such that for full matrix, @code{@var{a}\n\ 129 and @var{Q} as vector values, such that for a full matrix, @code{@var{A}\n\
129 (@var{p},:) = @var{l} * @var{u}}, and @code{@var{r}(@var{p},:) * @var{a}\n\ 130 (@var{P},:) = @var{L} * @var{U}}, and @code{@var{R}(@var{P},:) * @var{A}\n\
130 (:, @var{q}) = @var{l} * @var{u}}.\n\ 131 (:, @var{Q}) = @var{L} * @var{U}}.\n\
131 \n\ 132 \n\
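
A minimal sketch of the 'vector' option for a full matrix (magic (4) is just a convenient test matrix):

    a = magic (4);
    [l, u, p] = lu (a, "vector");   # p is a permutation vector, not a matrix
    norm (a(p,:) - l * u)           # negligible
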
132 With two output arguments, @code{lu} returns the permuted forms of the upper and\n\ 133 With two output arguments, @code{lu} returns the permuted forms of the upper and\n\
133 lower triangular matrices, such that @code{@var{a} = @var{l} * @var{u}}.\n\ 134 lower triangular matrices, such that @code{@var{A} = @var{L} * @var{U}}.\n\
134 With one output argument, @var{y}, the matrix returned by the @sc{lapack}\n\ 135 With one output argument, @var{y}, the matrix returned by the @sc{lapack}\n\
135 routines is returned. If the input matrix is sparse then the matrix @var{l}\n\ 136 routines is returned. If the input matrix is sparse then the matrix @var{L}\n\
136 is embedded into @var{u} to give a return value similar to the full case.\n\ 137 is embedded into @var{U} to give a return value similar to the full case.\n\
137 For both full and sparse matrices, @dfn{lu} loses the permutation\n\ 138 For both full and sparse matrices, @code{lu} loses the permutation\n\
138 information.\n\ 139 information.\n\
139 @end deftypefn") 140 @end deftypefn")
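
A sketch of the two-output and single-output forms described just above (same toy matrix as before):

    a = [1, 2; 3, 4];
    [l, u] = lu (a);    # l is the permuted lower factor, so a equals l * u
    y = lu (a);         # single LAPACK-style factor; permutation information is lost
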
140 { 141 {
141 octave_value_list retval; 142 octave_value_list retval;
142 int nargin = args.length (); 143 int nargin = args.length ();
173 Matrix tmp = args(n++).matrix_value (); 174 Matrix tmp = args(n++).matrix_value ();
174 175
175 if (! error_state ) 176 if (! error_state )
176 { 177 {
177 if (!issparse) 178 if (!issparse)
178 error ("lu: can not define pivoting threshold for full matrices"); 179 error ("lu: can not define pivoting threshold THRES for full matrices");
179 else if (tmp.nelem () == 1) 180 else if (tmp.nelem () == 1)
180 { 181 {
181 thres.resize(1,2); 182 thres.resize(1,2);
182 thres(0) = tmp(0); 183 thres(0) = tmp(0);
183 thres(1) = tmp(0); 184 thres(1) = tmp(0);
184 } 185 }
185 else if (tmp.nelem () == 2) 186 else if (tmp.nelem () == 2)
186 thres = tmp; 187 thres = tmp;
187 else 188 else
188 error ("lu: expecting 2 element vector for thres"); 189 error ("lu: expecting 2-element vector for THRES");
189 } 190 }
190 } 191 }
191 } 192 }
192 193
193 octave_value arg = args(0); 194 octave_value arg = args(0);
594 (p.is_undefined () || p.rows () == m)); 595 (p.is_undefined () || p.rows () == m));
595 } 596 }
596 597
597 DEFUN_DLD (luupdate, args, , 598 DEFUN_DLD (luupdate, args, ,
598 "-*- texinfo -*-\n\ 599 "-*- texinfo -*-\n\
599 @deftypefn {Loadable Function} {[@var{L}, @var{U}] =} luupdate (@var{l}, @var{u}, @var{x}, @var{y})\n\ 600 @deftypefn {Loadable Function} {[@var{L}, @var{U}] =} luupdate (@var{L}, @var{U}, @var{x}, @var{y})\n\
600 @deftypefnx {Loadable Function} {[@var{L}, @var{U}, @var{P}] =} luupdate (@var{L}, @var{U}, @var{P}, @var{x}, @var{y})\n\ 601 @deftypefnx {Loadable Function} {[@var{L}, @var{U}, @var{P}] =} luupdate (@var{L}, @var{U}, @var{P}, @var{x}, @var{y})\n\
601 Given an LU@tie{}factorization of a real or complex matrix\n\ 602 Given an LU@tie{}factorization of a real or complex matrix\n\
602 @w{@var{A} = @var{L}*@var{U}}, @var{L}@tie{}lower unit trapezoidal and\n\ 603 @w{@var{A} = @var{L}*@var{U}}, @var{L}@tie{}lower unit trapezoidal and\n\
603 @var{U}@tie{}upper trapezoidal, return the LU@tie{}factorization\n\ 604 @var{U}@tie{}upper trapezoidal, return the LU@tie{}factorization\n\
604 of @w{@var{A} + @var{x}*@var{y}.'}, where @var{x} and @var{y} are\n\ 605 of @w{@var{A} + @var{x}*@var{y}.'}, where @var{x} and @var{y} are\n\
613 @example\n\ 614 @example\n\
614 [@var{L}, @var{U}, @var{P}] = lu (@var{A});\n\ 615 [@var{L}, @var{U}, @var{P}] = lu (@var{A});\n\
615 @end example\n\ 616 @end example\n\
616 \n\ 617 \n\
617 @noindent\n\ 618 @noindent\n\
618 then a factorization of @code{@var{a}+@var{x}*@var{y}.'} can be obtained either\n\ 619 then a factorization of @code{@var{A}+@var{x}*@var{y}.'} can be obtained\n\
619 as\n\ 620 either as\n\
620 \n\ 621 \n\
621 @example\n\ 622 @example\n\
622 [@var{L1}, @var{U1}] = luupdate (@var{L}, @var{U}, @var{P}*@var{x}, @var{y})\n\ 623 [@var{L1}, @var{U1}] = luupdate (@var{L}, @var{U}, @var{P}*@var{x}, @var{y})\n\
623 @end example\n\ 624 @end example\n\
624 \n\ 625 \n\
627 \n\ 628 \n\
628 @example\n\ 629 @example\n\
629 [@var{L1}, @var{U1}, @var{P1}] = luupdate (@var{L}, @var{U}, @var{P}, @var{x}, @var{y})\n\ 630 [@var{L1}, @var{U1}, @var{P1}] = luupdate (@var{L}, @var{U}, @var{P}, @var{x}, @var{y})\n\
630 @end example\n\ 631 @end example\n\
631 \n\ 632 \n\
632 The first form uses the unpivoted algorithm, which is faster, but less stable.\n\ 633 The first form uses the unpivoted algorithm, which is faster, but less\n\
633 The second form uses a slower pivoted algorithm, which is more stable.\n\ 634 stable. The second form uses a slower pivoted algorithm, which is more\n\
634 \n\ 635 stable.\n\
635 Note that the matrix case is done as a sequence of rank-1 updates;\n\ 636 \n\
636 thus, for k large enough, it will be both faster and more accurate to recompute\n\ 637 The matrix case is done as a sequence of rank-1 updates;\n\
637 the factorization from scratch.\n\ 638 thus, for large enough k, it will be both faster and more accurate to\n\
639 recompute the factorization from scratch.\n\
638 @seealso{lu,qrupdate,cholupdate}\n\ 640 @seealso{lu,qrupdate,cholupdate}\n\
639 @end deftypefn") 641 @end deftypefn")
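
A sketch of a rank-1 update with luupdate (random data, purely illustrative):

    a = rand (5);  x = rand (5, 1);  y = rand (5, 1);
    [l, u, p] = lu (a);
    [l1, u1] = luupdate (l, u, p*x, y);        # unpivoted update: faster, less stable
    norm (p * (a + x*y.') - l1 * u1)           # small, but no pivoting safeguard
    [l2, u2, p2] = luupdate (l, u, p, x, y);   # pivoted update: slower, more stable
    norm (p2 * (a + x*y.') - l2 * u2)          # negligible
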
640 { 642 {
641 octave_idx_type nargin = args.length (); 643 octave_idx_type nargin = args.length ();
642 octave_value_list retval; 644 octave_value_list retval;
754 retval(0) = get_lu_l (fact); 756 retval(0) = get_lu_l (fact);
755 } 757 }
756 } 758 }
757 } 759 }
758 else 760 else
759 error ("luupdate: dimensions mismatch"); 761 error ("luupdate: dimension mismatch");
760 } 762 }
761 else 763 else
762 error ("luupdate: expecting numeric arguments"); 764 error ("luupdate: L, U, X, and Y must be numeric");
763 765
764 return retval; 766 return retval;
765 } 767 }
766 768
767 /* 769 /*