changeset 9066:be150a172010

Cleanup documentation for diagperm.texi, sparse.texi
Grammarcheck input .txi files
Spellcheck .texi files
author Rik <rdrider0-list@yahoo.com>
date Sat, 28 Mar 2009 21:29:08 -0700
parents 8207b833557f
children 8970b4b10e9f
files doc/interpreter/diagperm.txi doc/interpreter/sparse.txi scripts/sparse/spaugment.m scripts/sparse/svds.m src/DLD-FUNCTIONS/amd.cc src/DLD-FUNCTIONS/symrcm.cc src/data.cc
diffstat 7 files changed, 215 insertions(+), 167 deletions(-)
--- a/doc/interpreter/diagperm.txi
+++ b/doc/interpreter/diagperm.txi
@@ -45,7 +45,7 @@
 matrix.
 
 A permutation matrix is defined as a square matrix that has a single element equal to unity
-in each row and each column; all other elements are zero. That is, there exists a 
+in each row and each column; all other elements are zero.  That is, there exists a 
 permutation (vector) 
 @iftex
 @tex
@@ -59,7 +59,7 @@
 @end ifnottex
 
 Octave provides special treatment of real and complex rectangular diagonal matrices,
-as well as permutation matrices. They are stored as special objects, using efficient 
+as well as permutation matrices.  They are stored as special objects, using efficient 
 storage and algorithms, facilitating writing both readable and efficient matrix algebra
 expressions in the Octave language.
 
@@ -73,7 +73,7 @@
 @subsection Creating Diagonal Matrices
 
 The most common and easiest way to create a diagonal matrix is using the built-in
-function @dfn{diag}. The expression @code{diag (v)}, with @var{v} a vector,
+function @dfn{diag}.  The expression @code{diag (v)}, with @var{v} a vector,
 will create a square diagonal matrix with elements on the main diagonal given
 by the elements of @var{v}, and size equal to the length of @var{v}.
 @code{diag (v, m, n)} can be used to construct a rectangular diagonal matrix.
@@ -81,11 +81,12 @@
 than a general matrix object.
 
 A diagonal matrix with unit elements can be created using @dfn{eye}.
-Some other built-in functions can also return diagonal matrices. Examples include
+Some other built-in functions can also return diagonal matrices.  Examples include
 @dfn{balance} or @dfn{inv}.
 
 Example:
 @example
+@group
   diag (1:4)
 @result{}
 Diagonal Matrix
@@ -105,6 +106,7 @@
    0   0   3
    0   0   0
    0   0   0
+@end group
 @end example  
 
 @node Creating Permutation Matrices
@@ -128,6 +130,7 @@
 
 For example:
 @example
+@group
   eye (4) ([1,3,2,4],:)
 @result{}
 Permutation Matrix
@@ -145,16 +148,18 @@
    0   0   1   0
    0   1   0   0
    0   0   0   1
+@end group
 @end example
 
 Mathematically, an identity matrix is both a diagonal and a permutation matrix.
 In Octave, @code{eye (n)} returns a diagonal matrix, because a matrix
-can only have one class. You can convert this diagonal matrix to a permutation
+can only have one class.  You can convert this diagonal matrix to a permutation
 matrix by indexing it by an identity permutation, as shown below.
 This is a special property of the identity matrix; indexing other diagonal
 matrices generally produces a full matrix.
 
 @example
+@group
   eye (3)
 @result{}
 Diagonal Matrix
@@ -170,21 +175,22 @@
    1   0   0
    0   1   0
    0   0   1
+@end group
 @end example
 
-Some other built-in functions can also return permutation matrices. Examples include
+Some other built-in functions can also return permutation matrices.  Examples include
 @dfn{inv} or @dfn{lu}.
 
 @node Explicit and Implicit Conversions
 @subsection Explicit and Implicit Conversions
 
-The diagonal and permutation matrices are special objects in their own right. A number
+The diagonal and permutation matrices are special objects in their own right.  A number
 of operations and built-in functions are defined for these matrices to use special,
-more efficient code than would be used for a full matrix in the same place. Examples
+more efficient code than would be used for a full matrix in the same place.  Examples
 are given in further sections.
 
 To facilitate smooth mixing with full matrices, backward compatibility, and
-compatibility with Matlab, the diagonal and permutation matrices should allow
+compatibility with @sc{matlab}, the diagonal and permutation matrices should allow
 any operation that works on full matrices, and will either treat it specially,
 or implicitly convert themselves to full matrices.
 
@@ -193,7 +199,7 @@
 such as @dfn{exp}.
 
 An explicit conversion to a full matrix can be requested using the built-in
-function @dfn{full}. It should also be noted that the diagonal and permutation
+function @dfn{full}.  It should also be noted that the diagonal and permutation
 matrix objects will cache the result of the conversion after it is first
 requested (explicitly or implicitly), so that subsequent conversions will
 be very cheap.
@@ -203,7 +209,7 @@
 
 As has been already said, diagonal and permutation matrices make it
 possible to use efficient algorithms while preserving natural linear
-algebra syntax. This section describes in detail the operations that
+algebra syntax.  This section describes in detail the operations that
 are treated specially when performed on these special matrix objects.
 
 @menu
@@ -214,8 +220,8 @@
 @node Expressions Involving Diagonal Matrices
 @subsection Expressions Involving Diagonal Matrices
 
-Assume @var{D} is a diagonal matrix. If @var{M} is a full matrix,
-then @code{D*M} will scale the rows of @var{M}. That means,
+Assume @var{D} is a diagonal matrix.  If @var{M} is a full matrix,
+then @code{D*M} will scale the rows of @var{M}.  That means,
 if @code{S = D*M}, then for each pair of indices
 i,j it holds 
 @iftex
@@ -235,77 +241,79 @@
 @example
 D(:,1:m) * M(1:m,:),
 @end example
-i.e. trailing @code{n-m} rows of @var{M} are ignored. If @code{m > n}, 
+i.e., trailing @code{n-m} rows of @var{M} are ignored.  If @code{m > n}, 
 then @code{D*M} is equivalent to 
 @example
 [D(1:n,1:n) * M; zeros(m-n, columns (M))],
 @end example
-i.e. null rows are appended to the result.
+i.e., null rows are appended to the result.
 The situation for right-multiplication @code{M*D} is analogous.
 
 The expressions @code{D \ M} and @code{M / D} perform inverse scaling.
 They are equivalent to solving a diagonal (or rectangular diagonal)
-in a least-squares minimum-norm sense. In exact arithmetics, this is
-equivalent to multiplying by a pseudoinverse. The pseudoinverse of
+system in a least-squares minimum-norm sense.  In exact arithmetic, this is
+equivalent to multiplying by a pseudoinverse.  The pseudoinverse of
 a rectangular diagonal matrix is again a rectangular diagonal matrix
 with swapped dimensions, where each nonzero diagonal element is replaced
 by its reciprocal.
 The matrix division algorithms do, in fact, use division rather than 
 multiplication by reciprocals for better numerical accuracy; otherwise, they
 honor the above definition.  Note that a diagonal matrix is never truncated due
-to ill-conditioning; otherwise, it would not be much useful for scaling. This
-is typically consistent with linear algebra needs. A full matrix that only
+to ill-conditioning; otherwise, it would be of little use for scaling.  This
+is typically consistent with linear algebra needs.  A full matrix that only
 happens to be diagonal (and is thus not a special object) is of course treated
 normally.
 
 Multiplication and division by diagonal matrices works efficiently also when
-combined with sparse matrices, i.e. @code{D*S}, where @var{D} is a diagonal
+combined with sparse matrices, i.e., @code{D*S}, where @var{D} is a diagonal
 matrix and @var{S} is a sparse matrix, scales the rows of the sparse matrix and
-returns a sparse matrix. The expressions @code{S*D}, @code{D\S}, @code{S/D} work
+returns a sparse matrix.  The expressions @code{S*D}, @code{D\S}, @code{S/D} work
 analogously.
 
 If @var{D1} and @var{D2} are both diagonal matrices, then the expressions
 @example
+@group
 D1 + D2
 D1 - D2 
 D1 * D2 
 D1 / D2 
 D1 \ D2
+@end group
 @end example
 again produce diagonal matrices, provided that normal
-dimension matching rules are obeyed. The relations used are same as described above.
+dimension matching rules are obeyed.  The relations used are the same as described above.
 
 Also, a diagonal matrix @var{D} can be multiplied or divided by a scalar, or raised
 to a scalar power if it is square, producing a diagonal matrix result in all cases.
 
 A diagonal matrix can also be transposed or conjugate-transposed, giving the expected
-result. Extracting a leading submatrix of a diagonal matrix, i.e. @code{D(1:m,1:n)},
+result.  Extracting a leading submatrix of a diagonal matrix, i.e., @code{D(1:m,1:n)},
 will produce a diagonal matrix; other indexing expressions will implicitly convert to
 a full matrix.
 
-Adding a diagonal matrix to a full matrix only operates on the diagonal elements. Thus,
+Adding a diagonal matrix to a full matrix only operates on the diagonal elements.  Thus,
 @example
 A = A + eps * eye (n)
 @end example
-is an efficient method of augmenting the diagonal of a matrix. Subtraction works
+is an efficient method of augmenting the diagonal of a matrix.  Subtraction works
 analogously.
 
 When involved in expressions with other element-by-element operators, @code{.*},
 @code{./}, @code{.\} or @code{.^}, an implicit conversion to full matrix will
-take place. This is not always strictly necessary but chosen to facilitate
-better consistency with Matlab.
+take place.  This is not always strictly necessary but chosen to facilitate
+better consistency with @sc{matlab}.
 
 @node Expressions Involving Permutation Matrices
 @subsection Expressions Involving Permutation Matrices
 
 If @var{P} is a permutation matrix and @var{M} a matrix, the expression
-@code{P*M} will permute the rows of @var{M}. Similarly, @code{M*P} will
+@code{P*M} will permute the rows of @var{M}.  Similarly, @code{M*P} will
 yield a column permutation. 
 Matrix division @code{P\M} and @code{M/P} can be used to do inverse permutation.
 
 The previously described syntax for creating permutation matrices can actually
 help a user to understand the connection between a permutation matrix and
-a permuting vector. Namely, the following holds, where @code{I = eye (n)}
+a permuting vector.  Namely, the following holds, where @code{I = eye (n)}
 is an identity matrix:
 @example
   I(p,:) * M = (I*M) (p,:) = M(p,:)
@@ -320,20 +328,20 @@
 A permutation matrix can be transposed (or conjugate-transposed, which is the
 same, because a permutation matrix is never complex), inverting the
 permutation, or equivalently, turning a row-permutation matrix into a
-column-permutation one. For permutation matrices, transpose is equivalent to
+column-permutation one.  For permutation matrices, transpose is equivalent to
 inversion, thus @code{P\M} is equivalent to @code{P'*M}.  Transpose of a
 permutation matrix (or inverse) is a constant-time operation, flipping only a
 flag internally, and thus the choice between the two above equivalent
 expressions for inverse permuting is completely up to the user's taste.
 
 Multiplication and division by permutation matrices works efficiently also when
-combined with sparse matrices, i.e. @code{P*S}, where @var{P} is a permutation
+combined with sparse matrices, i.e., @code{P*S}, where @var{P} is a permutation
 matrix and @var{S} is a sparse matrix, permutes the rows of the sparse matrix and
-returns a sparse matrix. The expressions @code{S*P}, @code{P\S}, @code{S/P} work
+returns a sparse matrix.  The expressions @code{S*P}, @code{P\S}, @code{S/P} work
 analogously.
 
 Two permutation matrices can be multiplied or divided (if their sizes match), performing
-a composition of permutations. Also a permutation matrix can be indexed by a permutation
+a composition of permutations.  Also a permutation matrix can be indexed by a permutation
 vector (or two vectors), giving again a permutation matrix.
 Any other operations do not generally yield a permutation matrix and will thus
 trigger the implicit conversion.
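
As a quick illustration of the row scaling and row permutation described above (the matrix values are made up):

  M = magic (3);
  D = diag ([2, 3, 4]);
  D * M                          # scales the rows of M by 2, 3 and 4
  P = eye (3) ([2, 3, 1], :);    # permutation matrix built from a permutation vector
  P * M                          # permutes the rows of M
  all (all (P \ M == P' * M))    # transpose and inverse coincide for P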
@@ -342,7 +350,7 @@
 @section Functions That Are Aware of These Matrices
 
 This section lists the built-in functions that are aware of diagonal and
-permutation matrices on input, or can return them as output. Passed to other
+permutation matrices on input, or can return them as output.  Passed to other
 functions, these matrices will in general trigger an implicit conversion.
 (Of course, user-defined dynamically linked functions may also work with
 diagonal or permutation matrices).
@@ -356,7 +364,7 @@
 @subsection Diagonal Matrix Functions
 
 @dfn{inv} and @dfn{pinv} can be applied to a diagonal matrix, yielding again
-a diagonal matrix. @dfn{det} will use an efficient straightforward calculation
+a diagonal matrix.  @dfn{det} will use an efficient straightforward calculation
 when given a diagonal matrix, as well as @dfn{cond}.
 The following mapper functions can be applied to a diagonal matrix
 without converting it to a full one:
@@ -370,7 +378,7 @@
 @subsection Permutation Matrix Functions
 
 @dfn{inv} and @dfn{pinv} will invert a permutation matrix, preserving its
-specialness. @dfn{det} can be applied to a permutation matrix, efficiently
+specialness.  @dfn{det} can be applied to a permutation matrix, efficiently
 calculating the sign of the permutation (which is equal to the determinant).
 
 A permutation matrix can also be returned from the built-in functions
@@ -387,20 +395,24 @@
 The following can be used to solve a linear system @code{A*x = b}
 using the pivoted LU factorization:
 @example
+@group
   [L, U, P] = lu (A); ## now L*U = P*A
   x = U \ (L \ (P*b));
+@end group
 @end example
 
 @noindent
 This is how you normalize columns of a matrix @var{X} to unit norm:
 @example
+@group
   s = norm (X, "columns");
   X = X / diag (s);
+@end group
 @end example
 
 @noindent
 The following expression is a way to efficiently calculate the sign of a
-permutation, given by a permutation vector @var{p}. It will also work
+permutation, given by a permutation vector @var{p}.  It will also work
 in earlier versions of Octave, but slowly.
 @example
   det (eye (length (p))(p, :))
@@ -410,10 +422,11 @@
 Finally, here's how you solve a linear system @code{A*x = b} 
 with Tikhonov regularization (ridge regression) using SVD (a skeleton only):
 @example
+@group
   m = rows (A); n = columns (A);
   [U, S, V] = svd (A);
   ## determine the regularization factor alpha
-  ## alpha = ...
+  ## alpha = @dots{}
   ## transform to orthogonal basis
   b = U'*b;
   ## Use the standard formula, replacing A with S.
@@ -421,6 +434,7 @@
   x = (S'*S + alpha^2 * eye (n)) \ (S' * b);
   ## transform to solution basis
   x = V*x;
+@end group
 @end example
 
 
@@ -439,8 +453,8 @@
 Numerical software dealing with structured and sparse matrices (including
 Octave) however, almost always makes a distinction between a "numerical zero"
 and an "assumed zero". 
-A "numerical zero" is a zero value occuring in a place where any floating-point
-value could occur. It is normally stored somewhere in memory as an explicit
+A "numerical zero" is a zero value occurring in a place where any floating-point
+value could occur.  It is normally stored somewhere in memory as an explicit
 value. 
 An "assumed zero", on the contrary, is a zero matrix element implied by the
 matrix structure (diagonal, triangular) or a sparsity pattern; its value is
@@ -453,7 +467,7 @@
 or divided by @code{NaN}.
 The reason for this behavior is that the numerical multiplication is not
 actually performed anywhere by the underlying algorithm; the result is
-just assumed to be zero. Equivalently, one can say that the part of the
+just assumed to be zero.  Equivalently, one can say that the part of the
 computation involving assumed zeros is performed symbolically, not numerically.
 
 This behavior not only facilitates the most straightforward and efficient
@@ -469,11 +483,12 @@
 
 Note that certain competing software does not strictly follow this principle
 and converts assumed zeros to numerical zeros in certain cases, while not doing
-so in other cases. As of today, there are no intentions to mimick such behavior 
+so in other cases.  As of today, there are no intentions to mimic such behavior 
 in Octave.
 
 Examples of effects of assumed zeros vs. numerical zeros:
 @example
+@group
 Inf * eye (3)
 @result{}
    Inf     0     0
@@ -494,9 +509,11 @@
    NaN   Inf   NaN
    NaN   NaN   Inf
 
+@end group
 @end example
 
 @example
+@group
 diag(1:3) * [NaN; 1; 1]
 @result{}
    NaN
@@ -513,5 +530,6 @@
    NaN
    NaN
    NaN
+@end group
 @end example
 
--- a/doc/interpreter/sparse.txi
+++ b/doc/interpreter/sparse.txi
@@ -37,21 +37,21 @@
 @section The Creation and Manipulation of Sparse Matrices
 
 The size of mathematical problems that can be treated at any particular
-time is generally limited by the available computing resources. Both,
+time is generally limited by the available computing resources.  Both
 the speed of the computer and its available memory place limitations on
 the problem size. 
 
 There are many classes of mathematical problems which give rise to
-matrices, where a large number of the elements are zero. In this case
+matrices, where a large number of the elements are zero.  In this case
 it makes sense to have a special matrix type to handle this class of
 problems where only the non-zero elements of the matrix are
-stored. Not only does this reduce the amount of memory to store the
+stored.  Not only does this reduce the amount of memory to store the
 matrix, but it also means that operations on this type of matrix can
 take advantage of the a-priori knowledge of the positions of the
 non-zero elements to accelerate their calculations.
 
 A matrix type that stores only the non-zero elements is generally called
-sparse. It is the purpose of this document to discuss the basics of the
+sparse.  It is the purpose of this document to discuss the basics of the
 storage and creation of sparse matrices and the fundamental operations
 on them.
 
@@ -66,15 +66,15 @@
 @subsection Storage of Sparse Matrices
 
 It is not strictly speaking necessary for the user to understand how
-sparse matrices are stored. However, such an understanding will help
-to get an understanding of the size of sparse matrices. Understanding
+sparse matrices are stored.  However, such an understanding will help
+to get an understanding of the size of sparse matrices.  Understanding
 the storage technique is also necessary for those users wishing to 
 create their own oct-files. 
 
-There are many different means of storing sparse matrix data. What all
+There are many different means of storing sparse matrix data.  What all
 of the methods have in common is that they attempt to reduce the complexity
 and storage given a-priori knowledge of the particular class of problems
-that will be solved. A good summary of the available techniques for storing
+that will be solved.  A good summary of the available techniques for storing
 sparse matrices is given by Saad @footnote{Youcef Saad "SPARSKIT: A basic toolkit
 for sparse matrix computation", 1994,
 @url{http://www-users.cs.umn.edu/~saad/software/SPARSKIT/paper.ps}}.
@@ -85,33 +85,35 @@
 
 An obvious way to do this is by storing the elements of the matrix as
 triplets, with two elements being their position in the array 
-(rows and column) and the third being the data itself. This is conceptually
+(row and column) and the third being the data itself.  This is conceptually
 easy to grasp, but requires more storage than is strictly needed.
 
 The storage technique used within Octave is the compressed column
 format.  In this format the position of each element in a row and the
-data are stored as previously. However, if we assume that all elements
+data are stored as previously.  However, if we assume that all elements
 in the same column are stored adjacently in the computer's memory, then
 we only need to store information on the number of non-zero elements
-in each column, rather than their positions. Thus assuming that the
+in each column, rather than their positions.  Thus assuming that the
 matrix has more non-zero elements than there are columns in the
 matrix, we win in terms of the amount of memory used.
 
 In fact, the column index contains one more element than the number of
-columns, with the first element always being zero. The advantage of
+columns, with the first element always being zero.  The advantage of
 this is a simplification in the code, in that there is no special case
-for the first or last columns. A short example, demonstrating this in
+for the first or last columns.  A short example demonstrating this in
 C is:
 
 @example
+@group
   for (j = 0; j < nc; j++)
     for (i = cidx (j); i < cidx(j+1); i++)
        printf ("non-zero element (%i,%i) is %d\n", 
 	   ridx(i), j, data(i));
+@end group
 @end example
 
 A clear understanding might be had by considering an example of how the
-above applies to an example matrix. Consider the matrix
+above applies to an example matrix.  Consider the matrix
 
 @example
 @group
@@ -133,7 +135,7 @@
 @end example
 
 This will be stored as three vectors @var{cidx}, @var{ridx} and @var{data},
-representing the column indexing, row indexing and data respectively. The
+representing the column indexing, row indexing and data respectively.  The
 contents of these three vectors for the above matrix will be
 
 @example
@@ -146,22 +148,22 @@
 
 Note that this is the representation of these elements with the first row
 and column assumed to start at zero, while in Octave itself the row and 
-column indexing starts at one. Thus the number of elements in the 
+column indexing starts at one.  Thus the number of elements in the 
 @var{i}-th column is given by @code{@var{cidx} (@var{i} + 1) - 
 @var{cidx} (@var{i})}.
 
 Although Octave uses a compressed column format, it should be noted
-that compressed row formats are equally possible. However, in the
+that compressed row formats are equally possible.  However, in the
 context of mixed operations between sparse and dense matrices,
 it makes sense that the elements of the sparse matrices are in the
-same order as the dense matrices. Octave stores dense matrices in
+same order as the dense matrices.  Octave stores dense matrices in
 column major ordering, and so sparse matrices are equally stored in
 this manner.
 
 A further constraint on the sparse matrix storage used by Octave is that 
 all elements in the rows are stored in increasing order of their row
-index, which makes certain operations faster. However, it imposes
-the need to sort the elements on the creation of sparse matrices. Having
+index, which makes certain operations faster.  However, it imposes
+the need to sort the elements on the creation of sparse matrices.  Having
 disordered elements is potentially an advantage in that it makes operations
 such as concatenating two sparse matrices together easier and faster; however,
 it adds complexity and speed problems elsewhere.
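
As an aside, the triplet view that this compressed column format encodes can be recovered in Octave with find; a small sketch using an assumed matrix:

  A = sparse ([1, 0, 0; 0, 0, 2; 0, 3, 0]);
  [i, j, v] = find (A)        # row index, column index and value of each non-zero
  sum (A != 0)                # non-zeros per column, i.e., cidx(k+1) - cidx(k)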
@@ -173,11 +175,11 @@
 
 @table @asis
 @item Returned from a function
-There are many functions that directly return sparse matrices. These include
+There are many functions that directly return sparse matrices.  These include
 @dfn{speye}, @dfn{sprand}, @dfn{diag}, etc.
 @item Constructed from matrices or vectors
 The function @dfn{sparse} allows a sparse matrix to be constructed from 
-three vectors representing the row, column and data. Alternatively, the
+three vectors representing the row, column and data.  Alternatively, the
 function @dfn{spconvert} uses a three column matrix format to allow easy
 importation of data from elsewhere.
 @item Created and then filled
@@ -188,15 +190,15 @@
 @end table
 
 There are several basic functions to return specific sparse
-matrices. For example the sparse identity matrix, is a matrix that is
-often needed. It therefore has its own function to create it as
+matrices.  For example, the sparse identity matrix is a matrix that is
+often needed.  It therefore has its own function to create it as
 @code{speye (@var{n})} or @code{speye (@var{r}, @var{c})}, which
 creates an @var{n}-by-@var{n} or @var{r}-by-@var{c} sparse identity
 matrix.
 
 Another typical sparse matrix that is often needed is a random distribution
-of random elements. The functions @dfn{sprand} and @dfn{sprandn} perform
-this for uniform and normal random distributions of elements. They have exactly
+of random elements.  The functions @dfn{sprand} and @dfn{sprandn} perform
+this for uniform and normal random distributions of elements.  They have exactly
 the same calling convention, where @code{sprand (@var{r}, @var{c}, @var{d})},
 creates an @var{r}-by-@var{c} sparse matrix with a density of filled
 elements of @var{d}.
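
A short sketch of this calling convention (the size and density are arbitrary):

  u = sprand (100, 100, 0.05);    # uniformly distributed non-zero entries
  g = sprandn (100, 100, 0.05);   # normally distributed non-zero entries
  nnz (u) / numel (u)             # close to the requested density of 0.05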
@@ -204,7 +206,7 @@
 Other functions of interest that directly create sparse matrices, are
 @dfn{diag} or its generalization @dfn{spdiags}, that can take the
 definition of the diagonals of the matrix and create the sparse matrix 
-that corresponds to this. For example
+that corresponds to this.  For example
 
 @example
 s = diag (sparse(randn(1,n)), -1);
@@ -234,9 +236,10 @@
 
 The recommended way for the user to create a sparse matrix, is to create 
 two vectors containing the row and column index of the data and a third
-vector of the same size containing the data to be stored. For example
+vector of the same size containing the data to be stored.  For example
 
 @example
+@group
   ri = ci = d = [];
   for j = 1:c
     ri = [ri; randperm(r)(1:n)'];
@@ -244,32 +247,36 @@
     d = [d; rand(n,1)];
   endfor
   s = sparse (ri, ci, d, r, c);
+@end group
 @end example
 
 creates an @var{r}-by-@var{c} sparse matrix with a random distribution
-of @var{n} (<@var{r}) elements per column. The elements of the vectors
+of @var{n} (<@var{r}) elements per column.  The elements of the vectors
 do not need to be sorted in any particular order as Octave will sort
-them prior to storing the data. However, pre-sorting the data will
+them prior to storing the data.  However, pre-sorting the data will
 make the creation of the sparse matrix faster.
 
 The function @dfn{spconvert} takes a three or four column real matrix.
 The first two columns represent the row and column index respectively and
 the third and four columns, the real and imaginary parts of the sparse
-matrix. The matrix can contain zero elements and the elements can be 
-sorted in any order. Adding zero elements is a convenient way to define
-the size of the sparse matrix. For example
+matrix.  The matrix can contain zero elements and the elements can be 
+sorted in any order.  Adding zero elements is a convenient way to define
+the size of the sparse matrix.  For example
 
 @example
+@group
 s = spconvert ([1 2 3 4; 1 3 4 4; 1 2 3 0]')
 @result{} Compressed Column Sparse (rows=4, cols=4, nnz=3)
       (1 , 1) -> 1
       (2 , 3) -> 2
       (3 , 4) -> 3
+@end group
 @end example
 
 An example of creating and filling a matrix might be
 
 @example
+@group
 k = 5;
 nz = r * k;
 s = spalloc (r, c, nz)
@@ -278,13 +285,14 @@
   s (:, j) = [zeros(r - k, 1); ...
         rand(k, 1)] (idx);
 endfor
+@end group
 @end example
 
 It should be noted that, due to the way that the Octave
 assignment functions are written, the assignment will reallocate
 the memory used by the sparse matrix at each iteration of the above loop. 
 Therefore the @dfn{spalloc} function ignores the @var{nz} argument and 
-does not preassign the memory for the matrix. Therefore, it is vitally
+does not preassign the memory for the matrix.  Therefore, it is vitally
 important that code using the above structure should be vectorized
 as much as possible to minimize the number of assignments and reduce the
 number of memory allocations.
@@ -298,7 +306,7 @@
 @DOCSTRING(spconvert)
 
 The above problem of memory reallocation can be avoided in
-oct-files. However, the construction of a sparse matrix from an oct-file
+oct-files.  However, the construction of a sparse matrix from an oct-file
 is more complex than can be discussed here, and
 you are referred to chapter @ref{Dynamically Linked Functions}, to have
 a full description of the techniques involved.
@@ -307,18 +315,18 @@
 @subsection Finding out Information about Sparse Matrices
 
 There are a number of functions that allow information concerning
-sparse matrices to be obtained. The most basic of these is
+sparse matrices to be obtained.  The most basic of these is
 @dfn{issparse} that identifies whether a particular Octave object is
 in fact a sparse matrix.
 
 Another very basic function is @dfn{nnz} that returns the number of
 non-zero entries there are in a sparse matrix, while the function
 @dfn{nzmax} returns the amount of storage allocated to the sparse
-matrix. Note that Octave tends to crop unused memory at the first
-opportunity for sparse objects. There are some cases of user created
-sparse objects where the value returned by @dfn{nzmaz} will not be
+matrix.  Note that Octave tends to crop unused memory at the first
+opportunity for sparse objects.  There are some cases of user created
+sparse objects where the value returned by @dfn{nzmax} will not be
 the same as @dfn{nnz}, but in general they will give the same
-result. The function @dfn{spstats} returns some basic statistics on
+result.  The function @dfn{spstats} returns some basic statistics on
 the columns of a sparse matrix including the number of elements, the
 mean and the variance of each column.
 
@@ -334,38 +342,42 @@
 
 When solving linear equations involving sparse matrices Octave
 determines the means to solve the equation based on the type of the
-matrix as discussed in @ref{Sparse Linear Algebra}. Octave probes the
+matrix as discussed in @ref{Sparse Linear Algebra}.  Octave probes the
 matrix type when the div (/) or ldiv (\) operator is first used with
-the matrix and then caches the type. However the @dfn{matrix_type}
+the matrix and then caches the type.  However the @dfn{matrix_type}
 function can be used to determine the type of the sparse matrix prior
-to use of the div or ldiv operators. For example
+to use of the div or ldiv operators.  For example
 
 @example
+@group
 a = tril (sprandn(1024, 1024, 0.02), -1) ...
     + speye(1024); 
 matrix_type (a);
 ans = Lower
+@end group
 @end example
 
 shows that Octave correctly determines the matrix type for lower
-triangular matrices. @dfn{matrix_type} can also be used to force
-the type of a matrix to be a particular type. For example
+triangular matrices.  @dfn{matrix_type} can also be used to force
+the type of a matrix to be a particular type.  For example
 
 @example
+@group
 a = matrix_type (tril (sprandn (1024, ...
    1024, 0.02), -1) + speye(1024), 'Lower');
+@end group
 @end example
 
 This allows the cost of determining the matrix type to be
-avoided. However, incorrectly defining the matrix type will result in
+avoided.  However, incorrectly defining the matrix type will result in
 incorrect results from solutions of linear equations, and so it is
 entirely the responsibility of the user to correctly identify the
 matrix type.
 
 There are several graphical means of finding out information about
-sparse matrices. The first is the @dfn{spy} command, which displays
+sparse matrices.  The first is the @dfn{spy} command, which displays
 the structure of the non-zero elements of the
-matrix. @xref{fig:spmatrix}, for an example of the use of
+matrix.  @xref{fig:spmatrix}, for an example of the use of
 @dfn{spy}.  More advanced graphical information can be obtained with the
 @dfn{treeplot}, @dfn{etreeplot} and @dfn{gplot} commands.
 
@@ -376,24 +388,26 @@
 
 One use of sparse matrices is in graph theory, where the
 interconnections between nodes are represented as an adjacency
-matrix. That is, if the i-th node in a graph is connected to the j-th
-node. Then the ij-th node (and in the case of undirected graphs the
-ji-th node) of the sparse adjacency matrix is non-zero. If each node
-is then associated with a set of co-ordinates, then the @dfn{gplot}
+matrix.  That is, if the i-th node in a graph is connected to the j-th
+node, then the ij-th element (and in the case of undirected graphs the
+ji-th element) of the sparse adjacency matrix is non-zero.  If each node
+is then associated with a set of coordinates, then the @dfn{gplot}
 command can be used to graphically display the interconnections
 between nodes.
 
 As a trivial example of the use of @dfn{gplot}, consider the example
 
 @example
+@group
 A = sparse([2,6,1,3,2,4,3,5,4,6,1,5],
     [1,1,2,2,3,3,4,4,5,5,6,6],1,6,6);
 xy = [0,4,8,6,4,2;5,0,5,7,5,7]';
 gplot(A,xy)
+@end group
 @end example
 
 which creates an adjacency matrix @code{A} where node 1 is connected
-to nodes 2 and 6, node 2 with nodes 1 and 3, etc. The co-ordinates of
+to nodes 2 and 6, node 2 with nodes 1 and 3, etc.  The coordinates of
 the nodes are given in the n-by-2 matrix @code{xy}.
 @ifset htmltex 
 @xref{fig:gplot}.
@@ -406,7 +420,7 @@
 
 The dependencies between the nodes of a Cholesky factorization can be
 calculated in linear time without explicitly needing to calculate the
-Cholesky factorization by the @code{etree} command. This command
+Cholesky factorization by the @code{etree} command.  This command
 returns the elimination tree of the matrix and can be displayed
 graphically by the command @code{treeplot(etree(A))} if @code{A} is
 symmetric or @code{treeplot(etree(A+A'))} otherwise.
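
A minimal sketch of these commands, with an assumed random symmetric matrix:

  A = sprandsym (50, 0.1);     # random sparse symmetric test matrix
  p = etree (A);               # parent vector of the elimination tree
  treeplot (p);                # display the tree graphically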
@@ -437,17 +451,17 @@
 
 An important consideration in the use of the sparse functions of
 Octave is that many of the internal functions of Octave, such as
-@dfn{diag}, cannot accept sparse matrices as an input. The sparse
+@dfn{diag}, cannot accept sparse matrices as an input.  The sparse
 implementation in Octave therefore uses the @dfn{dispatch}
 function to overload the normal Octave functions with equivalent
-functions that work with sparse matrices. However, at any time the
+functions that work with sparse matrices.  However, at any time the
 sparse matrix specific version of the function can be used by
 explicitly calling its function name. 
 
 The table below lists all of the sparse functions of Octave.  Note that
 the names of the 
 specific sparse forms of the functions are typically the same as
-the general versions with a @dfn{sp} prefix. In the table below, and the
+the general versions with a @dfn{sp} prefix.  In the table below, and the
 rest of this article the specific sparse versions of the functions are
 used.
 
@@ -487,9 +501,9 @@
   @dfn{spparms}, @dfn{symbfact}, @dfn{spstats}
 @end table
 
-In addition all of the standard Octave mapper functions (ie. basic
+In addition, all of the standard Octave mapper functions (i.e., basic
 math functions that take a single argument) such as @dfn{abs}, etc.
-can accept sparse matrices. The reader is referred to the documentation
+can accept sparse matrices.  The reader is referred to the documentation
 supplied with these functions within Octave itself for further
 details.
 
@@ -497,22 +511,24 @@
 @subsubsection The Return Types of Operators and Functions
 
 The two basic reasons to use sparse matrices are to reduce the memory 
-usage and to not have to do calculations on zero elements. The two are
+usage and to avoid calculations on zero elements.  The two are
 closely related in that the computation time on a sparse matrix operator
 or function is roughly linear with the number of non-zero elements.
 
 Therefore, there is a certain density of non-zero elements of a matrix 
 where it no longer makes sense to store it as a sparse matrix, but rather
-as a full matrix. For this reason operators and functions that have a 
-high probability of returning a full matrix will always return one. For
+as a full matrix.  For this reason operators and functions that have a 
+high probability of returning a full matrix will always return one.  For
 example adding a scalar constant to a sparse matrix will almost always
 make it a full matrix, and so the example
 
 @example
+@group
 speye(3) + 0
 @result{}   1  0  0
   0  1  0
   0  0  1
+@end group
 @end example
 
 returns a full matrix as can be seen. 
@@ -521,28 +537,28 @@
 Additionally, if @code{sparse_auto_mutate} is true, all sparse functions
 test the amount of memory occupied by the sparse matrix to see if the
 amount of storage used is larger than the amount used by the full
-equivalent. Therefore @code{speye (2) * 1} will return a full matrix as
+equivalent.  Therefore @code{speye (2) * 1} will return a full matrix as
 the memory used is smaller for the full version than the sparse version.
 
 As all of the mixed operators and functions between full and sparse 
-matrices exist, in general this does not cause any problems. However,
+matrices exist, in general this does not cause any problems.  However,
 one area where it does cause a problem is where a sparse matrix is
 promoted to a full matrix, where subsequent operations would resparsify
-the matrix. Such cases are rare, but can be artificially created, for
+the matrix.  Such cases are rare, but can be artificially created, for
 example @code{(fliplr(speye(3)) + speye(3)) - speye(3)} gives a full
-matrix when it should give a sparse one. In general, where such cases 
+matrix when it should give a sparse one.  In general, where such cases 
 occur, they impose only a small memory penalty.
 
 There is however one known case where this behavior of Octave's
-sparse matrices will cause a problem. That is in the handling of the
-@dfn{diag} function. Whether @dfn{diag} returns a sparse or full matrix
-depending on the type of its input arguments. So 
+sparse matrices will cause a problem.  That is in the handling of the
+@dfn{diag} function.  Whether @dfn{diag} returns a sparse or full matrix
+depends on the type of its input arguments.  So 
 
 @example
  a = diag (sparse([1,2,3]), -1);
 @end example
 
-should return a sparse matrix. To ensure this actually happens, the
+should return a sparse matrix.  To ensure this actually happens, the
 @dfn{sparse} function, and other functions based on it like @dfn{speye}, 
 always returns a sparse matrix, even if the memory used will be larger 
 than its full representation.
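
A quick check of this guarantee (values made up):

  a = diag (sparse ([1, 2, 3]), -1);
  issparse (a)                 # 1, even though the full form may use less memory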
@@ -550,19 +566,20 @@
 @DOCSTRING(sparse_auto_mutate)
 
 Note that the @code{sparse_auto_mutate} option is incompatible with
-@sc{Matlab}, and so it is off by default.
+@sc{matlab}, and so it is off by default.
 
 @node Mathematical Considerations
 @subsubsection Mathematical Considerations
 
 The attempt has been made to make sparse matrices behave in exactly the
-same manner as there full counterparts. However, there are certain differences
+same manner as their full counterparts.  However, there are certain differences
 and especially differences with other products' sparse implementations.
 
-Firstly, the "./" and ".^" operators must be used with care. Consider what
+Firstly, the "./" and ".^" operators must be used with care.  Consider what
 the examples
 
 @example
+@group
   s = speye (4);
   a1 = s .^ 2;
   a2 = s .^ s;
@@ -570,10 +587,11 @@
   a4 = s ./ 2;
   a5 = 2 ./ s;
   a6 = s ./ s;
+@end group
 @end example
 
-will give. The first example of @var{s} raised to the power of 2 causes
-no problems. However @var{s} raised element-wise to itself involves a
+will give.  The first example of @var{s} raised to the power of 2 causes
+no problems.  However, @var{s} raised element-wise to itself involves a
 large number of terms @code{0 .^ 0} which is 1.  Therefore @code{@var{s} .^
 @var{s}} is a full matrix. 
 
@@ -582,7 +600,7 @@
 
 For the "./" operator @code{@var{s} ./ 2} has no problems, but 
 @code{2 ./ @var{s}} involves a large number of infinity terms as well
-and is equally a full matrix. The case of @code{@var{s} ./ @var{s}}
+and is equally a full matrix.  The case of @code{@var{s} ./ @var{s}}
 involves terms like @code{0 ./ 0} which is a @code{NaN} and so this
 is equally a full matrix with the zero elements of @var{s} filled with
 @code{NaN} values.
@@ -592,9 +610,10 @@
 
 A particular problem of sparse matrices comes about due to the fact that
 as the zeros are not stored, the sign-bit of these zeros is equally not
-stored. In certain cases the sign-bit of zero is important. For example
+stored.  In certain cases the sign-bit of zero is important.  For example
 
 @example
+@group
  a = 0 ./ [-1, 1; 1, -1];
  b = 1 ./ a
  @result{} -Inf            Inf
@@ -602,27 +621,28 @@
  c = 1 ./ sparse (a)
  @result{}  Inf            Inf
      Inf            Inf
+@end group
 @end example
  
 To correct this behavior would mean that zero elements with a negative
 sign-bit would need to be stored in the matrix to ensure that their 
-sign-bit was respected. This is not done at this time, for reasons of
+sign-bit was respected.  This is not done at this time, for reasons of
 efficiency, and so the user is warned that calculations where the sign-bit
 of zero is important must not be done using sparse matrices.
 
 In general any function or operator used on a sparse matrix will
 result in a sparse matrix with the same or a larger number of non-zero
-elements than the original matrix. This is particularly true for the
-important case of sparse matrix factorizations. The usual way to
+elements than the original matrix.  This is particularly true for the
+important case of sparse matrix factorizations.  The usual way to
 address this is to reorder the matrix, such that its factorization is
-sparser than the factorization of the original matrix. That is the
+sparser than the factorization of the original matrix.  That is, the
 factorization of @code{L * U = P * S * Q} has sparser terms @code{L}
 and @code{U} than the equivalent factorization @code{L * U = S}.
 
 Several functions are available to reorder depending on the type of the
-matrix to be factorized. If the matrix is symmetric positive-definite,
-then @dfn{symamd} or @dfn{csymamd} should be used. Otherwise
-@dfn{amd}, @dfn{colamd} or @dfn{ccolamd} should be used. For completeness
+matrix to be factorized.  If the matrix is symmetric positive-definite,
+then @dfn{symamd} or @dfn{csymamd} should be used.  Otherwise
+@dfn{amd}, @dfn{colamd} or @dfn{ccolamd} should be used.  For completeness
 the reordering functions @dfn{colperm} and @dfn{randperm} are
 also available.
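
The effect of such a reordering can be seen in a small experiment; the matrix below is made up and only needs to be sparse, symmetric and positive definite:

  A = sprandsym (200, 0.05) + 200 * speye (200);   # diagonally dominant, hence positive definite
  q = symamd (A);
  nnz (chol (A))               # fill-in of the unpermuted Cholesky factor
  nnz (chol (A(q, q)))         # typically much sparser after reordering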
 
@@ -636,7 +656,7 @@
 
 The standard Cholesky factorization of this matrix can be
 obtained by the same command that would be used for a full
-matrix. This can be visualized with the command 
+matrix.  This can be visualized with the command 
 @code{r = chol(A); spy(r);}.
 @ifset HAVE_CHOLMOD
 @ifset HAVE_COLAMD
@@ -661,7 +681,7 @@
 @ifset htmltex
 10200,
 @end ifset
-with only half of the symmetric matrix being stored. This
+with only half of the symmetric matrix being stored.  This
 is a significant level of fill in, and although not an issue
 for such a small test case, can represent a large overhead 
 in working with other sparse matrices.
@@ -669,7 +689,7 @@
 The appropriate sparsity preserving permutation of the original
 matrix is given by @dfn{symamd} and the factorization using this
 reordering can be visualized using the command @code{q = symamd(A);
-r = chol(A(q,q)); spy(r)}. This gives 
+r = chol(A(q,q)); spy(r)}.  This gives 
 @ifinfo
 @ifnothtml
 29
@@ -729,7 +749,7 @@
 
 Octave includes a polymorphic solver for sparse matrices, where 
 the exact solver used to factorize the matrix, depends on the properties
-of the sparse matrix itself. Generally, the cost of determining the matrix type
+of the sparse matrix itself.  Generally, the cost of determining the matrix type
 is small relative to the cost of factorizing the matrix itself, but in any
 case the matrix type is cached once it is calculated, so that it is not
 re-determined each time it is used in a linear equation.
@@ -740,7 +760,7 @@
 @item If the matrix is diagonal, solve directly and goto 8
 
 @item If the matrix is a permuted diagonal, solve directly taking into
-account the permutations. Goto 8
+account the permutations.  Goto 8
 
 @item If the matrix is square, banded and if the band density is less
 than that given by @code{spparms ("bandden")} continue, else goto 4.
@@ -788,30 +808,30 @@
 @end enumerate
 
 The band density is defined as the number of non-zero values in the matrix
-divided by the number of non-zero values in the matrix. The banded matrix
+divided by the total number of values within the band.  The banded matrix
 solvers can be entirely disabled by using @dfn{spparms} to set @code{bandden}
-to 1 (i.e. @code{spparms ("bandden", 1)}).
+to 1 (i.e., @code{spparms ("bandden", 1)}).
 
-The QR solver factorizes the problem with a Dulmage-Mendhelsohn, to
+The QR solver factorizes the problem with a Dulmage-Mendelsohn decomposition, to
 separate the problem into blocks that can be treated as over-determined,
-multiple well determined blocks, and a final over-determined block. For
+multiple well determined blocks, and a final over-determined block.  For
 matrices with blocks of strongly connected nodes this is a big win as
-LU decomposition can be used for many blocks. It also significantly
+LU decomposition can be used for many blocks.  It also significantly
 improves the chance of finding a solution to over-determined problems
 rather than just returning a vector of @dfn{NaN}'s.
 
 All of the solvers above, can calculate an estimate of the condition
-number. This can be used to detect numerical stability problems in the
-solution and force a minimum norm solution to be used. However, for
+number.  This can be used to detect numerical stability problems in the
+solution and force a minimum norm solution to be used.  However, for
 narrow banded, triangular or diagonal matrices, the cost of
 calculating the condition number is significant, and can in fact
-exceed the cost of factoring the matrix. Therefore the condition
+exceed the cost of factoring the matrix.  Therefore the condition
 number is not calculated in these cases, and Octave relies on simpler
 techniques to detect singular matrices or the underlying LAPACK code in
 the case of banded matrices.
 
 The user can force the type of the matrix with the @code{matrix_type}
-function. This overcomes the cost of discovering the type of the matrix.
+function.  This overcomes the cost of discovering the type of the matrix.
 However, it should be noted that identifying the type of the matrix incorrectly
 will lead to unpredictable results, and so @code{matrix_type} should be
 used with care.
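
For example, the banded solvers can be switched off and back on like this (a sketch; the matrix A and right-hand side b are assumed to exist):

  old = spparms ();            # save the current parameter vector
  spparms ("bandden", 1);      # a band density threshold of 1 disables the banded solvers
  x = A \ b;
  spparms (old);               # restore the previous settings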
@@ -848,7 +868,7 @@
 The left division @code{\} and right division @code{/} operators,
 discussed in the previous section, use direct solvers to resolve a
 linear equation of the form @code{@var{x} = @var{A} \ @var{b}} or
-@code{@var{x} = @var{b} / @var{A}}. Octave equally includes a number of
+@code{@var{x} = @var{b} / @var{A}}.  Octave equally includes a number of
 functions to solve sparse linear equations using iterative techniques.
 
 @DOCSTRING(pcg)
@@ -856,9 +876,9 @@
 @DOCSTRING(pcr)
 
 The speed with which an iterative solver converges to a solution can be
-accelerated with the use of a pre-conditioning matrix @var{M}. In this
+accelerated with the use of a pre-conditioning matrix @var{M}.  In this
 case the linear equation @code{@var{M}^-1 * @var{A} * @var{x} = @var{M}^-1 *
-@var{A} \ @var{b}} is solved instead. Typical pre-conditioning matrices
+@var{b}} is solved instead.  Typical pre-conditioning matrices
 are partial factorizations of the original matrix.
 
 @DOCSTRING(luinc)
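
A small sketch of a preconditioned iteration along these lines; the matrix, right-hand side and tolerances are made up, and a simple Jacobi (diagonal) preconditioner stands in for a partial factorization:

  n = 500;
  A = sprandsym (n, 0.01) + 10 * speye (n);   # symmetric positive definite test matrix
  b = ones (n, 1);
  M = diag (diag (A));                        # diagonal preconditioner
  x = pcg (A, b, 1e-8, 200, M);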
@@ -867,13 +887,13 @@
 @section Real Life Example of the use of Sparse Matrices
 
 A common application for sparse matrices is in the solution of Finite
-Element Models. Finite element models allow numerical solution of
+Element Models.  Finite element models allow numerical solution of
 partial differential equations that do not have closed form solutions,
 typically because of the complex shape of the domain.
 
 In order to motivate this application, we consider the boundary value
-Laplace equation. This system can model scalar potential fields, such
-as heat or electrical potential. Given a medium 
+Laplace equation.  This system can model scalar potential fields, such
+as heat or electrical potential.  Given a medium 
 @iftex
 @tex
 $\Omega$ 
@@ -925,7 +945,7 @@
 @end ifinfo
 and know the boundary temperature (Dirichlet condition)
 or heat flux (from which we can calculate the Neumann condition
-by dividing by the thermal conductivity  at the boundary). Similarly, 
+by dividing by the thermal conductivity at the boundary).  Similarly, 
 in an electrical model, we want to calculate the voltage in
 @iftex
 @tex
@@ -955,19 +975,20 @@
 We take as a 3D example a cylindrical liquid-filled tank with a small 
 non-conductive ball from the EIDORS project@footnote{EIDORS - Electrical 
 Impedance Tomography and Diffuse optical Tomography Reconstruction Software 
-@url{http://eidors3d.sourceforge.net}}. This is model is designed to reflect
-an application of electrical  impedance tomography, where current patterns
-are applied to such a tank in order to  image the internal conductivity
-distribution. In order to describe the FEM geometry, we have a matrix of 
+@url{http://eidors3d.sourceforge.net}}.  This model is designed to reflect
+an application of electrical impedance tomography, where current patterns
+are applied to such a tank in order to image the internal conductivity
+distribution.  In order to describe the FEM geometry, we have a matrix of 
 vertices @code{nodes} and simplices @code{elems}.
 @end ifset
 
 The following example creates a simple rectangular 2D electrically
 conductive medium with 10 V and 20 V imposed on opposite sides 
-(Dirichlet boundary conditions). All other edges are electrically
+(Dirichlet boundary conditions).  All other edges are electrically
 isolated.
 
 @example
+@group
    node_y= [1;1.2;1.5;1.8;2]*ones(1,11);
    node_x= ones(5,1)*[1,1.05,1.1,1.2, ...
              1.3,1.5,1.7,1.8,1.9,1.95,2];
@@ -985,20 +1006,23 @@
    E= size(elems,1); # No. of simplices
    N= size(nodes,1); # No. of vertices
    D= size(elems,2); # dimensions+1
+@end group
 @end example
 
 This creates an N-by-2 matrix @code{nodes} and an E-by-3 matrix
 @code{elems} with values which define finite element triangles:
 
 @example
+@group
   nodes(1:7,:)'
-    1.00 1.00 1.00 1.00 1.00 1.05 1.05 ...
-    1.00 1.20 1.50 1.80 2.00 1.00 1.20 ...
+    1.00 1.00 1.00 1.00 1.00 1.05 1.05 @dots{}
+    1.00 1.20 1.50 1.80 2.00 1.00 1.20 @dots{}
 
   elems(1:7,:)'
-    1    2    3    4    2    3    4 ...
-    2    3    4    5    7    8    9 ...
-    6    7    8    9    6    7    8 ...
+    1    2    3    4    2    3    4 @dots{}
+    2    3    4    5    7    8    9 @dots{}
+    6    7    8    9    6    7    8 @dots{}
+@end group
 @end example
 
 Using a first order FEM, we approximate the electrical conductivity 
@@ -1014,12 +1038,13 @@
 as constant on each simplex (represented by the vector @code{conductivity}).
 Based on the finite element geometry, we first calculate a system (or
 stiffness) matrix for each simplex (represented as 3-by-3 elements on the
-diagonal of the element-wise system matrix @code{SE}. Based on @code{SE} 
+diagonal of the element-wise system matrix @code{SE}).  Based on @code{SE} 
 and a N-by-DE connectivity matrix @code{C}, representing the connections 
 between simplices and vertices, the global connectivity matrix @code{S} is
 calculated.
 
 @example
+@group
   # Element conductivity
   conductivity= [1*ones(1,16), ...
          2*ones(1,48), 1*ones(1,16)];
@@ -1046,6 +1071,7 @@
   SE= sparse(Siidx,Sjidx,Sdata);
   # Global system matrix
   S= C'* SE *C;
+@end group
 @end example
 
 The system matrix acts like the conductivity 
@@ -1070,6 +1096,7 @@
 solve for the voltages at each vertex @code{V}. 
 
 @example
+@group
   # Dirichlet boundary conditions
   D_nodes=[1:5, 51:55]; 
   D_value=[10*ones(1,5), 20*ones(1,5)]; 
@@ -1080,7 +1107,7 @@
              # boundary condns
   idx(D_nodes) = [];
 
-  # Neumann boundary conditions. Note that
+  # Neumann boundary conditions.  Note that
   # N_value must be normalized by the
   # boundary length and element conductivity
   N_nodes=[];
@@ -1091,6 +1118,7 @@
 
   V(idx) = S(idx,idx) \ ( Q(idx) - ...
             S(idx,D_nodes) * V(D_nodes));
+@end group
 @end example
 
 Finally, in order to display the solution, we show each solved voltage 
@@ -1106,12 +1134,14 @@
 @end ifset
 
 @example
+@group
   elemx = elems(:,[1,2,3,1])';
   xelems = reshape (nodes(elemx, 1), 4, E);
   yelems = reshape (nodes(elemx, 2), 4, E);
   velems = reshape (V(elemx), 4, E);
   plot3 (xelems,yelems,velems,'k'); 
   print ('grid.eps');
+@end group
 @end example
 
 
--- a/scripts/sparse/spaugment.m
+++ b/scripts/sparse/spaugment.m
@@ -28,7 +28,7 @@
 ## @end example
 ##
 ## @noindent
-## This is related to the leasted squared solution of 
+## This is related to the least squares solution of 
 ## @code{@var{a} \\ @var{b}}, by
 ## 
 ## @example
--- a/scripts/sparse/svds.m
+++ b/scripts/sparse/svds.m
@@ -55,7 +55,7 @@
 ## 1e-10.
 ##
 ## @item maxit
-## The maximum number of iterations.  The defaut is 300.
+## The maximum number of iterations.  The default is 300.
 ##
 ## @item disp
 ## The level of diagnostic printout.  If @code{disp} is 0 then there is no
--- a/src/DLD-FUNCTIONS/amd.cc
+++ b/src/DLD-FUNCTIONS/amd.cc
@@ -67,14 +67,14 @@
 @table @asis\n\
 @item opts.dense\n\
 Determines what @code{amd} considers to be a dense row or column of the\n\
-input matrix.  Rows or columns with more that @code{max(16, (dense *\n\
+input matrix.  Rows or columns with more than @code{max (16, dense *\n\
 sqrt (@var{n}))} entries, where @var{n} is the order of the matrix @var{s},\n\
-are igorned by @code{amd} during the calculation of the permutation\n\
+are ignored by @code{amd} during the calculation of the permutation.\n\
 The value of dense must be a positive scalar and its default value is 10.0.\n\
 \n\
 @item opts.aggressive\n\
-If this value is a non zero scalar, then @code{amd} performs agressive\n\
-absorption.  The default is not to perform agressive absorption.\n\
+If this value is a non zero scalar, then @code{amd} performs aggressive\n\
+absorption.  The default is not to perform aggressive absorption.\n\
 @end table\n\
 \n\
 The author of the code itself is Timothy A. Davis (davis@@cise.ufl.edu),\n\
--- a/src/DLD-FUNCTIONS/symrcm.cc
+++ b/src/DLD-FUNCTIONS/symrcm.cc
@@ -430,7 +430,7 @@
 descriptions found in\n\
 \n\
 E. Cuthill, J. McKee: Reducing the Bandwidth of Sparse Symmetric\n\
-Matrices. Proceedings of the 24th ACM National Conference, 157-172\n\
+Matrices. Proceedings of the 24th ACM National Conference, 157--172\n\
 1969, Brandon Press, New Jersey.\n\
 \n\
 Alan George, Joseph W. H. Liu: Computer Solution of Large Sparse\n\
--- a/src/data.cc
+++ b/src/data.cc
@@ -2493,7 +2493,7 @@
 Return the amount of storage allocated to the sparse matrix @var{SM}.\n\
 Note that Octave tends to crop unused memory at the first opportunity\n\
 for sparse objects.  There are some cases of user created sparse objects\n\
-where the value returned by @dfn{nzmaz} will not be the same as @dfn{nnz},\n\
+where the value returned by @dfn{nzmax} will not be the same as @dfn{nnz},\n\
 but in general they will give the same result.\n\
 @seealso{sparse, spalloc}\n\
 @end deftypefn")