changeset 14119:94e2a76f1e5a stable

doc: Final grammarcheck and spellcheck before 3.6.0 release.

* container.txi, aspell-octave.en.pws, expr.txi, vectorize.txi,
accumarray.m, accumdim.m, interpft.m, strread.m, parseparams.m,
warning_ids.m, cellfun.cc, help.cc: grammarcheck and spellcheck docstrings.
author Rik <octave@nomad.inbox5.com>
date Thu, 29 Dec 2011 06:05:00 -0800
parents ebe2e6b2ba52
children 0a051c406242
files doc/interpreter/container.txi doc/interpreter/doccheck/aspell-octave.en.pws doc/interpreter/expr.txi doc/interpreter/vectorize.txi scripts/general/accumarray.m scripts/general/accumdim.m scripts/general/interpft.m scripts/io/strread.m scripts/miscellaneous/parseparams.m scripts/miscellaneous/warning_ids.m src/DLD-FUNCTIONS/cellfun.cc src/help.cc
diffstat 12 files changed, 158 insertions(+), 131 deletions(-)
--- a/doc/interpreter/container.txi
+++ b/doc/interpreter/container.txi
@@ -507,7 +507,7 @@
 The simplest way to process data in a structure is within a @code{for}
 loop (@pxref{Looping Over Structure Elements}).  A similar effect can be
 achieved with the @code{structfun} function, where a user defined
-function is applied to each field of the structure. @xref{doc-structfun}.
+function is applied to each field of the structure.  @xref{doc-structfun}.
 
 Alternatively, to process the data in a structure, the structure might
 be converted to another type of container before being treated.
@@ -885,7 +885,7 @@
 is to iterate through it using one or more @code{for} loops.  The same
 idea can be implemented more easily through the use of the @code{cellfun}
 function that calls a user-specified function on all elements of a cell
-array. @xref{doc-cellfun}.
+array.  @xref{doc-cellfun}.
 
 An alternative is to convert the data to a different container, such as
 a matrix or a data structure.  Depending on the data this is possible
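
The field-by-field processing that structfun performs can be illustrated with a rough Python analogue, using a dict in place of a structure (the function and data names here are hypothetical, not part of the Octave manual):

```python
def structfun_like(func, record):
    """Apply func to each field's value, loosely mirroring how
    Octave's structfun applies a user-defined function to each
    field of a structure."""
    return {field: func(value) for field, value in record.items()}

# hypothetical example: a "structure" with two string fields
s = {"name": "Octave", "version": "3.6.0"}
lengths = structfun_like(len, s)
```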
--- a/doc/interpreter/doccheck/aspell-octave.en.pws
+++ b/doc/interpreter/doccheck/aspell-octave.en.pws
@@ -83,6 +83,7 @@
 breakpoint
 Brenan
 Brockwell
+BSX
 builtin
 builtins
 ButtonDownFcn
@@ -162,6 +163,10 @@
 ctranspose
 CTRL
 CTS
+cummax
+cummin
+cumprod
+cumsum
 cURL
 Cuthill
 cxsparse
@@ -226,6 +231,7 @@
 eigenvectors
 eigs
 Ekerdt
+elementwise
 Elfers
 elseif
 emacs
@@ -397,6 +403,7 @@
 Hypergeometric
 hypergeometric
 IEEE
+ifelse
 iff
 ifft
 ifftn
@@ -601,6 +608,7 @@
 nocompute
 nolabel
 noncommercially
+nonconformant
 nonsmooth
 nonzeros
 noperm
@@ -616,6 +624,7 @@
 nthargout
 NTSC
 nul
+Numpy
 Nx
 nzmax
 oct
@@ -736,6 +745,7 @@
 Reindent
 relicensing
 ren
+repelems
 repmat
 resampled
 resampling
@@ -781,6 +791,7 @@
 SIGNUM
 sim
 SIMAX
+SIMD
 simplechol
 simplecholperm
 simplematrix
@@ -864,6 +875,7 @@
 substring
 substrings
 SuiteSparse
+sumsq
 SunOS
 superiorto
 supradiagonal
--- a/doc/interpreter/expr.txi
+++ b/doc/interpreter/expr.txi
@@ -512,7 +512,7 @@
 @cindex complex-conjugate transpose
 
 The following arithmetic operators are available, and work on scalars
-and matrices. They element-by-element operators and functions broadcast
+and matrices.  The element-by-element operators and functions broadcast
 (@pxref{Broadcasting}).
 
 @table @asis
@@ -717,7 +717,7 @@
 All of Octave's comparison operators return a value of 1 if the
 comparison is true, or 0 if it is false.  For matrix values, they all
 work on an element-by-element basis.  Broadcasting rules apply.
-@xref{Broadcasting}. For example:
+@xref{Broadcasting}.  For example:
 
 @example
 @group
@@ -864,7 +864,7 @@
 @var{boolean} is false.
 @end table
 
-These operators work on an element-by-element basis. For example, the
+These operators work on an element-by-element basis.  For example, the
 expression
 
 @example
@@ -874,7 +874,7 @@
 @noindent
 returns a two by two identity matrix.
 
-For the binary operators, broadcasting rules apply. @xref{Broadcasting}.
+For the binary operators, broadcasting rules apply.  @xref{Broadcasting}.
 In particular, if one of the operands is a scalar and the other a
 matrix, the operator is applied to the scalar and each element of the
 matrix.
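
The scalar-versus-matrix behavior described in this hunk can be sketched as a plain double loop; this Python analogue is illustrative only, not Octave's implementation:

```python
def scalar_op_matrix(op, scalar, matrix):
    """Apply a binary operator between a scalar and every element
    of a matrix, the element-by-element behavior described above."""
    return [[op(scalar, x) for x in row] for row in matrix]

# comparisons yield 1 (true) or 0 (false), as in Octave
ge = lambda a, b: 1 if b >= a else 0
mask = scalar_op_matrix(ge, 2, [[1, 2], [3, 4]])
```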
--- a/doc/interpreter/vectorize.txi
+++ b/doc/interpreter/vectorize.txi
@@ -22,20 +22,20 @@
 @cindex vectorize
 
 Vectorization is a programming technique that uses vector operations
-instead of element-by-element loop-based operations. Besides frequently
-writing more succinct code, vectorization also allows for better
-optimization of the code. The optimizations may occur either in Octave's
-own Fortran, C, or C++ internal implementation, or even at a lower level
-depending on the compiler and external numerical libraries used to
-build Octave. The ultimate goal is to make use of your hardware's vector
+instead of element-by-element loop-based operations.  Besides frequently
+producing more succinct Octave code, vectorization also allows for better
+optimization in the subsequent implementation.  The optimizations may occur
+either in Octave's own Fortran, C, or C++ internal implementation, or even at a
+lower level depending on the compiler and external numerical libraries used to
+build Octave.  The ultimate goal is to make use of your hardware's vector
 instructions if possible or to perform other optimizations in software.
 
-Vectorization is not a concept unique to Octave, but is particularly
-important because Octave is a matrix-oriented language. Vectorized
-Octave code will see a dramatic speed up in most cases.
+Vectorization is not a concept unique to Octave, but it is particularly
+important because Octave is a matrix-oriented language.  Vectorized
+Octave code will see a dramatic speedup (10X--100X) in most cases.
 
-This chapter discusses vectorization and other techniques for faster
-code execution.
+This chapter discusses vectorization and other techniques for writing faster
+code.
 
 @menu
 * Basic Vectorization::        Basic techniques for code optimization
@@ -50,7 +50,7 @@
 @section Basic Vectorization
 
 To a very good first approximation, the goal in vectorization is to
-write code that avoids loops and uses whole-array operations. As a
+write code that avoids loops and uses whole-array operations.  As a
 trivial example, consider
 
 @example
@@ -72,20 +72,20 @@
 
 @noindent
 This isn't merely easier to write; it is also internally much easier to
-optimize. Octave delegates this operation to an underlying
-implementation which among other optimizations may use special vector
+optimize.  Octave delegates this operation to an underlying
+implementation which, among other optimizations, may use special vector
 hardware instructions or could conceivably even perform the additions in
-parallel. In general, if the code is vectorized, the underlying
-implementation has much more freedom about the assumptions it can make
+parallel.  In general, if the code is vectorized, the underlying
+implementation has more freedom in the assumptions it can make
 in order to achieve faster execution.
 
-This is especially important for loops with "cheap" bodies. Often it
+This is especially important for loops with "cheap" bodies.  Often it
 suffices to vectorize just the innermost loop to get acceptable
-performance. A general rule of thumb is that the "order" of the
+performance.  A general rule of thumb is that the "order" of the
 vectorized body should be greater or equal to the "order" of the
 enclosing loop.
 
-As a less trivial example, rather than
+As a less trivial example, instead of
 
 @example
 @group
@@ -103,9 +103,9 @@
 @end example
 
 This shows an important general concept about using arrays for indexing
-instead of looping over an index variable. @xref{Index Expressions}.
-Also use boolean indexing generously. If a condition needs to be tested,
-this condition can also be written as a boolean index. For instance,
+instead of looping over an index variable.  @xref{Index Expressions}.
+Also use boolean indexing generously.  If a condition needs to be tested,
+this condition can also be written as a boolean index.  For instance,
 instead of
 
 @example
@@ -129,18 +129,18 @@
 which exploits the fact that @code{a > 5} produces a boolean index.
 
 Use elementwise vector operators whenever possible to avoid looping
-(operators like @code{.*} and @code{.^}). @xref{Arithmetic Ops}. For
-simple in-line functions, the @code{vectorize} function can do this
+(operators like @code{.*} and @code{.^}).  @xref{Arithmetic Ops}.  For
+simple inline functions, the @code{vectorize} function can do this
 automatically.
 
 @DOCSTRING(vectorize)
 
-Also exploit broadcasting in these elementise operators both to avoid
+Also exploit broadcasting in these elementwise operators both to avoid
 looping and unnecessary intermediate memory allocations.
 @xref{Broadcasting}.
 
-Use built-in and library functions if possible. Built-in and compiled
-functions are very fast. Even with a m-file library function, chances
+Use built-in and library functions if possible.  Built-in and compiled
+functions are very fast.  Even with an m-file library function, chances
 are good that it is already optimized, or will be optimized more in a
 future release.
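
The boolean-indexing idiom mentioned above (writing a test such as a > 5 as an index rather than looping over elements) has a one-pass Python counterpart; the threshold and condition here are hypothetical stand-ins:

```python
def zero_where_not_greater(values, threshold):
    """Equivalent of the boolean-index assignment b(a <= threshold) = 0:
    every element failing the test a > threshold is replaced by zero
    in a single pass, with no explicit per-element branch loop."""
    return [v if v > threshold else 0 for v in values]

b = zero_where_not_greater([3, 7, 5, 9], 5)
```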
 
@@ -158,8 +158,8 @@
 @end example
 
 Most Octave functions are written with vector and array arguments in
-mind. If you find yourself writing a loop with a very simple operation,
-chances are that such a function already exists. The following functions
+mind.  If you find yourself writing a loop with a very simple operation,
+chances are that such a function already exists.  The following functions
 occur frequently in vectorized code:
 
 @itemize @bullet
@@ -169,16 +169,22 @@
 @itemize
 @item
 find
+
 @item
 sub2ind
+
 @item
 ind2sub
+
 @item
 sort
+
 @item
 unique
+
 @item
 lookup
+
 @item
 ifelse / merge
 @end itemize
@@ -188,6 +194,7 @@
 @itemize
 @item
 repmat
+
 @item
 repelems
 @end itemize
@@ -197,20 +204,28 @@
 @itemize
 @item
 sum
+
 @item
 prod
+
 @item
 cumsum
+
 @item
 cumprod
+
 @item
 sumsq
+
 @item
 diff
+
 @item
 dot
+
 @item
 cummax
+
 @item
 cummin
 @end itemize
@@ -220,12 +235,16 @@
 @itemize
 @item
 reshape
+
 @item
 resize
+
 @item
 permute
+
 @item
 squeeze
+
 @item
 deal
 @end itemize
@@ -241,27 +260,28 @@
 @cindex SIMD
 
 Broadcasting refers to how Octave binary operators and functions behave
-when their matrix or array operands or arguments differ in size. Since
+when their matrix or array operands or arguments differ in size.  Since
 version 3.6.0, Octave now automatically broadcasts vectors, matrices,
 and arrays when using elementwise binary operators and functions.
 Broadly speaking, smaller arrays are ``broadcast'' across the larger
-one, until they have a compatible shape. The rule is that corresponding
+one, until they have a compatible shape.  The rule is that corresponding
 array dimensions must either
 
 @enumerate
 @item
-be equal or,
+be equal, or
+
 @item
 one of them must be 1.
 @end enumerate
 
 @noindent
 In case all dimensions are equal, no broadcasting occurs and ordinary
-element-by-element arithmetic takes place. For arrays of higher
+element-by-element arithmetic takes place.  For arrays of higher
 dimensions, if the number of dimensions isn't the same, then missing
-trailing dimensions are treated as 1. When one of the dimensions is 1,
+trailing dimensions are treated as 1.  When one of the dimensions is 1,
 the array with that singleton dimension gets copied along that dimension
-until it matches the dimension of the other array. For example, consider
+until it matches the dimension of the other array.  For example, consider
 
 @example
 @group
@@ -276,8 +296,8 @@
 @end example
 
 @noindent
-Without broadcasting, @code{x + y} would be an error because dimensions
-do not agree. However, with broadcasting it is as if the following
+Without broadcasting, @code{x + y} would be an error because the dimensions
+do not agree.  However, with broadcasting it is as if the following
 operation were performed:
 
 @example
@@ -299,8 +319,8 @@
 
 @noindent
 That is, the smaller array of size @code{[1 3]} gets copied along the
-singleton dimension (the number of rows) until it is @code{[3 3]}. No
-actual copying takes place, however. The internal implementation reuses
+singleton dimension (the number of rows) until it is @code{[3 3]}.  No
+actual copying takes place, however.  The internal implementation reuses
 elements along the necessary dimension in order to achieve the desired
 effect without copying in memory.
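
The dimension-matching rule stated above (corresponding dimensions must be equal, or one of them must be 1, with missing trailing dimensions treated as 1) can be written out as a small Python sketch. Note that this follows the trailing-dimension padding described in the text; NumPy, by contrast, pads leading dimensions:

```python
def broadcast_shape(shape_a, shape_b):
    """Compute the result shape of a broadcast binary operation:
    pad the shorter shape with trailing singleton dimensions, then
    require each pair of dimensions to be equal or to contain a 1."""
    n = max(len(shape_a), len(shape_b))
    a = shape_a + (1,) * (n - len(shape_a))
    b = shape_b + (1,) * (n - len(shape_b))
    result = []
    for da, db in zip(a, b):
        if da == db or db == 1:
            result.append(da)
        elif da == 1:
            result.append(db)
        else:
            raise ValueError("nonconformant arguments")
    return tuple(result)
```

For the x + y example above, a 3x3 operand against a 1x3 operand yields a 3x3 result.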
 
@@ -322,16 +342,16 @@
 subtraction takes place.
 
 For a higher-dimensional example, suppose @code{img} is an RGB image of
-size @code{[m n 3]} and we wish to multiply each colour by a different
-scalar. The following code accomplishes this with broadcasting,
+size @code{[m n 3]} and we wish to multiply each color by a different
+scalar.  The following code accomplishes this with broadcasting,
 
 @example
 img .*= permute ([0.8, 0.9, 1.2], [1, 3, 2]);
 @end example
 
 @noindent
-Note the usage of permute to match the dimensions of the @code{[0.8,
-0.9, 1.2]} vector with @code{img}.
+Note the usage of permute to match the dimensions of the
+@code{[0.8, 0.9, 1.2]} vector with @code{img}.
 
 For functions that are not written with broadcasting semantics,
 @code{bsxfun} can be useful for coercing them to broadcast.
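
What the permute trick accomplishes for the RGB image can be spelled out channel by channel; this plain Python sketch over nested lists (hypothetical data, not Octave's implementation) shows the effect the broadcast achieves:

```python
def scale_channels(img, factors):
    """Multiply channel k of an m x n x 3 image (nested lists) by
    factors[k]: the effect achieved above by broadcasting the
    permuted [0.8, 0.9, 1.2] vector across img."""
    return [[[channel * f for channel, f in zip(pixel, factors)]
             for pixel in row] for row in img]

tiny = [[[1, 2, 3], [4, 5, 6]]]        # a hypothetical 1x2 "image"
scaled = scale_channels(tiny, [2, 3, 4])
```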
@@ -339,7 +359,7 @@
 @DOCSTRING(bsxfun)
 
 Broadcasting is only applied if either of the two broadcasting
-conditions hold. As usual, however, broadcasting does not apply when two
+conditions hold.  As usual, however, broadcasting does not apply when two
 dimensions differ and neither is 1:
 
 @example
@@ -356,11 +376,10 @@
 This will produce an error about nonconformant arguments.
 
 Besides common arithmetic operations, several functions of two arguments
-also broadcast. The full list of functions and operators that broadcast
+also broadcast.  The full list of functions and operators that broadcast
 is
 
 @example
-@group
       plus      +  .+
       minus     -  .-
       times     .*
@@ -384,7 +403,6 @@
       xor
 
       +=  -=  .+=  .-=  .*=  ./=  .\=  .^=  .**=  &=  |=
-@end group
 @end example
 
 Beware of resorting to broadcasting if a simpler operation will suffice.
@@ -397,25 +415,24 @@
 @noindent
 This operation broadcasts the two matrices with permuted dimensions
 across each other during elementwise multiplication in order to obtain a
-larger 3d array, and this array is the summed along the third dimension.
+larger 3-D array, and this array is then summed along the third dimension.
 A moment of thought will prove that this operation is simply the much
 faster ordinary matrix multiplication, @code{c = a*b;}.
 
 A note on terminology: ``broadcasting'' is the term popularized by the
-Numpy numerical environment in the Python programming language. In other
+Numpy numerical environment in the Python programming language.  In other
 programming languages and environments, broadcasting may also be known
-as @emph{binary singleton expansion} (BSX, in @sc{Matlab}, and the
+as @emph{binary singleton expansion} (BSX, in @sc{matlab}, and the
 origin of the name of the @code{bsxfun} function), @emph{recycling} (R
 programming language), @emph{single-instruction multiple data} (SIMD),
 or @emph{replication}.
 
 @subsection Broadcasting and Legacy Code
 
-The new broadcasting semantics do not affect almost any code that worked
-in previous versions of Octave without error. Thus for example all code
-inherited from @sc{Matlab} that worked in previous versions of Octave
-should still work without change in Octave. The only exception is code
-such as
+The new broadcasting semantics almost never affect code that worked
+in previous versions of Octave.  Consequently, all code inherited from
+@sc{matlab} that ran in previous versions of Octave should continue to
+work unchanged.  The only exception is code such as
 
 @example
 @group
@@ -430,7 +447,7 @@
 @noindent
 that may have relied on matrices of different size producing an error.
 Due to how broadcasting changes semantics with older versions of Octave,
-by default Octave warns if a broadcasting operation is performed. To
+by default Octave warns if a broadcasting operation is performed.  To
 disable this warning, refer to its ID (@pxref{doc-warning_ids}):
 
 @example
@@ -438,7 +455,7 @@
 @end example
 
 @noindent
-If you want to recover the old behaviour and produce an error, turn this
+If you want to recover the old behavior and produce an error, turn this
 warning into an error:
 
 @example
@@ -453,14 +470,17 @@
 
 As a general rule, functions should already be written with matrix
 arguments in mind and should consider whole matrix operations in a
-vectorized manner. Sometimes, writing functions in this way appears
-difficult or impossible for various reasons. For those situations,
+vectorized manner.  Sometimes, writing functions in this way appears
+difficult or impossible for various reasons.  For those situations,
 Octave provides facilities for applying a function to each element of an
 array, cell, or struct.
 
 @DOCSTRING(arrayfun)
+
 @DOCSTRING(spfun)
+
 @DOCSTRING(cellfun)
+
 @DOCSTRING(structfun)
 
 @node Accumulation
@@ -485,21 +505,19 @@
 
 @itemize @bullet
 
-@item
-Avoid computing costly intermediate results multiple times. Octave
-currently does not eliminate common subexpressions. Also, certain
-internal computation results are cached for variables. For instance, if
+@item Avoid computing costly intermediate results multiple times.
+Octave currently does not eliminate common subexpressions.  Also, certain
+internal computation results are cached for variables.  For instance, if
 a matrix variable is used multiple times as an index, checking the
 indices (and internal conversion to integers) is only done once.
 
-@item
+@item Be aware of lazy copies (copy-on-write).
 @cindex copy-on-write
 @cindex COW
 @cindex memory management
-Be aware of lazy copies (copy-on-write). When a copy of an object is
-created, the data is not immediately copied, but rather shared. The
-actual copying is postponed until the copied data needs to be modified.
-For example:
+When a copy of an object is created, the data is not immediately copied, but
+rather shared.  The actual copying is postponed until the copied data needs to
+be modified.  For example:
 
 @example
 @group
@@ -514,30 +532,29 @@
 elements).
 
 Additionally, index expressions also use lazy copying when Octave can
-determine that the indexed portion is contiguous in memory. For example:
+determine that the indexed portion is contiguous in memory.  For example:
 
 @example
 @group
 a = zeros (1000); # create a 1000x1000 matrix
-b = a(:,10:100); # no copying done here
-b = a(10:100,:); # copying done here
+b = a(:,10:100);  # no copying done here
+b = a(10:100,:);  # copying done here
 @end group
 @end example
 
 This applies to arrays (matrices), cell arrays, and structs indexed
-using (). Index expressions generating cs-lists can also benefit of
-shallow copying in some cases. In particular, when @var{a} is a struct
-array, expressions like @code{@{a.x@}, @{a(:,2).x@}} will use lazy
-copying, so that data can be shared between a struct array and a cell
-array.
+using @samp{()}.  Index expressions generating comma-separated lists can also
+benefit from shallow copying in some cases.  In particular, when @var{a} is a
+struct array, expressions like @code{@{a.x@}, @{a(:,2).x@}} will use lazy
+copying, so that data can be shared between a struct array and a cell array.
 
-Most indexing expressions do not live longer than their `parent'
-objects. In rare cases, however, a lazily copied slice outlasts its
+Most indexing expressions do not live longer than their parent
+objects.  In rare cases, however, a lazily copied slice outlasts its
 parent, in which case it becomes orphaned, still occupying unnecessarily
-more memory than needed. To provide a remedy working in most real cases,
+more memory than needed.  To provide a remedy working in most real cases,
 Octave checks for orphaned lazy slices at certain situations, when a
 value is stored into a "permanent" location, such as a named variable or
-cell or struct element, and possibly economizes them. For example:
+cell or struct element, and possibly economizes them.  For example:
 
 @example
 @group
@@ -548,23 +565,22 @@
 @end group
 @end example
 
-@item
-Avoid deep recursion. Function calls to m-file functions carry a
-relatively significant overhead, so rewriting a recursion as a loop
-often helps. Also, note that the maximum level of recursion is limited.
+@item Avoid deep recursion.
+Function calls to m-file functions carry a relatively significant overhead, so
+rewriting a recursion as a loop often helps.  Also, note that the maximum level
+of recursion is limited.
 
-@item
-Avoid resizing matrices unnecessarily.  When building a single result
-matrix from a series of calculations, set the size of the result matrix
-first, then insert values into it.  Write
+@item Avoid resizing matrices unnecessarily.
+When building a single result matrix from a series of calculations, set the
+size of the result matrix first, then insert values into it.  Write
 
 @example
 @group
 result = zeros (big_n, big_m)
 for i = over:and_over
-  r1 = @dots{}
-  r2 = @dots{}
-  result (r1, r2) = new_value ();
+  ridx = @dots{}
+  cidx = @dots{}
+  result(ridx, cidx) = new_value ();
 endfor
 @end group
 @end example
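
The preallocate-then-fill pattern shown above carries over directly to other languages; a minimal Python sketch (names such as value_at are hypothetical placeholders, like the dots in the Octave example):

```python
def build_result(n_rows, n_cols, value_at):
    """Preallocate the full result and fill it in place, mirroring
    result = zeros (big_n, big_m) followed by indexed assignment,
    instead of growing the result inside the loop."""
    result = [[0] * n_cols for _ in range(n_rows)]
    for i in range(n_rows):
        for j in range(n_cols):
            result[i][j] = value_at(i, j)
    return result
```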
@@ -581,12 +597,12 @@
 @end group
 @end example
 
-Sometimes the number of items can't be computed in advance, and
-stack-like operations are needed. When elements are being repeatedly
-inserted at/removed from the end of an array, Octave detects it as stack
-usage and attempts to use a smarter memory management strategy
-pre-allocating the array in bigger chunks. Likewise works for cell and
-struct arrays.
+Sometimes the number of items cannot be computed in advance, and
+stack-like operations are needed.  When elements are being repeatedly
+inserted at or removed from the end of an array, Octave detects it as stack
+usage and attempts to use a smarter memory management strategy by
+pre-allocating the array in bigger chunks.  This strategy is also applied
+to cell and struct arrays.
 
 @example
 @group
@@ -601,21 +617,19 @@
 @end group
 @end example
 
-@item
-Avoid calling @code{eval} or @code{feval} excessively, because
-they require Octave to parse input or look up the name of a function in
-the symbol table.
+@item Avoid calling @code{eval} or @code{feval} excessively.
+Parsing input and looking up the name of a function in the symbol table are
+relatively expensive operations.
 
-If you are using @code{eval} as an exception handling mechanism and not
+If you are using @code{eval} merely as an exception handling mechanism, and not
 because you need to execute some arbitrary text, use the @code{try}
 statement instead.  @xref{The @code{try} Statement}.
 
-@item
-If you are calling lots of functions but none of them will need to
-change during your run, set the variable
-@code{ignore_function_time_stamp} to @code{"all"} so that Octave doesn't
-waste a lot of time checking to see if you have updated your function
-files.
+@item Use @code{ignore_function_time_stamp} when appropriate.
+If you are calling lots of functions, and none of them will need to change
+during your run, set the variable @code{ignore_function_time_stamp} to
+@code{"all"}.  This will stop Octave from checking the time stamp of a function
+file to see if it has been updated while the program is being run.
 @end itemize
 
 @node Examples
@@ -650,6 +664,6 @@
 @end example
 
 Note the usage of colon indexing to flatten an intermediate result into
-a column vector. This is a common vectorization trick.
+a column vector.  This is a common vectorization trick.
 
 @end itemize
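
Octave's a(:) reads a matrix down its columns; the flattening trick noted at the end of this chapter can be sketched in Python for a list-of-rows matrix (an illustrative analogue, not the manual's own example):

```python
def flatten_column_major(matrix):
    """Flatten a list-of-rows matrix in column-major order, the
    order in which Octave's a(:) produces a column vector."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    return [matrix[i][j] for j in range(n_cols) for i in range(n_rows)]
```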
--- a/scripts/general/accumarray.m
+++ b/scripts/general/accumarray.m
@@ -26,13 +26,13 @@
 ## the rows of the matrix @var{subs} and the values by @var{vals}.  Each
 ## row of @var{subs} corresponds to one of the values in @var{vals}.  If
 ## @var{vals} is a scalar, it will be used for each of the row of
-## @var{subs}. If @var{subs} is a cell array of vectors, all vectors
+## @var{subs}.  If @var{subs} is a cell array of vectors, all vectors
 ## must be of the same length, and the subscripts in the @var{k}th
 ## vector must correspond to the @var{k}th dimension of the result.
 ##
 ## The size of the matrix will be determined by the subscripts
-## themselves. However, if @var{sz} is defined it determines the matrix
-## size. The length of @var{sz} must correspond to the number of columns
+## themselves.  However, if @var{sz} is defined it determines the matrix
+## size.  The length of @var{sz} must correspond to the number of columns
 ## in @var{subs}.  An exception is if @var{subs} has only one column, in
 ## which case @var{sz} may be the dimensions of a vector and the
 ## subscripts of @var{subs} are taken as the indices into it.
--- a/scripts/general/accumdim.m
+++ b/scripts/general/accumdim.m
@@ -22,11 +22,11 @@
 ## positions defined by their subscripts along a specified dimension.
 ## The subscripts are defined by the index vector @var{subs}.
 ## The dimension is specified by @var{dim}.  If not given, it defaults
-## to the first non-singleton dimension. The length of @var{subs} must
+## to the first non-singleton dimension.  The length of @var{subs} must
 ## be equal to @code{size (@var{vals}, @var{dim})}.
 ##
 ## The extent of the result matrix in the working dimension will be
-## determined by the subscripts themselves. However, if @var{n} is
+## determined by the subscripts themselves.  However, if @var{n} is
 ## defined it determines this extent.
 ##
 ## The default action of @code{accumdim} is to sum the subarrays with the
@@ -39,7 +39,7 @@
 ##
 ## The slices of the returned array that have no subscripts associated
 ## with them are set to zero.  Defining @var{fillval} to some other
-## value allows  these values to be defined.
+## value allows these values to be defined.
 ##
 ## An example of the use of @code{accumdim} is:
 ##
--- a/scripts/general/interpft.m
+++ b/scripts/general/interpft.m
@@ -27,7 +27,7 @@
 ## along the dimension @var{dim}.
 ##
 ## @code{interpft} assumes that the interpolated function is periodic,
-## and so assumptions are made about the end points of the interpolation.
+## and so assumptions are made about the endpoints of the interpolation.
 ##
 ## @seealso{interp1}
 ## @end deftypefn
--- a/scripts/io/strread.m
+++ b/scripts/io/strread.m
@@ -126,7 +126,7 @@
 ##
 ## @item "emptyvalue":
 ## Value to return for empty numeric values in non-whitespace delimited data.
-## The default is NaN. When the data type does not support NaN
+## The default is NaN@.  When the data type does not support NaN
 ## (int32 for example), then default is zero.
 ##
 ## @item "multipledelimsasone"
--- a/scripts/miscellaneous/parseparams.m
+++ b/scripts/miscellaneous/parseparams.m
@@ -48,7 +48,7 @@
 ## with their default values given as name-value pairs.
 ## If @var{params} do not form name-value pairs, or if an option occurs
 ## that does not match any of the available options, an error occurs.
-## When called from a m-file function, the error is prefixed with the
+## When called from an m-file function, the error is prefixed with the
 ## name of the caller function.
 ## The matching of options is case-insensitive.
 ##
--- a/scripts/miscellaneous/warning_ids.m
+++ b/scripts/miscellaneous/warning_ids.m
@@ -17,6 +17,7 @@
 ## <http://www.gnu.org/licenses/>.
 
 ## -*- texinfo -*-
+## @cindex warning ids
 ## @table @code
 ## @item Octave:abbreviated-property-match
 ## By default, the @code{Octave:abbreviated-property-match} warning is enabled.
@@ -121,9 +122,9 @@
 ## By default, the @code{Octave:autoload-relative-file-name} warning is enabled.
 ##
 ## @item Octave:broadcast
-## Warn when performing broadcasting operations. By default, this is
-## enabled. See the Broadcasting section in the Vectorization and Faster
-## Code Execution chapter in the manual.
+## Warn when performing broadcasting operations.  By default, this is
+## enabled.  @xref{Broadcasting} in the chapter Vectorization and Faster Code
+## Execution of the manual.
 ##
 ## @item Octave:built-in-variable-assignment
 ## By default, the @code{Octave:built-in-variable-assignment} warning is
--- a/src/DLD-FUNCTIONS/cellfun.cc
+++ b/src/DLD-FUNCTIONS/cellfun.cc
@@ -365,12 +365,12 @@
 @end group\n\
 @end example\n\
 \n\
-Use @code{cellfun} intelligently. The @code{cellfun} function is a\n\
-useful tool for avoiding loops. It is often used with anonymous\n\
+Use @code{cellfun} intelligently.  The @code{cellfun} function is a\n\
+useful tool for avoiding loops.  It is often used with anonymous\n\
 function handles; however, calling an anonymous function involves an\n\
 overhead quite comparable to the overhead of an m-file function.\n\
 Passing a handle to a built-in function is faster, because the\n\
-interpreter is not involved in the internal loop. For example:\n\
+interpreter is not involved in the internal loop.  For example:\n\
 \n\
 @example\n\
 @group\n\
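
The overhead distinction drawn in this docstring (an anonymous function handle versus a handle to a built-in) has a loose Python counterpart with map; the data here is hypothetical:

```python
words = ["octave", "cellfun", "bsxfun"]

# anonymous handle: one extra user-level call per element, loosely
# analogous to cellfun (@(x) length (x), ...)
via_lambda = list(map(lambda w: len(w), words))

# handle to the built-in itself, analogous to cellfun ("length", ...):
# the loop stays inside the interpreter's fast path
via_builtin = list(map(len, words))
```

Both produce the same result; the second form simply avoids re-entering user code for every element.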
--- a/src/help.cc
+++ b/src/help.cc
@@ -617,7 +617,7 @@
 
   pair_type ("parfor",
     "-*- texinfo -*-\n\
-@deftypefn {Keyword} {} for @var{i} = @var{range}\n\
+@deftypefn  {Keyword} {} for @var{i} = @var{range}\n\
 @deftypefnx {Keyword} {} for (@var{i} = @var{range}, @var{maxproc})\n\
 Begin a for loop that may execute in parallel.\n\
 \n\