changeset 14038:b0cdd60db5e5 stable
doc: Grammarcheck documentation ahead of 3.6.0 release.
* basics.txi, container.txi, contrib.txi, debug.txi, expr.txi, func.txi,
install.txi, io.txi, package.txi, polyarea.m, ezcontour.m, ezcontourf.m,
ezmesh.m, ezmeshc.m, ezplot.m, ezplot3.m, ezpolar.m, ezsurf.m, ezsurfc.m,
assert.m, amd.cc, chol.cc, colamd.cc, rand.cc: Grammarcheck documentation.
author | Rik <octave@nomad.inbox5.com> |
date | Mon, 12 Dec 2011 21:01:27 -0800 |
parents | 4228c102eca9 |
children | e98140f84ae0 |
files | doc/interpreter/basics.txi doc/interpreter/container.txi doc/interpreter/contrib.txi doc/interpreter/debug.txi doc/interpreter/expr.txi doc/interpreter/func.txi doc/interpreter/install.txi doc/interpreter/io.txi doc/interpreter/package.txi scripts/general/polyarea.m scripts/plot/ezcontour.m scripts/plot/ezcontourf.m scripts/plot/ezmesh.m scripts/plot/ezmeshc.m scripts/plot/ezplot.m scripts/plot/ezplot3.m scripts/plot/ezpolar.m scripts/plot/ezsurf.m scripts/plot/ezsurfc.m scripts/testfun/assert.m src/DLD-FUNCTIONS/amd.cc src/DLD-FUNCTIONS/chol.cc src/DLD-FUNCTIONS/colamd.cc src/DLD-FUNCTIONS/rand.cc |
diffstat | 24 files changed, 58 insertions(+), 56 deletions(-) [+] |
--- a/doc/interpreter/basics.txi
+++ b/doc/interpreter/basics.txi
@@ -1059,7 +1059,7 @@
 lines "@code{disp(2);}" and "@code{disp(1);}" won't be executed.
 
 The block comment markers must appear alone as the only characters on a line
-(excepting whitespace) in order to to be parsed correctly.
+(excepting whitespace) in order to be parsed correctly.
 
 @node Comments and the Help System
 @subsection Comments and the Help System
--- a/doc/interpreter/container.txi
+++ b/doc/interpreter/container.txi
@@ -374,7 +374,8 @@
 Besides the index operator ".", Octave can use dynamic naming "(var)" or the
 @code{struct} function to create structures.  Dynamic naming uses the string
-value of a variable as the field name.  For example,
+value of a variable as the field name.  For example:
+
 @example
 @group
 a = "field2";
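The dynamic field naming touched by the hunk above can be illustrated with a short Octave sketch (illustrative only; not part of the changeset):

```octave
% Dynamic naming: the string value of a variable is used as the field name.
a = "field2";
s.(a) = 42;            % equivalent to s.field2 = 42
t = struct (a, 42);    % struct() builds the same field directly
disp (s.field2)        % prints 42
```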
--- a/doc/interpreter/contrib.txi
+++ b/doc/interpreter/contrib.txi
@@ -71,7 +71,7 @@
 @end example
 
 You may want to get familiar with Mercurial queues to manage your
-changesets. Here is a slightly more complex example using Mercurial
+changesets.  Here is a slightly more complex example using Mercurial
 queues, where work on two unrelated changesets is done in parallel and
 one of the changesets is updated after discussion on the maintainers
 mailing list:
@@ -154,7 +154,7 @@
 look for methods before constructors
 
 * symtab.cc (symbol_table::fcn_info::fcn_info_rep::find):
-Look for class methods before constructors, contrary to Matlab
+Look for class methods before constructors, contrary to @sc{matlab}
 documentation.
 
 * test/ctor-vs-method: New directory of test classes.
--- a/doc/interpreter/debug.txi
+++ b/doc/interpreter/debug.txi
@@ -190,11 +190,11 @@
 @cindex profiler
 @cindex code profiling
 
-Octave supports profiling of code execution on a per-function level. If
+Octave supports profiling of code execution on a per-function level.  If
 profiling is enabled, each call to a function (supporting built-ins,
 operators, functions in oct- and mex-files, user-defined functions in
 Octave code and anonymous functions) is recorded while running Octave
-code. After that, this data can aid in analyzing the code behavior, and
+code.  After that, this data can aid in analyzing the code behavior, and
 is in particular helpful for finding ``hot spots'' in the code which use
 up a lot of computation time and are the best targets to spend
 optimization efforts on.
@@ -207,7 +207,7 @@
 @DOCSTRING(profile)
 
 An easy way to get an overview over the collected data is
-@code{profshow}. This function takes the profiler data returned by
+@code{profshow}.  This function takes the profiler data returned by
 @code{profile} as input and prints a flat profile, for instance:
 
 @example
@@ -223,7 +223,7 @@
 
 This shows that most of the run time was spent executing the function
 @samp{myfib}, and some minor proportion evaluating the listed binary
-operators. Furthermore, it is shown how often the function was called
+operators.  Furthermore, it is shown how often the function was called
 and the profiler also records that it is recursive.
 
 @DOCSTRING(profshow)
@@ -233,12 +233,11 @@
 @node Profiler Example
 @section Profiler Example
 
-Below, we will give a short example of a profiler session. See also
+Below, we will give a short example of a profiler session.  See also
 @ref{Profiling} for the documentation of the profiler functions in
-detail. Consider the code:
+detail.  Consider the code:
 
 @example
-@group
 global N A;
 
 N = 300;
@@ -274,7 +273,6 @@
     fib = bar (N - 1) + bar (N - 2);
   endif
 endfunction
-@end group
 @end example
 
 If we execute the two main functions, we get:
@@ -291,7 +289,7 @@
 
 But this does not give much information about where this time is spent;
 for instance, whether the single call to @code{expm} is more expensive
-or the recursive time-stepping itself. To get a more detailed picture,
+or the recursive time-stepping itself.  To get a more detailed picture,
 we can use the profiler.
 
 @example
@@ -326,29 +324,29 @@
 The entries are the individual functions which have been executed (only
 the 10 most important ones), together with some information for each of
-them. The entries like @samp{binary *} denote operators, while other
-entries are ordinary functions. They include both built-ins like
-@code{expm} and our own routines (for instance @code{timesteps}). From
+them.  The entries like @samp{binary *} denote operators, while other
+entries are ordinary functions.  They include both built-ins like
+@code{expm} and our own routines (for instance @code{timesteps}).  From
 this profile, we can immediately deduce that @code{expm} uses up the
 largest proportion of the processing time, even though it is only called
-once. The second expensive operation is the matrix-vector product in the
-routine @code{timesteps}. @footnote{We only know it is the binary
+once.  The second expensive operation is the matrix-vector product in the
+routine @code{timesteps}.  @footnote{We only know it is the binary
 multiplication operator, but fortunately this operator appears only at
 one place in the code and thus we know which occurrence takes so much
-time. If there were multiple places, we would have to use the
+time.  If there were multiple places, we would have to use the
 hierarchical profile to find out the exact place which uses up the time
 which is not covered in this example.}
 
 Timing, however, is not the only information available from the profile.
 The attribute column shows us that @code{timesteps} calls itself
-recursively. This may not be that remarkable in this example (since it's
-clear anyway), but could be helpful in a more complex setting. As to the
+recursively.  This may not be that remarkable in this example (since it's
+clear anyway), but could be helpful in a more complex setting.  As to the
 question of why is there a @samp{binary \} in the output, we can easily
-shed some light on that too. Note that @code{data} is a structure array
+shed some light on that too.  Note that @code{data} is a structure array
 (@ref{Structure Arrays}) which contains the field @code{FunctionTable}.
-This stores the raw data for the profile shown. The number in the first
+This stores the raw data for the profile shown.  The number in the first
 column of the table gives the index under which the shown function can
-be found there. Looking up @code{data.FunctionTable(41)} gives:
+be found there.  Looking up @code{data.FunctionTable(41)} gives:
 
 @example
 @group
@@ -364,13 +362,13 @@
 @end example
 
 Here we see the information from the table again, but have additional
-fields @code{Parents} and @code{Children}. Those are both arrays, which
+fields @code{Parents} and @code{Children}.  Those are both arrays, which
 contain the indices of functions which have directly called the function
 in question (which is entry 7, @code{expm}, in this case) or been called
-by it (no functions). Hence, the backslash operator has been used
+by it (no functions).  Hence, the backslash operator has been used
 internally by @code{expm}.
 
-Now let's take a look at @code{bar}. For this, we start a fresh
+Now let's take a look at @code{bar}.  For this, we start a fresh
 profiling session (@code{profile on} does this; the old data is removed
 before the profiler is restarted):
@@ -387,6 +385,7 @@
 This gives:
 
 @example
+@group
 # Function Attr     Time (s)        Calls
 -------------------------------------------------------
    1 bar                R    2.091        13529
@@ -398,14 +397,15 @@
    6 nargin                      0.000            1
    7 binary !=                   0.000            1
    9 __profiler_enable__         0.000            1
+@end group
 @end example
 
-Unsurprisingly, @code{bar} is also recursive. It has been called 13,529
+Unsurprisingly, @code{bar} is also recursive.  It has been called 13,529
 times in the course of recursively calculating the Fibonacci number in
 a suboptimal way, and most of the time was spent in @code{bar} itself.
 
 Finally, let's say we want to profile the execution of both @code{foo}
-and @code{bar} together. Since we already have the run-time data
+and @code{bar} together.  Since we already have the run-time data
 collected for @code{bar}, we can restart the profiler without clearing
 the existing data and collect the missing statistics about @code{foo}.
 This is done by:
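As a companion to the profiler walkthrough patched above, a minimal session looks roughly like this (a sketch, assuming the profiler functions introduced for Octave 3.6; not part of the changeset):

```octave
profile on;              % clear old data and start the profiler
A = rand (100);
B = expm (A);            % the work we want to measure
profile off;
T = profile ("info");    % raw data: struct containing FunctionTable
profshow (T, 5);         % flat profile of the five costliest entries
profile resume;          % keep collecting without clearing existing data
```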
--- a/doc/interpreter/expr.txi
+++ b/doc/interpreter/expr.txi
@@ -62,7 +62,7 @@
 elements of the array are taken in column-first order (like Fortran).
 
 The output from indexing assumes the dimensions of the index
-expression.  For example,
+expression.  For example:
 
 @example
 @group
@@ -77,8 +77,10 @@
 matrix.  For example:
 
 @example
+@group
 a(:)   # result is a column vector
 a(:)'  # result is a row vector
+@end group
 @end example
 
 The above two code idioms are often used in place of @code{reshape}
@@ -149,7 +151,6 @@
 with an example.
 
 @example
-@group
 a = reshape (1:8, 2, 2, 2)   # Create 3-D array
 
 a =
@@ -169,7 +170,6 @@
 a(2,4);  # Case (m < n), idx outside array:
          # Dimension 2 & 3 folded into new dimension of size 2x2 = 4
          # Select 2nd row, 4th element of [2, 4, 6, 8], ans = 8
-@end group
 @end example
 
 One advanced use of indexing is to create arrays filled with a single
--- a/doc/interpreter/func.txi
+++ b/doc/interpreter/func.txi
@@ -290,8 +290,8 @@
 values distinct names.
 
 It is possible to use the @code{nthargout} function to obtain only some
-of the return values or several at once in a cell array.  @ref{Cell Array
-Objects}
+of the return values or several at once in a cell array.
+@ref{Cell Array Objects}
 
 @DOCSTRING(nthargout)
--- a/doc/interpreter/install.txi
+++ b/doc/interpreter/install.txi
@@ -82,7 +82,7 @@
 @item --disable-docs
 Disable building all forms of the documentation (Info, PDF, HTML).  The
 default is to build documentation, but your system will need functioning
-Texinfo and Tex installs for this to succeed.
+Texinfo and @TeX{} installs for this to succeed.
 
 @item --enable-float-truncate
 This option allows for truncation of intermediate floating point results
--- a/doc/interpreter/io.txi
+++ b/doc/interpreter/io.txi
@@ -582,7 +582,7 @@
 @item @samp{%x}, @samp{%X}
 Print an integer as an unsigned hexadecimal number.  @samp{%x} uses
-lower-case letters and @samp{%X} uses upper-case.  @xref{Integer
+lowercase letters and @samp{%X} uses uppercase.  @xref{Integer
 Conversions}, for details.
 
 @item @samp{%f}
@@ -591,13 +591,13 @@
 
 @item @samp{%e}, @samp{%E}
 Print a floating-point number in exponential notation.  @samp{%e} uses
-lower-case letters and @samp{%E} uses upper-case.  @xref{Floating-Point
+lowercase letters and @samp{%E} uses uppercase.  @xref{Floating-Point
 Conversions}, for details.
 
 @item @samp{%g}, @samp{%G}
 Print a floating-point number in either normal (fixed-point) or
 exponential notation, whichever is more appropriate for its magnitude.
-@samp{%g} uses lower-case letters and @samp{%G} uses upper-case.
+@samp{%g} uses lowercase letters and @samp{%G} uses uppercase.
 @xref{Floating-Point Conversions}, for details.
 
 @item @samp{%c}
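The lowercase/uppercase conversion pairs discussed in the io.txi hunk behave as in C's printf family; a quick sketch (not part of the changeset):

```octave
printf ("%x %X\n", 255, 255);    % prints: ff FF
printf ("%e %E\n", pi, pi);      % prints: 3.141593e+00 3.141593E+00
printf ("%g %G\n", 1e-5, 1e-5);  % prints: 1e-05 1E-05
```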
--- a/doc/interpreter/package.txi
+++ b/doc/interpreter/package.txi
@@ -206,7 +206,7 @@
 
 @item package/NEWS
 This is an optional file describing all user-visible changes worth
-mentioning. As this file increases on size, old entries can be moved
+mentioning.  As this file increases on size, old entries can be moved
 into @file{package/ONEWS}.
 
 @item package/ONEWS
--- a/scripts/general/polyarea.m
+++ b/scripts/general/polyarea.m
@@ -20,7 +20,7 @@
 ## @deftypefn  {Function File} {} polyarea (@var{x}, @var{y})
 ## @deftypefnx {Function File} {} polyarea (@var{x}, @var{y}, @var{dim})
 ##
-## Determines area of a polygon by triangle method.  The variables
+## Determine area of a polygon by triangle method.  The variables
 ## @var{x} and @var{y} define the vertex pairs, and must therefore have
 ## the same shape.  They can be either vectors or arrays.  If they are
 ## arrays then the columns of @var{x} and @var{y} are treated separately
--- a/scripts/plot/ezcontour.m
+++ b/scripts/plot/ezcontour.m
@@ -23,7 +23,7 @@
 ## @deftypefnx {Function File} {} ezcontour (@var{h}, @dots{})
 ## @deftypefnx {Function File} {@var{h} =} ezcontour (@dots{})
 ##
-## Plots the contour lines of a function.  @var{f} is a string, inline function
+## Plot the contour lines of a function.  @var{f} is a string, inline function
 ## or function handle with two arguments defining the function.  By default the
 ## plot is over the domain @code{-2*pi < @var{x} < 2*pi} and @code{-2*pi <
 ## @var{y} < 2*pi} with 60 points in each dimension.
--- a/scripts/plot/ezcontourf.m
+++ b/scripts/plot/ezcontourf.m
@@ -23,7 +23,7 @@
 ## @deftypefnx {Function File} {} ezcontourf (@var{h}, @dots{})
 ## @deftypefnx {Function File} {@var{h} =} ezcontourf (@dots{})
 ##
-## Plots the filled contour lines of a function.  @var{f} is a string, inline
+## Plot the filled contour lines of a function.  @var{f} is a string, inline
 ## function or function handle with two arguments defining the function.  By
 ## default the plot is over the domain @code{-2*pi < @var{x} < 2*pi} and
 ## @code{-2*pi < @var{y} < 2*pi} with 60 points in each dimension.
--- a/scripts/plot/ezmesh.m
+++ b/scripts/plot/ezmesh.m
@@ -25,7 +25,7 @@
 ## @deftypefnx {Function File} {} ezmesh (@var{h}, @dots{})
 ## @deftypefnx {Function File} {@var{h} =} ezmesh (@dots{})
 ##
-## Plots the mesh defined by a function.  @var{f} is a string, inline
+## Plot the mesh defined by a function.  @var{f} is a string, inline
 ## function or function handle with two arguments defining the function.  By
 ## default the plot is over the domain @code{-2*pi < @var{x} < 2*pi} and
 ## @code{-2*pi < @var{y} < 2*pi} with 60 points in each dimension.
--- a/scripts/plot/ezmeshc.m
+++ b/scripts/plot/ezmeshc.m
@@ -25,7 +25,7 @@
 ## @deftypefnx {Function File} {} ezmeshc (@var{h}, @dots{})
 ## @deftypefnx {Function File} {@var{h} =} ezmeshc (@dots{})
 ##
-## Plots the mesh and contour lines defined by a function.  @var{f} is a string,
+## Plot the mesh and contour lines defined by a function.  @var{f} is a string,
 ## inline function or function handle with two arguments defining the function.
 ## By default the plot is over the domain @code{-2*pi < @var{x} < 2*pi} and
 ## @code{-2*pi < @var{y} < 2*pi} with 60 points in each dimension.
--- a/scripts/plot/ezplot.m
+++ b/scripts/plot/ezplot.m
@@ -24,7 +24,7 @@
 ## @deftypefnx {Function File} {} ezplot (@var{h}, @dots{})
 ## @deftypefnx {Function File} {@var{h} =} ezplot (@dots{})
 ##
-## Plots in two-dimensions the curve defined by @var{f}.  The function
+## Plot the curve defined by @var{f} in two dimensions.  The function
 ## @var{f} may be a string, inline function or function handle and can
 ## have either one or two variables.  If @var{f} has one variable, then
 ## the function is plotted over the domain @code{-2*pi < @var{x} < 2*pi}
--- a/scripts/plot/ezplot3.m
+++ b/scripts/plot/ezplot3.m
@@ -23,7 +23,7 @@
 ## @deftypefnx {Function File} {} ezplot3 (@var{h}, @dots{})
 ## @deftypefnx {Function File} {@var{h} =} ezplot3 (@dots{})
 ##
-## Plots in three-dimensions the curve defined parametrically.
+## Plot a parametrically defined curve in three dimensions.
 ## @var{fx}, @var{fy}, and @var{fz} are strings, inline functions
 ## or function handles with one arguments defining the function.  By
 ## default the plot is over the domain @code{-2*pi < @var{x} < 2*pi}
--- a/scripts/plot/ezpolar.m
+++ b/scripts/plot/ezpolar.m
@@ -23,7 +23,7 @@
 ## @deftypefnx {Function File} {} ezpolar (@var{h}, @dots{})
 ## @deftypefnx {Function File} {@var{h} =} ezpolar (@dots{})
 ##
-## Plots in polar plot defined by a function.  The function @var{f} is either
+## Plot a function in polar coordinates.  The function @var{f} is either
 ## a string, inline function or function handle with one arguments defining
 ## the function.  By default the plot is over the domain @code{0 < @var{x} <
 ## 2*pi} with 60 points.
--- a/scripts/plot/ezsurf.m
+++ b/scripts/plot/ezsurf.m
@@ -25,7 +25,7 @@
 ## @deftypefnx {Function File} {} ezsurf (@var{h}, @dots{})
 ## @deftypefnx {Function File} {@var{h} =} ezsurf (@dots{})
 ##
-## Plots the surface defined by a function.  @var{f} is a string, inline
+## Plot the surface defined by a function.  @var{f} is a string, inline
 ## function or function handle with two arguments defining the function.  By
 ## default the plot is over the domain @code{-2*pi < @var{x} < 2*pi} and
 ## @code{-2*pi < @var{y} < 2*pi} with 60 points in each dimension.
--- a/scripts/plot/ezsurfc.m
+++ b/scripts/plot/ezsurfc.m
@@ -25,7 +25,7 @@
 ## @deftypefnx {Function File} {} ezsurfc (@var{h}, @dots{})
 ## @deftypefnx {Function File} {@var{h} =} ezsurfc (@dots{})
 ##
-## Plots the surface and contour lines defined by a function.  @var{f} is a
+## Plot the surface and contour lines defined by a function.  @var{f} is a
 ## string, inline function or function handle with two arguments defining the
 ## function.  By default the plot is over the domain @code{-2*pi < @var{x} <
 ## 2*pi} and @code{-2*pi < @var{y} < 2*pi} with 60 points in each dimension.
--- a/scripts/testfun/assert.m
+++ b/scripts/testfun/assert.m
@@ -23,7 +23,7 @@
 ## @deftypefnx {Function File} {} assert (@var{observed}, @var{expected})
 ## @deftypefnx {Function File} {} assert (@var{observed}, @var{expected}, @var{tol})
 ##
-## Produces an error if the condition is not met.  @code{assert} can be
+## Produce an error if the condition is not met.  @code{assert} can be
 ## called in three different ways.
 ##
 ## @table @code
--- a/src/DLD-FUNCTIONS/amd.cc
+++ b/src/DLD-FUNCTIONS/amd.cc
@@ -55,7 +55,7 @@
 @deftypefn  {Loadable Function} {@var{p} =} amd (@var{S})\n\
 @deftypefnx {Loadable Function} {@var{p} =} amd (@var{S}, @var{opts})\n\
 \n\
-Returns the approximate minimum degree permutation of a matrix.  This\n\
+Return the approximate minimum degree permutation of a matrix.  This\n\
 permutation such that the Cholesky@tie{}factorization of @code{@var{S}\n\
 (@var{p}, @var{p})} tends to be sparser than the Cholesky@tie{}factorization\n\
 of @var{S} itself.  @code{amd} is typically faster than @code{symamd} but\n\
--- a/src/DLD-FUNCTIONS/chol.cc
+++ b/src/DLD-FUNCTIONS/chol.cc
@@ -129,8 +129,9 @@
 \n\
 @end ifnottex\n\
 \n\
-For full matrices, if the 'lower' flag is set only the lower triangular part of the matrix \
-is used for the factorization, otherwise the upper triangular part is used.\n\
+For full matrices, if the 'lower' flag is set only the lower triangular part\n\
+of the matrix is used for the factorization, otherwise the upper triangular\n\
+part is used.\n\
 \n\
 In general the lower triangular factorization is significantly faster for\n\
 sparse matrices.\n\
--- a/src/DLD-FUNCTIONS/colamd.cc
+++ b/src/DLD-FUNCTIONS/colamd.cc
@@ -647,7 +647,7 @@
 @deftypefnx {Loadable Function} {@var{p} =} etree (@var{S}, @var{typ})\n\
 @deftypefnx {Loadable Function} {[@var{p}, @var{q}] =} etree (@var{S}, @var{typ})\n\
 \n\
-Returns the elimination tree for the matrix @var{S}.  By default @var{S}\n\
+Return the elimination tree for the matrix @var{S}.  By default @var{S}\n\
 is assumed to be symmetric and the symmetric elimination tree is\n\
 returned.  The argument @var{typ} controls whether a symmetric or\n\
 column elimination tree is returned.  Valid values of @var{typ} are\n\
--- a/src/DLD-FUNCTIONS/rand.cc
+++ b/src/DLD-FUNCTIONS/rand.cc
@@ -1024,9 +1024,9 @@
 @deftypefnx {Loadable Function} {} randperm (@var{n}, @var{m})\n\
 Return a row vector containing a random permutation of @code{1:@var{n}}.\n\
 If @var{m} is supplied, return @var{m} unique entries, sampled without\n\
-replacement from @code{1:@var{n}}. The complexity is O(@var{n}) in\n\
+replacement from @code{1:@var{n}}.  The complexity is O(@var{n}) in\n\
 memory and O(@var{m}) in time, unless @var{m} < @var{n}/5, in which case\n\
-O(@var{m}) memory is used as well. The randomization is performed using\n\
+O(@var{m}) memory is used as well.  The randomization is performed using\n\
 rand().  All permutations are equally likely.\n\
 @seealso{perms}\n\
 @end deftypefn")
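The randperm behavior documented in the last hunk, sketched in Octave (illustrative only; not part of the changeset):

```octave
p = randperm (6);       % random permutation of 1:6, e.g. [3 1 6 4 2 5]
q = randperm (10, 3);   % 3 unique values sampled from 1:10 without replacement
assert (numel (unique (q)) == 3);
assert (isequal (sort (p), 1:6));
```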