@c Copyright (C) 2004, 2005 David Bateman
@c This is part of the Octave manual.
@c For copying conditions, see the file gpl.texi.

@ifhtml
@set htmltex
@end ifhtml
@iftex
@set htmltex
@end iftex

@node Sparse Matrices
@chapter Sparse Matrices

@menu
* Basics::                 The Creation and Manipulation of Sparse Matrices
* Sparse Linear Algebra::  Linear Algebra on Sparse Matrices
* Iterative Techniques::   Iterative Techniques applied to Sparse Matrices
* Real Life Example::      Real Life Example of the use of Sparse Matrices
* Oct-Files::              Using Sparse Matrices in Oct-files
* Function Reference::     Documentation from the Specific Sparse Functions
@end menu

@node Basics, Sparse Linear Algebra, Sparse Matrices, Sparse Matrices
@section The Creation and Manipulation of Sparse Matrices

The size of mathematical problems that can be treated at any particular
time is generally limited by the available computing resources. Both
the speed of the computer and its available memory place limitations on
the problem size.

There are many classes of mathematical problems which give rise to
matrices, where a large number of the elements are zero. In this case
it makes sense to have a special matrix type to handle this class of
problems where only the non-zero elements of the matrix are
stored. Not only does this reduce the amount of memory needed to store the
matrix, but it also means that operations on this type of matrix can
take advantage of the a-priori knowledge of the positions of the
non-zero elements to accelerate their calculations.

A matrix type that stores only the non-zero elements is generally called
sparse. It is the purpose of this document to discuss the basics of the
storage and creation of sparse matrices and the fundamental operations
on them.

@menu
* Storage::                 Storage of Sparse Matrices
* Creation::                Creating Sparse Matrices
* Information::             Finding out Information about Sparse Matrices
* Operators and Functions:: Basic Operators and Functions on Sparse Matrices
@end menu

@node Storage, Creation, Basics, Basics
@subsection Storage of Sparse Matrices

It is not strictly speaking necessary for the user to understand how
sparse matrices are stored. However, such an understanding will help
to appreciate the size of sparse matrices. Understanding
the storage technique is also necessary for those users wishing to
create their own oct-files.

There are many different means of storing sparse matrix data. What all
of the methods have in common is that they attempt to reduce the complexity
and storage given a-priori knowledge of the particular class of problems
that will be solved. A good summary of the available techniques for storing
sparse matrices is given by Saad @footnote{Youcef Saad "SPARSKIT: A basic toolkit
for sparse matrix computation", 1994,
@url{ftp://ftp.cs.umn.edu/dept/sparse/SPARSKIT2/DOC/paper.ps}}.
With full matrices, the position of an element of the matrix
within the matrix is implied by its position in the computer's memory.
However, this is not the case for sparse matrices, and so the positions
of the non-zero elements of the matrix must equally be stored.

An obvious way to do this is by storing the elements of the matrix as
triplets, with two elements being their position in the array
(row and column) and the third being the data itself. This is conceptually
easy to grasp, but requires more storage than is strictly needed.

The storage technique used within Octave is the compressed column
format. In this format the row index of each element and the
data are stored as before. However, if we assume that all elements
in the same column are stored adjacent in the computer's memory, then
we only need to store information on the number of non-zero elements
in each column, rather than their positions. Thus, assuming that the
matrix has more non-zero elements than there are columns in the
matrix, we win in terms of the amount of memory used.

In fact, the column index contains one more element than the number of
columns, with the first element always being zero. The advantage of
this is a simplification in the code, in that there is no special case
for the first or last columns. A short example demonstrating this in
C is

@example
for (j = 0; j < nc; j++)
  for (i = cidx (j); i < cidx(j+1); i++)
    printf ("non-zero element (%i,%i) is %g\n",
            ridx(i), j, data(i));
@end example

A clear understanding might be had by considering how the
above applies to an example matrix. Consider the matrix

@example
@group
    1   2   0   0
    0   0   0   3
    0   0   0   4
@end group
@end example

The non-zero elements of this matrix are

@example
@group
   (1, 1)  @result{} 1
   (1, 2)  @result{} 2
   (2, 4)  @result{} 3
   (3, 4)  @result{} 4
@end group
@end example

This will be stored as three vectors @var{cidx}, @var{ridx} and @var{data},
representing the column indexing, row indexing and data respectively. The
contents of these three vectors for the above matrix will be

@example
@group
@var{cidx} = [0, 1, 2, 2, 4]
@var{ridx} = [0, 0, 1, 2]
@var{data} = [1, 2, 3, 4]
@end group
@end example

Note that this is the representation of these elements with the first row
and column assumed to start at zero, while in Octave itself the row and
column indexing starts at one. Thus the number of elements in the
@var{i}-th column is given by @code{@var{cidx} (@var{i} + 1) -
@var{cidx} (@var{i})}.

Although Octave uses a compressed column format, it should be noted
that compressed row formats are equally possible. However, in the
context of mixed operations between sparse and dense matrices,
it makes sense that the elements of the sparse matrices are in the
same order as the dense matrices. Octave stores dense matrices in
column major ordering, and so sparse matrices are equally stored in
this manner.

A further constraint on the sparse matrix storage used by Octave is that
all elements in a column are stored in increasing order of their row
index, which makes certain operations faster. However, it imposes
the need to sort the elements on the creation of sparse matrices. Having
unordered elements is potentially an advantage in that it makes operations
such as concatenating two sparse matrices together easier and faster; however,
it adds complexity and speed problems elsewhere.

@node Creation, Information, Storage, Basics
@subsection Creating Sparse Matrices

There are several means to create a sparse matrix.

@table @asis
@item Returned from a function
There are many functions that directly return sparse matrices. These include
@dfn{speye}, @dfn{sprand}, @dfn{spdiag}, etc.
@item Constructed from matrices or vectors
The function @dfn{sparse} allows a sparse matrix to be constructed from
three vectors representing the row, column and data. Alternatively, the
function @dfn{spconvert} uses a three column matrix format to allow easy
importation of data from elsewhere.
@item Created and then filled
The function @dfn{sparse} or @dfn{spalloc} can be used to create an empty
matrix that is then filled by the user.
@item From a user binary program
The user can directly create the sparse matrix within an oct-file.
@end table

There are several basic functions to return specific sparse
matrices. For example the sparse identity matrix is a matrix that is
often needed. It therefore has its own function to create it as
@code{speye (@var{n})} or @code{speye (@var{r}, @var{c})}, which
creates an @var{n}-by-@var{n} or @var{r}-by-@var{c} sparse identity
matrix.

Another typical sparse matrix that is often needed is a random distribution
of random elements. The functions @dfn{sprand} and @dfn{sprandn} perform
this for uniform and normal random distributions of elements. They have exactly
the same calling convention, where @code{sprand (@var{r}, @var{c}, @var{d})}
creates an @var{r}-by-@var{c} sparse matrix with a density of filled
elements of @var{d}.

Other functions of interest that directly create sparse matrices are
@dfn{spdiag} or its generalization @dfn{spdiags}, that can take the
definition of the diagonals of the matrix and create the sparse matrix
that corresponds to this. For example

@example
s = spdiag (sparse (randn (1, n)), -1);
@end example

creates an (@var{n}+1)-by-(@var{n}+1) sparse matrix with a single
diagonal defined.

The recommended way for the user to create a sparse matrix is to create
two vectors containing the row and column index of the data and a third
vector of the same size containing the data to be stored. For example

@example
ri = ci = d = [];
for j = 1:c
  ri = [ri; randperm(r)(1:n)'];
  ci = [ci; j*ones(n,1)];
  d = [d; rand(n,1)];
endfor
s = sparse (ri, ci, d, r, c);
@end example

creates an @var{r}-by-@var{c} sparse matrix with a random distribution
of @var{n} (< @var{r}) elements per column. The elements of the vectors
do not need to be sorted in any particular order as Octave will sort
them prior to storing the data. However, pre-sorting the data will
make the creation of the sparse matrix faster.

The function @dfn{spconvert} takes a three or four column real matrix.
The first two columns represent the row and column index respectively and
the third and fourth columns, the real and imaginary parts of the sparse
matrix. The matrix can contain zero elements and the elements can be
sorted in any order. Adding zero elements is a convenient way to define
the size of the sparse matrix. For example

@example
s = spconvert ([1 2 3 4; 1 3 4 4; 1 2 3 0]')
@result{} Compressed Column Sparse (rows=4, cols=4, nnz=3)
      (1 , 1) -> 1
      (2 , 3) -> 2
      (3 , 4) -> 3
@end example

An example of creating and filling a matrix might be

@example
k = 5;
nz = r * k;
s = spalloc (r, c, nz);
for j = 1:c
  idx = randperm (r);
  s (:, j) = [zeros(r - k, 1); ...
        rand(k, 1)] (idx);
endfor
@end example

It should be noted that, due to the way in which the Octave
assignment functions are written, the assignment will reallocate
the memory used by the sparse matrix at each iteration of the above loop.
Therefore the @dfn{spalloc} function ignores the @var{nz} argument and
does not preassign the memory for the matrix. Therefore, it is vitally
important that code using the above structure should be vectorized
as much as possible to minimize the number of assignments and reduce the
number of memory allocations.

The above problem can be avoided in oct-files. However, the
construction of a sparse matrix from an oct-file is more complex than
can be discussed in this brief introduction, and you are referred to
@ref{Oct-Files} for a full description of the techniques
involved.

@node Information, Operators and Functions, Creation, Basics
@subsection Finding out Information about Sparse Matrices

There are a number of functions that allow information concerning
sparse matrices to be obtained. The most basic of these is
@dfn{issparse} that identifies whether a particular Octave object is
in fact a sparse matrix.

Another very basic function is @dfn{nnz} that returns the number of
non-zero entries there are in a sparse matrix, while the function
@dfn{nzmax} returns the amount of storage allocated to the sparse
matrix. Note that Octave tends to crop unused memory at the first
opportunity for sparse objects. There are some cases of user created
sparse objects where the value returned by @dfn{nzmax} will not be
the same as @dfn{nnz}, but in general they will give the same
result. The function @dfn{spstats} returns some basic statistics on
the columns of a sparse matrix including the number of elements, the
mean and the variance of each column.

When solving linear equations involving sparse matrices Octave
determines the means to solve the equation based on the type of the
matrix as discussed in @ref{Sparse Linear Algebra}. Octave probes the
matrix type when the div (/) or ldiv (\) operator is first used with
the matrix and then caches the type. However the @dfn{matrix_type}
function can be used to determine the type of the sparse matrix prior
to use of the div or ldiv operators. For example

@example
a = tril (sprandn (1024, 1024, 0.02), -1) ...
    + speye (1024);
matrix_type (a);
ans = Lower
@end example

shows that Octave correctly determines the matrix type for lower
triangular matrices. @dfn{matrix_type} can also be used to force
the type of a matrix to be a particular type. For example

@example
a = matrix_type (tril (sprandn (1024, ...
   1024, 0.02), -1) + speye (1024), 'Lower');
@end example

This allows the cost of determining the matrix type to be
avoided. However, incorrectly defining the matrix type will result in
incorrect results from solutions of linear equations, and so it is
entirely the responsibility of the user to correctly identify the
matrix type.

There are several graphical means of finding out information about
sparse matrices. The first is the @dfn{spy} command, which displays
the structure of the non-zero elements of the
matrix. @xref{fig:spmatrix}, for an example of the use of
@dfn{spy}. More advanced graphical information can be obtained with the
@dfn{treeplot}, @dfn{etreeplot} and @dfn{gplot} commands.

@float Figure,fig:spmatrix
@image{spmatrix,8cm}
@caption{Structure of simple sparse matrix.}
@end float

One use of sparse matrices is in graph theory, where the
interconnections between nodes are represented as an adjacency
matrix. That is, if the i-th node in a graph is connected to the j-th
node, then the ij-th element (and in the case of undirected graphs the
ji-th element) of the sparse adjacency matrix is non-zero. If each node
is then associated with a set of co-ordinates, then the @dfn{gplot}
command can be used to graphically display the interconnections
between nodes.

As a trivial example of the use of @dfn{gplot}, consider the example

@example
A = sparse ([2,6,1,3,2,4,3,5,4,6,1,5],
    [1,1,2,2,3,3,4,4,5,5,6,6], 1, 6, 6);
xy = [0,4,8,6,4,2;5,0,5,7,5,7]';
gplot (A, xy)
@end example

which creates an adjacency matrix @code{A} where node 1 is connected
to nodes 2 and 6, node 2 with nodes 1 and 3, etc. The co-ordinates of
the nodes are given in the n-by-2 matrix @code{xy}.
@ifset htmltex
@xref{fig:gplot}.

@float Figure,fig:gplot
@image{gplot,8cm}
@caption{Simple use of the @dfn{gplot} command.}
@end float
@end ifset

The dependencies between the nodes of a Cholesky factorization can be
calculated in linear time without explicitly needing to calculate the
Cholesky factorization by the @code{etree} command. This command
returns the elimination tree of the matrix and can be displayed
graphically by the command @code{treeplot (etree (A))} if @code{A} is
symmetric or @code{treeplot (etree (A+A'))} otherwise.

@node Operators and Functions, , Information, Basics
@subsection Basic Operators and Functions on Sparse Matrices

@menu
* Functions::           Sparse Functions
* ReturnType::          The Return Types of Operators and Functions
* MathConsiderations::  Mathematical Considerations
@end menu

@node Functions, ReturnType, Operators and Functions, Operators and Functions
@subsubsection Sparse Functions

An important consideration in the use of the sparse functions of
Octave is that many of the internal functions of Octave, such as
@dfn{diag}, can not accept sparse matrices as an input. The sparse
implementation in Octave therefore uses the @dfn{dispatch}
function to overload the normal Octave functions with equivalent
functions that work with sparse matrices. However, at any time the
sparse matrix specific version of the function can be used by
explicitly calling its function name.

The table below lists all of the sparse functions of Octave
together (with possible future extensions that are currently
unimplemented, listed last). Note that the specific sparse forms
of the functions are typically the same as the general versions with a
@dfn{sp} prefix. In the table below, and the rest of this article,
the specific sparse versions of the functions are used.

@table @asis
@item Generate sparse matrices:
@dfn{spalloc}, @dfn{spdiags}, @dfn{speye}, @dfn{sprand},
@dfn{sprandn}, @dfn{sprandsym}

@item Sparse matrix conversion:
@dfn{full}, @dfn{sparse}, @dfn{spconvert}, @dfn{spfind}

@item Manipulate sparse matrices:
@dfn{issparse}, @dfn{nnz}, @dfn{nonzeros}, @dfn{nzmax},
@dfn{spfun}, @dfn{spones}, @dfn{spy}

@item Graph Theory:
@dfn{etree}, @dfn{etreeplot}, @dfn{gplot},
@dfn{treeplot}, (treelayout)

@item Sparse matrix reordering:
@dfn{ccolamd}, @dfn{colamd}, @dfn{colperm},
@dfn{csymamd}, @dfn{dmperm}, @dfn{symamd}, @dfn{randperm}, (symrcm)

@item Linear algebra:
@dfn{matrix_type}, @dfn{spchol}, @dfn{spcholinv},
@dfn{spchol2inv}, @dfn{spdet}, @dfn{spinv}, @dfn{spkron},
@dfn{splchol}, @dfn{splu}, @dfn{spqr}, (condest, eigs, normest,
sprank, svds, spaugment)

@item Iterative techniques:
@dfn{luinc}, @dfn{pcg}, @dfn{pcr}, (bicg, bicgstab, cholinc, cgs,
gmres, lsqr, minres, qmr, symmlq)

@item Miscellaneous:
@dfn{spparms}, @dfn{symbfact}, @dfn{spstats},
@dfn{spprod}, @dfn{spcumsum}, @dfn{spsum},
@dfn{spsumsq}, @dfn{spmin}, @dfn{spmax}, @dfn{spatan2},
@dfn{spdiag}
@end table

In addition all of the standard Octave mapper functions (i.e., basic
math functions that take a single argument) such as @dfn{abs}, etc.
can accept sparse matrices. The reader is referred to the documentation
supplied with these functions within Octave itself for further
details.

@node ReturnType, MathConsiderations, Functions, Operators and Functions
@subsubsection The Return Types of Operators and Functions

The two basic reasons to use sparse matrices are to reduce the memory
usage and to avoid calculations on zero elements. The two are
closely related in that the computation time on a sparse matrix operator
or function is roughly linear with the number of non-zero elements.

Therefore, there is a certain density of non-zero elements of a matrix
where it no longer makes sense to store it as a sparse matrix, but rather
as a full matrix. For this reason operators and functions that have a
high probability of returning a full matrix will always return one. For
example adding a scalar constant to a sparse matrix will almost always
make it a full matrix, and so the example

@example
speye (3) + 0
@result{} 1 0 0
  0 1 0
  0 0 1
@end example

returns a full matrix as can be seen. Additionally all sparse functions
test the amount of memory occupied by the sparse matrix to see if the
amount of storage used is larger than the amount used by the full
equivalent. Therefore @code{speye (2) * 1} will return a full matrix as
the memory used is smaller for the full version than the sparse version.

As all of the mixed operators and functions between full and sparse
matrices exist, in general this does not cause any problems. However,
one area where it does cause a problem is where a sparse matrix is
promoted to a full matrix, where subsequent operations would resparsify
the matrix. Such cases are rare, but can be artificially created, for
example @code{(fliplr (speye (3)) + speye (3)) - speye (3)} gives a full
matrix when it should give a sparse one. In general, where such cases
occur, they impose only a small memory penalty.

There is however one known case where this behavior of Octave's
sparse matrices will cause a problem. That is in the handling of the
@dfn{diag} function. Whether @dfn{diag} returns a sparse or full matrix
depends on the type of its input arguments. So

@example
a = diag (sparse ([1, 2, 3]), -1);
@end example

should return a sparse matrix. To ensure this actually happens, the
@dfn{sparse} function, and other functions based on it like @dfn{speye},
always return a sparse matrix, even if the memory used will be larger
than its full representation.

@node MathConsiderations, , ReturnType, Operators and Functions
@subsubsection Mathematical Considerations

The attempt has been made to make sparse matrices behave in exactly the
same manner as their full counterparts. However, there are certain differences,
and in particular differences with other products' sparse implementations.

Firstly, the "./" and ".^" operators must be used with care. Consider what
the examples

@example
s = speye (4);
a1 = s .^ 2;
a2 = s .^ s;
a3 = s .^ -2;
a4 = s ./ 2;
a5 = 2 ./ s;
a6 = s ./ s;
@end example

will give. The first example of @var{s} raised to the power of 2 causes
no problems. However @var{s} raised element-wise to itself involves a
large number of terms @code{0 .^ 0} which is 1. Therefore @code{@var{s} .^
@var{s}} is a full matrix.

Likewise @code{@var{s} .^ -2} involves terms like @code{0 .^ -2} which
is infinity, and so @code{@var{s} .^ -2} is equally a full matrix.

For the "./" operator, @code{@var{s} ./ 2} has no problems, but
@code{2 ./ @var{s}} involves a large number of infinity terms as well
and is equally a full matrix. The case of @code{@var{s} ./ @var{s}}
involves terms like @code{0 ./ 0} which is a @code{NaN} and so this
is equally a full matrix with the zero elements of @var{s} filled with
@code{NaN} values.

The above behavior is consistent with full matrices, but is not
consistent with sparse implementations in other products.

A particular problem of sparse matrices comes about due to the fact that,
as the zeros are not stored, the sign-bit of these zeros is equally not
stored. In certain cases the sign-bit of zero is important. For example

@example
a = 0 ./ [-1, 1; 1, -1];
b = 1 ./ a
@result{} -Inf Inf
   Inf -Inf
c = 1 ./ sparse (a)
@result{} Inf Inf
   Inf Inf
@end example

To correct this behavior would mean that zero elements with a negative
sign-bit would need to be stored in the matrix to ensure that their
sign-bit was respected. This is not done at this time, for reasons of
efficiency, and so the user is warned that calculations where the sign-bit
of zero is important must not be done using sparse matrices.

In general any function or operator used on a sparse matrix will
result in a sparse matrix with the same or a larger number of non-zero
elements than the original matrix. This is particularly true for the
important case of sparse matrix factorizations. The usual way to
address this is to reorder the matrix, such that its factorization is
sparser than the factorization of the original matrix. That is, the
factorization @code{L * U = P * S * Q} has sparser terms @code{L}
and @code{U} than the equivalent factorization @code{L * U = S}.

Several functions are available to reorder depending on the type of the
matrix to be factorized. If the matrix is symmetric positive-definite,
then @dfn{symamd} or @dfn{csymamd} should be used. Otherwise
@dfn{colamd} or @dfn{ccolamd} should be used. For completeness
the reordering functions @dfn{colperm} and @dfn{randperm} are
also available.

@xref{fig:simplematrix}, for an example of the structure of a simple
positive definite matrix.

@float Figure,fig:simplematrix
@image{spmatrix,8cm}
@caption{Structure of simple sparse matrix.}
@end float

The standard Cholesky factorization of this matrix can be
obtained by the same command that would be used for a full
matrix. This can be visualized with the command
@code{r = chol (A); spy (r);}.
@ifset HAVE_CHOLMOD
@ifset HAVE_COLAMD
@xref{fig:simplechol}.
@end ifset
@end ifset
The original matrix had
@ifinfo
@ifnothtml
43
@end ifnothtml
@end ifinfo
@ifset htmltex
598
@end ifset
non-zero terms, while this Cholesky factorization has
@ifinfo
@ifnothtml
71,
@end ifnothtml
@end ifinfo
@ifset htmltex
10200,
@end ifset
with only half of the symmetric matrix being stored. This
is a significant level of fill-in, and although not an issue
for such a small test case, can represent a large overhead
in working with other sparse matrices.

The appropriate sparsity preserving permutation of the original
matrix is given by @dfn{symamd} and the factorization using this
reordering can be visualized using the command @code{q = symamd (A);
r = chol (A(q,q)); spy (r)}. This gives
@ifinfo
@ifnothtml
29
@end ifnothtml
@end ifinfo
@ifset htmltex
399
@end ifset
non-zero terms which is a significant improvement.

The Cholesky factorization itself can be used to determine the
appropriate sparsity preserving reordering of the matrix during the
factorization. In that case it might be obtained with three return
arguments, as @code{[r, p, q] = chol (A); spy (r)}.

@ifset HAVE_CHOLMOD
@ifset HAVE_COLAMD
@float Figure,fig:simplechol
@image{spchol,8cm}
@caption{Structure of the un-permuted Cholesky factorization of the above matrix.}
@end float

@float Figure,fig:simplecholperm
@image{spcholperm,8cm}
@caption{Structure of the permuted Cholesky factorization of the above matrix.}
@end float
@end ifset
@end ifset

In the case of an asymmetric matrix, the appropriate sparsity
preserving permutation is @dfn{colamd} and the factorization using
this reordering can be visualized using the command @code{q =
colamd (A); [l, u, p] = lu (A(:,q)); spy (l+u)}.

Finally, Octave implicitly reorders the matrix when using the div (/)
and ldiv (\) operators, and so the user does not need to explicitly
reorder the matrix to maximize performance.

@node Sparse Linear Algebra, Iterative Techniques, Basics, Sparse Matrices
@section Linear Algebra on Sparse Matrices

Octave includes a polymorphic solver for sparse matrices, where
the exact solver used to factorize the matrix depends on the properties
of the sparse matrix itself. Generally, the cost of determining the matrix type
is small relative to the cost of factorizing the matrix itself, but in any
case the matrix type is cached once it is calculated, so that it is not
re-determined each time it is used in a linear equation.

The selection tree for how the linear equation is solved is

@enumerate 1
@item If the matrix is diagonal, solve directly and goto 8

@item If the matrix is a permuted diagonal, solve directly taking into
account the permutations. Goto 8

@item If the matrix is square, banded and if the band density is less
than that given by @code{spparms ("bandden")} continue, else goto 4.

@enumerate a
@item If the matrix is tridiagonal and the right-hand side is not sparse
continue, else goto 3b.

@enumerate
@item If the matrix is hermitian, with a positive real diagonal, attempt
Cholesky factorization using @sc{Lapack} xPTSV.

@item If the above failed or the matrix is not hermitian with a positive
real diagonal use Gaussian elimination with pivoting using
@sc{Lapack} xGTSV, and goto 8.
@end enumerate

@item If the matrix is hermitian with a positive real diagonal, attempt
Cholesky factorization using @sc{Lapack} xPBTRF.

@item If the above failed or the matrix is not hermitian with a positive
real diagonal use Gaussian elimination with pivoting using
@sc{Lapack} xGBTRF, and goto 8.
@end enumerate

@item If the matrix is upper or lower triangular perform a sparse forward
or backward substitution, and goto 8

@item If the matrix is an upper triangular matrix with column permutations
or a lower triangular matrix with row permutations, perform a sparse forward
or backward substitution, and goto 8

@item If the matrix is square, hermitian with a real positive diagonal, attempt
sparse Cholesky factorization using CHOLMOD.

@item If the sparse Cholesky factorization failed or the matrix is not
hermitian with a real positive diagonal, and the matrix is square, factorize
using UMFPACK.

@item If the matrix is not square, or any of the previous solvers flags
a singular or near singular matrix, find a minimum norm solution using
CXSPARSE@footnote{CHOLMOD, UMFPACK and CXSPARSE are written by Tim Davis
and are available at http://www.cise.ufl.edu/research/sparse/}.
@end enumerate

705 The band density is defined as the number of non-zero values in the matrix |
|
706 divided by the number of non-zero values in the matrix. The banded matrix |
|
707 solvers can be entirely disabled by using @dfn{spparms} to set @code{bandden} |
|
708 to 1 (i.e. @code{spparms ("bandden", 1)}). |
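
For example, the current threshold can be queried and adjusted from the
interpreter (the value 0.5 below is purely illustrative):

@example
spparms ("bandden")        # query the current band density threshold
spparms ("bandden", 0.5)   # use the banded solvers only when more
                           # than half of the band is non-zero
@end example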

The QR solver factorizes the problem with a Dulmage-Mendelsohn
decomposition, to separate the problem into blocks that can be treated
as over-determined, multiple well determined blocks, and a final
over-determined block. For matrices with blocks of strongly connected
nodes this is a big win as LU decomposition can be used for many blocks.
It also significantly improves the chance of finding a solution to
over-determined problems rather than just returning a vector of
@dfn{NaN}'s.

All of the solvers above can calculate an estimate of the condition
number. This can be used to detect numerical stability problems in the
solution and force a minimum norm solution to be used. However, for
narrow banded, triangular or diagonal matrices, the cost of
calculating the condition number is significant, and can in fact
exceed the cost of factoring the matrix. Therefore the condition
number is not calculated in these cases, and Octave relies on simpler
techniques to detect singular matrices or the underlying @sc{Lapack}
code in the case of banded matrices.
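
The Dulmage-Mendelsohn decomposition used by the QR solver can also be
examined directly from the interpreter with @dfn{dmperm}. A minimal
sketch, with a small purely illustrative matrix:

@example
A = sparse ([1,0,0,0; 0,0,2,0; 3,4,0,5]);
[p, q] = dmperm (A);
A(p,q)    # rows and columns permuted toward
          # block triangular form
@end example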

The user can force the type of the matrix with the @code{matrix_type}
function. This overcomes the cost of discovering the type of the matrix.
However, it should be noted that incorrectly identifying the type of the
matrix will lead to unpredictable results, and so @code{matrix_type}
should be used with care.

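A minimal sketch of querying and forcing the matrix type (the matrix
here is purely illustrative):

@example
A = sparse (triu (rand (4)));
matrix_type (A)                # probe (and cache) the matrix type
A = matrix_type (A, "upper");  # assert the type, skipping detection
@end example
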
@node Iterative Techniques, Real Life Example, Sparse Linear Algebra, Sparse Matrices
@section Iterative Techniques applied to Sparse Matrices

There are currently three functions to document here, these being
@dfn{luinc}, @dfn{pcg} and @dfn{pcr}.

WRITE ME.

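As a brief illustration, @dfn{pcg} can be applied to a symmetric
positive definite sparse system (the tolerance and iteration limit
below are illustrative):

@example
n = 100;
A = spdiags ([ones(n,1), 4*ones(n,1), ones(n,1)], ...
             [-1, 0, 1], n, n);
b = ones (n, 1);
x = pcg (A, b, 1e-6, 200);
@end example
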

@node Real Life Example, Oct-Files, Iterative Techniques, Sparse Matrices
@section Real Life Example of the use of Sparse Matrices

A common application for sparse matrices is in the solution of Finite
Element Models. Finite element models allow numerical solution of
partial differential equations that do not have closed form solutions,
typically because of the complex shape of the domain.

In order to motivate this application, we consider the boundary value
Laplace equation. This system can model scalar potential fields, such
as heat or electrical potential. Given a medium
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
with boundary
@iftex
@tex
$\partial\Omega$
@end tex
@end iftex
@ifinfo
dOmega
@end ifinfo
. At all points on the
@iftex
@tex
$\partial\Omega$
@end tex
@end iftex
@ifinfo
dOmega
@end ifinfo
the boundary conditions are known, and we wish to calculate the potential in
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
. Boundary conditions may specify the potential (Dirichlet
boundary condition), its normal derivative across the boundary
(Neumann boundary condition), or a weighted sum of the potential and
its derivative (Cauchy boundary condition).

In a thermal model, we want to calculate the temperature in
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
and know the boundary temperature (Dirichlet condition)
or heat flux (from which we can calculate the Neumann condition
by dividing by the thermal conductivity at the boundary). Similarly,
in an electrical model, we want to calculate the voltage in
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
and know the boundary voltage (Dirichlet) or current
(Neumann condition after dividing by the electrical conductivity).
In an electrical model, it is common for much of the boundary
to be electrically isolated; this is a Neumann boundary condition
with the current equal to zero.

The simplest finite element models will divide
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
into simplexes (triangles in 2D, pyramids in 3D).
@ifset htmltex
We take as a 3D example a cylindrical liquid filled tank with a small
non-conductive ball from the EIDORS project@footnote{EIDORS - Electrical
Impedance Tomography and Diffuse optical Tomography Reconstruction Software
@url{http://eidors3d.sourceforge.net}}. This model is designed to reflect
an application of electrical impedance tomography, where current patterns
are applied to such a tank in order to image the internal conductivity
distribution. In order to describe the FEM geometry, we have a matrix of
vertices @code{nodes} and simplices @code{elems}.
@end ifset

The following example creates a simple rectangular 2D electrically
conductive medium with 10 V and 20 V imposed on opposite sides
(Dirichlet boundary conditions). All other edges are electrically
isolated.

@example
node_y= [1;1.2;1.5;1.8;2]*ones(1,11);
node_x= ones(5,1)*[1,1.05,1.1,1.2, ...
        1.3,1.5,1.7,1.8,1.9,1.95,2];
nodes= [node_x(:), node_y(:)];

[h,w]= size(node_x);
elems= [];
for idx= 1:w-1
  widx= (idx-1)*h;
  elems= [elems; ...
          widx+[(1:h-1);(2:h);h+(1:h-1)]'; ...
          widx+[(2:h);h+(2:h);h+(1:h-1)]' ];
endfor

E= size(elems,1); # No. of simplices
N= size(nodes,1); # No. of vertices
D= size(elems,2); # dimensions+1
@end example

This creates an N-by-2 matrix @code{nodes} and an E-by-3 matrix
@code{elems}, whose values define the finite element triangles:

@example
nodes(1:7,:)'
  1.00 1.00 1.00 1.00 1.00 1.05 1.05 ...
  1.00 1.20 1.50 1.80 2.00 1.00 1.20 ...

elems(1:7,:)'
  1 2 3 4 2 3 4 ...
  2 3 4 5 7 8 9 ...
  6 7 8 9 6 7 8 ...
@end example

Using a first order FEM, we approximate the electrical conductivity
distribution in
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
as constant on each simplex (represented by the vector @code{conductivity}).
Based on the finite element geometry, we first calculate a system (or
stiffness) matrix for each simplex (represented as 3-by-3 elements on the
diagonal of the element-wise system matrix @code{SE}). Based on @code{SE}
and a DE-by-N connectivity matrix @code{C}, representing the connections
between simplices and vertices, the global system matrix @code{S} is
calculated.

@example
# Element conductivity
conductivity= [1*ones(1,16), ...
               2*ones(1,48), 1*ones(1,16)];

# Connectivity matrix
C = sparse ((1:D*E), reshape (elems', ...
            D*E, 1), 1, D*E, N);

# Calculate system matrix
Siidx = floor ([0:D*E-1]'/D) * D * ...
        ones(1,D) + ones(D*E,1)*(1:D) ;
Sjidx = [1:D*E]'*ones(1,D);
Sdata = zeros(D*E,D);
dfact = factorial(D-1);
for j=1:E
  a = inv([ones(D,1), ...
           nodes(elems(j,:), :)]);
  const = conductivity(j) * 2 / ...
          dfact / abs(det(a));
  Sdata(D*(j-1)+(1:D),:) = const * ...
          a(2:D,:)' * a(2:D,:);
endfor
# Element-wise system matrix
SE= sparse(Siidx,Sjidx,Sdata);
# Global system matrix
S= C'* SE *C;
@end example


The system matrix acts like the conductivity
@iftex
@tex
$S$
@end tex
@end iftex
@ifinfo
@code{S}
@end ifinfo
in Ohm's law
@iftex
@tex
$SV = I$.
@end tex
@end iftex
@ifinfo
@code{S * V = I}.
@end ifinfo
Based on the Dirichlet and Neumann boundary conditions, we are able to
solve for the voltages at each vertex @code{V}.


@example
# Dirichlet boundary conditions
D_nodes= [1:5, 51:55];
D_value= [10*ones(1,5), 20*ones(1,5)];

V= zeros(N,1);
V(D_nodes) = D_value;
idx = 1:N;  # vertices without Dirichlet
            # boundary condns
idx(D_nodes) = [];

# Neumann boundary conditions. Note that
# N_value must be normalized by the
# boundary length and element conductivity
N_nodes= [];
N_value= [];

Q = zeros(N,1);
Q(N_nodes) = N_value;

V(idx) = S(idx,idx) \ ( Q(idx) - ...
         S(idx,D_nodes) * V(D_nodes));
@end example

Finally, in order to display the solution, we show each solved voltage
value on the z-axis for each simplex vertex.

@ifset htmltex
@ifset HAVE_CHOLMOD
@ifset HAVE_UMFPACK
@ifset HAVE_COLAMD
@xref{fig:femmodel}.
@end ifset
@end ifset
@end ifset
@end ifset

@example
elemx = elems(:,[1,2,3,1])';
xelems = reshape (nodes(elemx, 1), 4, E);
yelems = reshape (nodes(elemx, 2), 4, E);
velems = reshape (V(elemx), 4, E);
plot3 (xelems,yelems,velems,'k');
print ('grid.eps');
@end example


@ifset htmltex
@ifset HAVE_CHOLMOD
@ifset HAVE_UMFPACK
@ifset HAVE_COLAMD
@float Figure,fig:femmodel
@image{grid,8cm}
@caption{Example finite element model showing the triangular elements.
The height of each vertex corresponds to the solution value.}
@end float
@end ifset
@end ifset
@end ifset
@end ifset

@node Oct-Files, Function Reference, Real Life Example, Sparse Matrices
@section Using Sparse Matrices in Oct-files

An oct-file is a means of writing an Octave function in a compilable
language like C++, rather than as a script file. This results in a
significant acceleration of the code. It is not the purpose of this
section to discuss how to write an oct-file, or discuss what they
are. There are already two @footnote{Paul Thomas "Dal Segno al Coda
- The octave dynamically linked function cookbook",
@url{http://perso.wanadoo.fr/prthomas/intro.html}, and Christoph Spiel
"Del Coda Al Fine - Pushing Octave's Limits",
@url{http://octave.sourceforge.net/coda/coda.pdf}} very good
references on oct-files themselves. Users who are not familiar with
oct-files are urged to read these references to fully understand this
chapter. The examples discussed here assume that the oct-file is written
entirely in C++.

There are three classes of sparse objects that are of interest to the
user.

@table @asis
@item SparseMatrix
A double precision sparse matrix class
@item SparseComplexMatrix
A complex sparse matrix class
@item SparseBoolMatrix
A boolean sparse matrix class
@end table

All of these classes inherit from the @code{Sparse<T>} template class,
and so all have similar capabilities and usage. The @code{Sparse<T>}
class was based on the Octave @code{Array<T>} class, and so users familiar
with Octave's Array classes will be comfortable with the use of
the sparse classes.

The sparse classes will not be entirely described in this section, due
to their similarity to the existing Array classes. However, there are a
few differences due to the different nature of sparse objects, and these
will be described. Firstly, although it is fundamentally possible to
have N-dimensional sparse objects, the Octave sparse classes do
not allow them at this time. So all operations of the sparse classes
must be 2-dimensional. This means that in fact @code{SparseMatrix} is
similar to Octave's @code{Matrix} class rather than its
@code{NDArray} class.

@menu
* OctDifferences:: The Differences between the Array and Sparse Classes
* OctCreation:: Creating Sparse Matrices in Oct-Files
* OctUse:: Using Sparse Matrices in Oct-Files
@end menu

@node OctDifferences, OctCreation, Oct-Files, Oct-Files
@subsection The Differences between the Array and Sparse Classes

The number of elements in a sparse matrix is considered to be the number
of non-zero elements rather than the product of the dimensions. Therefore

@example
SparseMatrix sm;
@dots{}
int nel = sm.nelem ();
@end example

returns the number of non-zero elements. If the user really requires the
number of elements in the matrix, including the zero elements, they
should use @code{numel} rather than @code{nelem}. Note that for very
large matrices, where the product of the two dimensions is larger than
the representation of an unsigned int, @code{numel} can overflow.
An example is @code{speye(1e6)} which will create a matrix with a million
rows and columns, but only a million non-zero elements. The product of the
number of rows and the number of columns in this case is more than two
hundred times the maximum value that can be represented by an unsigned int.
The use of @code{numel} should therefore be avoided unless it is known
it won't overflow.

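The same numbers can be seen at the interpreter level, where @dfn{nnz}
plays the role of @code{nelem}:

@example
s = speye (1e6);
nnz (s)                   # 1000000 non-zero elements
rows (s) * columns (s)    # 1e12, far larger than UINT_MAX
@end example
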
Extreme care must be taken with the @code{elem} method and the "()"
operator, which perform basically the same function. The reason is that
if a sparse object is non-const, then Octave will assume that a
request for a zero element in a sparse matrix is in fact a request
to create this element so it can be filled. Therefore a piece of
code like

@example
SparseMatrix sm;
@dots{}
for (int j = 0; j < nc; j++)
  for (int i = 0; i < nr; i++)
    std::cerr << " (" << i << "," << j << "): " << sm(i,j)
              << std::endl;
@end example

is a great way of turning the sparse matrix into a dense one, and a
very slow way at that, since it reallocates the sparse object at each
zero element in the matrix.

An easy way of preventing the above from happening is to create a temporary
constant version of the sparse matrix. Note that only the container for
the sparse matrix will be copied, while the actual representation of the
data will be shared between the two versions of the sparse matrix. So this
is not a costly operation. For example, the above would become

@example
SparseMatrix sm;
@dots{}
const SparseMatrix tmp (sm);
for (int j = 0; j < nc; j++)
  for (int i = 0; i < nr; i++)
    std::cerr << " (" << i << "," << j << "): " << tmp(i,j)
              << std::endl;
@end example

Finally, as the sparse types aren't just represented as a contiguous
block of memory, the @code{fortran_vec} method of the @code{Array<T>}
class is not available. It is however replaced by three separate methods
@code{ridx}, @code{cidx} and @code{data}, that access the raw compressed
column format that the Octave sparse matrices are stored in.
Additionally, these methods can be used in a manner similar to @code{elem},
to allow the matrix to be accessed or filled. However, in that case it is
up to the user to respect the sparse matrix compressed column format
discussed previously.

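As an illustration of reading a matrix through the compressed column
format, the non-zero elements can be traversed with these methods as
follows (a sketch, assuming @code{sm} holds a valid @code{SparseMatrix}):

@example
SparseMatrix sm;
@dots{}
for (int j = 0; j < sm.cols (); j++)
  for (int i = sm.cidx(j); i < sm.cidx(j+1); i++)
    std::cerr << "element (" << sm.ridx(i) << "," << j
              << ") is " << sm.data(i) << std::endl;
@end example
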
@node OctCreation, OctUse, OctDifferences, Oct-Files
@subsection Creating Sparse Matrices in Oct-Files

The user has several alternatives in how to create a sparse matrix.
They can first create the data as three vectors representing the
row and column indexes and the data, and from those create the matrix.
Or alternatively, they can create a sparse matrix with the appropriate
amount of space and then fill in the values. Both techniques have their
advantages and disadvantages.

An example of how to create a small sparse matrix with the first technique
can be seen in the example

@example
int nz = 4, nr = 3, nc = 4;
ColumnVector ridx (nz);
ColumnVector cidx (nz);
ColumnVector data (nz);

ridx(0) = 0; ridx(1) = 0; ridx(2) = 1; ridx(3) = 2;
cidx(0) = 0; cidx(1) = 1; cidx(2) = 3; cidx(3) = 3;
data(0) = 1; data(1) = 2; data(2) = 3; data(3) = 4;

SparseMatrix sm (data, ridx, cidx, nr, nc);
@end example

which creates the matrix given in section @ref{Storage}. Note that
the compressed matrix format is not used at the time of the creation
of the matrix itself, however it is used internally.

As previously mentioned, the values of the sparse matrix are stored
in increasing column-major ordering. Although the data passed by the
user does not need to respect this requirement, pre-sorting the
data significantly speeds up the creation of the sparse matrix.

The disadvantage of this technique of creating a sparse matrix is
that there is a brief time where two copies of the data exist. Therefore
for extremely memory constrained problems this might not be the right
technique to create the sparse matrix.

The alternative is to first create the sparse matrix with the desired
number of non-zero elements and then later fill those elements in. The
easiest way to do this is

@example
int nz = 4, nr = 3, nc = 4;
SparseMatrix sm (nr, nc, nz);
sm(0,0) = 1; sm(0,1) = 2; sm(1,3) = 3; sm(2,3) = 4;
@end example

This creates the same matrix as previously. Again, although it is not
strictly necessary, it is significantly faster if the sparse matrix is
created in this manner and the elements are added in column-major
ordering. The reason for this is that if the elements are inserted
at the end of the current list of known elements then no element
in the matrix needs to be moved to allow the new element to be
inserted. Only the column indexes need to be updated.

There are a few further points to note about this technique of creating
a sparse matrix. Firstly, it is not illegal to create a sparse matrix
with fewer elements than are actually inserted in the matrix. Therefore

@example
int nz = 4, nr = 3, nc = 4;
SparseMatrix sm (nr, nc, 0);
sm(0,0) = 1; sm(0,1) = 2; sm(1,3) = 3; sm(2,3) = 4;
@end example

is perfectly legal. However it is a very bad idea. The reason is that
as each new element is added to the sparse matrix the space allocated
to it is increased by reallocating the memory. This is an expensive
operation that will significantly slow this means of creating a sparse
matrix. Furthermore, it is not illegal to create a sparse matrix with
too much storage, so having @var{nz} above equaling 6 is also legal.
The disadvantage is that the matrix occupies more memory than strictly
needed.

It is not always easy to know the number of non-zero elements prior
to filling a matrix. For this reason the additional storage for the
sparse matrix can be removed after its creation with the
@dfn{maybe_compress} function. Not only can @dfn{maybe_compress}
deallocate the unused storage, it can equally remove zero elements
from the matrix. The removal of zero elements from the matrix is
controlled by setting the argument of the @dfn{maybe_compress} function
to be 'true'. However, the cost of removing the zeros is high because it
implies resorting the elements. Therefore, if possible it is better
if the user doesn't add the zeros in the first place. An example of
the use of @dfn{maybe_compress} is

@example
int nz = 6, nr = 3, nc = 4;
SparseMatrix sm1 (nr, nc, nz);
sm1(0,0) = 1; sm1(0,1) = 2; sm1(1,3) = 3; sm1(2,3) = 4;
sm1.maybe_compress ();  // No zero elements were added

SparseMatrix sm2 (nr, nc, nz);
sm2(0,0) = 1; sm2(0,1) = 2; sm2(0,2) = 0; sm2(1,2) = 0;
sm2(1,3) = 3; sm2(2,3) = 4;
sm2.maybe_compress (true);  // Zero elements were added
@end example

The use of the @dfn{maybe_compress} function should be avoided if
possible, as it will slow the creation of the matrices.

A third means of creating a sparse matrix is to work directly with
the data in compressed column format. An example of this technique might
be

@c Note the @verbatim environment is a relatively new addition to texinfo.
@c Therefore use the @example environment and replace @, with @@,
@c { with @{, etc

@example
octave_value arg;

@dots{}

int nz = 6, nr = 3, nc = 4;  // Assume we know the max no nz
SparseMatrix sm (nr, nc, nz);
Matrix m = arg.matrix_value ();

int ii = 0;
sm.cidx (0) = 0;
for (int j = 0; j < nc; j++)
  @{
    for (int i = 0; i < nr; i++)
      @{
        double tmp = foo (m(i,j));
        if (tmp != 0.)
          @{
            sm.data(ii) = tmp;
            sm.ridx(ii) = i;
            ii++;
          @}
      @}
    sm.cidx(j+1) = ii;
  @}
sm.maybe_compress ();  // If don't know a-priori the final no of nz.
@end example

which is probably the most efficient means of creating the sparse matrix.

Finally, it might sometimes arise that the amount of storage initially
created is insufficient to completely store the sparse matrix. Therefore,
the method @code{change_capacity} exists to reallocate the sparse memory.
The above example would then be modified as

@example
octave_value arg;

@dots{}

int nz = 6, nr = 3, nc = 4;  // Assume we know the max no nz
SparseMatrix sm (nr, nc, nz);
Matrix m = arg.matrix_value ();

int ii = 0;
sm.cidx (0) = 0;
for (int j = 0; j < nc; j++)
  @{
    for (int i = 0; i < nr; i++)
      @{
        double tmp = foo (m(i,j));
        if (tmp != 0.)
          @{
            if (ii == nz)
              @{
                nz += 2;  // Add 2 more elements
                sm.change_capacity (nz);
              @}
            sm.data(ii) = tmp;
            sm.ridx(ii) = i;
            ii++;
          @}
      @}
    sm.cidx(j+1) = ii;
  @}
sm.maybe_compress ();  // If don't know a-priori the final no of nz.
@end example

Note that both increasing and decreasing the number of non-zero elements in
a sparse matrix is expensive, as it involves memory reallocation. Also, as
parts of the matrix, though not its entirety, exist as both the old and new
copy at the same time, additional memory is needed. Therefore if possible
this should be avoided.

@node OctUse, , OctCreation, Oct-Files
@subsection Using Sparse Matrices in Oct-Files

Most of the same operators and functions on sparse matrices that are
available from the Octave interpreter are equally available within
oct-files. The basic means of extracting a sparse matrix from an
@code{octave_value}, and returning it as an @code{octave_value}, can
be seen in the following example

@example
octave_value_list retval;

SparseMatrix sm = args(0).sparse_matrix_value ();
SparseComplexMatrix scm = args(1).sparse_complex_matrix_value ();
SparseBoolMatrix sbm = args(2).sparse_bool_matrix_value ();

@dots{}

retval(2) = sbm;
retval(1) = scm;
retval(0) = sm;
@end example

The conversion to an @code{octave_value} is handled by the sparse
@code{octave_value} constructors, and so no special care is needed.

5506
|
1341 @node Function Reference, , Oct-Files, Sparse Matrices |
5164
|
1342 @section Function Reference |
|
1343 |
5648
|
1344 @ifset htmltex |
5164
|
1345 @subsection Functions by Category |
|
1346 @subsubsection Generate sparse matrix |
|
1347 @table @asis |
5648
|
1348 @item @ref{spdiags} |
5164
|
1349 A generalization of the function `spdiag'. |
5648
|
1350 @item @ref{speye} |
5164
|
1351 Returns a sparse identity matrix. |
5648
|
1352 @item @ref{sprand} |
5164
|
1353 Generate a random sparse matrix. |
5648
|
1354 @item @ref{sprandn} |
5164
|
1355 Generate a random sparse matrix. |
5648
|
1356 @item @ref{sprandsym} |
|
1357 Generate a symmetric random sparse matrix. |
5164
|
1358 @end table |
|
1359 @subsubsection Sparse matrix conversion |
|
1360 @table @asis |
5648
|
1361 @item @ref{full} |
5164
|
1362 returns a full storage matrix from a sparse one See also: sparse |
5648
|
1363 @item @ref{sparse} |
5164
|
1364 SPARSE: create a sparse matrix |
5648
|
1365 @item @ref{spconvert} |
5164
|
1366 This function converts for a simple sparse matrix format easily produced by other programs into Octave's internal sparse format. |
5648
|
1367 @item @ref{spfind} |
5164
|
1368 SPFIND: a sparse version of the find operator 1. |
|
1369 @end table |
|
1370 @subsubsection Manipulate sparse matrices |
|
1371 @table @asis |
5648
|
1372 @item @ref{issparse} |
5164
|
1373 Return 1 if the value of the expression EXPR is a sparse matrix. |
5648
|
1374 @item @ref{nnz} |
5164
|
1375 returns number of non zero elements in SM See also: sparse |
5648
|
1376 @item @ref{nonzeros} |
5164
|
1377 Returns a vector of the non-zero values of the sparse matrix S |
5648
|
1378 @item @ref{nzmax} |
5164
|
1379 Returns the amount of storage allocated to the sparse matrix SM. |
5648
|
1380 @item @ref{spalloc} |
5164
|
1381 Returns an empty sparse matrix of size R-by-C. |
5648
|
1382 @item @ref{spfun} |
5164
|
1383 Compute `f(X)' for the non-zero values of X This results in a sparse matrix with the same structure as X. |
5648
|
1384 @item @ref{spones} |
5164
|
1385 Replace the non-zero entries of X with ones. |
5648
|
1386 @item @ref{spy} |
5164
|
1387 Plot the sparsity pattern of the sparse matrix X |
|
1388 @end table |
|
1389 @subsubsection Graph Theory |
|
1390 @table @asis |
5648
|
1391 @item @ref{etree} |
5164
|
1392 Returns the elimination tree for the matrix S. |
5648
|
1393 @item @ref{etreeplot} |
5576
|
1394 Plots the elimination tree of the matrix @var{s} or @code{@var{s}+@var{s}'} |
|
1395 if @var{s} in non-symmetric. |
5648
|
1396 @item @ref{gplot} |
5576
|
1397 Plots a graph defined by @var{A} and @var{xy} in the graph theory sense. |
5164
|
1398 @item treelayout |
|
1399 @emph{Not implemented} |
5648
|
1400 @item @ref{treeplot} |
5576
|
1401 Produces a graph of a tree or forest. |
5164
|
1402 @end table |
|
1403 @subsubsection Sparse matrix reordering |
|
1404 @table @asis |
5648
|
1405 @item @ref{ccolamd} |
5506
|
1406 Constrained column approximate minimum degree permutation. |
5648
|
1407 @item @ref{colamd} |
5164
|
1408 Column approximate minimum degree permutation. |
5648
|
1409 @item @ref{colperm} |
5164
|
1410 Returns the column permutations such that the columns of `S (:, P)' are ordered in terms of increase number of non-zero elements. |
5648
|
1411 @item @ref{csymamd} |
5506
|
1412 For a symmetric positive definite matrix S, returns the permutation vector p such that `S (P, P)' tends to have a sparser Cholesky factor than S. |
5648
|
1413 @item @ref{dmperm} |
5322
|
1414 Perform a Deulmage-Mendelsohn permutation on the sparse matrix S. |
5648
|
1415 @item @ref{symamd} |
5164
|
1416 For a symmetric positive definite matrix S, returns the permutation vector p such that `S (P, P)' tends to have a sparser Cholesky factor than S. |
|
1417 @item symrcm |
5648
|
1418 @emph{Not implemented} |
5164
|
1419 @end table |
|
@subsubsection Linear algebra

@table @asis
@item cholinc
@emph{Not implemented}

@item condest
@emph{Not implemented}

@item eigs
@emph{Not implemented}

@item @ref{normest}
Estimates the 2-norm of the matrix @var{a} using a power series
analysis.

@item @ref{spchol}
Compute the Cholesky factor, R, of the symmetric positive definite
matrix A.

@item @ref{spcholinv}
Use the Cholesky factorization to compute the inverse of the sparse
symmetric positive definite matrix A.

@item @ref{spchol2inv}
Invert a sparse symmetric, positive definite square matrix from its
Cholesky decomposition, U.

@item @ref{spdet}
Compute the determinant of the sparse matrix A using UMFPACK.

@item @ref{spinv}
Compute the inverse of the square matrix A.

@item @ref{spkron}
Form the Kronecker product of two sparse matrices.

@item @ref{splchol}
Compute the Cholesky factor, L, of the symmetric positive definite
matrix A.

@item @ref{splu}
Compute the LU decomposition of the sparse matrix A, using subroutines
from UMFPACK.

@item @ref{spqr}
Compute the sparse QR factorization of @var{a}, using CSPARSE.

@item @ref{sprank}
Calculates the structural rank of a sparse matrix @var{s}.

@item svds
@emph{Not implemented}

@end table
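A minimal sketch of using the factorization functions above to solve a
linear system (assuming the three-output form of @code{splu}, which
returns the row permutation @var{P} such that @code{P*A = L*U}; see the
@code{splu} docstring for the exact calling forms):

@example
a = sparse ([2, 0, 0; 1, 3, 0; 0, 1, 4]);
b = [1; 2; 3];
[l, u, p] = splu (a);        ## sparse LU factorization via UMFPACK
x = u \ (l \ (p * b));       ## solve a * x = b using the factors
@end example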
|
@subsubsection Iterative techniques

@table @asis
@item bicg
@emph{Not implemented}

@item bicgstab
@emph{Not implemented}

@item cgs
@emph{Not implemented}

@item gmres
@emph{Not implemented}

@item @ref{luinc}
Produce the incomplete LU factorization of the sparse matrix A.

@item lsqr
@emph{Not implemented}

@item minres
@emph{Not implemented}

@item pcg
Solves the linear system of equations @code{@var{A} * @var{x} =
@var{b}} by means of the Preconditioned Conjugate Gradient iterative
method.

@item pcr
Solves the linear system of equations @code{@var{A} * @var{x} =
@var{b}} by means of the Preconditioned Conjugate Residual iterative
method.

@item qmr
@emph{Not implemented}

@item symmlq
@emph{Not implemented}

@end table
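As a sketch of the iterative solvers above, @code{pcg} can be applied
to a symmetric positive definite system without any explicit
factorization (the tridiagonal test matrix here is illustrative; the
tolerance and iteration-count arguments follow the @code{pcg}
docstring):

@example
n = 100;
a = spdiag (2 * ones (n, 1)) - spdiag (ones (n-1, 1), 1) ...
    - spdiag (ones (n-1, 1), -1);    ## SPD tridiagonal matrix
b = ones (n, 1);
x = pcg (a, b, 1e-6, 200);           ## conjugate gradient solve
@end example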
|
@subsubsection Miscellaneous

@table @asis
@item spaugment
@emph{Not implemented}

@item @ref{spparms}
Sets or displays the parameters used by the sparse solvers and
factorization functions.

@item @ref{symbfact}
Performs a symbolic factorization analysis on the sparse matrix S.

@item @ref{spstats}
Return the stats for the non-zero elements of the sparse matrix S:
COUNT is the number of non-zeros in each column, MEAN is the mean of
the non-zeros in each column, and VAR is the variance of the non-zeros
in each column.

@item @ref{spprod}
Product of elements along dimension DIM.

@item @ref{spcumprod}
Cumulative product of elements along dimension DIM.

@item @ref{spcumsum}
Cumulative sum of elements along dimension DIM.

@item @ref{spsum}
Sum of elements along dimension DIM.

@item @ref{spsumsq}
Sum of squares of elements along dimension DIM.

@item @ref{spmin}
For a vector argument, return the minimum value.

@item @ref{spmax}
For a vector argument, return the maximum value.

@item @ref{spatan2}
Compute atan (Y / X) for corresponding sparse matrix elements of Y
and X.

@item @ref{spdiag}
Return a diagonal matrix with the sparse vector V on diagonal K.

@end table
|

@subsection Functions Alphabetically
@end ifset

@menu
* ccolamd::     Constrained column approximate minimum degree
                permutation.
* colamd::      Column approximate minimum degree permutation.
* colperm::     Returns the column permutations such that the columns
                of `S (:, P)' are ordered in terms of increasing
                number of non-zero elements.
* csymamd::     For a symmetric positive definite matrix S, returns
                the permutation vector p such that `S (P, P)' tends to
                have a sparser Cholesky factor than S.
* dmperm::      Perform a Dulmage-Mendelsohn permutation on the sparse
                matrix S.
* etree::       Returns the elimination tree for the matrix S.
* etreeplot::   Plots the elimination tree of the matrix @var{s}, or
                of @code{@var{s}+@var{s}'} if @var{s} is
                non-symmetric.
* full::        Returns a full storage matrix from a sparse one.
* gplot::       Plots a graph defined by @var{A} and @var{xy} in the
                graph theory sense.
* issparse::    Return 1 if the value of the expression EXPR is a
                sparse matrix.
* luinc::       Produce the incomplete LU factorization of the sparse
                matrix A.
* normest::     Estimates the 2-norm of the matrix @var{a} using a
                power series analysis.
* nnz::         Returns the number of non-zero elements in SM.
* nonzeros::    Returns a vector of the non-zero values of the sparse
                matrix S.
* nzmax::       Returns the amount of storage allocated to the sparse
                matrix SM.
* pcg::         Solves a linear system of equations by means of the
                Preconditioned Conjugate Gradient iterative method.
* pcr::         Solves a linear system of equations by means of the
                Preconditioned Conjugate Residual iterative method.
* spalloc::     Returns an empty sparse matrix of size R-by-C.
* sparse::      Create a sparse matrix.
* spatan2::     Compute atan (Y / X) for corresponding sparse matrix
                elements of Y and X.
* spchol::      Compute the Cholesky factor, R, of the symmetric
                positive definite matrix A.
* spcholinv::   Use the Cholesky factorization to compute the inverse
                of the sparse symmetric positive definite matrix A.
* spchol2inv::  Invert a sparse symmetric, positive definite square
                matrix from its Cholesky decomposition, U.
* spconvert::   Converts a simple sparse matrix format easily produced
                by other programs into Octave's internal sparse
                format.
* spcumprod::   Cumulative product of elements along dimension DIM.
* spcumsum::    Cumulative sum of elements along dimension DIM.
* spdet::       Compute the determinant of the sparse matrix A using
                UMFPACK.
* spdiag::      Return a diagonal matrix with the sparse vector V on
                diagonal K.
* spdiags::     A generalization of the function `spdiag'.
* speye::       Returns a sparse identity matrix.
* spfind::      A sparse version of the find operator.
* spfun::       Compute `f(X)' for the non-zero values of X. This
                results in a sparse matrix with the same structure
                as X.
* spinv::       Compute the inverse of the square matrix A.
* spkron::      Form the Kronecker product of two sparse matrices.
* splchol::     Compute the Cholesky factor, L, of the symmetric
                positive definite matrix A.
* splu::        Compute the LU decomposition of the sparse matrix A,
                using subroutines from UMFPACK.
* spmax::       For a vector argument, return the maximum value.
* spmin::       For a vector argument, return the minimum value.
* spones::      Replace the non-zero entries of X with ones.
* spparms::     Sets or displays the parameters used by the sparse
                solvers and factorization functions.
* spprod::      Product of elements along dimension DIM.
* spqr::        Compute the sparse QR factorization of @var{a}, using
                CSPARSE.
* sprand::      Generate a random sparse matrix.
* sprandn::     Generate a random sparse matrix.
* sprandsym::   Generate a symmetric random sparse matrix.
* sprank::      Calculates the structural rank of a sparse matrix
                @var{s}.
* spstats::     Return the stats for the non-zero elements of the
                sparse matrix S: COUNT is the number of non-zeros in
                each column, MEAN is the mean of the non-zeros in each
                column, and VAR is the variance of the non-zeros in
                each column.
* spsum::       Sum of elements along dimension DIM.
* spsumsq::     Sum of squares of elements along dimension DIM.
* spy::         Plot the sparsity pattern of the sparse matrix X.
* symamd::      For a symmetric positive definite matrix S, returns
                the permutation vector p such that `S (P, P)' tends to
                have a sparser Cholesky factor than S.
* symbfact::    Performs a symbolic factorization analysis on the
                sparse matrix S.
* treeplot::    Produces a graph of a tree or forest.
@end menu

@node colamd, ccolamd, , Function Reference
@subsubsection colamd

@DOCSTRING(colamd)

@node ccolamd, colperm, colamd, Function Reference
@subsubsection ccolamd

@DOCSTRING(ccolamd)

@node colperm, csymamd, ccolamd, Function Reference
@subsubsection colperm

@DOCSTRING(colperm)

@node csymamd, dmperm, colperm, Function Reference
@subsubsection csymamd

@DOCSTRING(csymamd)

@node dmperm, etree, csymamd, Function Reference
@subsubsection dmperm

@DOCSTRING(dmperm)

@node etree, etreeplot, dmperm, Function Reference
@subsubsection etree

@DOCSTRING(etree)

@node etreeplot, full, etree, Function Reference
@subsubsection etreeplot

@DOCSTRING(etreeplot)

@node full, gplot, etreeplot, Function Reference
@subsubsection full

@DOCSTRING(full)

@node gplot, issparse, full, Function Reference
@subsubsection gplot

@DOCSTRING(gplot)

@node issparse, luinc, gplot, Function Reference
@subsubsection issparse

@DOCSTRING(issparse)

@node luinc, normest, issparse, Function Reference
@subsubsection luinc

@DOCSTRING(luinc)

@node normest, nnz, luinc, Function Reference
@subsubsection normest

@DOCSTRING(normest)

@node nnz, nonzeros, normest, Function Reference
@subsubsection nnz

@DOCSTRING(nnz)

@node nonzeros, nzmax, nnz, Function Reference
@subsubsection nonzeros

@DOCSTRING(nonzeros)

@node nzmax, pcg, nonzeros, Function Reference
@subsubsection nzmax

@DOCSTRING(nzmax)

@node pcg, pcr, nzmax, Function Reference
@subsubsection pcg

@DOCSTRING(pcg)

@node pcr, spalloc, pcg, Function Reference
@subsubsection pcr

@DOCSTRING(pcr)

@node spalloc, sparse, pcr, Function Reference
@subsubsection spalloc

@DOCSTRING(spalloc)

@node sparse, spatan2, spalloc, Function Reference
@subsubsection sparse

@DOCSTRING(sparse)

@node spatan2, spchol, sparse, Function Reference
@subsubsection spatan2

@DOCSTRING(spatan2)

@node spchol, spcholinv, spatan2, Function Reference
@subsubsection spchol

@DOCSTRING(spchol)

@node spcholinv, spchol2inv, spchol, Function Reference
@subsubsection spcholinv

@DOCSTRING(spcholinv)

@node spchol2inv, spconvert, spcholinv, Function Reference
@subsubsection spchol2inv

@DOCSTRING(spchol2inv)

@node spconvert, spcumprod, spchol2inv, Function Reference
@subsubsection spconvert

@DOCSTRING(spconvert)

@node spcumprod, spcumsum, spconvert, Function Reference
@subsubsection spcumprod

@DOCSTRING(spcumprod)

@node spcumsum, spdet, spcumprod, Function Reference
@subsubsection spcumsum

@DOCSTRING(spcumsum)

@node spdet, spdiag, spcumsum, Function Reference
@subsubsection spdet

@DOCSTRING(spdet)

@node spdiag, spdiags, spdet, Function Reference
@subsubsection spdiag

@DOCSTRING(spdiag)

@node spdiags, speye, spdiag, Function Reference
@subsubsection spdiags

@DOCSTRING(spdiags)

@node speye, spfind, spdiags, Function Reference
@subsubsection speye

@DOCSTRING(speye)

@node spfind, spfun, speye, Function Reference
@subsubsection spfind

@DOCSTRING(spfind)

@node spfun, spinv, spfind, Function Reference
@subsubsection spfun

@DOCSTRING(spfun)

@node spinv, spkron, spfun, Function Reference
@subsubsection spinv

@DOCSTRING(spinv)

@node spkron, splchol, spinv, Function Reference
@subsubsection spkron

@DOCSTRING(spkron)

@node splchol, splu, spkron, Function Reference
@subsubsection splchol

@DOCSTRING(splchol)

@node splu, spmax, splchol, Function Reference
@subsubsection splu

@DOCSTRING(splu)

@node spmax, spmin, splu, Function Reference
@subsubsection spmax

@DOCSTRING(spmax)

@node spmin, spones, spmax, Function Reference
@subsubsection spmin

@DOCSTRING(spmin)

@node spones, spparms, spmin, Function Reference
@subsubsection spones

@DOCSTRING(spones)

@node spparms, spprod, spones, Function Reference
@subsubsection spparms

@DOCSTRING(spparms)

@node spprod, spqr, spparms, Function Reference
@subsubsection spprod

@DOCSTRING(spprod)

@node spqr, sprand, spprod, Function Reference
@subsubsection spqr

@DOCSTRING(spqr)

@node sprand, sprandn, spqr, Function Reference
@subsubsection sprand

@DOCSTRING(sprand)

@node sprandn, sprandsym, sprand, Function Reference
@subsubsection sprandn

@DOCSTRING(sprandn)

@node sprandsym, sprank, sprandn, Function Reference
@subsubsection sprandsym

@DOCSTRING(sprandsym)

@node sprank, spstats, sprandsym, Function Reference
@subsubsection sprank

@DOCSTRING(sprank)

@node spstats, spsum, sprank, Function Reference
@subsubsection spstats

@DOCSTRING(spstats)

@node spsum, spsumsq, spstats, Function Reference
@subsubsection spsum

@DOCSTRING(spsum)

@node spsumsq, spy, spsum, Function Reference
@subsubsection spsumsq

@DOCSTRING(spsumsq)

@node spy, symamd, spsumsq, Function Reference
@subsubsection spy

@DOCSTRING(spy)

@node symamd, symbfact, spy, Function Reference
@subsubsection symamd

@DOCSTRING(symamd)

@node symbfact, treeplot, symamd, Function Reference
@subsubsection symbfact

@DOCSTRING(symbfact)

@node treeplot, , symbfact, Function Reference
@subsubsection treeplot

@DOCSTRING(treeplot)

@c Local Variables: ***
@c Mode: texinfo ***
@c End: ***