annotate doc/interpreter/sparse.txi @ 8828:8463d1a2e544
Doc fixes.
* 2]$$. => 2].$$
* @var{extrapval} => @var{extrapval}.
* call helloworld.oct => called @file{helloworld.oct}
* @itemize => @table @code
* shows. => shows:
* save => @code{save}
* @ref{Breakpoints} => @pxref{Breakpoints}
* add @noindent following example
* which is computed => and compute it
* clarify wording
* remove comma
* good => well
* set => number
* by writing => with the command
* has the option of directly calling => can call
* [-like-] {+of the right size,+}
* solvers => routines
* handle => test for
* add introductory section
* add following
* {+the+} [0..bitmax] => [0,bitmax]
* of the => with
* number => value
* add usual
* Besides when doing comparisons, logical => Logical {+also+}
* array comparison => array, comparisons
* param => parameter
* works very similar => is similar
* strings, => strings
* most simple => simplest
* easier => more easily
* like => as
* called => called,
* clarify wording
* you should simply type => use
* clarify wording
* means => way
* equally => also
* [-way much-] {+way+}
* add with mean value parameter given by the first argument, @var{l}
* add Functions described as @dfn{mapping functions} apply the given
operation to each element when given a matrix argument.
* in this brief introduction => here
* It is worth noticing => Note
* add following
* means => ways
author: Brian Gough <bjg@network-theory.co.uk>
date: Fri, 20 Feb 2009 11:17:01 -0500
parents: 03b7f618ab3d
children: 235d71d77221
@c Copyright (C) 2004, 2005, 2006, 2007 David Bateman
@c
@c This file is part of Octave.
@c
@c Octave is free software; you can redistribute it and/or modify it
@c under the terms of the GNU General Public License as published by the
@c Free Software Foundation; either version 3 of the License, or (at
@c your option) any later version.
@c
@c Octave is distributed in the hope that it will be useful, but WITHOUT
@c ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
@c FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
@c for more details.
@c
@c You should have received a copy of the GNU General Public License
@c along with Octave; see the file COPYING.  If not, see
@c <http://www.gnu.org/licenses/>.

@ifhtml
@set htmltex
@end ifhtml
@iftex
@set htmltex
@end iftex

@node Sparse Matrices
@chapter Sparse Matrices

@menu
* Basics::                  Creation and Manipulation of Sparse Matrices
* Sparse Linear Algebra::   Linear Algebra on Sparse Matrices
* Iterative Techniques::    Iterative Techniques
* Real Life Example::       Using Sparse Matrices
@end menu

@node Basics
@section The Creation and Manipulation of Sparse Matrices

The size of mathematical problems that can be treated at any particular
time is generally limited by the available computing resources.  Both
the speed of the computer and its available memory place limitations on
the problem size.

There are many classes of mathematical problems which give rise to
matrices, where a large number of the elements are zero.  In this case
it makes sense to have a special matrix type to handle this class of
problems, where only the non-zero elements of the matrix are
stored.  Not only does this reduce the amount of memory required to store
the matrix, but it also means that operations on this type of matrix can
take advantage of the a priori knowledge of the positions of the
non-zero elements to accelerate their calculations.

A matrix type that stores only the non-zero elements is generally called
sparse.  It is the purpose of this document to discuss the basics of the
storage and creation of sparse matrices and the fundamental operations
on them.

@menu
* Storage of Sparse Matrices::
* Creating Sparse Matrices::
* Information::
* Operators and Functions::
@end menu

@node Storage of Sparse Matrices
@subsection Storage of Sparse Matrices

It is not strictly necessary for the user to understand how
sparse matrices are stored.  However, such an understanding will help
in estimating the size of sparse matrices.  Understanding the storage
technique is also necessary for those users wishing to create their own
oct-files.

There are many different means of storing sparse matrix data.  What all
of the methods have in common is that they attempt to reduce the complexity
and storage given a priori knowledge of the particular class of problems
that will be solved.  A good summary of the available techniques for storing
sparse matrices is given by Saad @footnote{Youcef Saad "SPARSKIT: A basic toolkit
for sparse matrix computation", 1994,
@url{http://www-users.cs.umn.edu/~saad/software/SPARSKIT/paper.ps}}.
With full matrices, the position of an element within the matrix
is implied by its position in the computer's memory.
However, this is not the case for sparse matrices, and so the positions
of the non-zero elements of the matrix must also be stored.

An obvious way to do this is by storing the elements of the matrix as
triplets, with two elements being their position in the array
(row and column) and the third being the data itself.  This is conceptually
easy to grasp, but requires more storage than is strictly needed.

The storage technique used within Octave is the compressed column
format.  In this format the position of each element in a row and the
data are stored as previously.  However, if we assume that all elements
in the same column are stored adjacent in the computer's memory, then
we only need to store information on the number of non-zero elements
in each column, rather than their positions.  Thus, provided the
matrix has more non-zero elements than it has columns, we win in
terms of the amount of memory used.

In fact, the column index contains one more element than the number of
columns, with the first element always being zero.  The advantage of
this is a simplification in the code, in that there is no special case
for the first or last columns.  A short example demonstrating this in
C is:

@example
for (j = 0; j < nc; j++)
  for (i = cidx(j); i < cidx(j+1); i++)
    printf ("non-zero element (%i,%i) is %d\n",
            ridx(i), j, data(i));
@end example

A clearer understanding can be had by seeing how this storage
technique applies to a concrete case.  Consider the matrix

@example
@group
1 2 0 0
0 0 0 3
0 0 0 4
@end group
@end example

The non-zero elements of this matrix are

@example
@group
(1, 1) @result{} 1
(1, 2) @result{} 2
(2, 4) @result{} 3
(3, 4) @result{} 4
@end group
@end example

This will be stored as three vectors @var{cidx}, @var{ridx} and @var{data},
representing the column indexing, row indexing and data respectively.  The
contents of these three vectors for the above matrix will be

@example
@group
@var{cidx} = [0, 1, 2, 2, 4]
@var{ridx} = [0, 0, 1, 2]
@var{data} = [1, 2, 3, 4]
@end group
@end example

Note that this is the representation of these elements with the first row
and column assumed to start at zero, while in Octave itself the row and
column indexing starts at one.  Thus the number of elements in the
@var{i}-th column is given by @code{@var{cidx} (@var{i} + 1) -
@var{cidx} (@var{i})}.

Although Octave uses a compressed column format, it should be noted
that compressed row formats are equally possible.  However, in the
context of mixed operations between sparse and dense matrices,
it makes sense that the elements of the sparse matrices are in the
same order as the dense matrices.  Octave stores dense matrices in
column major ordering, and so sparse matrices are likewise stored in
this manner.

A further constraint on the sparse matrix storage used by Octave is that
within each column the elements are stored in increasing order of their
row index, which makes certain operations faster.  However, it imposes
the need to sort the elements on the creation of sparse matrices.  Having
unsorted elements would potentially be an advantage in that it would make
operations such as concatenating two sparse matrices together easier and
faster; however, it would add complexity and speed problems elsewhere.

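This column-major ordering can be observed from the interpreter, since
@code{find} returns the non-zero elements of a sparse matrix in the
same order as the internal storage.  Using the example matrix above
(the output is shown here in compact form):

@example
@group
s = sparse ([1, 2, 0, 0; 0, 0, 0, 3; 0, 0, 0, 4]);
[i, j, v] = find (s)
@result{} i = [1; 1; 2; 3]
   j = [1; 2; 4; 4]
   v = [1; 2; 3; 4]
@end group
@end example
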
@node Creating Sparse Matrices
@subsection Creating Sparse Matrices

There are several means to create a sparse matrix.

@table @asis
@item Returned from a function
There are many functions that directly return sparse matrices.  These include
@dfn{speye}, @dfn{sprand}, @dfn{diag}, etc.
@item Constructed from matrices or vectors
The function @dfn{sparse} allows a sparse matrix to be constructed from
three vectors representing the row, column and data.  Alternatively, the
function @dfn{spconvert} uses a three column matrix format to allow easy
importation of data from elsewhere.
@item Created and then filled
The function @dfn{sparse} or @dfn{spalloc} can be used to create an empty
matrix that is then filled by the user.
@item From a user binary program
The user can directly create the sparse matrix within an oct-file.
@end table

There are several basic functions to return specific sparse
matrices.  For example, the sparse identity matrix is a matrix that is
often needed.  It therefore has its own function to create it as
@code{speye (@var{n})} or @code{speye (@var{r}, @var{c})}, which
creates an @var{n}-by-@var{n} or @var{r}-by-@var{c} sparse identity
matrix.

Another typical sparse matrix that is often needed is a random
distribution of elements.  The functions @dfn{sprand} and @dfn{sprandn}
perform this for uniform and normal random distributions of elements.
They have exactly the same calling convention, where
@code{sprand (@var{r}, @var{c}, @var{d})} creates an @var{r}-by-@var{c}
sparse matrix with a density of filled elements of @var{d}.

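For instance, the following sketch creates a 100-by-100 sparse matrix
in which roughly five percent of the elements are non-zero (the density
is a target, so the exact count of non-zero elements may vary slightly):

@example
@group
s = sprand (100, 100, 0.05);
nnz (s)       % approximately 0.05 * 100 * 100 = 500
@end group
@end example
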
Other functions of interest that directly create sparse matrices are
@dfn{diag} or its generalization @dfn{spdiags}, which can take the
definition of the diagonals of the matrix and create the sparse matrix
that corresponds to this.  For example,

@example
s = diag (sparse (randn (1, n)), -1);
@end example

creates a sparse (@var{n}+1)-by-(@var{n}+1) matrix with a single
diagonal defined.

@DOCSTRING(spdiags)

@DOCSTRING(speye)

@DOCSTRING(spfun)

@DOCSTRING(spmax)

@DOCSTRING(spmin)

@DOCSTRING(spones)

@DOCSTRING(sprand)

@DOCSTRING(sprandn)

@DOCSTRING(sprandsym)

The recommended way for the user to create a sparse matrix is to create
two vectors containing the row and column index of the data and a third
vector of the same size containing the data to be stored.  For example,

@example
@group
ri = ci = d = [];
for j = 1:c
  ri = [ri; randperm(r)(1:n)'];
  ci = [ci; j*ones(n,1)];
  d = [d; rand(n,1)];
endfor
s = sparse (ri, ci, d, r, c);
@end group
@end example

creates an @var{r}-by-@var{c} sparse matrix with a random distribution
of @var{n} (< @var{r}) elements per column.  The elements of the vectors
do not need to be sorted in any particular order as Octave will sort
them prior to storing the data.  However, pre-sorting the data will
make the creation of the sparse matrix faster.

The function @dfn{spconvert} takes a three or four column real matrix.
The first two columns represent the row and column index respectively,
and the third and fourth columns the real and imaginary parts of the
sparse matrix.  The matrix can contain zero elements and the elements
can be sorted in any order.  Adding zero elements is a convenient way
to define the size of the sparse matrix.  For example,

@example
s = spconvert ([1 2 3 4; 1 3 4 4; 1 2 3 0]')
@result{} Compressed Column Sparse (rows=4, cols=4, nnz=3)
      (1 , 1) -> 1
      (2 , 3) -> 2
      (3 , 4) -> 3
@end example

An example of creating and filling a matrix might be

@example
@group
k = 5;
nz = r * k;
s = spalloc (r, c, nz);
for j = 1:c
  idx = randperm (r);
  s (:, j) = [zeros(r - k, 1); ...
              rand(k, 1)] (idx);
endfor
@end group
@end example

It should be noted that, due to the way the Octave assignment functions
are written, the assignment will reallocate the memory used by the
sparse matrix at each iteration of the above loop.  Therefore the
@dfn{spalloc} function ignores the @var{nz} argument and does not
preassign the memory for the matrix.  Therefore, it is vitally important
that code using the above structure be vectorized as much as possible
to minimize the number of assignments and reduce the number of memory
allocations.

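One possible vectorization of the loop above (a sketch only; the
variable names are illustrative) builds the triplet vectors in one go
and calls @code{sparse} once, avoiding the repeated reallocation:

@example
@group
[dummy, idx] = sort (rand (r, c));    % a random permutation per column
ri = reshape (idx(1:k, :), k*c, 1);   % k random row indices per column
ci = kron ((1:c)', ones (k, 1));      % matching column indices
s = sparse (ri, ci, rand (k*c, 1), r, c);
@end group
@end example
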
@DOCSTRING(full)

@DOCSTRING(spalloc)

@DOCSTRING(sparse)

@DOCSTRING(spconvert)

The above problem of memory reallocation can be avoided in
oct-files.  However, the construction of a sparse matrix from an
oct-file is more complex than can be discussed here; see
@ref{Dynamically Linked Functions} for a full description of the
techniques involved.

@node Information
@subsection Finding out Information about Sparse Matrices

There are a number of functions that allow information concerning
sparse matrices to be obtained.  The most basic of these is
@dfn{issparse}, which identifies whether a particular Octave object is
in fact a sparse matrix.

Another very basic function is @dfn{nnz}, which returns the number of
non-zero entries in a sparse matrix, while the function
@dfn{nzmax} returns the amount of storage allocated to the sparse
matrix.  Note that Octave tends to crop unused memory at the first
opportunity for sparse objects.  There are some cases of user created
sparse objects where the value returned by @dfn{nzmax} will not be
the same as @dfn{nnz}, but in general they will give the same
result.  The function @dfn{spstats} returns some basic statistics on
the columns of a sparse matrix, including the number of elements, the
mean and the variance of each column.

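A quick illustration of these functions (in this case @code{nnz} and
@code{nzmax} agree, since @code{speye} allocates no excess storage):

@example
@group
s = speye (4);
issparse (s)
@result{} 1
nnz (s)
@result{} 4
@end group
@end example
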
@DOCSTRING(issparse)

@DOCSTRING(nnz)

@DOCSTRING(nonzeros)

@DOCSTRING(nzmax)

@DOCSTRING(spstats)

When solving linear equations involving sparse matrices Octave
determines the means to solve the equation based on the type of the
matrix, as discussed in @ref{Sparse Linear Algebra}.  Octave probes the
matrix type when the div (/) or ldiv (\) operator is first used with
the matrix and then caches the type.  However, the @dfn{matrix_type}
function can be used to determine the type of the sparse matrix prior
to use of the div or ldiv operators.  For example,

@example
a = tril (sprandn (1024, 1024, 0.02), -1) ...
    + speye (1024);
matrix_type (a)
ans = Lower
@end example

shows that Octave correctly determines the matrix type for lower
triangular matrices.  @dfn{matrix_type} can also be used to force
the type of a matrix to be a particular type.  For example,

@example
a = matrix_type (tril (sprandn (1024, ...
    1024, 0.02), -1) + speye (1024), 'Lower');
@end example

This allows the cost of determining the matrix type to be
avoided.  However, incorrectly defining the matrix type will result in
incorrect results from solutions of linear equations, and so it is
entirely the responsibility of the user to correctly identify the
matrix type.

There are several graphical means of finding out information about
sparse matrices.  The first is the @dfn{spy} command, which displays
the structure of the non-zero elements of the
matrix.  @xref{fig:spmatrix}, for an example of the use of
@dfn{spy}.  More advanced graphical information can be obtained with the
@dfn{treeplot}, @dfn{etreeplot} and @dfn{gplot} commands.

@float Figure,fig:spmatrix
@image{spmatrix,8cm}
@caption{Structure of a simple sparse matrix.}
@end float

One use of sparse matrices is in graph theory, where the
interconnections between nodes are represented as an adjacency
matrix.  That is, if the i-th node in a graph is connected to the j-th
node, then the ij-th element (and, in the case of undirected graphs, the
ji-th element) of the sparse adjacency matrix is non-zero.  If each node
is then associated with a set of co-ordinates, then the @dfn{gplot}
command can be used to graphically display the interconnections
between nodes.

As a trivial example of the use of @dfn{gplot}, consider the example

@example
A = sparse ([2,6,1,3,2,4,3,5,4,6,1,5],
            [1,1,2,2,3,3,4,4,5,5,6,6], 1, 6, 6);
xy = [0,4,8,6,4,2; 5,0,5,7,5,7]';
gplot (A, xy)
@end example

which creates an adjacency matrix @code{A} where node 1 is connected
to nodes 2 and 6, node 2 with nodes 1 and 3, etc.  The co-ordinates of
the nodes are given in the n-by-2 matrix @code{xy}.
@ifset htmltex
@xref{fig:gplot}.

@float Figure,fig:gplot
@image{gplot,8cm}
@caption{Simple use of the @dfn{gplot} command.}
@end float
@end ifset

The dependencies between the nodes of a Cholesky factorization can be
calculated in linear time, without explicitly needing to calculate the
Cholesky factorization itself, by the @code{etree} command.  This
command returns the elimination tree of the matrix and can be displayed
graphically by the command @code{treeplot (etree (A))} if @code{A} is
symmetric, or @code{treeplot (etree (A + A'))} otherwise.

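For example, the adjacency matrix @code{A} created above is symmetric,
so its elimination tree can be displayed directly:

@example
treeplot (etree (A))
@end example
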
@DOCSTRING(spy)

@DOCSTRING(etree)

@DOCSTRING(etreeplot)

@DOCSTRING(gplot)

@DOCSTRING(treeplot)

@DOCSTRING(treelayout)

@node Operators and Functions
@subsection Basic Operators and Functions on Sparse Matrices

@menu
* Sparse Functions::
* Return Types of Operators and Functions::
* Mathematical Considerations::
@end menu

@node Sparse Functions
@subsubsection Sparse Functions

An important consideration in the use of the sparse functions of
Octave is that many of the internal functions of Octave, such as
@dfn{diag}, cannot accept sparse matrices as an input.  The sparse
implementation in Octave therefore uses the @dfn{dispatch}
function to overload the normal Octave functions with equivalent
functions that work with sparse matrices.  However, at any time the
sparse matrix specific version of the function can be used by
explicitly calling its function name.

The table below lists all of the sparse functions of Octave.  Note that
the names of the specific sparse forms of the functions are typically
the same as the general versions with a @dfn{sp} prefix.  In the table
below, and the rest of this chapter, the specific sparse versions of
the functions are used.

@c Table includes in comments the missing sparse functions

@table @asis
@item Generate sparse matrices:
@dfn{spalloc}, @dfn{spdiags}, @dfn{speye}, @dfn{sprand},
@dfn{sprandn}, @dfn{sprandsym}

@item Sparse matrix conversion:
@dfn{full}, @dfn{sparse}, @dfn{spconvert}

@item Manipulate sparse matrices:
@dfn{issparse}, @dfn{nnz}, @dfn{nonzeros}, @dfn{nzmax},
@dfn{spfun}, @dfn{spones}, @dfn{spy}

@item Graph Theory:
@dfn{etree}, @dfn{etreeplot}, @dfn{gplot},
@dfn{treeplot}
@c @dfn{treelayout}

@item Sparse matrix reordering:
@dfn{amd}, @dfn{ccolamd}, @dfn{colamd}, @dfn{colperm}, @dfn{csymamd},
@dfn{dmperm}, @dfn{symamd}, @dfn{randperm}, @dfn{symrcm}

@item Linear algebra:
@dfn{condest}, @dfn{eigs}, @dfn{matrix_type}, @dfn{normest}, @dfn{sprank},
@dfn{spaugment}, @dfn{svds}

@item Iterative techniques:
@dfn{luinc}, @dfn{pcg}, @dfn{pcr}
@c @dfn{bicg}, @dfn{bicgstab}, @dfn{cholinc}, @dfn{cgs}, @dfn{gmres},
@c @dfn{lsqr}, @dfn{minres}, @dfn{qmr}, @dfn{symmlq}

@item Miscellaneous:
@dfn{spparms}, @dfn{symbfact}, @dfn{spstats}
@end table

In addition all of the standard Octave mapper functions (i.e., basic
math functions that take a single argument), such as @dfn{abs}, can
accept sparse matrices.  The reader is referred to the documentation
supplied with these functions within Octave itself for further
details.

@node Return Types of Operators and Functions
@subsubsection The Return Types of Operators and Functions

The two basic reasons to use sparse matrices are to reduce the memory
usage and to avoid calculations on zero elements.  The two are
closely related in that the computation time on a sparse matrix operator
or function is roughly linear with the number of non-zero elements.

Therefore, there is a certain density of non-zero elements of a matrix
where it no longer makes sense to store it as a sparse matrix, but rather
as a full matrix.  For this reason operators and functions that have a
high probability of returning a full matrix will always return one.  For
example, adding a scalar constant to a sparse matrix will almost always
make it a full matrix, and so the example

@example
speye (3) + 0
@result{} 1 0 0
   0 1 0
   0 0 1
@end example

returns a full matrix as can be seen.

Additionally, if @code{sparse_auto_mutate} is true, all sparse functions
test the amount of memory occupied by the sparse matrix to see if the
amount of storage used is larger than the amount used by the full
equivalent.  Therefore @code{speye (2) * 1} will return a full matrix as
the memory used is smaller for the full version than the sparse version.

As all of the mixed operators and functions between full and sparse
matrices exist, in general this does not cause any problems.  However,
one area where it does cause a problem is where a sparse matrix is
promoted to a full matrix, and subsequent operations would resparsify
the matrix.  Such cases are rare, but can be artificially created.  For
example, @code{(fliplr (speye (3)) + speye (3)) - speye (3)} gives a full
matrix when it should give a sparse one.  In general, where such cases
occur, they impose only a small memory penalty.

There is however one known case where this behavior of Octave's
sparse matrices will cause a problem.  That is in the handling of the
@dfn{diag} function.  Whether @dfn{diag} returns a sparse or full matrix
depends on the type of its input arguments.  So

@example
a = diag (sparse ([1, 2, 3]), -1);
@end example

should return a sparse matrix.  To ensure this actually happens, the
@dfn{sparse} function, and other functions based on it like @dfn{speye},
always return a sparse matrix, even if the memory used will be larger
than its full representation.

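That @dfn{diag} honors this rule for sparse input can be checked
directly from the interpreter:

@example
@group
s = diag (sparse ([1, 2, 3]), -1);
issparse (s)
@result{} 1
@end group
@end example
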
@DOCSTRING(sparse_auto_mutate)

Note that the @code{sparse_auto_mutate} option is incompatible with
@sc{Matlab}, and so it is off by default.

@node Mathematical Considerations
@subsubsection Mathematical Considerations

The attempt has been made to make sparse matrices behave in exactly the
same manner as their full counterparts.  However, there are certain
differences, in particular with other products' sparse implementations.

Firstly, the "./" and ".^" operators must be used with care.  Consider what
the examples

@example
s = speye (4);
a1 = s .^ 2;
a2 = s .^ s;
a3 = s .^ -2;
a4 = s ./ 2;
a5 = 2 ./ s;
a6 = s ./ s;
@end example

will give.  The first example of @var{s} raised to the power of 2 causes
no problems.  However @var{s} raised element-wise to itself involves a
large number of terms @code{0 .^ 0}, which is 1.  Therefore @code{@var{s} .^
@var{s}} is a full matrix.

8347
fa78cb8d8a5c
corrections for typos
Brian Gough<bjg@network-theory.co.uk>
parents:
8286
diff
changeset
|
Likewise, @code{@var{s} .^ -2} involves terms like @code{0 .^ -2}, which
is infinity, and so @code{@var{s} .^ -2} is also a full matrix.

For the "./" operator, @code{@var{s} ./ 2} causes no problems, but
@code{2 ./ @var{s}} involves a large number of infinity terms as well,
and is also a full matrix. The case of @code{@var{s} ./ @var{s}}
involves terms like @code{0 ./ 0}, which is a @code{NaN}, and so this
is also a full matrix, with the zero elements of @var{s} filled with
@code{NaN} values.

The above behavior is consistent with full matrices, but is not
consistent with sparse implementations in other products.

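The fill-in caused by these operators can be verified by counting the
non-zero elements of the results, for example:

@example
s = speye (4);
nnz (s .^ 2)   # 4: only the diagonal is non-zero
nnz (2 ./ s)   # 16: the off-diagonal zeros become Inf
nnz (s ./ s)   # 16: the off-diagonal zeros become NaN
@end example
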
A particular problem with sparse matrices arises from the fact that,
as the zeros are not stored, the sign-bit of these zeros is also not
stored. In certain cases the sign-bit of zero is important. For example:

@example
a = 0 ./ [-1, 1; 1, -1];
b = 1 ./ a
@result{} -Inf Inf
Inf -Inf
c = 1 ./ sparse (a)
@result{} Inf Inf
Inf Inf
@end example

To correct this behavior would mean that zero elements with a negative
sign-bit would need to be stored in the matrix to ensure that their
sign-bit was respected. This is not done at this time, for reasons of
efficiency, and so the user is warned that calculations where the sign-bit
of zero is important must not be done using sparse matrices.

In general, any function or operator used on a sparse matrix will
result in a sparse matrix with the same or a larger number of non-zero
elements than the original matrix. This is particularly true for the
important case of sparse matrix factorizations. The usual way to
address this is to reorder the matrix, such that its factorization is
sparser than the factorization of the original matrix. That is, the
factorization @code{L * U = P * S * Q} has sparser terms @code{L}
and @code{U} than the equivalent factorization @code{L * U = S}.

Several functions are available to reorder the matrix, depending on the
type of the matrix to be factorized. If the matrix is symmetric
positive-definite, then @dfn{symamd} or @dfn{csymamd} should be used.
Otherwise @dfn{amd}, @dfn{colamd} or @dfn{ccolamd} should be used. For
completeness the reordering functions @dfn{colperm} and @dfn{randperm}
are also available.

@xref{fig:simplematrix}, for an example of the structure of a simple
positive definite matrix.

@float Figure,fig:simplematrix
@image{spmatrix,8cm}
@caption{Structure of simple sparse matrix.}
@end float

The standard Cholesky factorization of this matrix can be
obtained by the same command that would be used for a full
matrix. This can be visualized with the command
@code{r = chol (A); spy (r);}.
@ifset HAVE_CHOLMOD
@ifset HAVE_COLAMD
@xref{fig:simplechol}.
@end ifset
@end ifset
The original matrix had
@ifinfo
@ifnothtml
43
@end ifnothtml
@end ifinfo
@ifset htmltex
598
@end ifset
non-zero terms, while this Cholesky factorization has
@ifinfo
@ifnothtml
71,
@end ifnothtml
@end ifinfo
@ifset htmltex
10200,
@end ifset
with only half of the symmetric matrix being stored. This
is a significant level of fill-in, and although not an issue
for such a small test case, can represent a large overhead
in working with other sparse matrices.

The appropriate sparsity preserving permutation of the original
matrix is given by @dfn{symamd} and the factorization using this
reordering can be visualized using the command @code{q = symamd (A);
r = chol (A(q,q)); spy (r)}. This gives
@ifinfo
@ifnothtml
29
@end ifnothtml
@end ifinfo
@ifset htmltex
399
@end ifset
non-zero terms, which is a significant improvement.

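The improvement can also be measured directly by comparing the number of
non-zero elements in the two factorizations. A sketch, assuming @var{A}
is a sparse symmetric positive definite matrix such as the one above:

@example
r1 = chol (A);        # factorization without reordering
q  = symamd (A);
r2 = chol (A(q,q));   # factorization with the symamd reordering
nnz (r1)              # fill-in of the un-permuted factorization
nnz (r2)              # typically much smaller
@end example
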
The Cholesky factorization itself can be used to determine the
appropriate sparsity preserving reordering of the matrix during the
factorization. In that case this might be obtained with three return
arguments, as @code{[r, p, q] = chol (A); spy (r)}.

@ifset HAVE_CHOLMOD
@ifset HAVE_COLAMD
@float Figure,fig:simplechol
@image{spchol,8cm}
@caption{Structure of the un-permuted Cholesky factorization of the above matrix.}
@end float

@float Figure,fig:simplecholperm
@image{spcholperm,8cm}
@caption{Structure of the permuted Cholesky factorization of the above matrix.}
@end float
@end ifset
@end ifset

In the case of an asymmetric matrix, the appropriate sparsity
preserving permutation is given by @dfn{colamd} and the factorization
using this reordering can be visualized using the command @code{q =
colamd (A); [l, u, p] = lu (A(:,q)); spy (l+u)}.

Finally, Octave implicitly reorders the matrix when using the div (/)
and ldiv (\) operators, and so the user does not need to explicitly
reorder the matrix to maximize performance.

@DOCSTRING(amd)

@DOCSTRING(ccolamd)

@DOCSTRING(colamd)

@DOCSTRING(colperm)

@DOCSTRING(csymamd)

@DOCSTRING(dmperm)

@DOCSTRING(symamd)

@DOCSTRING(symrcm)

@node Sparse Linear Algebra
@section Linear Algebra on Sparse Matrices

Octave includes a polymorphic solver for sparse matrices, where
the exact solver used to factorize the matrix depends on the properties
of the sparse matrix itself. Generally, the cost of determining the matrix
type is small relative to the cost of factorizing the matrix itself, but in
any case the matrix type is cached once it is calculated, so that it is not
re-determined each time it is used in a linear equation.

The selection tree for how the linear equation is solved is

@enumerate 1
@item If the matrix is diagonal, solve directly and goto 8

@item If the matrix is a permuted diagonal, solve directly taking into
account the permutations. Goto 8

@item If the matrix is square, banded and if the band density is less
than that given by @code{spparms ("bandden")} continue, else goto 4.

@enumerate a
@item If the matrix is tridiagonal and the right-hand side is not sparse
continue, else goto 3b.

@enumerate
@item If the matrix is hermitian, with a positive real diagonal, attempt
Cholesky factorization using @sc{Lapack} xPTSV.

@item If the above failed or the matrix is not hermitian with a positive
real diagonal use Gaussian elimination with pivoting using
@sc{Lapack} xGTSV, and goto 8.
@end enumerate

@item If the matrix is hermitian with a positive real diagonal, attempt
Cholesky factorization using @sc{Lapack} xPBTRF.

@item If the above failed or the matrix is not hermitian with a positive
real diagonal use Gaussian elimination with pivoting using
@sc{Lapack} xGBTRF, and goto 8.
@end enumerate

@item If the matrix is upper or lower triangular perform a sparse forward
or backward substitution, and goto 8

@item If the matrix is an upper triangular matrix with column permutations
or a lower triangular matrix with row permutations, perform a sparse forward
or backward substitution, and goto 8

@item If the matrix is square, hermitian with a real positive diagonal, attempt
sparse Cholesky factorization using CHOLMOD.

@item If the sparse Cholesky factorization failed or the matrix is not
hermitian with a real positive diagonal, and the matrix is square, factorize
using UMFPACK.

@item If the matrix is not square, or any of the previous solvers flags
a singular or near singular matrix, find a minimum norm solution using
CXSPARSE@footnote{The CHOLMOD, UMFPACK and CXSPARSE packages were
written by Tim Davis and are available at
@url{http://www.cise.ufl.edu/research/sparse/}}.
@end enumerate

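The choices made by the polymorphic solver can be observed by raising
the verbosity of the sparse routines with @dfn{spparms}. A sketch,
assuming @var{A} and @var{b} are a sparse matrix and a compatible
right-hand side:

@example
spparms ("spumoni", 1);  # print information about the solver used
x = A \ b;               # the selection path taken is now reported
spparms ("spumoni", 0);  # restore silent operation
@end example
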
789 | |
790 The band density is defined as the number of non-zero values in the matrix | |
791 divided by the number of non-zero values in the matrix. The banded matrix | |
792 solvers can be entirely disabled by using @dfn{spparms} to set @code{bandden} | |
793 to 1 (i.e. @code{spparms ("bandden", 1)}). | |
794 | |
The QR solver factorizes the problem with a Dulmage-Mendelsohn
decomposition, to separate the problem into blocks that can be treated
as under-determined, multiple well determined blocks, and a final
over-determined block. For matrices with blocks of strongly connected
nodes this is a big win as LU decomposition can be used for many
blocks. It also significantly improves the chance of finding a solution
to over-determined problems rather than just returning a vector of
@dfn{NaN}'s.

All of the solvers above can calculate an estimate of the condition
number. This can be used to detect numerical stability problems in the
solution and force a minimum norm solution to be used. However, for
narrow banded, triangular or diagonal matrices, the cost of
calculating the condition number is significant, and can in fact
exceed the cost of factoring the matrix. Therefore the condition
number is not calculated in these cases, and Octave relies on simpler
techniques to detect singular matrices or the underlying @sc{Lapack}
code in the case of banded matrices.

The user can force the type of the matrix with the @code{matrix_type}
function. This overcomes the cost of discovering the type of the matrix.
However, it should be noted that identifying the type of the matrix
incorrectly will lead to unpredictable results, and so @code{matrix_type}
should be used with care.

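For example, if a matrix is known in advance to be symmetric positive
definite, the probing of the matrix type can be avoided by asserting
the type directly; a sketch:

@example
A = sparse ([2, 1; 1, 2]);    # a small positive definite matrix
matrix_type (A)               # query the type Octave has determined
A = matrix_type (A, "positive definite");  # assert the type instead
x = A \ [1; 1];               # the solver now skips the type check
@end example
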
@DOCSTRING(normest)

@DOCSTRING(onenormest)

@DOCSTRING(condest)

@DOCSTRING(spparms)

@DOCSTRING(sprank)

@DOCSTRING(symbfact)

For non-square matrices, the user can also utilize the @code{spaugment}
function to find a least squares solution to a linear equation.

@DOCSTRING(spaugment)

Finally, the function @code{eigs} can be used to calculate a limited
number of eigenvalues and eigenvectors based on a selection criterion,
and likewise for @code{svds} which calculates a limited number of
singular values and vectors.

@DOCSTRING(eigs)

@DOCSTRING(svds)

@node Iterative Techniques
@section Iterative Techniques applied to sparse matrices

The left division @code{\} and right division @code{/} operators,
discussed in the previous section, use direct solvers to resolve a
linear equation of the form @code{@var{x} = @var{A} \ @var{b}} or
@code{@var{x} = @var{b} / @var{A}}. Octave also includes a number of
functions to solve sparse linear equations using iterative techniques.

@DOCSTRING(pcg)

@DOCSTRING(pcr)

The speed with which an iterative solver converges to a solution can be
accelerated with the use of a pre-conditioning matrix @var{M}. In this
case the linear equation @code{@var{M}^-1 * @var{A} * @var{x} =
@var{M}^-1 * @var{b}} is solved instead. Typical pre-conditioning
matrices are partial factorizations of the original matrix.

@DOCSTRING(luinc)

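As an illustration of pre-conditioning, an incomplete LU factorization
from @code{luinc} can be passed to @code{pcg}. A sketch, with an
illustrative tridiagonal test matrix and drop tolerance:

@example
n = 100;
e = ones (n, 1);
A = spdiags ([-e, 2*e, -e], [-1, 0, 1], n, n);  # positive definite
b = ones (n, 1);
[l, u] = luinc (A, 1e-4);         # incomplete LU pre-conditioner
x = pcg (A, b, 1e-8, 100, l, u);  # pre-conditioned conjugate gradient
@end example
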
@node Real Life Example
@section Real Life Example of the use of Sparse Matrices

A common application for sparse matrices is in the solution of Finite
Element Models. Finite element models allow numerical solution of
partial differential equations that do not have closed form solutions,
typically because of the complex shape of the domain.

In order to motivate this application, we consider the boundary value
Laplace equation. This system can model scalar potential fields, such
as heat or electrical potential. Given a medium
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
with boundary
@iftex
@tex
$\partial\Omega$
@end tex
@end iftex
@ifinfo
dOmega
@end ifinfo
. At all points on the
@iftex
@tex
$\partial\Omega$
@end tex
@end iftex
@ifinfo
dOmega
@end ifinfo
the boundary conditions are known, and we wish to calculate the potential in
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
. Boundary conditions may specify the potential (Dirichlet
boundary condition), its normal derivative across the boundary
(Neumann boundary condition), or a weighted sum of the potential and
its derivative (Cauchy boundary condition).

In a thermal model, we want to calculate the temperature in
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
and know the boundary temperature (Dirichlet condition)
or heat flux (from which we can calculate the Neumann condition
by dividing by the thermal conductivity at the boundary). Similarly,
in an electrical model, we want to calculate the voltage in
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
and know the boundary voltage (Dirichlet) or current
(Neumann condition after dividing by the electrical conductivity).
In an electrical model, it is common for much of the boundary
to be electrically isolated; this is a Neumann boundary condition
with the current equal to zero.

The simplest finite element models will divide
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
into simplexes (triangles in 2D, pyramids in 3D).
@ifset htmltex
We take as a 3D example a cylindrical liquid filled tank with a small
non-conductive ball from the EIDORS project@footnote{EIDORS - Electrical
Impedance Tomography and Diffuse optical Tomography Reconstruction Software
@url{http://eidors3d.sourceforge.net}}. This model is designed to reflect
an application of electrical impedance tomography, where current patterns
are applied to such a tank in order to image the internal conductivity
distribution. In order to describe the FEM geometry, we have a matrix of
vertices @code{nodes} and simplices @code{elems}.
@end ifset

The following example creates a simple rectangular 2D electrically
conductive medium with 10 V and 20 V imposed on opposite sides
(Dirichlet boundary conditions). All other edges are electrically
isolated.

@example
node_y= [1;1.2;1.5;1.8;2]*ones(1,11);
node_x= ones(5,1)*[1,1.05,1.1,1.2, ...
                   1.3,1.5,1.7,1.8,1.9,1.95,2];
nodes= [node_x(:), node_y(:)];

[h,w]= size(node_x);
elems= [];
for idx= 1:w-1
  widx= (idx-1)*h;
  elems= [elems; ...
          widx+[(1:h-1);(2:h);h+(1:h-1)]'; ...
          widx+[(2:h);h+(2:h);h+(1:h-1)]' ];
endfor

E= size(elems,1); # No. of simplices
N= size(nodes,1); # No. of vertices
D= size(elems,2); # dimensions+1
@end example

This creates an N-by-2 matrix @code{nodes} and an E-by-3 matrix
@code{elems} with values which define the finite element triangles:

@example
nodes(1:7,:)'
1.00 1.00 1.00 1.00 1.00 1.05 1.05 ...
1.00 1.20 1.50 1.80 2.00 1.00 1.20 ...

elems(1:7,:)'
1 2 3 4 2 3 4 ...
2 3 4 5 7 8 9 ...
6 7 8 9 6 7 8 ...
@end example

Using a first order FEM, we approximate the electrical conductivity
distribution in
@iftex
@tex
$\Omega$
@end tex
@end iftex
@ifinfo
Omega
@end ifinfo
as constant on each simplex (represented by the vector @code{conductivity}).
Based on the finite element geometry, we first calculate a system (or
stiffness) matrix for each simplex (represented as 3-by-3 elements on the
diagonal of the element-wise system matrix @code{SE}). Based on @code{SE}
and an N-by-DE connectivity matrix @code{C}, representing the connections
between simplices and vertices, the global system matrix @code{S} is
calculated.

@example
# Element conductivity
conductivity= [1*ones(1,16), ...
               2*ones(1,48), 1*ones(1,16)];

# Connectivity matrix
C = sparse ((1:D*E), reshape (elems', ...
            D*E, 1), 1, D*E, N);

# Calculate system matrix
Siidx = floor ([0:D*E-1]'/D) * D * ...
        ones(1,D) + ones(D*E,1)*(1:D) ;
Sjidx = [1:D*E]'*ones(1,D);
Sdata = zeros(D*E,D);
dfact = factorial(D-1);
for j=1:E
  a = inv([ones(D,1), ...
           nodes(elems(j,:), :)]);
  const = conductivity(j) * 2 / ...
          dfact / abs(det(a));
  Sdata(D*(j-1)+(1:D),:) = const * ...
          a(2:D,:)' * a(2:D,:);
endfor
# Element-wise system matrix
SE= sparse(Siidx,Sjidx,Sdata);
# Global system matrix
S= C'* SE *C;
@end example

The system matrix acts like the conductivity
@iftex
@tex
$S$
@end tex
@end iftex
@ifinfo
@code{S}
@end ifinfo
in Ohm's law
@iftex
@tex
$SV = I$.
@end tex
@end iftex
@ifinfo
@code{S * V = I}.
@end ifinfo
Based on the Dirichlet and Neumann boundary conditions, we are able to
solve for the voltages at each vertex @code{V}.

@example
# Dirichlet boundary conditions
D_nodes=[1:5, 51:55];
D_value=[10*ones(1,5), 20*ones(1,5)];

V= zeros(N,1);
V(D_nodes) = D_value;
idx = 1:N; # vertices without Dirichlet
           # boundary condns
idx(D_nodes) = [];

# Neumann boundary conditions. Note that
# N_value must be normalized by the
# boundary length and element conductivity
N_nodes=[];
N_value=[];

Q = zeros(N,1);
Q(N_nodes) = N_value;

V(idx) = S(idx,idx) \ ( Q(idx) - ...
         S(idx,D_nodes) * V(D_nodes));
@end example

Finally, in order to display the solution, we show each solved voltage
value in the z-axis for each simplex vertex.
@ifset htmltex
@ifset HAVE_CHOLMOD
@ifset HAVE_UMFPACK
@ifset HAVE_COLAMD
@xref{fig:femmodel}.
@end ifset
@end ifset
@end ifset
@end ifset

@example
elemx = elems(:,[1,2,3,1])';
xelems = reshape (nodes(elemx, 1), 4, E);
yelems = reshape (nodes(elemx, 2), 4, E);
velems = reshape (V(elemx), 4, E);
plot3 (xelems,yelems,velems,'k');
print ('grid.eps');
@end example


@ifset htmltex
@ifset HAVE_CHOLMOD
@ifset HAVE_UMFPACK
@ifset HAVE_COLAMD
@float Figure,fig:femmodel
@image{grid,8cm}
@caption{Example finite element model showing the triangular elements.
The height of each vertex corresponds to the solution value.}
@end float
@end ifset
@end ifset
@end ifset
@end ifset