{VERSION 5 0 "IBM INTEL NT" "5.0" }
{USTYLETAB {CSTYLE "Maple Input" -1 0 "Courier" 0 1 255 0 0 1 0 1 0 0
1 0 0 0 0 1 }{CSTYLE "2D Math" -1 2 "Times" 0 1 0 0 0 0 0 0 2 0 0 0 0
0 0 1 }{CSTYLE "2D Comment" 2 18 "" 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 }
{CSTYLE "" -1 256 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 }{CSTYLE "" -1
257 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 }{CSTYLE "" -1 258 "" 0 1 0 0
0 0 1 0 0 0 0 0 0 0 0 0 }{CSTYLE "" -1 259 "" 0 1 0 0 0 0 1 0 0 0 0 0
0 0 0 0 }{CSTYLE "" -1 260 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 }
{CSTYLE "" -1 261 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 }{CSTYLE "" -1
262 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 }{CSTYLE "" -1 263 "" 0 1 0 0
0 0 1 0 0 0 0 0 0 0 0 0 }{PSTYLE "Normal" -1 0 1 {CSTYLE "" -1 -1 "Tim
es" 1 12 0 0 0 1 2 2 2 2 2 2 1 1 1 1 }1 1 0 0 0 0 1 0 1 0 2 2 0 1 }
{PSTYLE "Heading 1" -1 3 1 {CSTYLE "" -1 -1 "Times" 1 18 0 0 0 1 2 1
2 2 2 2 1 1 1 1 }1 1 0 0 8 4 1 0 1 0 2 2 0 1 }{PSTYLE "Heading 2" -1
4 1 {CSTYLE "" -1 -1 "Times" 1 14 0 0 0 1 2 1 2 2 2 2 1 1 1 1 }1 1 0
0 8 2 1 0 1 0 2 2 0 1 }{PSTYLE "Bullet Item" -1 15 1 {CSTYLE "" -1 -1
"Times" 1 12 0 0 0 1 2 2 2 2 2 2 1 1 1 1 }1 1 0 0 3 3 1 0 1 0 2 2 15
2 }{PSTYLE "Title" -1 18 1 {CSTYLE "" -1 -1 "Times" 1 18 0 0 0 1 2 1
1 2 2 2 1 1 1 1 }3 1 0 0 12 12 1 0 1 0 2 2 19 1 }}
{SECT 0 {EXCHG {PARA 18 "" 0 "" {TEXT -1 49 "Iterative Solution of Lin
ear Systems of Equations" }}{PARA 0 "" 0 "" {TEXT -1 114 "In this work
sheet, we will investigate iterative methods for the solution of (lar
ge) systems of linear equations." }}{PARA 0 "" 0 "" {TEXT -1 175 "Our \+
model problem is not really large; a direct method (LU-decomposition, \+
e.g.) could solve it quickly (especially due to the special - tridiago
nal - structure of the matrix)." }}{PARA 0 "" 0 "" {TEXT -1 250 "Our i
mplementation of the iterative methods is very inefficient - in fact, \+
in the process of analyzing the methods, we will assemble the matrices
explicitly and even invert them - which is neither necessary nor adv
isable in a real-world application." }}{PARA 0 "" 0 "" {TEXT -1 95 "Th
erefore, keep in mind that our goal is not to program an efficient so
lver, but to understand" }}{PARA 15 "" 0 "" {TEXT -1 82 "how iterative
methods (here: Jacobi iteration and a simple multigrid method) work,
" }}{PARA 15 "" 0 "" {TEXT -1 88 "why many iterative methods (the Jaco
bi iteration, e.g.) are slow for large problems, and" }}{PARA 15 "" 0
"" {TEXT -1 58 "how such problems can be overcome by a multigrid appro
ach." }}{PARA 0 "" 0 "" {TEXT -1 218 "Unfortunately, we will have to d
eal with many matrices, functions generating matrices and so on, but i
n the end, we will produce pictures that should give insight into the fi
eld of iterative solvers for linear systems." }}{PARA 0 "> " 0 ""
{MPLTEXT 1 0 8 "restart;" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 12 "with(pl
ots):" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 20 "with(LinearAlgebra):" }}}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{PARA 0 "" 0 "" {TEXT
-1 38 "We will solve the discrete version of " }{XPPEDIT 18 0 "-diff(u
(x),`$`(x,2)) = f(x);" "6#/,$-%%diffG6$-%\"uG6#%\"xG-%\"$G6$F+\"\"#!\"
\"-%\"fG6#F+" }{TEXT -1 28 " on the interval [0,1] with " }{XPPEDIT
18 0 "u(0) = 0,u(1) = 0;" "6$/-%\"uG6#\"\"!F'/-F%6#\"\"\"F'" }{TEXT
-1 1 "." }}{PARA 0 "" 0 "" {TEXT -1 206 "N as a function parameter wil
l denote the number of subintervals (all of the same size 1/N); the di
screte values are stored in a vector with N-1 elements that correspond \+
to the grid points 1/N ... (N-1)/N. " }}{SECT 1 {PARA 3 "" 0 "" {TEXT
-1 19 "Auxiliary functions" }}{SECT 1 {PARA 3 "" 0 "" {TEXT -1 27 "Dra
wing a (solution-)vector" }}{PARA 0 "" 0 "" {TEXT -1 96 "First, we cre
ate a function that allows us to plot such a vector with the correct x
-coordinates:" }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 91 "vecplot :=p
roc(v)\n local N1,i;\n N1 := Dimension(v);\n [seq([i/(N1+1),v[i]],i
=1..N1)]\nend;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 25 "vecplot(V
ector([4,2,3]));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 36 "pointpl
ot(%,style=line,thickness=2);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1
0 0 "" }}}}{SECT 1 {PARA 3 "" 0 "" {TEXT -1 59 "Drawing a sequence of \+
vectors (a set of eigenvectors, e.g.)" }}{PARA 0 "" 0 "" {TEXT -1 75 "
We will be interested in the eigenvalues and eigenvectors of some matr
ices." }}{PARA 0 "" 0 "" {TEXT -1 135 "The following procedure, which g
ives us a graphical representation of a sequence of vectors and associ
ated scalars, has two arguments:" }}{PARA 15 "" 0 "" {TEXT -1 9 "a ve
ctor " }{TEXT 0 7 "scalars" }{TEXT -1 41 " (the eigenvalues of a matri
x, e.g.), and" }}{PARA 15 "" 0 "" {TEXT -1 9 "a matrix " }{TEXT 0 7 "v
ectors" }{TEXT -1 100 ", where the columns are interpreted as vectors
, the i-th column corresponding to the i-th entry in " }{TEXT 0 7 "sca
lars" }{TEXT -1 1 "." }}{PARA 0 "" 0 "" {TEXT -1 313 "The vectors are \+
sorted with respect to increasing scalars (if there are complex number
s, the imaginary part is omitted, since in our matrices it will be pur
ely due to numerical errors) and for every vector, a plot is created t
hat contains the vector (blue) and the vector scaled by the correspond
ing scalar (red)." }}{PARA 0 "" 0 "" {TEXT -1 48 "All pictures are col
lected in a Maple-animation." }}{PARA 0 "" 0 "" {TEXT -1 80 "The imple
mentation is not too interesting, so we will only look at the result..
." }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 461 "vecsequence := proc(sc
alars, vectors)\n local i,ind_sort,s1,v1;\n s1 := map(Im,scalars);\n
# print(Norm(s1,infinity));\n s1 := map(Re,scalars);\n v1 := map(R
e,vectors);\n ind_sort := sort([seq(i,i=1..Dimension(s1))],\n  (i,j
)->evalb(evalf(s1[i]) < evalf(s1[j])));\n display([seq(display([point
plot(vecplot(Column(v1,i)),style=line,thickness=2,color=blue),pointplo
t(vecplot(evalf(s1[i])*Column(v1,i)),style=line,thickness=2,color=red)
]),i=ind_sort)],insequence=true)\nend;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 53 "vecsequence(<
1/4,1/2,1/3>,<<1,4,2>|<4,2,3>|<3,4,2>>);" }}}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 0 "" }}}}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}
{SECT 1 {PARA 3 "" 0 "" {TEXT -1 36 "System matrix and (Jacobi) iterat
ion" }}{PARA 0 "" 0 "" {TEXT -1 106 "To test the numerous functions th
at will follow, we will use them for a small problem with 4 subinterva
ls." }}{PARA 0 "" 0 "" {TEXT -1 84 "To be able to change the number of
subintervals easily, we store it in the variable " }{TEXT 0 4 "N_ex"
}{TEXT -1 1 "." }}{PARA 0 "" 0 "" {TEXT -1 83 "Later in the definition
of the (damped) Jacobi iteration, we will need a parameter " }
{XPPEDIT 18 0 "omega;" "6#%&omegaG" }{TEXT -1 57 " - we will use 1/2 \+
and store this value in the variable " }{TEXT 0 8 "omega_ex" }{TEXT
-1 1 "." }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 12 "N_ex := 2^2;" }}}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 16 "omega_ex := 1/2;" }}}{EXCHG
{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{PARA 0 "" 0 "" {TEXT -1 0 "" }
}{SECT 1 {PARA 4 "" 0 "" {TEXT -1 58 "mk_SysMat: System matrix for the
discrete Laplace operator" }}{PARA 0 "" 0 "" {TEXT -1 23 "The discret
ization for " }{XPPEDIT 18 0 "-diff(u(x),`$`(x,2));" "6#,$-%%diffG6$-%
\"uG6#%\"xG-%\"$G6$F*\"\"#!\"\"" }{TEXT -1 40 " in this example is the
3-point-stencil " }{XPPEDIT 18 0 "[-1, 2, -1]/(h^2);" "6#*&7%,$\"\"\"
!\"\"\"\"#,$F&F'F&*$%\"hGF(F'" }{TEXT -1 19 " with the distance " }
{XPPEDIT 18 0 "h = 1/N;" "6#/%\"hG*&\"\"\"F&%\"NG!\"\"" }{TEXT -1 104
" between two grid points (this is the same discretization that we use
d for the heat transport equation)." }}{PARA 0 "" 0 "" {TEXT -1 80 "Th
e result is a tridiagonal (N-1)*(N-1)-matrix and is computed by the pr
ocedure " }{TEXT 0 9 "mk_SysMat" }{TEXT -1 115 " (as mentioned above, \+
we would not store the matrix in a real-life application, but it is us
eful for the analysis):" }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 95 "m
k_SysMat := proc(N)\n if N=2 then <<8>> \n else BandMatrix([-N^2,2*N
^2,-N^2],1,N-1)\n end\nend;" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 21 "A :
= mk_SysMat(N_ex);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}
{SECT 1 {PARA 4 "" 0 "" {TEXT -1 34 "mk_Vec: vector with random number
s" }}{PARA 0 "" 0 "" {TEXT -1 58 "For testing our method, we will use \+
some right hand sides " }{TEXT 0 1 "b" }{TEXT -1 21 " and initial gues
ses " }{TEXT 0 1 "x" }{TEXT -1 36 " that simply contain random numbers
:" }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 33 "mk_Vec := N -> RandomVe
ctor(N-1);" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 18 "b := mk_Vec(N_ex);" }
}{PARA 0 "> " 0 "" {MPLTEXT 1 0 18 "x := mk_Vec(N_ex);" }}}{EXCHG
{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{SECT 1 {PARA 4 "" 0 "" {TEXT
-1 21 "Compute residual b-Ax" }}{PARA 0 "" 0 "" {TEXT -1 20 "For a lin
ear system " }{XPPEDIT 18 0 "A*x = b;" "6#/*&%\"AG\"\"\"%\"xGF&%\"bG"
}{TEXT -1 6 ", the " }{TEXT 256 8 "residual" }{TEXT -1 15 " is defined
as " }{XPPEDIT 18 0 "r^i = b-A*x^i;" "6#/)%\"rG%\"iG,&%\"bG\"\"\"*&%
\"AGF))%\"xGF&F)!\"\"" }{TEXT -1 27 ". For a nonsingular matrix " }
{XPPEDIT 18 0 "A;" "6#%\"AG" }{TEXT -1 38 ", the residual is zero if a
nd only if " }{XPPEDIT 18 0 "x^i;" "6#)%\"xG%\"iG" }{TEXT -1 38 " is t
he solution of the linear system." }}{PARA 0 "" 0 "" {TEXT -1 106 "In \+
the process of iterative solution, the residual is frequently used to \+
construct a direction to improve " }{XPPEDIT 18 0 "x^i;" "6#)%\"xG%\"i
G" }{TEXT -1 1 "." }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 31 "residua
l := (A,x,b) -> b - A.x;" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 42 "'A'=A,'
x'=x,'b'=b,'b-A.x'=residual(A,x,b);" }}}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 0 "" }}}}{SECT 1 {PARA 4 "" 0 "" {TEXT -1 39 "sol_error: \+
difference to exact solution" }}{PARA 0 "" 0 "" {TEXT -1 66 "Another i
mportant term in the context of iterative solvers is the " }{TEXT 257
5 "error" }{TEXT -1 17 ", the difference " }{XPPEDIT 18 0 "e^i = x^i-x
;" "6#/)%\"eG%\"iG,&)%\"xGF&\"\"\"F)!\"\"" }{TEXT -1 8 ", where " }
{XPPEDIT 18 0 "x;" "6#%\"xG" }{TEXT -1 23 " is the exact solution " }
{XPPEDIT 18 0 "A^(-1)*b;" "6#*&)%\"AG,$\"\"\"!\"\"F'%\"bGF'" }{TEXT
-1 1 "." }}{PARA 0 "" 0 "" {TEXT -1 143 "In practice, we do not know t
he error (if we knew it, we could easily compute the exact solution
), but for our toy problems, we can compute it." }}{PARA 0 "" 0 ""
{TEXT -1 35 "Note the relation to the residual: " }{XPPEDIT 18 0 "r^i \+
= -A*e^i;" "6#/)%\"rG%\"iG,$*&%\"AG\"\"\")%\"eGF&F*!\"\"" }{TEXT -1 1
"." }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 45 "sol_error := (A,x,b) -
> x - LinearSolve(A,b);" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 42 "'A'=A,'x
'=x,'b'=b,\n'err'=sol_error(A,x,b);" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0
60 "'solution'=LinearSolve(A,b),\n'residual'=-A.sol_error(A,x,b);" }}}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{SECT 1 {PARA 4 "" 0 "
" {TEXT -1 49 "mk_MI_jac: inverse of M from the iteration scheme" }}
{PARA 0 "" 0 "" {TEXT -1 84 "In the lectures, the linear iterative met
hods were defined in terms of the residual:" }}{PARA 0 "" 0 ""
{XPPEDIT 18 0 "x^(i+1) = x^i+M^(-1)*r^i;" "6#/)%\"xG,&%\"iG\"\"\"F(F(,
&)F%F'F(*&)%\"MG,$F(!\"\"F()%\"rGF'F(F(" }{TEXT -1 8 ", where " }
{XPPEDIT 18 0 "M;" "6#%\"MG" }{TEXT -1 67 " depends on the iteration s
cheme. For the damped Jacobi iteration, " }{XPPEDIT 18 0 "M;" "6#%\"MG
" }{TEXT -1 15 " is chosen as " }{XPPEDIT 18 0 "1/omega.D[A];" "6#-%
\".G6$*&\"\"\"F'%&omegaG!\"\"&%\"DG6#%\"AG" }{TEXT -1 25 ", with the d
iagonal part " }{XPPEDIT 18 0 "D[A];" "6#&%\"DG6#%\"AG" }{TEXT -1 4 " \+
of " }{XPPEDIT 18 0 "A;" "6#%\"AG" }{TEXT -1 1 "." }}{PARA 0 "" 0 ""
{TEXT -1 17 "We will not need " }{XPPEDIT 18 0 "M;" "6#%\"MG" }{TEXT
-1 74 " itself, but only its inverse. Therefore, in the following, we \+
will store " }{XPPEDIT 18 0 "M^(-1);" "6#)%\"MG,$\"\"\"!\"\"" }{TEXT
-1 2 " (" }{TEXT 0 2 "MI" }{TEXT -1 1 ")" }}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 83 "mk_MI_jac := (A,omega) -> DiagonalMatrix([seq(omega/A
[i,i],i=1..RowDimension(A))]);" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 28 "M
I := mk_MI_jac(A,omega_ex);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0
0 "" }}}}{SECT 1 {PARA 4 "" 0 "" {TEXT -1 43 "lin_iter: perform steps \+
of linear iteration" }}{PARA 0 "" 0 "" {TEXT -1 41 "Now we can impleme
nt the iteration scheme" }}{PARA 0 "" 0 "" {XPPEDIT 18 0 "x^(i+1) = x^
i+M^(-1)*r^i;" "6#/)%\"xG,&%\"iG\"\"\"F(F(,&)F%F'F(*&)%\"MG,$F(!\"\"F(
)%\"rGF'F(F(" }{TEXT -1 34 " (the following procedure applies " }
{XPPEDIT 18 0 "k;" "6#%\"kG" }{TEXT -1 50 " steps of the iteration, st
arting with the vector " }{XPPEDIT 18 0 "x;" "6#%\"xG" }{TEXT -1 27 " \+
passed to the procedure): " }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{EXCHG
{PARA 0 "> " 0 "" {MPLTEXT 1 0 127 "lin_iter := proc(A,x,b,MI,k)\n lo
cal i,xit; \n xit := x;\n for i to k do\n xit := xit + MI.residua
l(A,xit,b)\n od;\n xit\nend;" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 ""
}}{PARA 0 "> " 0 "" {MPLTEXT 1 0 41 "seq(lin_iter(A,evalf(x),b,MI,i),i
=0..10);" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 31 "sol := evalf(LinearSolv
e(A,b));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 92 "We can investigate th
e speed of convergence by printing the norm of the error (we could use
" }{TEXT 0 9 "sol_error" }{TEXT -1 68 ", but on the other hand, we ha
ve just computed the exact solution):" }}{PARA 0 "> " 0 "" {MPLTEXT
1 0 51 "seq(Norm(lin_iter(A,evalf(x),b,MI,i)-sol),i=0..10);" }}}
{EXCHG {PARA 0 "" 0 "" {TEXT -1 55 "We will have a closer look at this \+
in the next section." }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{SECT
1 {PARA 4 "" 0 "" {TEXT -1 26 "mk_itmat: iteration matrix" }}{PARA 0 "
" 0 "" {TEXT -1 69 "As a preparation of the analysis, we write a funct
ion to compute the " }{TEXT 258 16 "iteration matrix" }{TEXT -1 101 " \+
that describes the change of the error in one iteration step and there
fore the speed of convergence." }}{PARA 0 "" 0 "" {TEXT -1 8 "We have \+
" }{XPPEDIT 18 0 "x^(i+1) = x^i+M^(-1)*r^i;" "6#/)%\"xG,&%\"iG\"\"\"F(
F(,&)F%F'F(*&)%\"MG,$F(!\"\"F()%\"rGF'F(F(" }{TEXT -1 29 " which can b
e expanded using " }{XPPEDIT 18 0 "r^i = -A*e^i;" "6#/)%\"rG%\"iG,$*&%
\"AG\"\"\")%\"eGF&F*!\"\"" }{TEXT -1 4 " to " }{XPPEDIT 18 0 "x^(i+1) \+
= x^i-M^(-1)*A*e^i;" "6#/)%\"xG,&%\"iG\"\"\"F(F(,&)F%F'F(*()%\"MG,$F(!
\"\"F(%\"AGF()%\"eGF'F(F/" }{TEXT -1 37 " and, subtracting the exact s
olution " }{XPPEDIT 18 0 "x;" "6#%\"xG" }{TEXT -1 11 ", see that " }
{XPPEDIT 18 0 "e^(i+1) = e^i-M^(-1)*A*e^i;" "6#/)%\"eG,&%\"iG\"\"\"F(F
(,&)F%F'F(*()%\"MG,$F(!\"\"F(%\"AGF()F%F'F(F/" }{TEXT -1 5 ", or " }
{XPPEDIT 18 0 "e^(i+1) = (I-M^(-1)*A)*e^i;" "6#/)%\"eG,&%\"iG\"\"\"F(F
(*&,&%\"IGF(*&)%\"MG,$F(!\"\"F(%\"AGF(F0F()F%F'F(" }{TEXT -1 1 "." }}
{PARA 0 "" 0 "" {TEXT -1 11 "The matrix " }{XPPEDIT 18 0 "I-M^(-1)*A;
" "6#,&%\"IG\"\"\"*&)%\"MG,$F%!\"\"F%%\"AGF%F*" }{TEXT -1 56 " is call
ed the iteration matrix of the iteration scheme." }}{EXCHG {PARA 0 "> \+
" 0 "" {MPLTEXT 1 0 59 "mk_itmat := (A,MI) -> IdentityMatrix(RowDimens
ion(A))-MI.A;" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 24 "Itmat := mk_itmat(
A,MI);" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 32 "ev := evalf(Eigenvalues(I
tmat));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 55 "The eigenvalues of the
iteration matrix are important: " }}{PARA 15 "" 0 "" {TEXT -1 44 "sma
ll values (near 0) mean fast convergence," }}{PARA 15 "" 0 "" {TEXT
-1 80 "values with absolute values near, but smaller than, 1 mean slow
convergence, and" }}{PARA 15 "" 0 "" {TEXT -1 58 "values with absolut
e values larger than 1 mean divergence." }}{PARA 0 "> " 0 "" {MPLTEXT
1 0 0 "" }}}}{SECT 1 {PARA 4 "" 0 "" {TEXT -1 32 "mk_MI_itmat: inverse
of mk_itmat" }}{PARA 0 "" 0 "" {TEXT -1 57 "Later on, we will need th
e inverse operation - computing " }{XPPEDIT 18 0 "M^(-1);" "6#)%\"MG,$
\"\"\"!\"\"" }{TEXT -1 30 " for a given iteration matrix." }}{PARA 0 ""
0 "" {TEXT -1 60 "As we have to solve a linear system with coefficient
 matrix " }{XPPEDIT 18 0 "A;" "6#%\"AG" }{TEXT -1 87 ", this is an ope
ration that is only reasonable for analysis, not for real applications."
}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 85 "mk_MI_itmat := (A,itmat) -
> (IdentityMatrix(RowDimension(A))-itmat).MatrixInverse(A);" }}}
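{EXCHG {PARA 0 "" 0 "" {TEXT -1 95 "As a quick sanity check, applying
 mk_MI_itmat to Itmat should recover the matrix MI from above:" }}
{PARA 0 "> " 0 "" {MPLTEXT 1 0 31 "Equal(mk_MI_itmat(A,Itmat),MI);" }}}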
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 21 "mk_MI_itmat(A,Itmat);" }}}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 3 "MI;" }}}{EXCHG {PARA 0 "> "
0 "" {MPLTEXT 1 0 0 "" }}}}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "
" }}}{SECT 1 {PARA 3 "" 0 "" {TEXT -1 44 "Convergence analysis of the \+
Jacobi iteration" }}{SECT 1 {PARA 4 "" 0 "" {TEXT -1 17 "Generate matr
ices" }}{PARA 0 "" 0 "" {TEXT -1 142 "To study the speed of convergenc
e for the Jacobi iteration, we use a somewhat larger (but still by no \+
means large) system with seven unknowns:" }}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 12 "N_ex := 2^3;" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 21 "A \+
:= mk_SysMat(N_ex);" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 28 "MI := mk_MI_
jac(A,omega_ex);" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 24 "Itmat := mk_itm
at(A,MI);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{SECT 1
{PARA 4 "" 0 "" {TEXT -1 45 "Formula for eigenvalues and eigenvectors \+
of A" }}{PARA 0 "" 0 "" {TEXT -1 41 "Now, we could look at the eigenva
lues of " }{TEXT 0 5 "Itmat" }{TEXT -1 127 ". But first, we should add
 that for the Jacobi iteration, we can give an explicit formula for t
he eigenvalues and eigenvectors." }}{PARA 0 "" 0 "" {TEXT -1 79 "This \+
is due to the fact that we can do this for the discrete Laplace oper
ator " }{XPPEDIT 18 0 "A;" "6#%\"AG" }{TEXT -1 70 " (to check whether \+
the following are really eigenvectors, just apply " }{XPPEDIT 18 0 "A
;" "6#%\"AG" }{TEXT -1 52 " to them and use standard trigonometric ide
ntities)." }}{PARA 0 "" 0 "" {TEXT -1 59 "For comparison, we ask Maple
to compute the eigenvalues of " }{XPPEDIT 18 0 "A;" "6#%\"AG" }{TEXT
-1 1 ":" }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 22 "evalf(Eigenvalues
(A));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 33 "The formula for the eige
nvalues.." }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 47 "A_eigenvalue := (N,i) \+
-> (2*sin(i*Pi/N/2)*N)^2;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0
65 "lambda := Vector([seq(evalf(A_eigenvalue(N_ex,i)),i=1..N_ex-1)]);
" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 29 "... and for the eigenvectors:
" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 64 "A_eigenvector := (N,i) -> Vecto
r([seq(sin(i*j*Pi/N),j=1..N-1)]);" }}}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 55 "ev := Matrix([seq(A_eigenvector(N_ex,i),i=1..N_ex-1)]
);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 45 "Now, we can make use of the
display function " }{TEXT 0 11 "vecsequence" }{TEXT -1 1 ":" }}{PARA
0 "> " 0 "" {MPLTEXT 1 0 23 "vecsequence(lambda,ev);" }}}{EXCHG {PARA
0 "" 0 "" {TEXT -1 38 "This should produce the same pictures:" }}
{PARA 0 "> " 0 "" {MPLTEXT 1 0 29 "vecsequence(Eigenvectors(A));" }}}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{SECT 1 {PARA 4 "" 0 "
" {TEXT -1 54 "Eigenvalues of the iteration matrix (Jacobi iteration)
" }}{PARA 0 "" 0 "" {TEXT -1 24 "From the eigenvalues of " }{XPPEDIT
18 0 "A;" "6#%\"AG" }{TEXT -1 80 ", we can compute the eigenvalues of
the iteration matrix: for every eigenvalue " }{XPPEDIT 18 0 "lambda;
" "6#%'lambdaG" }{TEXT -1 4 " of " }{XPPEDIT 18 0 "A;" "6#%\"AG" }
{TEXT -1 24 ", we have an eigenvalue " }{XPPEDIT 18 0 "1-omega/(2*N^2)
*lambda;" "6#,&\"\"\"F$*(%&omegaGF$*&\"\"#F$*$%\"NGF(F$!\"\"%'lambdaGF
$F+" }{TEXT -1 25 " of the iteration matrix " }{XPPEDIT 18 0 "I-M^(-1)
*A;" "6#,&%\"IG\"\"\"*&)%\"MG,$F%!\"\"F%%\"AGF%F*" }{TEXT -1 1 ":" }}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 70 "Jacobi_eigenvalue := (N,i,om
ega) -> 1 - omega/N^2/2*A_eigenvalue(N,i);" }}}{EXCHG {PARA 0 "> " 0 "
" {MPLTEXT 1 0 79 "lambda := Vector([seq(evalf(Jacobi_eigenvalue(N_ex,
i,omega_ex)),i=1..N_ex-1)]);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1
0 23 "vecsequence(lambda,ev);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 72 "
Again, we can get the same result with Maple computing the eigenvalue
s:" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 42 "vecsequence(Eigenvectors(mk_i
tmat(A,MI)));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 116 "But the really \+
important observation is, that the Jacobi iteration performs poorly on
the low frequency error modes." }}{PARA 0 "" 0 "" {TEXT -1 64 "Things
get worse, if we use a larger problem (more grid points)." }}{PARA 0
"" 0 "" {TEXT -1 64 "This is a typical behavior of many classical ite
ration schemes." }}{PARA 0 "" 0 "" {TEXT -1 51 "A possible solution is
 given by multilevel schemes." }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}}
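{EXCHG {PARA 0 "" 0 "" {TEXT -1 128 "To see why large problems are sl
ow, one can watch the largest eigenvalue of the Jacobi iteration matri
x tend to 1 for growing N:" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 53 "seq(e
valf(Jacobi_eigenvalue(2^k,1,omega_ex)),k=3..6);" }}}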
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{SECT 1 {PARA 3 "" 0 ""
{TEXT -1 8 "Homework" }}{EXCHG {PARA 0 "" 0 "" {TEXT -1 60 "We prepare
again the solution of the model problem for N=8." }}{PARA 0 "" 0 ""
{TEXT -1 217 "As a further simplification, we set the right hand side \+
b zero. Therefore, the exact solution is also zero and the error (the \+
difference to the exact solution) during the iteration process is just
the current iterate " }{XPPEDIT 18 0 "x^i;" "6#)%\"xG%\"iG" }{TEXT
-1 1 "." }}{PARA 0 "" 0 "" {TEXT -1 128 "(This is the typical test sit
uation in real-life problems where we are not able to compute the dire
ct solution for arbitrary b)." }}{PARA 0 "" 0 "" {TEXT -1 80 "Then, we
can study the speed of convergence by starting with some initial gues
s " }{XPPEDIT 18 0 "x^0;" "6#*$%\"xG\"\"!" }{TEXT -1 18 " (= initial e
rror " }{XPPEDIT 18 0 "e^0;" "6#*$%\"eG\"\"!" }{TEXT -1 26 ") and appl
y the iteration." }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 12 "N_ex := 2^3;" }
}{PARA 0 "> " 0 "" {MPLTEXT 1 0 16 "omega_ex := 1/2;" }}{PARA 0 "> " 0
"" {MPLTEXT 1 0 21 "A := mk_SysMat(N_ex):" }}{PARA 0 "> " 0 ""
{MPLTEXT 1 0 28 "MI := mk_MI_jac(A,omega_ex):" }}{PARA 0 "> " 0 ""
{MPLTEXT 1 0 55 "ev := Matrix([seq(A_eigenvector(N_ex,i),i=1..N_ex-1)]
):" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 20 "b := Vector(N_ex-1):" }}
{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT
-1 8 "Compute " }}{PARA 15 "" 0 "" {TEXT -1 50 "the first 10 (30 in th
e third example) iterations " }{XPPEDIT 18 0 "x^i;" "6#)%\"xG%\"iG" }
{TEXT -1 25 " for the Jacobi iteration" }}{PARA 15 "" 0 "" {TEXT -1
27 "a list of the norms of the " }{XPPEDIT 18 0 "x^i;" "6#)%\"xG%\"iG
" }{TEXT -1 30 " (you don't have to store the " }{XPPEDIT 18 0 "x^i;"
"6#)%\"xG%\"iG" }{TEXT -1 38 " themselves, we only need their norms)"
}}{PARA 15 "" 0 "" {TEXT -1 24 "a list of the fractions " }{XPPEDIT
18 0 "x^(i+1)/(x^i);" "6#*&)%\"xG,&%\"iG\"\"\"F(F(F()F%F'!\"\"" }
{TEXT -1 102 " (note the similarity to the experimental inves
tigation of the accuracy of the Euler scheme for ODEs)" }}{PARA 0 ""
0 "" {TEXT -1 20 "for three different " }{XPPEDIT 18 0 "x;" "6#%\"xG"
}{TEXT -1 1 ":" }}{PARA 15 "" 0 "" {XPPEDIT 18 0 "x[i] := sin(i*Pi/N);
" "6#>&%\"xG6#%\"iG-%$sinG6#*(F'\"\"\"%#PiGF,%\"NG!\"\"" }{TEXT -1 1 "
," }}{PARA 15 "" 0 "" {XPPEDIT 18 0 "x[i] := sin(i*Pi*(N-1)/N);" "6#>&
%\"xG6#%\"iG-%$sinG6#**F'\"\"\"%#PiGF,,&%\"NGF,F,!\"\"F,F/F0" }{TEXT
-1 1 "," }}{PARA 15 "" 0 "" {TEXT -1 28 "and a random initial vector.
" }}{PARA 0 "" 0 "" {TEXT -1 48 "(You may want to look again at the te
st run for " }{TEXT 0 8 "lin_iter" }{TEXT -1 123 ", where we did simil
ar computations)\nExplain the results, especially in comparison with t
he eigenvalue analysis from above." }}{PARA 0 "> " 0 "" {MPLTEXT 1 0
0 "" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{EXCHG {PARA 0
"> " 0 "" {MPLTEXT 1 0 0 "" }}}{SECT 1 {PARA 3 "" 0 "" {TEXT -1 16 "Mu
ltigrid method" }}{PARA 0 "" 0 "" {TEXT -1 177 "To understand the mult
igrid approach, it is important not to get confused by the new functio
ns and operators - things are more complicated than with the simple Ja
cobi iteration." }}{PARA 0 "" 0 "" {TEXT -1 73 "Therefore, keep in min
d the basic principles of this multigrid iteration:" }}{PARA 15 "" 0 "
" {TEXT -1 70 "One component is a smoother - here, we use again our Ja
cobi iteration." }}{PARA 15 "" 0 "" {TEXT -1 104 "The smoother works f
ine for the high frequency parts of the error, but not for the low fre
quency parts. " }}{PARA 15 "" 0 "" {TEXT -1 56 "The latter ones are ha
ndled by a coarse grid correction:" }}{PARA 15 "" 0 "" {TEXT -1 2 "A \+
" }{TEXT 259 12 "restriction " }{TEXT -1 40 "transfers the residual fr
om a grid with " }{XPPEDIT 18 0 "N;" "6#%\"NG" }{TEXT -1 26 " subinter
vals (called the " }{TEXT 262 9 "fine grid" }{TEXT -1 17 ") to a grid \+
with " }{XPPEDIT 18 0 "N/2;" "6#*&%\"NG\"\"\"\"\"#!\"\"" }{TEXT -1 19
" subintervals (the " }{TEXT 261 13 "coarse grid)." }}{PARA 15 "" 0 "
" {TEXT 260 0 "" }{TEXT -1 49 "On the coarse grid, an (approximate) so
lution of " }{XPPEDIT 18 0 "-A*e = r;" "6#/,$*&%\"AG\"\"\"%\"eGF'!\"\"
%\"rG" }{TEXT -1 13 " is computed." }}{PARA 15 "" 0 "" {TEXT -1 2 "A \+
" }{TEXT 263 13 "prolongation " }{TEXT -1 99 "transfers the correction
back on the fine grid, where it improves the current approximate solu
tion." }}{PARA 0 "" 0 "" {TEXT -1 42 "We start with a fairly coarse 'f
ine grid':" }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 10 "N_ex := 8;" }}
}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 21 "A := mk_SysMat(N_ex);" }}}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{SECT 1 {PARA 4 "" 0 ""
{TEXT -1 29 "Prolongation: mk_prolongation" }}{PARA 0 "" 0 "" {TEXT
-1 115 "For the prolongation of functions (corrections) from the coars
e grid to the fine grid, we use linear interpolation:" }}{EXCHG {PARA
0 "> " 0 "" {MPLTEXT 1 0 167 "mk_prolongation := proc(N)\n local P,j;
\n P := Matrix(N-1,N/2-1);\n for j to N/2-1 do\n P[2*j-1,j] := 1/
2;\n P[2*j,j] := 1;\n P[2*j+1,j] := 1/2\n end;\n P\nend; " }}
}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 22 "mk_prolongation(N_ex);" }}}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 14 "% . <1, 3, 2>;" }}}{EXCHG
{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{SECT 1 {PARA 4 "" 0 "" {TEXT
-1 27 "Restriction: mk_restriction" }}{PARA 0 "" 0 "" {TEXT -1 193 "Fo
r the restriction operator, one can simply choose the weighted transp
ose of the prolongation operator (it does an averaging of any coarse gr
id point and its two neighbors in the fine grid):" }}{EXCHG {PARA 0 "
> " 0 "" {MPLTEXT 1 0 57 "mk_restriction := N -> 1/2*Transpose(mk_prol
ongation(N));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 21 "mk_restric
tion(N_ex);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{SECT
1 {PARA 4 "" 0 "" {TEXT -1 64 "Coarse grid correction: mk_MI_cgc and t
he two-grid version MI_tg" }}{PARA 0 "" 0 "" {TEXT -1 46 "The coarse g
rid correction is a composition of" }}{PARA 15 "" 0 "" {TEXT -1 12 "re
striction," }}{PARA 15 "" 0 "" {TEXT -1 93 "application of an (approxi
mate) inverse of the coarse grid operator - this can be the matrix " }
{XPPEDIT 18 0 "M^(-1);" "6#)%\"MG,$\"\"\"!\"\"" }{TEXT -1 67 " from an
y iterative scheme (it will be passed as a parameter) - and" }}{PARA
15 "" 0 "" {TEXT -1 13 "prolongation." }}{PARA 0 "" 0 "" {TEXT -1 98 "
Of course, we have to read the matrix product from the end (where the \+
residual vector is applied)." }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0
72 "mk_MI_cgc := (N,MI_c) -> mk_prolongation(N) . MI_c . mk_restricti
on(N);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 62 "A first approach with t
wo grids can be constructed, if we use " }{XPPEDIT 18 0 "M^(-1) = A^(-
1);" "6#/)%\"MG,$\"\"\"!\"\")%\"AG,$F'F(" }{TEXT -1 20 " on the coarse
grid:" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 59 "MI_tg := mk_MI_cgc(N_ex, \+
MatrixInverse(mk_SysMat(N_ex/2)));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT
-1 110 "Again, we are interested in the speed of convergence and there
fore in the eigenvalues of the iteration matrix." }}{PARA 0 "> " 0 ""
{MPLTEXT 1 0 45 "itmat_cgc := mk_itmat(mk_SysMat(N_ex),MI_tg);" }}}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 24 "Eigenvectors(itmat_cgc);" }}
}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 15 "vecsequence(%);" }}}{EXCHG
{PARA 0 "" 0 "" {TEXT -1 140 "Some components of the error are complet
ely eliminated, but other components are not changed at all. This is
where the smoothing comes in." }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 ""
}}}}{SECT 1 {PARA 4 "" 0 "" {TEXT -1 60 "Two-grid solver: Jacobi itera
tion and coarse grid correction" }}{PARA 0 "" 0 "" {TEXT -1 103 "We ge
t a working scheme by adding a Jacobi iteration that handles the high \+
frequency error components." }}{PARA 0 "" 0 "" {TEXT -1 187 "A sequen
ce of two iteration schemes (Jacobi iteration and coarse grid correcti
on) is described in terms of iteration matrices as the product of the
iteration matrices of the two schemes:" }}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 56 "itmat_tg := itmat_cgc.mk_itmat(A,mk_MI_jac(A,omega_ex
));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 23 "Eigenvectors(itmat_t
g);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 15 "vecsequence(%);" }}}
{EXCHG {PARA 0 "" 0 "" {TEXT -1 113 "In our two-grid scheme, every com
ponent of the error is reduced at least by a factor of 1/2 per iterat
ion step. " }}{PARA 0 "" 0 "" {TEXT -1 168 "This speed of convergence \+
is fairly good, but the iteration involves the exact solution on the c
oarse grid, which would be much too expensive in a real (large) proble
m." }}{PARA 0 "" 0 "" {TEXT -1 208 "The idea to overcome this is to r
eplace the exact solution by a multigrid step on the coarse grid - per
formed on an even coarser grid and so on, until a direct solution becom
es feasible on a very coarse grid." }}{PARA 0 "> " 0 "" {MPLTEXT 1 0
0 "" }}}}{SECT 1 {PARA 4 "" 0 "" {TEXT -1 72 "Towards multigrid: gener
ate coarse grid MI (temporary version) - mk_MI_c" }}{PARA 0 "" 0 ""
{TEXT -1 169 "The construction of the multigrid iteration matrix is mo
re complicated than in the case of the Jacobi iteration, as it contain
s a multigrid iteration on a coarser grid." }}{PARA 0 "" 0 "" {TEXT
-1 75 "This leads to a recursive structure with two procedures calling
each other:" }}{PARA 15 "" 0 "" {TEXT 0 7 "mk_MI_c" }{TEXT -1 21 " co
mputes the matrix " }{XPPEDIT 18 0 "M^(-1);" "6#)%\"MG,$\"\"\"!\"\"" }
{TEXT -1 81 " for the coarse grid. Finally, this will be a multigrid i
teration (the result of " }{TEXT 0 11 "mk_itmat_mg" }{TEXT -1 94 "), b
ut we start with a temporary version that will again solve the coarse \+
grid system exactly." }}{PARA 15 "" 0 "" {TEXT -1 1 " " }{TEXT 0 11 "m
k_itmat_mg" }{TEXT -1 69 " computes the iteration matrix of a multigri
d iteration. It will use " }{TEXT 0 9 "mk_MI_cgc" }{TEXT -1 55 " for t
he coarse grid correction which will in turn use " }{TEXT 0 7 "mk_MI_c
" }{TEXT -1 1 "." }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 101 "mk_MI_c
:= proc(N,omega)\n local itmat1,itmat2;\n option trace;\n MatrixIn
verse(mk_SysMat(N/2));\nend;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1
0 15 "mk_MI_c(8,1/2);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }
}}}{SECT 1 {PARA 4 "" 0 "" {TEXT -1 53 "Iteration matrix of the multig
rid method: mk_itmat_mg" }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 231 "
mk_itmat_mg := proc(N,omega)\n local A,itmat_cgc,itmat_jac;\n option
 trace;\n A := mk_SysMat(N);\n itmat_cgc := mk_itmat(A,mk_MI_
cgc(N,mk_MI_c(N,omega)));\n itmat_jac := mk_itmat(A,mk_MI_jac(A,omega
));\n itmat_cgc . itmat_jac\nend;" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0
1 " " }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{SECT 1 {PARA
4 "" 0 "" {TEXT -1 55 "Test run for mk_itmat_mg (still with temporary \+
mk_MI_c)" }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 19 "mk_itmat_mg(8,1/
2);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 29 "vecsequence(Eigenvec
tors(%));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{SECT 1
{PARA 4 "" 0 "" {TEXT -1 47 "Final version of mk_MI_c (calling mk_itma
t_mg):" }}{PARA 0 "" 0 "" {TEXT -1 97 "Now, we will use the direct sol
ution only for the coarsest possible grid (with only one unknown)." }}
{PARA 0 "" 0 "" {TEXT -1 69 "For all other cases, we will use a multig
rid step on the coarse grid." }}{PARA 0 "" 0 "" {TEXT -1 40 "There, we
get the iteration matrix from " }{TEXT 0 11 "mk_itmat_mg" }{TEXT -1
33 " and convert it into the matrix " }{XPPEDIT 18 0 "M^(-1);" "6#)%
\"MG,$\"\"\"!\"\"" }{TEXT -1 4 " by " }{TEXT 0 11 "mk_MI_itmat" }
{TEXT -1 1 "." }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 185 "mk_MI_c :=
proc(N,omega)\n local itmat1,itmat2;\n #option trace;\n if N=4 the
n\n MatrixInverse(mk_SysMat(N/2))\n else\n mk_MI_itmat(mk_SysMa
t(N/2),mk_itmat_mg(N/2,omega))\n end\nend;" }}{PARA 0 "> " 0 ""
{MPLTEXT 1 0 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 55 "Now, we can s
ee how the multigrid method is performing:" }}{PARA 0 "> " 0 ""
{MPLTEXT 1 0 32 "t := evalf(mk_itmat_mg(32,1/2));" }}}{EXCHG {PARA 0 "
> " 0 "" {MPLTEXT 1 0 29 "vecsequence(Eigenvectors(t));" }}}{EXCHG
{PARA 0 "" 0 "" {TEXT -1 35 "A final note concerning efficiency:" }}
{PARA 0 "" 0 "" {TEXT -1 178 "We have seen that the error decreases ra
pidly with comparatively few multigrid cycles. But what about the amo
unt of work per cycle? Maple had to do a lot of work in our example!"
}}{PARA 0 "" 0 "" {TEXT -1 104 "This is only because we wanted to see \+
the iteration matrix (more precisely, its eigenvalues) explicitly." }}
{PARA 0 "" 0 "" {TEXT -1 338 "In a real implementation, we only have t
o apply simple operations: restriction, prolongation, and computation
of the residual, to name the most important. This needs only a small \+
number of operations per grid point. Then, a multigrid cycle is in ter
ms of computational costs comparable to a few Jacobi iterations, but f
ar more effective!" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{EXCHG
{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 0 "" }}}}}}{MARK "0 1 0" 0 }{VIEWOPTS 1 1 0 3 2 1804 1 1
1 1 }{PAGENUMBERS 0 1 2 33 1 1 }