{VERSION 5 0 "IBM INTEL NT" "5.0" }
{USTYLETAB {CSTYLE "Maple Input" -1 0 "Courier" 0 1 255 0 0 1 0 1 0 0
1 0 0 0 0 1 }{CSTYLE "2D Math" -1 2 "Times" 0 1 0 0 0 0 0 0 2 0 0 0 0
0 0 1 }{CSTYLE "2D Comment" 2 18 "" 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 }
{CSTYLE "" -1 256 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 }{CSTYLE "" -1
257 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 }{PSTYLE "Normal" -1 0 1
{CSTYLE "" -1 -1 "Times" 1 12 0 0 0 1 2 2 2 2 2 2 1 1 1 1 }1 1 0 0 0
0 1 0 1 0 2 2 0 1 }{PSTYLE "Heading 1" -1 3 1 {CSTYLE "" -1 -1 "Times
" 1 18 0 0 0 1 2 1 2 2 2 2 1 1 1 1 }1 1 0 0 8 4 1 0 1 0 2 2 0 1 }
{PSTYLE "Heading 2" -1 4 1 {CSTYLE "" -1 -1 "Times" 1 14 0 0 0 1 2 1
2 2 2 2 1 1 1 1 }1 1 0 0 8 2 1 0 1 0 2 2 0 1 }{PSTYLE "Bullet Item"
-1 15 1 {CSTYLE "" -1 -1 "Times" 1 12 0 0 0 1 2 2 2 2 2 2 1 1 1 1 }1
1 0 0 3 3 1 0 1 0 2 2 15 2 }{PSTYLE "Title" -1 18 1 {CSTYLE "" -1 -1 "
Times" 1 18 0 0 0 1 2 1 1 2 2 2 1 1 1 1 }3 1 0 0 12 12 1 0 1 0 2 2 19
1 }}
{SECT 0 {EXCHG {PARA 18 "" 0 "" {TEXT -1 59 "Iterative \"Solution\" of
Trivial Linear Systems of Equations" }}{PARA 0 "> " 0 "" {MPLTEXT 1
0 8 "restart;" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 20 "with(LinearAlgebra
):" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 12 "with(plots):" }}}{SECT 1
{PARA 3 "" 0 "" {TEXT -1 37 "Systems with diagonal system matrices" }}
{EXCHG {PARA 0 "" 0 "" {TEXT -1 267 "To get a better understanding of why we have to compute eigenvalues in order to determine the speed of convergence of (linear) iterative solvers for linear systems, we investigate the simplest case, where both system matrix and iteration matrix are diagonal matrices:" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 29 "A := D
iagonalMatrix([1,2,3]);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 13 "
b := <1,4,9>;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 17 "LinearSolv
e(A,b);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 94 "Of course, we do not n
eed a sophisticated method to 'solve' this system, as obviously we hav
e:" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 17 "MatrixInverse(A);" }}}{EXCHG
{PARA 0 "" 0 "" {TEXT -1 91 "But we will see that this system is still
useful in order to learn about iterative methods." }}{PARA 0 "> " 0 "
" {MPLTEXT 1 0 0 "" }}}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}
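{EXCHG {PARA 0 "" 0 "" {TEXT -1 123 "For a diagonal system, the exact solution can be read off componentwise (a small check, assuming A and b as defined above):" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 26 "<seq(b[i]/A[i,i],i=1..3)>;" }}}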
{SECT 1 {PARA 3 "" 0 "" {TEXT -1 20 "Richardson iteration" }}{EXCHG
{PARA 0 "" 0 "" {TEXT -1 186 "Most iterative solvers are too smart for our purpose - they solve diagonal systems in one single 'iteration' step (you can verify this easily for the Jacobi and the Gau\337-Seidel methods)." }}{PARA 0 "" 0 "" {TEXT -1 118 "To have a real iterative \+
solution process, we choose the simplest iteration scheme: the (damped
) Richardson iteration:" }}{PARA 0 "" 0 "" {XPPEDIT 18 0 "x^(i+1) = x^
i+alpha*r^i;" "6#/)%\"xG,&%\"iG\"\"\"F(F(,&)F%F'F(*&%&alphaGF()%\"rGF'
F(F(" }{TEXT -1 8 ", where " }{XPPEDIT 18 0 "r^i = b-A*x^i;" "6#/)%\"r
G%\"iG,&%\"bG\"\"\"*&%\"AGF))%\"xGF&F)!\"\"" }{TEXT -1 13 " denotes th
e " }{TEXT 256 8 "residual" }{TEXT -1 5 " for " }{XPPEDIT 18 0 "x^i;"
"6#)%\"xG%\"iG" }{TEXT -1 30 " and the damping coefficient " }
{XPPEDIT 18 0 "alpha;" "6#%&alphaG" }{TEXT -1 55 " is a fixed real num
ber (in the lecture, only the case " }{XPPEDIT 18 0 "alpha = 1;" "6#/%
&alphaG\"\"\"" }{TEXT -1 82 " was covered, but to ensure convergence, \+
we will need damping, i.e. some positive " }{XPPEDIT 18 0 "alpha < 1;
" "6#2%&alphaG\"\"\"" }{TEXT -1 2 ")." }}{PARA 0 "> " 0 "" {MPLTEXT 1
0 85 "richardson := proc(x)\n local r;\n global A,b,alpha;\n r := b
-A.x;\n x + alpha*r\nend;" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 30 "We \+
choose some damping factor " }{XPPEDIT 18 0 "alpha;" "6#%&alphaG" }
{TEXT -1 3 "..." }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 13 "alpha := 1/4;" }
}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 79 "... and an initial guess - as we
 don't know better, we choose all components 0:" }}{PARA 0 "> " 0 ""
{MPLTEXT 1 0 14 "x0 := <0,0,0>;" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1
37 "Now, we can apply one iteration step:" }}{PARA 0 "> " 0 ""
{MPLTEXT 1 0 15 "richardson(x0);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1
64 "And more iteration steps - to save typing, we do this in a loop:"
}}{PARA 0 "> " 0 "" {MPLTEXT 1 0 12 "Itmax := 10;" }}{PARA 0 "> " 0 "
" {MPLTEXT 1 0 13 "x := <0,0,0>;" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 12
"xseq := [x];" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 68 "for i to Itmax do \+
\n x := richardson(x);\n xseq := [op(xseq),x]\nend:" }}{PARA 0 "> "
0 "" {MPLTEXT 1 0 5 "xseq;" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 52 "The
sequence really approaches the solution <1,2,3>:" }}{PARA 0 "> " 0 "
" {MPLTEXT 1 0 12 "evalf(xseq);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT
1 0 25 "xsol := LinearSolve(A,b);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1
69 "To get a visual impression, we extract the first component from ea
ch " }{XPPEDIT 18 0 "x^i;" "6#)%\"xG%\"iG" }{TEXT -1 23 " and plot the new list:" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 35 "[seq([i,xseq[i+1][1]
],i=0..Itmax)];" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 47 "pointplot(%,styl
e=LINE,thickness=2,color=BLUE);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1
56 "We can do the same for the other two components of each " }
{XPPEDIT 18 0 "x^i;" "6#)%\"xG%\"iG" }{TEXT -1 1 ":" }}{PARA 0 "> " 0
"" {MPLTEXT 1 0 125 "display([seq(\n pointplot([seq([i,xseq[i+1][j]],
i=0..Itmax)],\n style=LINE,thickness=2,color=[RED,BLUE,GREEN][j])\n,j
=1..3)]);" }}}{SECT 1 {PARA 4 "" 0 "" {TEXT -1 49 "How the error chang
es during the solution process" }}{EXCHG {PARA 0 "" 0 "" {TEXT -1 21 "
Next, we compute the " }{TEXT 257 5 "error" }{TEXT -1 1 " " }{XPPEDIT
18 0 "e^i = A^(-1)*b-x^i;" "6#/)%\"eG%\"iG,&*&)%\"AG,$\"\"\"!\"\"F,%\"
bGF,F,)%\"xGF&F-" }{TEXT -1 13 " and plot it:" }}{PARA 0 "> " 0 ""
{MPLTEXT 1 0 36 "errseq := map(xit -> xsol-xit,xseq);" }}}{EXCHG
{PARA 0 "> " 0 "" {MPLTEXT 1 0 127 "display([seq(\n pointplot([seq([i
,errseq[i+1][j]],i=0..Itmax)],\n style=LINE,thickness=2,color=[RED,BL
UE,GREEN][j])\n,j=1..3)]);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 44 "The
 same curves in a semi-logarithmic scale:" }}{PARA 0 "> " 0 ""
{MPLTEXT 1 0 125 "display([seq(\n logplot([seq([i,errseq[i+1][j]],i=0
..Itmax)],\n style=LINE,thickness=2,color=[RED,BLUE,GREEN][j])\n,j=1.
.3)]);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 269 "From the last diagram, we see that in our example each component of the error is reduced by some factor in every iteration step (this is due to the special properties of our simple problem and not true for realistic problems - and there, the eigenvectors will come in)." }}{PARA 0 "" 0 "" {TEXT -1 251 "The factors are different for each component. In the next section, we will learn how to compute them in advance, but for now, we ask Maple to compute the quotients. The following procedure computes the quotients y[i+1]/y[i] for a list of points [x,y]:" }{MPLTEXT 1 0 0 "" }}
{PARA 0 "> " 0 "" {MPLTEXT 1 0 94 "quotients := proc(pl)\n local i;\n
[seq([pl[i][1],pl[i+1][2]/pl[i][2]],i=1..nops(pl)-1)];\nend;" }}
{PARA 0 "> " 0 "" {MPLTEXT 1 0 31 "quotients([[1,3],[2,2],[3,1]]);" }}
}{EXCHG {PARA 0 "" 0 "" {TEXT -1 40 "We do this for the error computed
above:" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 138 "display([seq(\n pointp
lot(quotients([seq([i,errseq[i+1][j]],i=0..Itmax)]),\n style=LINE,thi
ckness=2,color=[RED,BLUE,GREEN][j])\n,j=1..3)]);" }}}{EXCHG {PARA 0 "
" 0 "" {TEXT -1 101 "Remember that we want to have factors as close to zero as possible: this results in fast convergence!" }}{PARA 0 "> \+
" 0 "" {MPLTEXT 1 0 0 "" }}}}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0
"" }}}{SECT 1 {PARA 3 "" 0 "" {TEXT -1 20 "Convergence analysis" }}
{EXCHG {PARA 0 "" 0 "" {TEXT -1 11 "As we have " }{XPPEDIT 18 0 "x^(i+
1) = x^i+alpha*(b-A*x^i);" "6#/)%\"xG,&%\"iG\"\"\"F(F(,&)F%F'F(*&%&alp
haGF(,&%\"bGF(*&%\"AGF()F%F'F(!\"\"F(F(" }{TEXT -1 47 ", we get the fo
llowing equation for the error: " }{XPPEDIT 18 0 "x^(i+1)-x = x^i-x+al
pha*A*(x-x^i);" "6#/,&)%\"xG,&%\"iG\"\"\"F)F)F)F&!\"\",()F&F(F)F&F**(%
&alphaGF)%\"AGF),&F&F))F&F(F*F)F)" }{TEXT -1 11 ", which is " }
{XPPEDIT 18 0 "e^(i+1) = e^i-alpha*Ae^i;" "6#/)%\"eG,&%\"iG\"\"\"F(F(,
&)F%F'F(*&%&alphaGF()%#AeGF'F(!\"\"" }{TEXT -1 4 " or " }{XPPEDIT 18
0 "e^(i+1) = (I-alpha*A)*e^i;" "6#/)%\"eG,&%\"iG\"\"\"F(F(*&,&%\"IGF(*
&%&alphaGF(%\"AGF(!\"\"F()F%F'F(" }{TEXT -1 8 ", where " }{XPPEDIT 18
0 "I;" "6#%\"IG" }{TEXT -1 29 " denotes the identity matrix." }}{PARA
0 "" 0 "" {TEXT -1 11 "The matrix " }{XPPEDIT 18 0 "I-alpha*A;" "6#,&%
\"IG\"\"\"*&%&alphaGF%%\"AGF%!\"\"" }{TEXT -1 158 " is the iteration matrix of the damped Richardson iteration and determines the speed of convergence (the general formula for the iteration matrix of a scheme " }
{XPPEDIT 18 0 "x^(i+1) = x^i+M^(-1)*r^i;" "6#/)%\"xG,&%\"iG\"\"\"F(F(,
&)F%F'F(*&)%\"MG,$F(!\"\"F()%\"rGF'F(F(" }{TEXT -1 5 " was " }
{XPPEDIT 18 0 "-M^(-1)*(A-M);" "6#,$*&)%\"MG,$\"\"\"!\"\"F(,&%\"AGF(F&
F)F(F)" }{TEXT -1 44 ", and for the Richardson iteration, we have " }
{XPPEDIT 18 0 "M^(-1) = alpha*I;" "6#/)%\"MG,$\"\"\"!\"\"*&%&alphaGF'%
\"IGF'" }{TEXT -1 1 ")" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 26 "IdentityM
atrix(3)-alpha*A;" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 134 "As we get a diagonal matrix, we can directly see the reduction factors 0.75, 0.5, and 0.25 that we have observed in the diagram above!" }}{PARA 0 "> "
 0 "" {MPLTEXT 1 0 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 144 "We see again that the third component of the error shows fastest convergence (0.25 in each step - we have noticed this already in the diagrams)."
}}{PARA 0 "" 0 "" {TEXT -1 20 "For other values of " }{XPPEDIT 18 0 "a
lpha;" "6#%&alphaG" }{TEXT -1 24 ", the situation changes:" }}{PARA 0
"> " 0 "" {MPLTEXT 1 0 17 "alpha := 'alpha';" }}}{EXCHG {PARA 0 "> "
0 "" {MPLTEXT 1 0 42 "Itmat := evalm(IdentityMatrix(3)-alpha*A);" }}}
{EXCHG {PARA 0 "" 0 "" {TEXT -1 28 "Look at the first component:" }}
{PARA 0 "> " 0 "" {MPLTEXT 1 0 17 "f1 := Itmat[1,1];" }}}{EXCHG {PARA
 0 "" 0 "" {TEXT -1 28 "For this component, we have:" }}{PARA 15 "" 0 "
" {TEXT -1 16 "convergence, if " }{XPPEDIT 18 0 "Abs(-alpha+1) < 1;" "
6#2-%$AbsG6#,&%&alphaG!\"\"\"\"\"F*F*" }{TEXT -1 11 ", which is " }
{XPPEDIT 18 0 "0 < alpha;" "6#2\"\"!%&alphaG" }{TEXT -1 5 " and " }
{XPPEDIT 18 0 "alpha < 2;" "6#2%&alphaG\"\"#" }{TEXT -1 1 "," }}{PARA
15 "" 0 "" {TEXT -1 80 "fastest possible convergence (solution reached
in the first iteration step) for " }{XPPEDIT 18 0 "alpha = 1;" "6#/%&
alphaG\"\"\"" }{TEXT -1 1 "," }}{PARA 15 "" 0 "" {TEXT -1 23 "no progr
ess at all for " }{XPPEDIT 18 0 "alpha = 0;" "6#/%&alphaG\"\"!" }
{TEXT -1 26 " (not surprising) and for " }{XPPEDIT 18 0 "alpha = 2;" "
6#/%&alphaG\"\"#" }{TEXT -1 1 "," }}{PARA 15 "" 0 "" {TEXT -1 15 "dive
rgence for " }{XPPEDIT 18 0 "alpha < 0;" "6#2%&alphaG\"\"!" }{TEXT -1
9 " and for " }{XPPEDIT 18 0 "2 < alpha;" "6#2\"\"#%&alphaG" }{TEXT
-1 1 "." }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{EXCHG {PARA 0 "" 0
"" {TEXT -1 83 "For a complete analysis, we plot the diagonal elements
for varying values of alpha:" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 36 "ds
:= [seq(Itmat[i,i],i=1..3),1,-1];" }}}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 67 "plot(ds,alpha=0..2,color=[RED,BLUE,GREEN,BLACK,BLACK]
,thickness=2);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 122 "For convergenc
e, all components have to converge; the speed of the overall process i
s determined by the slowest component." }}{PARA 0 "" 0 "" {TEXT -1 51
"Therefore, in our example, we have convergence for " }{XPPEDIT 18 0 "
0 < alpha;" "6#2\"\"!%&alphaG" }{TEXT -1 5 " and " }{XPPEDIT 18 0 "alp
ha < 2/3;" "6#2%&alphaG*&\"\"#\"\"\"\"\"$!\"\"" }{TEXT -1 45 " (why?) \+
and fastest possible convergence, if " }{XPPEDIT 18 0 "-alpha+1 = -(-3
*alpha+1);" "6#/,&%&alphaG!\"\"\"\"\"F',$,&*&\"\"$F'F%F'F&F'F'F&" }
{TEXT -1 1 ":" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 32 "solve(-alpha+1 = -
(-3*alpha+1));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 30 "Try this in the
example above!" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{EXCHG
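{PARA 0 "" 0 "" {TEXT -1 166 "As a cross-check, the eigenvalues of the iteration matrix coincide with the diagonal reduction factors (a sketch, assuming A is still the diagonal matrix from above):" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 39 "Eigenvalues(IdentityMatrix(3)-alpha*A);" }}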
{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{SECT 1 {PARA 3 "" 0 "" {TEXT
-1 27 "And for realistic problems?" }}{EXCHG {PARA 0 "" 0 "" {TEXT -1
110 "For realistic problems, the iteration matrix will not be a diagon
al matrix, so things become more complicated." }}{PARA 0 "" 0 ""
{TEXT -1 142 "But it still operates on the eigenvectors, as if it were
a diagonal matrix: each of them is scaled according to the correspond
ing eigenvalue." }}{PARA 0 "" 0 "" {TEXT -1 207 "Therefore, the same analysis still holds if we replace 'component of the vector' by 'coefficient of the vector in a decomposition into eigenvectors' and the entries in the diagonal matrix by the eigenvalues." }}{PARA 0 "> " 0 "
" {MPLTEXT 1 0 0 "" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}
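{EXCHG {PARA 0 "" 0 "" {TEXT -1 138 "A small sketch with a hypothetical non-diagonal matrix B: its eigenvalues now play the role of the diagonal entries in the analysis above:" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 66 "B := Matrix([[2,-1,0],[-1,2,-1],[0,-1,2]]); # hypothetical example" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 15 "Eigenvalues(B);" }}}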
}{MARK "0 1 0" 8 }{VIEWOPTS 1 1 0 1 1 1803 1 1 1 1 }{PAGENUMBERS 0 1
2 33 1 1 }