Epuri Yoga Narasimha
updated on 18 Mar 2021
Objective :::
1. To be familiar with the stability of linear iterative solvers.
2. To understand the importance of iterative solvers over direct methods.
3. To analyse the computational expense of the methods.
4. To outline the algorithms for the methods.
5. To study the convergence criteria of the iterative methods (spectral radius).
6. To study the influence of diagonal dominance on the convergence of the iterative schemes.
Discretizing the governing equation at all grid points in the considered domain, we get a system of equations in implicit form.
Let's consider a system of n linear equations in n unknowns.
In matrix form, A·X = b, where
A ∈ ℝ^(n×n)
X ∈ ℝ^(n×1)
b ∈ ℝ^(n×1)
To solve this type of system, we have three families of methods (a baseline sketch follows this list):
1. Direct methods :::
1. Cramer's rule.
2. Naive Gauss elimination.
3. Gauss-Jordan method.
4. LU decomposition.
5. Cholesky method (a special type of LU decomposition, only for symmetric positive-definite matrices).
2. Iterative methods :::
1. Gauss-Jacobi iteration method.
2. Point Gauss-Seidel method.
3. Successive over-relaxation (SOR-GS and SOR-Jacobi).
4. Multigrid method.
5. Line Gauss-Seidel method.
6. Conjugate gradient method.
3. Graphical method.
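Before implementing any method by hand, it helps to have a reference answer. The sketch below (an illustrative assumption with made-up values, not the project matrices) solves a small 3x3 system with MATLAB's built-in backslash operator and with an LU decomposition; both are direct methods.
% Baseline sketch: solving A*x = b with built-in direct solvers (illustrative values)
A = [4 -1 0; -1 4 -1; 0 -1 4];   % example coefficient matrix
b = [2; 4; 10];                  % example constant vector
x_backslash = A\b;               % backslash uses Gaussian elimination internally
[L,U,P] = lu(A);                 % LU decomposition with partial pivoting, P*A = L*U
x_lu = U\(L\(P*b));              % forward substitution, then backward substitution
disp([x_backslash x_lu])         % the two columns should agree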
Basic types of matrices to know :::
1. Symmetric matrix.
2. Transpose matrix.
3. Upper triangular matrix.
4. Lower triangular matrix.
5. Banded matrix.
6. Augmented matrix.
7. Sparse matrix.
8. Strictly upper and strictly lower triangular matrix.
Basic fundamentals (a short sketch follows this list) :::
1. Inverse of a matrix.
2. Determinant of a matrix.
3. Matrix Multiplication.
4. Transpose of a matrix.
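A minimal sketch of these fundamentals in MATLAB, using an arbitrary example matrix (the values are an assumption for illustration only):
M = [2 1; 1 3];      % example matrix
M_inv = inv(M);      % inverse of a matrix
d = det(M);          % determinant of a matrix
P = M*M_inv;         % matrix multiplication (should return the identity matrix)
M_T = M';            % transpose of a matrix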
Important topics to know :::
Banded matrix :
A banded matrix is a sparse matrix whose non-zero elements are confined to the main diagonal and a few non-zero sub- and super-diagonals on either side of it; every element outside this band is zero.
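For illustration, the sketch below (an assumption, not part of the original project) builds a tridiagonal matrix, the simplest banded matrix, using MATLAB's diag command:
n = 5;                                         % size of the matrix
main = 2*ones(n,1);                            % main diagonal entries
off = -1*ones(n-1,1);                          % sub- and super-diagonal entries
T = diag(main) + diag(off,1) + diag(off,-1);   % tridiagonal (banded) matrix
disp(T)                                        % zeros everywhere outside the band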
An important consideration in numerical computation is the computational expense, i.e. the number of multiplications/divisions and additions/subtractions an algorithm performs.
Different methods require a different number of operations.
The fewer the operations, the less round-off error is propagated and the more accurate the computed solution.
So, while selecting an algorithm for a particular problem, we need to study and compare the computational cost of all the candidate methods.
Let's compare the operation counts of 3 methods for the determinant :::
1. Leibniz's rule (permutation expansion) -> (n-1)·n!
2. Gauss elimination (reduction to an upper triangular matrix) -> (n^3 + 2n - 3)/3
3. Laplace (co-factor) expansion -> n·(2^(n-1) - 1)
As can be seen, the number of operations needed to calculate the determinant differs greatly between methods.
Depending on the size of the matrix, the operation count, and hence the preferred method, changes; the sketch below evaluates these counts for a few matrix sizes.
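The following sketch (illustrative only) evaluates the three formulas above, showing how quickly the factorial and exponential counts outgrow the cubic one:
n = (2:10)';                           % matrix sizes to compare
ops_leibniz = (n-1).*factorial(n);     % Leibniz (permutation) expansion
ops_gauss = (n.^3 + 2*n - 3)/3;        % Gauss elimination (upper triangular reduction)
ops_laplace = n.*(2.^(n-1) - 1);       % Laplace (co-factor) expansion
disp(table(n, ops_leibniz, ops_gauss, ops_laplace))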
Methods for systems of linear equations :::
1. Gauss elimination ::
It is a good representative example for understanding how a mathematical solution method is implemented on a computer.
Two steps in the Gauss elimination method ::
1. Forward elimination:
A·x = b  →  U·x = b′ , where U is an upper triangular matrix.
2. Backward substitution.
%% Main code :::
% Clearing the interferace data
close all ; clear all ; clc
% Defining the system of linear equations
A = [0.143 0.357 2.01 ; -1.31 0.911 1.99 ; 11.2 -4.30 -0.605];
B = [-5.173 -5.458 4.415]';
disp('coefficient matrix ::')
disp(A)
disp('constant vector ::')
disp(B)
tic
% Reference solution using the matrix inverse (for cross-checking)
c = inv(A)*B;
b = toc;
% Time taken by the inverse method (s)
disp(b)
methods = {'gauss elimination','gauss jordan','LU Decomposition','TDMA'}; % available methods (for reference)
fprintf('gauss elimination : 1 \n gauss jordan : 2 \n LU Decomposition : 3 \n')
i = input('enter the method to solve the system of linear equations ::');
if i == 1
% Gauss elimination
[x1,A1,B1,a] = gauss_elimination(A,B);
disp('co-efficient matrix after gauss elimination ::')
disp(A1)
disp('constant matrix ::')
disp(B1)
disp('solution using gauss elimination ::')
disp(x1)
disp(a)
elseif i==2
% gauss jordan
% Augmented matrix
Ag = [0.143 0.357 2.01 -5.173 ; -1.31 0.911 1.99 -5.458 ; 11.2 -4.30 -0.605 4.415];
tic
[GJ_A,n,a] = gauss_jordan(Ag);
x2 = GJ_A(1:n(1),n(2));
disp('Augmented matrix after gauss jordan ::')
disp(GJ_A)
disp('solution using gauss jordan ::')
disp(x2)
disp(a)
elseif i==3
% LU Decomposition
%continue
end
%% Gauss Elimination
function [x1,A,B,a] = gauss_elimination(A,B)
n = length(A);
% Exact solution
x = inv(A)*B;
disp('Exact solution for the system of linear equations')
disp(x)
n = length(A);
tic
% Forward Elimination
for i = 1:n-1
for j = i+1:n
B(j,1) = B(j,1) - (A(j,i)/A(i,i)).*B(i,1);
A(j,i:n) = A(j,i:n) - (A(j,i)/A(i,i)).*A(i,i:n);
end
end
% Backward substitution
x1 = ones(n,1);
x1(n,1) = B(n,1)/A(n,n);
for i = n-1:-1:1
x1(i,1) = (B(i,1) - A(i,i+1:n)*x1(i+1:n,1))/A(i,i);
end
a= toc;
end
%% Gauss Jordan
function [A,n,a] = gauss_jordan(A)
n = size(A);
tic
for j = 1:n(2)-1
for i = 1:n(1)
if i==1
% Normalize the pivot row j by its pivot element
A(j,i:n(2)) = A(j,i:n(2))/A(j,j);
end
if i ~= j
% Eliminate the entries of pivot column j from every other row
A(i,j:n(2)) = A(i,j:n(2)) - A(i,j)*A(j,j:n(2));
end
end
end
end
a = toc;
end
What are iterative methods? :::
An iterative method starts from an initial guess and successively improves the approximate solution over iterations until it is close enough to the exact solution.
Benefits of iterative methods over direct methods :::
In general, practical systems do not have just 10 equations; they can have on the order of 10^7 unknowns, which makes it very expensive to form and factorize the full matrix. Because of the very large number of operations,
round-off error propagation in direct methods would also be high.
So, it is better to use iterative methods in these cases.
Block diagram of iterative methods :::
These methods can be viewed as fixed-point iteration methods, since the iterates approach a fixed value over the iterations.
Fixed point iteration method :::
System of linear equations in matrix form -> A·X = B
Fixed-point iteration form -> X^(k+1) = T·X^(k) + C , where C is a constant vector obtained from B and the splitting of A.
T is called the iteration matrix.
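A minimal sketch of this fixed-point form (the Jacobi-type splitting chosen here is only an assumption for illustration; the specific Jacobi, Gauss-Seidel and SOR splittings are given below):
A = [5 1 2; -3 9 4; 1 2 -7];   % coefficient matrix used in this project
B = [10; -14; 33];             % constant vector
D = diag(diag(A));             % diagonal part of A (Jacobi-type splitting)
T = -(D\(A - D));              % iteration matrix T
C = D\B;                       % constant vector C of the fixed-point form
x = zeros(3,1);                % initial guess
for k = 1:50                   % fixed number of sweeps, enough for this small system
x = T*x + C;                   % fixed-point update X = T*X + C
end
disp(x)                        % approaches the exact solution A\B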
Jacobi iterative method :::
Algorithm :::
X^(k+1) = -D^(-1)·(L+U)·X^(k) + D^(-1)·B
Iteration matrix -> T_Jacobi = -D^(-1)·(L+U)
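For reference, the same Jacobi update written element by element (a sketch assuming a square A with non-zero diagonal entries; the matrix-form implementation actually used in this project appears further below):
% One Jacobi sweep: x_new(i) = ( B(i) - sum over j~=i of A(i,j)*x_old(j) ) / A(i,i)
function x_new = jacobi_sweep(A, B, x_old)
n = length(B);
x_new = zeros(n,1);
for i = 1:n
s = A(i,:)*x_old - A(i,i)*x_old(i);   % sum over all j ~= i
x_new(i) = (B(i) - s)/A(i,i);
end
end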
Gauss-Seidel iterative method :::
The Gauss-Seidel method converges faster than the Jacobi method, because it uses the values already updated within the current iteration.
X^(k+1) = -(D+L)^(-1)·U·X^(k) + (D+L)^(-1)·B
Successive over/under relaxation (SOR) :::
An enhancement of the Jacobi and Gauss-Seidel methods.
The update obtained from Gauss-Seidel (or Jacobi) is scaled by a factor called the relaxation factor.
Convergence is much faster than Jacobi and Gauss-Seidel, i.e. the number of iterations needed for convergence decreases.
Derivation of the SOR representation in matrix form :::
X^(k+1) = X^(k) + 1·(X^(k+1)_GS - X^(k))   (plain Gauss-Seidel corresponds to a factor of 1)
X^(k+1) = X^(k) + ω·(X^(k+1)_GS - X^(k))   (SOR replaces the factor 1 with ω)
ω -> relaxation (scaling) factor.
ω > 1 -> over-relaxation.
ω < 1 -> under-relaxation.
Iteration matrix T_SOR = (L + (1/ω)·D)^(-1) · ((1/ω)·D - D - U)
Iterative methods converge only under certain circumstances -> convergence criteria.
Optimum relaxation factor for SOR (standard result for consistently ordered matrices) ::
ω_opt = 2 / (1 + √(1 - ρ(T_Jacobi)²))   (equivalently 2 / (1 + √(1 - ρ(T_GS))), since ρ(T_GS) = ρ(T_Jacobi)² for such matrices)
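A short sketch of how this optimum could be estimated from the Jacobi spectral radius (it relies on the standard formula above, not on the project code, which instead picks omega by trial and error):
A = [5 1 2; -3 9 4; 1 2 -7];          % coefficient matrix used in this project
D = diag(diag(A));
T_jacobi = -(D\(A - D));              % Jacobi iteration matrix
rho_j = max(abs(eig(T_jacobi)));      % spectral radius of the Jacobi iteration matrix
w_opt = 2/(1 + sqrt(1 - rho_j^2));    % optimum relaxation factor
fprintf('estimated optimum omega = %f \n', w_opt)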
Convergence criteria for iterative methods (a check sketch follows this list) :::
1. Diagonal dominance of the coefficient matrix (at least weak diagonal dominance).
2. Norm of the iteration matrix, ‖T‖ < 1.
3. Spectral radius of the iteration matrix, ρ(T) < 1.
4. ρ(T) ≤ ‖T‖ -> the spectral radius is bounded by any induced norm of T, so ‖T‖ < 1 implies ρ(T) < 1.
Conditions 1 and 2 are sufficient but not necessary for convergence; ρ(T) < 1 is both necessary and sufficient.
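A small sketch (an assumption for illustration) that checks these criteria for the coefficient matrix used in this project:
A = [5 1 2; -3 9 4; 1 2 -7];                              % coefficient matrix
% 1. Diagonal dominance: |a_ii| >= sum of |a_ij| over j ~= i, in every row
dd = all(abs(diag(A)) >= sum(abs(A),2) - abs(diag(A)));
D = diag(diag(A));
T = -(D\(A - D));                                          % Jacobi iteration matrix
normT = norm(T, inf);                                      % 2. norm of the iteration matrix
rhoT = max(abs(eig(T)));                                   % 3. spectral radius (always <= normT)
fprintf('diagonally dominant = %d , norm(T) = %f , rho(T) = %f \n', dd, normT, rhoT)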
Spectral radius :::
The spectral radius is the maximum absolute eigenvalue of a matrix.
ρ(T) = max_i |λ_i(T)|
Norm :::
A norm represents the length (size) of a vector or matrix; with the help of a norm we can compare matrices.
Main code snippet :::
Description ::
1. Calculated the reference solution of the system of linear equations using the backslash operator (A\B').
2. Decomposed the coefficient matrix into a strictly lower triangular matrix, a strictly upper triangular matrix and a diagonal matrix.
3. Defined a function for each iterative method to obtain the required results.
4. Plotted the data obtained from running the code.
5. Studied the influence of diagonal dominance on the spectral radius.
clc
clear all
close all
% solving system of equations using iterative solvers.
% equations
% Coefficient Matrix, A=[5 1 2 ; -3 9 4; 1 2 -7]
% Solution matrix, X=[x1;x2;x3]
% Constant Matrix, B=[10 -14 33]
A = [5 1 2 ; -3 9 4; 1 2 -7];
B = [10 -14 33];
% Exact solution using inverse of a matrix for cross check
% x = inv(A)*B.
% inv(A) = inverse of a matrix A (transpose(Adj(A))).
% Transpose used on B , for matrix multiplication
% inv(A)*B' as A\B' for speed execution
x_e = A\B';
% Displaying the exact solution on command prompt
disp('Solution by the inverse method ::')
disp(x_e)
% Decomposing the coefficient matrix into
% Strictly lower triangular matrix (L).
% Strictly upper triangular matrix (U).
% Diagonal matrix (D).
% A = L + D + U.
% Defining U and L , as zeros(n(1),n(2))
% n = size(A).
% n(1) = No. of rows.
% n(2) = No. of columns.
n = size(A);
L = zeros(n(1),n(2));
U = L;
D = U;
% Extracting values from the coefficient matrix of any size.
% Using for loop.
% No inbuilt commands.
for i = 1:n(1)
for j = 1:n(2)
if i == j
D(i,j) = A(i,j);
elseif i<j
U(i,j) = A(i,j);
elseif i>j
L(i,j) = A(i,j);
end
end
end
% Run each solver twice: with the original diagonal, and with a weakened (0.1x) diagonal to break diagonal dominance
D1 = {D , 0.1*D};
for i =1:2
% Jacobi iteration method.
[j_iter1,x_jaco,sp_jaco] = Jacobi(L,D1{i},U,B');
% displaying the required output values
fprintf('Number of iterations to converge to the solution = %d \n',j_iter1)
% Solution by the jacobi iteration method
disp('Solution by the jacobi iteration method ::')
disp(x_jaco)
% Spectral radius
fprintf( 'Spectral radius = %f \n',sp_jaco)
% gauss sidel iteration method.
[gs_iter1,x_gs,sp_iter] = Gauss_sidel(L,D1{i},U,B');
% displaying the required output values
fprintf('Number of iterations to converge to the solution = %d \n',gs_iter1)
% Solution by the gauss sidel iteration method
disp('Solution by the gauss sidel iteration method ::')
disp(x_gs)
% Spectral radius
fprintf( 'Spectral radius = %f \n',sp_iter)
% Different formulas exist for the optimum value of the relaxation factor,
% but no firm conclusion was reached on them here;
% up to now, trial and error has worked better.
%omeg = 2 - 2/1+(sqrt(1+sp_iter^2));
if i == 1
omeg = 0.8;
else
omeg = 1.1;
end
% A necessary condition for SOR convergence is 0 < w < 2
% (for symmetric positive-definite matrices this is also sufficient)
% SOR_gauss sidel iteration method.
[SOR_gs_iter,x_SOR_gs,sp_SOR] = SOR_gs(L,D1{i},U,B',omeg);
% displaying the required output values
fprintf('Number of iterations to converge to the solution = %d \n',SOR_gs_iter)
% Solution by the SOR gauss sidel iteration method
disp('Solution by the successive over relaxation gauss sidel iteration method ::')
disp(x_SOR_gs)
% Spectral radius
fprintf( 'Spectral radius = %f \n ',sp_SOR)
end
% For diagonal magnification influence
% Plots
% 1. spectral radius vs Iterations.
% 2. Diagonal magnification vs spectral radius.
% 3. Diagonalmagnification vs Iterations.
%{
D_m = linspace(0.2,1.4,5);
for j = 1:3
for i = 1:length(D_m)
D1 = D_m(i)*D;
if j == 1
[iter(i,1),~,sp(i,1)] = Gauss_sidel(L,D1,U,B');
tit = sprintf('Gauss-Seidel iterative method');
elseif j==2
[iter(i,1),~,sp(i,1)] = Jacobi(L,D1,U,B');
tit = sprintf('Jacobi iterative method');
omega(i,1) = 2 - 2/1+(sqrt(1+sp(i,1)^2)); % yields under-relaxation values
else
[iter(i,1),~,sp(i,1)] = SOR_gs(L,D1,U,B',omega(i,1));
tit = sprintf('SOR_gs iterative method');
end
end
end
f = figure('WindowState','maximized');
figure(j)
subplot(3,1,1)
loglog(sp,iter,'color','r','marker','o','MarkerFaceColor','b')
grid on
xlabel('spectral radius')
ylabel('Number of iterations')
title('spectral radius vs Iterations ',{tit})
subplot(3,1,2)
plot(D_m,sp,'color','m','marker','o','MarkerFaceColor','r')
grid on
xlabel('Diagonal magnification')
ylabel('Spectral radius')
title('Diagonal magnification vs Spectral radius',{tit})
subplot(3,1,3)
loglog(D_m,iter,'color','k','marker','o','MarkerFaceColor','b')
grid on
xlabel('Diagonal magnification')
ylabel('Number of iterations')
title('Diagonal magnification vs Number of iterations',{tit})
end
%}
Description of the iteration methods ::
1. Calculated the eigenvalues of the iteration matrix using the det, solve, double and abs commands.
2. Calculated the spectral radius using the max command (maximum of the absolute eigenvalues).
3. Obtained the number of iterations, the spectral radius and the solution.
Jacobi Iteration snippet :::
function [iter,x,sp_radius] = Jacobi(L,D,U,B)
format short
% Iteration matrix of Jacobi (the sign does not affect the spectral radius)
% T = -inv(D)*(L+U)
T = (D\(L+U));
% Tolerance value (1e-4) considered for better accuracy.
tol = 1e-4 ;
% Defining error>tolerance
err = 9e9;
% Considering initial guess as [0 0 0]
xo = [0 0 0]';
% Eigenvalue of iteration matrix
[eigVal]=spec_calculation(T);
% Spectral radius of iteration matrix
% maximum of eigen values
sp_radius = max(((eigVal)));
% Iteration number
n_iter = 0;
while err > tol
x_k = D\B - (D\(L+U))*xo;
err = max(abs(x_k - xo ));
xo = x_k;
n_iter = n_iter+1;
end
% Number of iterations for convergence
iter = n_iter;
% Solution by jacobi iteration method
x = xo;
end
Gauss-Seidel iteration snippet :::
function [iter,x,sp_radius] = Gauss_sidel(L,D,U,B)
format short
% Iteration matrix of Gauss-Seidel
T = -((D+L)\U);
% Tolerance value (1e-4) considered for better accuracy.
tol = 1e-4 ;
% Defining error>tolerance
err = 9e9;
% Considering initial guess as [0 0 0]
xo = [0 0 0]';
% Eigen value of iteration matrix
[eigVal]=spec_calculation(T);
% Spectral radius of iteration matrix
% maximum of eigen values
sp_radius = max(eigVal);
n = 0;
while err > tol
x_k = T*xo + (D+L)\B ;
err = max(abs(x_k - xo ));
xo = x_k;
n = n+1;
end
% Number of iterations for convergence
iter = n;
% Solution by the Gauss-Seidel iteration method
x = xo;
end
SOR_gs Iteration snippet :::
function [iter,x,sp_radius] = SOR_gs(L,D,U,B,w)
format short
% Iteration matrix of SOR
%T = inv((D/w) + L)*(D -U -(1/w)*D);
T = inv(D+w*L)*(D*(1-w)-w*U);
% Tolerance value (1e-4) considered for better accuracy.
tol = 1e-4 ;
% Defining error>tolerance
err = 9e9;
% Considering initial guess as [0 0 0]
xo = [0 0 0]';
% Eigen value of iteration matrix
[eigVal]=spec_calculation(T);
% Spectral radius of iteration matrix
% maximum of eigen values
sp_radius = max(eigVal);
n = 0;
while err > tol
%x_k = T*xo + inv(((D/w) + L))*B;
x_k = T*xo + inv(D+w*L)*w*B;
err = max(abs(x_k - xo ));
xo = x_k;
n = n+1;
if err == inf
% If the iteration has diverged to Inf, force the error below tolerance to exit the loop
err = 1e-5;
end
end
% Number of iterations for convergence
iter = n;
% Solution by the SOR iteration method
x = xo;
end
Eigenvalue function :::
function [eigVal]=spec_calculation(T)
% Eigen value of iteration matrix
s = size(T);
if s(1)~=s(2)
error('Error: Input must be square matrix.')
end
n = size(T); %size of iteration matrix
I = eye(n(1),n(2)); % identity matrix of the same size as T
% Defining symbolic x
syms x
% calculating characteristic equation
v = T-(I*x);
eq1 = det(v);
% Finding roots of the characteristic equation using
% solve command solve(eq,x)
% find roots of x , eq1
eigVal = abs(double(solve(eq1,x)));
end
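As a side note (not part of the original snippet), MATLAB's built-in eig command computes the eigenvalues numerically, so the spectral radius can also be obtained without the Symbolic Math Toolbox; for the small iteration matrices used here both approaches should agree:
function sp_radius = spec_radius_eig(T)
% Spectral radius as the largest absolute eigenvalue, using the built-in eig
sp_radius = max(abs(eig(T)));
end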
Results :::
Solution by the inverse method ::
3.2985
1.2687
-3.8806
Number of iterations to converge to the solution = 18
Solution by the jacobi iteration method ::
3.2985
1.2686
-3.8806
Spectral radius = 0.510208
Number of iterations to converge to the solution = 12
Solution by the gauss sidel iteration method ::
3.2985
1.2687
-3.8806
Spectral radius = 0.327645
Number of iterations to converge to the solution = 8
Solution by the successive over relaxation gauss sidel iteration method ::
3.2985
1.2686
-3.8806
Spectral radius = 0.214370
Number of iterations to converge to the solution = 436
Solution by the jacobi iteration method ::
NaN
NaN
NaN
Spectral radius = 5.102080
Number of iterations to converge to the solution = 173
Solution by the gauss sidel iteration method ::
NaN
NaN
NaN
Spectral radius = 63.374973
Number of iterations to converge to the solution = 162
Solution by the successive over relaxation gauss sidel iteration method ::
-Inf
-Inf
-Inf
Spectral radius = 81.520530
NaN :: (Not a Number)
The following arithmetic operations produce a NaN: zero/zero, zero*infinity, infinity/infinity, infinity-infinity.
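A quick check of these cases at the MATLAB prompt (illustrative only):
disp(0/0)          % NaN
disp(0*Inf)        % NaN
disp(Inf/Inf)      % NaN
disp(Inf - Inf)    % NaN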
Conclusions from the above results ::
1. Convergence of SOR is faster than that of Jacobi and Gauss-Seidel.
2. If the spectral radius is > 1, the iteration diverges and no solution is reached.
Plots :::
1. Jacobi ::
Note ::
Reason for the decrease in iterations once the spectral radius exceeds 1 ::
As the spectral radius increases, the number of iterations needed to converge also increases. The catch here is the while loop's error criterion:
MATLAB exits the while loop once the error becomes infinite. This 'Inf' value is reached much faster for matrices with a higher spectral radius,
owing to their stronger divergence (lower diagonal dominance), which produces the lower iteration counts seen here.
2. Gauss-Seidel ::
3. SOR_gs ::
Conclusions :::
1. The convergence criteria of the linear solvers were studied in detail.
2. The influence of diagonal magnification on the spectral radius and the convergence rate was examined.
3. Observed order of convergence rates: SOR > Gauss-Seidel > Jacobi.
4. Instead of calling it the successive over-relaxation technique, it is better to call it the successive over/under-relaxation technique -> (w > 1) -> over-relaxation,
--> (w < 1) --> under-relaxation.
5. For the optimum value of omega, trial and error has so far been observed to be a very good approach.
6. The importance of selecting an algorithm based on its computational complexity was highlighted.
7. The importance of iterative solvers over direct methods was demonstrated.