GAURAV KHARWADE
updated on 21 Oct 2019
Objective: To understand the linear system, i.e., to determine the eigenvalues and spectral radius of the given system of equations, and to solve it using the available iterative techniques.
Given:
The system of linear equations is given as:
5x1 + x2 + 2x3 = 10
-3x1 + 9x2 + 4x3 = -14
x1 + 2x2 - 7x3 = 33
We can write it in coefficient form as A·x = B, where
A = [5 1 2; -3 9 4; 1 2 -7], x = [x1; x2; x3], B = [10; -14; 33]
The solution matrix can then be written as
[x] = [A]^(-1)·[B]
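Before turning to the iterative methods, the exact solution of this small system can be computed directly; a minimal Octave sketch (the backslash operator is preferred over forming the inverse explicitly):
% Direct (non-iterative) reference solution of the given system
A = [5 1 2; -3 9 4; 1 2 -7];   % coefficient matrix
B = [10; -14; 33];             % right-hand side
x_exact = A \ B                % equivalent to inv(A)*B, but numerically more stable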
We are going to work out the following details in order to understand the solution of the linear system using the different iterative techniques.
The iteration matrices for these methods are derived below.
1. Jacobi Method:
The Jacobi method is easily derived by examining each of the n equations of the linear system A·x = B in isolation. If, in the i-th equation, we solve for x_i while assuming the other entries of x remain fixed, we obtain the iteration
x_i^(k+1) = (B_i - Σ_{j≠i} a_ij·x_j^(k)) / a_ii,
which is the Jacobi method.
In this method, the order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. With A split as A = D - L - U, the definition of the Jacobi method can be expressed with matrices as
x^(k+1) = D^(-1)·(L + U)·x^(k) + D^(-1)·B
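To make the component-wise update concrete, here is a minimal sketch of one Jacobi sweep for the given system (variable names are illustrative):
% One Jacobi sweep: every component is updated from the OLD iterate only
A = [5 1 2; -3 9 4; 1 2 -7];  B = [10; -14; 33];
x_old = zeros(3,1);                         % initial guess
x_new = zeros(3,1);
for i = 1:3
  sigma = A(i,:)*x_old - A(i,i)*x_old(i);   % sum of a_ij*x_j over j ~= i
  x_new(i) = (B(i) - sigma) / A(i,i);
end
x_new                                       % first Jacobi iterate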
2. Gauss-Seidel Method:
The Gauss-Seidel method is a technique for solving the n equations of the linear system A·x = B one at a time in sequence, and it uses previously computed results as soon as they are available:
x_i^(k+1) = (B_i - Σ_{j<i} a_ij·x_j^(k+1) - Σ_{j>i} a_ij·x_j^(k)) / a_ii
There are two important characteristics of the Gauss-Seidel method that should be noted.
Firstly, the computations appear to be serial. Since each component of the new iterate depends upon all previously computed components, the updates cannot be done simultaneously as in the Jacobi Method.
Secondly, the new iterate x(k+1) depends upon the order in which the equations are examined. If this ordering is changed, the components of the new iterates (and not just their order) will also change.
In terms of matrices, the definition of the Gauss-Seidel method can be expressed as
x^(k+1) = (D - L)^(-1)·(U·x^(k) + B),
where the matrices D, -L, and -U represent the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively.
The Gauss-Seidel method is applicable to strictly diagonally dominant, or symmetric positive definite matrices A.
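The in-place reuse of freshly updated components can be seen in a minimal sketch of one Gauss-Seidel sweep for the given system (illustrative names again):
% One Gauss-Seidel sweep: components updated earlier in the sweep are
% used immediately by the later components
A = [5 1 2; -3 9 4; 1 2 -7];  B = [10; -14; 33];
x = zeros(3,1);                      % initial guess, updated in place
for i = 1:3
  sigma = A(i,:)*x - A(i,i)*x(i);    % uses new x(1..i-1), old x(i+1..n)
  x(i) = (B(i) - sigma) / A(i,i);
end
x                                    % first Gauss-Seidel iterate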
3. Successive Overrelaxation Method:
The successive overrelaxation method (SOR) is a method of solving a linear system of equations A·x = B derived by extrapolating the Gauss-Seidel method. This extrapolation takes the form of a weighted average between the previous iterate and the computed Gauss-Seidel iterate, successively for each component:
x_i^(k+1) = ω·x̄_i^(k+1) + (1 - ω)·x_i^(k),
where x̄ denotes a Gauss-Seidel iterate and ω is the extrapolation factor. The idea is to choose a value for ω that will accelerate the rate of convergence of the iterates to the solution.
In matrix terms, the SOR algorithm can be written as
x^(k+1) = (D - ω·L)^(-1)·(ω·U + (1 - ω)·D)·x^(k) + ω·(D - ω·L)^(-1)·B,
where the matrices D, -L, and -U represent the diagonal, strictly lower-triangular, and strictly upper-triangular parts of A, respectively.
If ω = 1, the SOR method simplifies to the Gauss-Seidel method. A theorem due to Kahan shows that SOR fails to converge if ω is outside the interval (0, 2).
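The reduction to Gauss-Seidel at ω = 1 is easy to verify numerically; a quick sketch for the given system (using the text's A = D - L - U convention):
% Sanity check: SOR with omega = 1 reproduces the Gauss-Seidel matrix
A = [5 1 2; -3 9 4; 1 2 -7];
D = diag(diag(A));            % diagonal part of A
L = -tril(A,-1);              % so that A = D - L - U
U = -triu(A,1);
omega = 1;
T_sor = (D - omega*L) \ (omega*U + (1-omega)*D);
T_gs  = (D - L) \ U;
max(abs(T_sor(:) - T_gs(:))) % ~0 up to round-off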
So the iteration matrices for the above methods are:
T_jac = D^(-1)·(L + U) (Jacobi method)
T_gs = (D - L)^(-1)·U (Gauss-Seidel method)
T_sor = (D - ω·L)^(-1)·(ω·U + (1 - ω)·D) (SOR-GS method)
Here, the new guess values will be calculated as
x^(k+1) = T·x^(k) + c,
where T is the iteration matrix of the chosen method and c is the corresponding constant vector.
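To make this concrete, a small sketch below builds all three iteration matrices for the given system and evaluates their spectral radii (ω = 1.1 is an arbitrary illustrative choice):
% Spectral radii of the three iteration matrices for the given system
A = [5 1 2; -3 9 4; 1 2 -7];
D = diag(diag(A));  L = -tril(A,-1);  U = -triu(A,1);   % A = D - L - U
omega = 1.1;                                 % illustrative relaxation factor
T_jac = D \ (L + U);
T_gs  = (D - L) \ U;
T_sor = (D - omega*L) \ (omega*U + (1-omega)*D);
rho = @(T) max(abs(eig(T)));                 % spectral radius
[rho(T_jac), rho(T_gs), rho(T_sor)]          % one value per method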
Main Program file:
%% By Gaurav Kharwade%%
clear all
close all
clc
% Understanding of linear equation-
% 5x1+x2+2x3=10
% -3x1+9x2+4x3=-14
% x1+2x2-7x3=33
% RHS
B=[10;-14;33];
% Decomposition of the coefficient matrix: note that here L and U are the
% unnegated strictly lower/upper triangular parts, so A = D + L + U
% (opposite sign convention to the -L, -U used in the text above)
L=[0 0 0;-3 0 0;1 2 0];
U=[0 1 2;0 0 4;0 0 0];
D=[5 0 0;0 9 0;0 0 -7];
% co-efficient matrix
A=L+U+D;
% Exact solution for reference
x_exact=A\B % equivalent to inv(A)*B, but numerically more stable
disp('Type 1 to select Jacobi method')
disp('Type 2 to select GS method')
disp('Type 3 to select SOR method')
solving_method= input('Select solving method= ');
if solving_method==1
% For magnifying diagonal terms
m=linspace(0.4602,5,10);
Dold=D;
[jac_iter,x_jac,error_jac,Computational_time,SpectralR_jac]=jac(A,B,Dold,D,L,U,m) % argument order matches the function definition
figure(1)
subplot(2,1,1)
plot(m,SpectralR_jac,'b')
xlabel('Diagonal Magnification')
ylabel('Spectral Radius')
title('Effect of Diagonal Magnification on Spectral Radius')
hold on
subplot(2,1,2)
plot(SpectralR_jac,jac_iter,'r','linewidth',2)
ylabel('Nos. of iteration')
xlabel('Spectral Radius')
title(sprintf('No. of iterations vs Spectral Radius graph'))
endif
if solving_method==2
% For magnifying diagonal terms
m=linspace(0.55,5,10);
Dold=D;
[gs_iter,x_gs,SpectralR_gs,error_gs]=gs(A,B,Dold,D,L,U,m)
figure(1)
subplot(2,1,1)
plot(m,SpectralR_gs,'b')
xlabel('Diagonal Magnification')
ylabel('Spectral Radius')
title('Effect of Diagonal Magnification on Spectral Radius')
hold on
subplot(2,1,2)
plot(SpectralR_gs,gs_iter,'r','linewidth',2)
ylabel('Nos. of iteration')
xlabel('Spectral Radius')
title(sprintf('No. of iterations vs Spectral Radius graph'))
endif
if solving_method==3
% For magnifying diagonal terms
m=linspace(0.45,5,10);
disp('Enter a suitable value for the relaxation factor, e.g. 0.9, to keep the spectral radius near 1.1')
omega=input('Enter value for relaxation factor=')
Dold=D;
[sor_iter,x_sor,SpectralR_sor,error_sor]=sor(A,B,Dold,D,L,U,m,omega)
figure(1)
subplot(2,1,1)
plot(m,SpectralR_sor,'b')
xlabel('Diagonal Magnification')
ylabel('Spectral Radius')
title('Effect of Diagonal Magnification on Spectral Radius')
hold on
subplot(2,1,2)
plot(SpectralR_sor,sor_iter,'r','linewidth',2)
xlabel('Spectral Radius')
ylabel('Nos. of iteration')
title(sprintf('No. of iterations vs Spectral Radius graph'))
endif
User-defined function file: JACOBI METHOD
function [iter,x_jac,error_jac,Computational_time,SpectralR_jac]=jac(A,B,Dold,D,L,U,m)
tic
for j=1:length(m)
D=m(j)*Dold;
n=length(B);
x_old=zeros(n,1);
%xnew_jac=zeros(n,1);
error_jac=9e9;
tol=1e-3;
Tjac=inv(D)*(L+U); % Jacobi iteration matrix up to sign (the sign does not affect the spectral radius)
iter(j)=1;
while(error_jac>tol)
xnew_jac=-Tjac*x_old+D\B; % Jacobi update: x_new = -Tjac*x_old + D^-1*B
error_jac=max(abs(xnew_jac-x_old));
x_old=xnew_jac;
iter(j)=iter(j)+1;
endwhile
x_jac=xnew_jac;
eigen_value=eig(Tjac);
SpectralR_jac(j)=max(abs(eigen_value)); % largest eigenvalue magnitude
endfor
Computational_time=toc;
endfunction
User-defined function file: GAUSS-SEIDEL METHOD
function [gs_iter,x_gs,SpectralR_gs,error_gs]=gs(A,B,Dold,D,L,U,m)
tic % start timer for the toc call below
for j=1:length(m)
D=m(j)*Dold;
n=length(B);
x_old=zeros(n,1);
%xnew_gs=zeros(n,1);
error_gs=9e9;
tol=1e-5;
Tgs=(L+D)^-1*(-U); % for GS
gs_iter(j)=1;
while(error_gs>tol)
xnew_gs=Tgs*x_old+(D+L)\B; % Gauss-Seidel update using the precomputed iteration matrix
error_gs=max(abs(xnew_gs-x_old));
x_old=xnew_gs;
gs_iter(j)=gs_iter(j)+1;
endwhile
x_gs=xnew_gs;
Computational_time=toc;
eigen_value=eig(Tgs);
SpectralR_gs(j)=max(abs(eigen_value)); % largest eigenvalue magnitude
endfor
endfunction
User-defined function file: SOR METHOD
function [sor_iter,x_sor,SpectralR_sor,error_sor]=sor(A,B,Dold,D,L,U,m,omega)
tic % start timer for the toc call below
for j=1:length(m)
D=m(j)*Dold;
n=length(B);
x_old=zeros(n,1);
%xnew_sor=zeros(n,1);
error_sor=1e9;
tol=1e-5;
Tsor=(L+(1/omega)*D)^-1*(((1/omega)*D)-D-U); %for sor
sor_iter(j)=1;
while(error_sor>tol)
xnew_sor=Tsor*x_old+omega*((D+omega*L)\B); % SOR update consistent with Tsor above
error_sor=max(abs(xnew_sor-x_old));
x_old=xnew_sor;
sor_iter(j)=sor_iter(j)+1;
endwhile
x_sor=xnew_sor;
Computational_time=toc;
eigen_value=eig(Tsor);
SpectralR_sor(j)=max(abs(eigen_value));
endfor
endfunction
The following results were obtained:
PLOTS:
1. JACOBI METHOD
The values of the number of iterations and the spectral radius are given below:
2. GAUSS-SEIDEL METHOD
The values of the number of iterations and the spectral radius are given below:
3. SOR METHOD
The idea of over-relaxation is to choose the factor ω in such a way that the spectral radius ρ(T_sor) is made as small as possible, thus giving the optimum rate of convergence.
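A small sketch of this search (the ω grid is an arbitrary illustration) scans the Kahan interval (0, 2) for the ω with the smallest spectral radius:
% Scan omega over (0,2) and pick the value minimising the spectral radius
A = [5 1 2; -3 9 4; 1 2 -7];
D = diag(diag(A));  L = -tril(A,-1);  U = -triu(A,1);   % A = D - L - U
w = linspace(0.05, 1.95, 39);          % omega values inside (0,2)
rho = zeros(size(w));
for k = 1:length(w)
  T = (D - w(k)*L) \ (w(k)*U + (1-w(k))*D);
  rho(k) = max(abs(eig(T)));
end
[rho_min, idx] = min(rho);
w_opt = w(idx)                         % omega giving the smallest spectral radius (rho_min)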
The values of the number of iterations and the spectral radius are given below:
From the graph, we can see that as the spectral radius approaches 1, the number of iterations needed to converge becomes much higher, since each updated guess reduces the error by only a small factor.
As the spectral radius comes down from about 1.1, there is a drastic decrease in the number of iterations required for convergence.
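This behaviour follows from the error recursion e^(k) = T·e^(k-1), so that ||e^(k)|| ≈ ρ^k·||e^(0)||; a rough sketch of the resulting iteration-count estimate (illustrative numbers, assuming ρ < 1):
% Rough iteration-count estimate from the spectral radius:
% rho^k <= tol  =>  k >= log(tol)/log(rho)
tol = 1e-3;                        % convergence tolerance
rho = 0.5;                         % illustrative spectral radius
k_est = ceil(log(tol)/log(rho))    % about 10 iterations for rho = 0.5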
Conclusion:
The iterative scheme is convergent if and only if every eigenvalue λ of the iteration matrix [T] satisfies |λ| < 1, that is, if and only if the spectral radius ρ(T) < 1.