Week 6 - Multivariate Newton-Raphson Solver
Kpooupadang Jacques ABOUZI
updated on 03 Jul 2022
AIM: To write a Python program to solve the given system of equations using a multivariate Newton-Raphson solver.
INTRODUCTION
Life is nonlinear, and many of its diverse phenomena are coupled to one another. Because each equation depends on variables that are solved for in the other equations, the dynamic evolution of a system or a material cannot be described by independent solutions of a material balance, a momentum balance, and an energy balance. Interesting phenomena are therefore described by systems of nonlinear equations. Just as for a single nonlinear algebraic equation, reliable tools are now available to solve systems of nonlinear algebraic equations systematically. Finding a decent first guess is already difficult when solving a single nonlinear algebraic equation; if searching one-dimensional space for an initial guess that allows the procedure to converge is uncomfortable, wandering in n-dimensional space in search of an n-dimensional starting point is substantially more challenging. It is possible, however, provided one has access to the proper numerical tool in addition to knowledge of the physical system. In this work we therefore focus on the multivariate Newton-Raphson method applied to the given system of nonlinear equations. A Python program is written to show all the steps involved as well as the solution.
THEORY
Multivariate Newton-Raphson Method
Not surprisingly, the multivariate Newton-Raphson method is a direct extension of the single-variable Newton-Raphson method. Where the single-variable Newton-Raphson method solves $f(x) = 0$, the multivariate version solves a system of $n$ equations of the form
$$f_1(x_1, x_2, x_3, \ldots, x_{n-1}, x_n) = 0$$
$$f_2(x_1, x_2, x_3, \ldots, x_{n-1}, x_n) = 0$$
$$f_3(x_1, x_2, x_3, \ldots, x_{n-1}, x_n) = 0$$
$$\vdots \qquad (1)$$
$$f_{n-1}(x_1, x_2, x_3, \ldots, x_{n-1}, x_n) = 0$$
$$f_n(x_1, x_2, x_3, \ldots, x_{n-1}, x_n) = 0$$
We will adopt the short-hand notation for this system,
$$\underline{f}(\underline{x}) = \underline{0} \qquad (2)$$
Note that this short-hand notation does not imply anything here about the linearity of any of the equations in $\underline{f}(\underline{x})$. The basis of the single-variable Newton-Raphson method lay in the fact that we approximate the derivative of $f(x)$ numerically using a forward finite-difference formula based on a truncated Taylor series,
$$f'(x_1) = \left.\frac{df}{dx}\right|_{x_1} \approx \frac{f(x_1) - f(x_2)}{x_1 - x_2} \qquad (3)$$
Multivariate Taylor series also exist. The idea behind the multivariate Taylor series lies in the definition of the total derivative of a multivariate function. For a function of two variables we can write,
$$df(x_1, x_2) = \left(\frac{\partial f}{\partial x_1}\right)_{x_2} dx_1 + \left(\frac{\partial f}{\partial x_2}\right)_{x_1} dx_2 \qquad (4)$$
where $\left(\frac{\partial f}{\partial x_1}\right)_{x_2}$ is called the partial derivative of the function $f$ with respect to the variable $x_1$. The subscript outside the parentheses of a partial derivative indicates the variables that are treated as constants during the differentiation. For a function of $n$ variables, we have
$$df(\underline{x}) = \sum_{i=1}^{n} \left(\frac{\partial f}{\partial x_i}\right)_{x_{m \neq i}} dx_i \qquad (5)$$
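As a quick illustration of this notation, consider the two-variable function $f(x_1, x_2) = x_1^2 x_2$ (an example chosen here purely for illustration). Equation (4) then reads
$$df = \left(\frac{\partial f}{\partial x_1}\right)_{x_2} dx_1 + \left(\frac{\partial f}{\partial x_2}\right)_{x_1} dx_2 = 2 x_1 x_2 \, dx_1 + x_1^2 \, dx_2$$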
The multivariate Taylor series expansion, truncated after the first derivative, is thus
$$f(\underline{x}^{(k)}) - f(\underline{x}^{(k+1)}) = \sum_{i=1}^{n} \left.\left(\frac{\partial f}{\partial x_i}\right)_{x_{m \neq i}}\right|_{\underline{x}^{(k)}} \left(x_i^{(k)} - x_i^{(k+1)}\right) \qquad (6)$$
Let it be clear that the subscript $i$ on the variable $x$ indicates a different independent variable. The superscript $(k)$ on $x$ is not an exponent; the parentheses are included to make clear that it is a label rather than a mathematical operation. The superscript $(k)$ indicates a different value of $x$, which, as we shall soon see, can be associated with the $k$ and $k+1$ iterations of the Newton-Raphson method. The subscript that now appears outside the parentheses, $x_{m \neq i}$, indicates that all variables other than $x_i$ are held constant in the differentiation. The final subscript $\underline{x}^{(k)}$ on the partial derivative indicates the value of $\underline{x}$ at which the derivative is evaluated.
If n=1, then equation (6) simplifies to
$$f(x_1^{(k)}) - f(x_1^{(k+1)}) = \left.\frac{df}{dx_1}\right|_{x_1^{(k)}} \left(x_1^{(k)} - x_1^{(k+1)}\right) \qquad (7)$$
Equation (7) can be rearranged for $x_1^{(k+1)}$,
$$x_1^{(k+1)} = x_1^{(k)} - \frac{f(x_1^{(k)}) - f(x_1^{(k+1)})}{\left.\dfrac{df}{dx_1}\right|_{x_1^{(k)}}} \qquad (8)$$
which is precisely the Newton-Raphson method once we set $f(x_1^{(k+1)}) = 0$.
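As a brief worked example of this update, take the illustrative function $f(x) = x^2 - 2$ with the starting guess $x^{(0)} = 1.5$. Setting $f(x^{(k+1)}) = 0$ in equation (8) gives
$$x^{(1)} = 1.5 - \frac{0.25}{3} \approx 1.4167, \qquad x^{(2)} \approx 1.4142,$$
which converges rapidly towards $\sqrt{2}$.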
Similarly, for the multivariate case, in which we have $n$ equations and $n$ unknowns, we write equation (6) for every function,
$$f_j(\underline{x}^{(k)}) - f_j(\underline{x}^{(k+1)}) = \sum_{i=1}^{n} \left.\left(\frac{\partial f_j}{\partial x_i}\right)_{x_{m \neq i}}\right|_{\underline{x}^{(k)}} \left(x_i^{(k)} - x_i^{(k+1)}\right) \qquad (9)$$
where all that has been done is to add a subscript $j$ to $f$, identifying each equation from $j = 1$ to $n$. Following the Newton-Raphson procedure, we set $f_j(\underline{x}^{(k+1)}) = 0$ for all $j$; again, this choice is based on the fact that we intend for our next estimate to be a better approximation of the root, at which the functions are zero. With that substitution, equation (9) becomes a system of $n$ linear algebraic equations whose $n$ unknowns are the next iterates, $x_i^{(k+1)}$. By convention, we express equation (9) in matrix notation as
$$\underline{\underline{J}}^{(k)} \, \underline{\delta x}^{(k)} = -\underline{R}^{(k)} \qquad (10)$$
where $\underline{R}^{(k)}$ is called the residual vector at the $k$th iteration and is defined as
$$\underline{R}^{(k)} \equiv \underline{f}(\underline{x}^{(k)}) \qquad (11)$$
In other words, the residual is simply a vector of the values of the functions evaluated at the current guess. The Jacobian matrix at the $k$th iteration, $\underline{\underline{J}}^{(k)}$, is defined as
$$J_{j,i}^{(k)} \equiv \left.\left(\frac{\partial f_j}{\partial x_i}\right)_{x_{m \neq i}}\right|_{\underline{x}^{(k)}} \qquad (12)$$
Thus, the Jacobian matrix is an $n \times n$ matrix of all possible pairwise combinations of first partial derivatives of the $n$ functions, $f_j$, with respect to the $n$ unknown variables, $x_i$. The difference vector, $\underline{\delta x}^{(k)}$, is defined as
$$\underline{\delta x}^{(k)} \equiv \underline{x}^{(k+1)} - \underline{x}^{(k)} \qquad (13)$$
The difference vector can be rearranged for the value of $\underline{x}$ at the new iteration,
$$\underline{x}^{(k+1)} = \underline{x}^{(k)} + \underline{\delta x}^{(k)} \qquad (14)$$
The algorithm for solving a system of nonlinear algebraic equations via the multivariate Newton-Raphson method follows analogously from the single-variable version. The steps are as follows (a minimal code sketch is given after this list):
1. Choose an initial guess $\underline{x}^{(0)}$ and set $k = 0$.
2. Evaluate the residual vector $\underline{R}^{(k)}$ (equation 11) and the Jacobian matrix $\underline{\underline{J}}^{(k)}$ (equation 12) at the current estimate $\underline{x}^{(k)}$.
3. Solve the linear system of equation (10) for the difference vector $\underline{\delta x}^{(k)}$.
4. Update the estimate using equation (14), $\underline{x}^{(k+1)} = \underline{x}^{(k)} + \underline{\delta x}^{(k)}$.
5. Check for convergence; if the error is still greater than the tolerance, increment $k$ and return to step 2.
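A minimal Python sketch of these steps is given below. It assumes user-supplied functions F(x) and Jac(x) that return the residual vector and the Jacobian matrix at x; both names are hypothetical helpers introduced here only to illustrate the algorithm, and the full program for the given system appears later in this report.

import numpy as np

def newton_raphson(F, Jac, x0, tol=1e-6, max_iter=100):
    # Iteratively refine the guess until the Newton step is smaller than tol
    x = np.array(x0, dtype=float)
    for k in range(max_iter):
        R = F(x)                      # residual vector at the current guess
        J = Jac(x)                    # Jacobian matrix at the current guess
        dx = np.linalg.solve(J, -R)   # solve J * dx = -R for the update
        x = x + dx
        if np.max(np.abs(dx)) < tol:  # converged: the step has become negligible
            break
    return x

For example, for the pair of equations $x_1^2 + x_2^2 - 1 = 0$ and $x_1 - x_2 = 0$, passing the corresponding residual and Jacobian functions with the guess (1, 0) drives the iterate towards $x_1 = x_2 = 1/\sqrt{2}$.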
The multivariate Newton-Raphson method suffers from the same shortcomings as the single-variable Newton-Raphson method. First, as with all methods for solving nonlinear algebraic equations, you need a good initial guess. Second, the method does not provide fast (quadratic) convergence until you are close to the solution. Third, if the determinant of the Jacobian is zero, the method fails. This last constraint is the multi-dimensional analogue of the fact that the single-variable Newton-Raphson method diverges when the derivative is zero.
Determining convergence for a system with multiple variables requires a tolerance. One could use a tolerance on each variable; that is, the relative error on $x_i$ must be less than $Tol_i$. Alternatively, one can use something like the root mean square (RMS) error,
$$err_{RMS}^{(k)} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(\frac{x_i^{(k)} - x_i^{(k-1)}}{x_i^{(k-1)}}\right)^2}$$
to provide a single error for the entire system. In this case, even with an RMS relative error on $\underline{x}$ of $10^{-m}$, you are not guaranteed that every variable has $m$ good significant digits; you are only guaranteed that the RMS error is less than the acceptable tolerance.
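A compact NumPy version of this check might look as follows (a sketch; x_new and x_old are assumed to hold the current and previous iterates). Note that the program in this work instead monitors the maximum absolute change between iterations, which avoids dividing by components that are exactly zero at the initial condition.

import numpy as np

def rms_relative_error(x_new, x_old):
    # RMS of the relative change of each variable between two successive iterations
    rel = (np.asarray(x_new) - np.asarray(x_old)) / np.asarray(x_old)
    return np.sqrt(np.mean(rel**2))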
Procedure followed to solve the given system of equations
The system of coupled ordinary differential equations to be solved is
$$\frac{dy_1}{dt} = -0.04\, y_1 + 10^4\, y_2 y_3$$
$$\frac{dy_2}{dt} = 0.04\, y_1 - 10^4\, y_2 y_3 - 3\times 10^7\, y_2^2$$
$$\frac{dy_3}{dt} = 3\times 10^7\, y_2^2$$
with the initial conditions
$$y_1(0) = 1, \qquad y_2(0) = 0, \qquad y_3(0) = 0$$
Discretizing each equation with the implicit (backward Euler) scheme, where the superscript $n$ denotes the current time level and $n-1$ the previous one, gives
$$\frac{y_1^{n} - y_1^{n-1}}{\Delta t} = -0.04\, y_1^{n} + 10^4\, y_2^{n} y_3^{n}$$
$$\frac{y_2^{n} - y_2^{n-1}}{\Delta t} = 0.04\, y_1^{n} - 10^4\, y_2^{n} y_3^{n} - 3\times 10^7\, (y_2^{n})^2$$
$$\frac{y_3^{n} - y_3^{n-1}}{\Delta t} = 3\times 10^7\, (y_2^{n})^2$$
Rearranging, the residual functions to be driven to zero at every time step are
$$f_1^{n} = y_1^{n} + \Delta t\left(0.04\, y_1^{n} - 10^4\, y_2^{n} y_3^{n}\right) - y_1^{n-1} = 0$$
$$f_2^{n} = y_2^{n} - \Delta t\left(0.04\, y_1^{n} - 10^4\, y_2^{n} y_3^{n} - 3\times 10^7\, (y_2^{n})^2\right) - y_2^{n-1} = 0$$
$$f_3^{n} = y_3^{n} - \Delta t\left(3\times 10^7\, (y_2^{n})^2\right) - y_3^{n-1} = 0$$
At each time step, this nonlinear system is solved with the multivariate Newton-Raphson iteration
$$\underline{y}_{i+1} = \underline{y}_i - \alpha\, [J]^{-1} \underline{F}$$
where the Jacobian matrix is
$$J = \begin{bmatrix} \dfrac{\partial f_1^n}{\partial y_1^n} & \dfrac{\partial f_1^n}{\partial y_2^n} & \dfrac{\partial f_1^n}{\partial y_3^n} \\[2mm] \dfrac{\partial f_2^n}{\partial y_1^n} & \dfrac{\partial f_2^n}{\partial y_2^n} & \dfrac{\partial f_2^n}{\partial y_3^n} \\[2mm] \dfrac{\partial f_3^n}{\partial y_1^n} & \dfrac{\partial f_3^n}{\partial y_2^n} & \dfrac{\partial f_3^n}{\partial y_3^n} \end{bmatrix}$$
and the residual (function) vector is
$$\underline{F} = \begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix}$$
Here $\underline{y}_{i+1}$ is the new-valued vector, $\underline{y}_i$ is the old-valued vector, the subscripts $i+1$ and $i$ denote the new and previous Newton iterations, and $\alpha$ is the relaxation factor.
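For reference, the entries of this Jacobian can be written out analytically from the residuals $f_1^n$, $f_2^n$ and $f_3^n$ above; the sketch below uses a hypothetical helper name, analytical_jacobian, and is not part of the program that follows.

import numpy as np

def analytical_jacobian(y1, y2, y3, dt):
    # Partial derivatives of f1, f2, f3 (as defined above) with respect to y1, y2, y3
    return np.array([
        [1 + 0.04*dt,  -1e4*dt*y3,                -1e4*dt*y2],
        [-0.04*dt,      1 + dt*(1e4*y3 + 6e7*y2),  1e4*dt*y2],
        [0.0,          -6e7*dt*y2,                 1.0]
    ])

The program below instead approximates its Jacobian numerically by finite differences; it also defines its residuals with the opposite overall sign, which changes the sign of both $J$ and $\underline{F}$ and therefore leaves the Newton update unchanged.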
Program
"""
This python program is written to solve a system of coupled non-linear equations
using multivariate Newton-Raphson solver
dy1/dt = -0.04*y1 + 10^4*y2*y3
dy2/dt = 0.04*y1 - 10^4y2*y3 - 3.10^7*y2^2
dy3/dt = 3.10^7*y2^2
"""
import matplotlib.pyplot as plt
import numpy as np
from numpy.linalg import inv
#Time interval definition
t_start = 0
t_end = 600
dt = 0.01
n = int((t_end-t_start)/dt)
def f1(y1_old, y1, y2, y3, dt):
    # First non-linear equation
    return y1_old + dt*(-0.04*y1 + 10**4*y2*y3) - y1

def f2(y2_old, y1, y2, y3, dt):
    # Second non-linear equation
    return y2_old + dt*(0.04*y1 - 10**4*y2*y3 - 3*10**7*y2**2) - y2

def f3(y3_old, y1, y2, y3, dt):
    # Third non-linear equation
    return y3_old + dt*(3*10**7*y2**2) - y3
def Jacobian(y1_old, y2_old, y3_old, y1, y2, y3, dt):
    J = np.ones((3,3))
    h = 1e-5
    # Row 1
    J[0,0] = (f1(y1_old, y1+h, y2, y3, dt) - f1(y1_old, y1, y2, y3, dt))/h
    J[0,1] = (f1(y1_old, y1, y2+h, y3, dt) - f1(y1_old, y1, y2, y3, dt))/h
    J[0,2] = (f1(y1_old, y1, y2, y3+h, dt) - f1(y1_old, y1, y2, y3, dt))/h
    # Row 2
    J[1,0] = (f2(y2_old, y1+h, y2, y3, dt) - f2(y2_old, y1, y2, y3, dt))/h
    J[1,1] = (f2(y2_old, y1, y2+h, y3, dt) - f2(y2_old, y1, y2, y3, dt))/h
    J[1,2] = (f2(y2_old, y1, y2, y3+h, dt) - f2(y2_old, y1, y2, y3, dt))/h
    # Row 3
    J[2,0] = (f3(y3_old, y1+h, y2, y3, dt) - f3(y3_old, y1, y2, y3, dt))/h
    J[2,1] = (f3(y3_old, y1, y2+h, y3, dt) - f3(y3_old, y1, y2, y3, dt))/h
    J[2,2] = (f3(y3_old, y1, y2, y3+h, dt) - f3(y3_old, y1, y2, y3, dt))/h
    return J
#Initial guess values definition
y1_old = 1
y2_old = 0
y3_old = 0
#Defining the column vector
Y_old = np.ones((3,1))
Y_old[0] = y1_old
Y_old[1] = y2_old
Y_old[2] = y3_old
#Allocating the residual (function) vector
F = np.copy(Y_old)
#Defining the initial error
error = 9e9
alpha = 1
tol = 1e-6
y_1 = []
y_2 = []
y_3 = []
iteration = []
#Time iteration loop
iter = 1
for i in range(0,n):
    # Entering the Newton-Raphson loop
    while error > tol:
        #Computing the numerical Jacobian - old values are used
        J = Jacobian(y1_old, y2_old, y3_old, Y_old[0,0], Y_old[1,0], Y_old[2,0], dt)
        #Computing the F vector
        F[0] = f1(y1_old, Y_old[0,0], Y_old[1,0], Y_old[2,0], dt)
        F[1] = f2(y2_old, Y_old[0,0], Y_old[1,0], Y_old[2,0], dt)
        F[2] = f3(y3_old, Y_old[0,0], Y_old[1,0], Y_old[2,0], dt)
        #Computing the new values
        Y_new = Y_old - alpha*np.matmul(inv(J),F)
        #Computing the maximum absolute error
        error = np.max(np.abs(Y_new - Y_old))
        #Updating old values
        Y_old = Y_new
        #Log message
        log_message = 'iteration # = {0} y1 = {1} y2 = {2} y3 = {3}'.format(iter, Y_new[0,0], Y_new[1,0], Y_new[2,0])
        print(log_message)
        iter = iter + 1
    #Storing the converged results of this time step
    y_1.append(Y_new[0,0])
    y_2.append(Y_new[1,0])
    y_3.append(Y_new[2,0])
    iteration.append(iter)
    Y_old = Y_new
    y1_old = Y_new[0,0]
    y2_old = Y_new[1,0]
    y3_old = Y_new[2,0]
    error = 9e9
plt.plot(iteration,y_1,color='black')
plt.plot(iteration,y_2,color='orange')
plt.plot(iteration,y_3,color='blue')
plt.legend(['y_1','y_2','y_3'])
plt.xlabel('Number of steps')
plt.ylabel('Variables')
plt.title('Multivariate Newton Raphson Method For dt = 0.01')
plt.show()
Stepwise explanation of the code
- The time interval from t = 0 to t = 600 and the time step dt are defined, and the number of time steps n is computed.
- The functions f1, f2 and f3 return the residuals of the three implicitly discretized equations for given old (previous time level) values and current guesses.
- The Jacobian function builds the 3x3 Jacobian matrix numerically: each variable is perturbed in turn by h = 1e-5 and a forward finite-difference approximation of the corresponding partial derivative is taken.
- The initial condition (y1, y2, y3) = (1, 0, 0) is stored both as the old time-level values and in the column vector Y_old that serves as the Newton iterate.
- For every time step, the Newton-Raphson while loop evaluates the residual vector F and the Jacobian J, updates the iterate through Y_new = Y_old - alpha*inv(J)*F, and monitors the maximum absolute change until it falls below the tolerance of 1e-6.
- Once the Newton loop converges, the converged values become the old time-level values for the next step, the results are appended for plotting, and the error is reset.
- Finally, y1, y2 and y3 are plotted against the cumulative iteration count.
Output
Selecting an appropriate time step size:
Prior to obtaining the final solution, the time step dt was first set to 0.01 and the code was run. The simulation was run a second time with the time step set to 0.001, and the plot and solution were identical. It was run a third time with the time step aggressively raised to dt = 0.1, and again nothing changed; the run time also did not vary significantly. This means that the change in time step does not affect the solution, which remains stable and converges to fully meaningful results. The simulation is therefore insensitive to the time step size: because an implicit scheme is employed together with the Newton-Raphson iterative solver, the simulation remains stable even for large time steps.
By contrast, if an explicit scheme were used for the time integration, the change in time step size would have a strong impact on the solution. Although they are more challenging to implement, implicit techniques perform very well in demanding scenarios, such as stiff systems of non-linear coupled equations or rapid changes in temperature, pressure and mass flow rate, where explicit schemes would require impractically small time steps.
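To illustrate the contrast, below is a minimal sketch of an explicit (forward Euler) step for the same system; forward_euler_step is a hypothetical helper written only for this comparison and is not part of the program above. For a stiff system such as this one, the step size would have to be made dramatically smaller than the dt = 0.01 used above for such an explicit update to remain stable.

def forward_euler_step(y1, y2, y3, dt):
    # One explicit (forward Euler) update: the right-hand sides are evaluated
    # at the old values, so no Newton iteration is needed
    dy1 = -0.04*y1 + 10**4*y2*y3
    dy2 = 0.04*y1 - 10**4*y2*y3 - 3*10**7*y2**2
    dy3 = 3*10**7*y2**2
    return y1 + dt*dy1, y2 + dt*dy2, y3 + dt*dy3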
Conclusion
As seen in the graphs above, the values of y1, y2 and y3 change rapidly with time along the x-axis. These variables evolve quickly as the solution advances in time, a behaviour that is observed because of the stiff nature of the equation system.