Sagar Gupta
updated on 05 May 2020
What is a Genetic Algorithm?
The genetic algorithm is a method for solving both constrained and unconstrained optimization problems that is based on natural selection, the process that drives biological evolution. The genetic algorithm repeatedly modifies a population of individual solutions. At each step, the genetic algorithm selects individuals at random from the current population to be parents and uses them to produce the children for the next generation. Over successive generations, the population "evolves" toward an optimal solution.
One can use the genetic algorithm to solve a variety of optimization problems that are not well suited for standard optimization algorithms, including problems in which the objective function is discontinuous, nondifferentiable, stochastic, or highly nonlinear.
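Because ga only needs function values, it can be applied directly to objectives like the one sketched below. This is a small illustrative example; the anonymous objective is made up for the illustration and is not part of the original study:
% Illustrative only: a nondifferentiable, discontinuous objective.
% A gradient-based solver would struggle with the kink and the jump,
% but ga only samples function values.
nonsmooth = @(x) abs(x(1)) + abs(x(2)) + (x(1) > 0.5);
x = ga(nonsmooth, 2)   % unconstrained call; see the syntax section below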
The genetic algorithm uses three main types of rules at each step to create the next generation from the current population:
Selection rules select the individuals, called parents, that contribute to the population at the next generation.
Crossover rules combine two parents to form children for the next generation.
Mutation rules apply random changes to individual parents to form children.
The following outline summarizes how the genetic algorithm works; a minimal code sketch of this loop follows the outline:
The algorithm begins by creating a random initial population.
The algorithm then creates a sequence of new populations. At each step, the algorithm uses the individuals in the current generation to create the next population. To create the new population, the algorithm performs the following steps:
Scores each member of the current population by computing its fitness value. These values are called the raw fitness scores.
Scales the raw fitness scores to convert them into a more usable range of values. These scaled values are called expectation values.
Selects members, called parents, based on their expectation.
Some of the individuals in the current population that have the lowest (best) fitness scores are chosen as elite. These elite individuals are passed directly to the next population.
Produces children from the parents. Children are produced either by making random changes to a single parent—mutation—or by combining the vector entries of a pair of parents—crossover.
Replaces the current population with the children to form the next generation.
The algorithm stops when one of the stopping criteria is met.
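The loop below is a minimal, self-contained sketch of this outline for a real-valued minimization problem. It is illustrative only: the rank scaling, roulette-style selection, scattered crossover, and Gaussian mutation are simplified stand-ins for the toolbox defaults, and the population size, elite count, and crossover fraction are arbitrary choices.
fitness = @(pop) sum(pop.^2, 2);       % toy objective: one row per individual
nVars = 2; popSize = 20; nGen = 100;   % illustrative sizes
eliteN = 2; crossFrac = 0.8;
pop = rand(popSize, nVars);            % random initial population
for gen = 1:nGen
    scores = fitness(pop);                               % raw fitness scores
    [~, order] = sort(scores);                           % best (lowest) first
    rank = zeros(popSize, 1); rank(order) = 1:popSize;
    expectation = 1 ./ sqrt(rank);                       % scaled expectation values
    prob = expectation / sum(expectation);
    cdf = cumsum(prob); cdf(end) = 1;                    % selection distribution
    pick = @() pop(find(rand <= cdf, 1), :);             % roulette-style parent pick
    elite = pop(order(1:eliteN), :);                     % elite children survive
    nCross = round(crossFrac * (popSize - eliteN));      % number of crossover children
    nMut = popSize - eliteN - nCross;                    % number of mutation children
    crossKids = zeros(nCross, nVars);
    for k = 1:nCross
        p1 = pick(); p2 = pick();
        mask = rand(1, nVars) < 0.5;                     % scattered crossover
        crossKids(k, :) = mask .* p1 + ~mask .* p2;
    end
    mutKids = zeros(nMut, nVars);
    for k = 1:nMut
        mutKids(k, :) = pick() + 0.1 * randn(1, nVars);  % Gaussian mutation
    end
    pop = [elite; crossKids; mutKids];                   % replace the population
end
[~, bestIdx] = min(fitness(pop));
bestPoint = pop(bestIdx, :)                              % best individual found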
The algorithm begins by creating a random initial population, as shown in the following figure.
In this example, the initial population contains 20 individuals. Note that all the individuals in the initial population lie in the upper-right quadrant of the picture; that is, their coordinates lie between 0 and 1. For this example, the Initial range in the Population options is [0;1].
If you know approximately where the minimal point for a function lies, you should set Initial range so that the point lies near the middle of that range. For example, if you believe that the minimal point for Rastrigin's function is near the point [0;0], you could set Initial range to be [-1;1]. However, as this example shows, the genetic algorithm can find the minimum even with a less than optimal choice for Initial range.
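Setting this from code might look like the sketch below. The option name InitialPopulationRange follows recent Global Optimization Toolbox releases and is an assumption here (older releases call it PopInitRange); rastriginsfcn ships with the toolbox.
% Center the initial population around the suspected minimum of
% Rastrigin's function at [0;0] by using the range [-1;1].
opts = optimoptions('ga', 'InitialPopulationRange', [-1; 1]);
x = ga(@rastriginsfcn, 2, [], [], [], [], [], [], [], opts)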
At each step, the genetic algorithm uses the current population to create the children that make up the next generation. The algorithm selects a group of individuals in the current population, called parents, who contribute their genes—the entries of their vectors—to their children. The algorithm usually selects individuals that have better fitness values as parents.
The genetic algorithm creates three types of children for the next generation:
Elite children are the individuals in the current generation with the best fitness values. These individuals automatically survive to the next generation.
Crossover children are created by combining the vectors of a pair of parents.
Mutation children are created by introducing random changes, or mutations, to a single parent.
The following schematic diagram illustrates the three types of children.
Mutation and Crossover, discussed below, explains how to specify the number of children of each type that the algorithm generates and the functions it uses to perform crossover and mutation.
The following sections explain how the algorithm creates crossover and mutation children.
The algorithm creates crossover children by combining pairs of parents in the current population. At each coordinate of the child vector, the default crossover function randomly selects an entry, or gene, at the same coordinate from one of the two parents and assigns it to the child. For problems with linear constraints, the default crossover function creates the child as a random weighted average of the parents.
The algorithm creates mutation children by randomly changing the genes of individual parents. By default, for unconstrained problems the algorithm adds a random vector from a Gaussian distribution to the parent. For bounded or linearly constrained problems, the child remains feasible.
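As a small illustrative sketch of these default behaviors (not the toolbox source), consider two example parent vectors:
p1 = [0.2 0.7 0.1];  p2 = [0.9 0.3 0.5];          % two example parents

% Unconstrained default: scattered crossover picks each gene at random
% from either parent.
mask = rand(size(p1)) < 0.5;
childScattered = mask .* p1 + ~mask .* p2

% With linear constraints: a random weighted average of the parents lies
% on the segment between them, so it stays feasible if both parents are.
w = rand;
childIntermediate = w * p1 + (1 - w) * p2

% Unconstrained default mutation: add Gaussian noise to a single parent
% (the 0.1 scale is an arbitrary illustrative choice).
childMutated = p1 + 0.1 * randn(size(p1))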
The following figure shows the children of the initial population, that is, the population at the second generation, and indicates whether they are elite, crossover, or mutation children.
The following figure shows the populations at iterations 60, 80, 95, and 100.
As the number of generations increases, the individuals in the population get closer together and approach the minimum point [0;0].
The genetic algorithm uses the following conditions to determine when to stop:
Generations — The algorithm stops when the number of generations reaches the value of Generations.
Time limit — The algorithm stops after running for an amount of time in seconds equal to Time limit.
Fitness limit — The algorithm stops when the value of the fitness function for the best point in the current population is less than or equal to Fitness limit.
Stall generations — The algorithm stops when the average relative change in the fitness function value over Stall generations is less than Function tolerance.
Stall time limit — The algorithm stops if there is no improvement in the objective function during an interval of time in seconds equal to Stall time limit.
The algorithm stops as soon as any one of these conditions is met.
The options Stall time limit and Time limit prevent the algorithm from running too long. If the algorithm stops due to one of these conditions, one can improve the results by increasing the values of Stall time limit and Time limit.
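In code, these criteria correspond to ga options roughly as sketched below. The option names are those of recent Global Optimization Toolbox releases and the values are arbitrary examples; older releases use different names such as Generations and StallGenLimit.
opts = optimoptions('ga', ...
    'MaxGenerations',      200, ...    % Generations
    'MaxTime',             60, ...     % Time limit, in seconds
    'FitnessLimit',        -1, ...     % Fitness limit
    'MaxStallGenerations', 50, ...     % Stall generations
    'FunctionTolerance',   1e-6, ...   % Function tolerance
    'MaxStallTime',        20);        % Stall time limit, in seconds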
The selection function chooses parents for the next generation based on their scaled values from the fitness scaling function. The scaled fitness values are called the expectation values. An individual can be selected more than once as a parent, in which case it contributes its genes to more than one child. The default selection option, Stochastic Uniform, lays out a line in which each parent corresponds to a section of the line of length proportional to its scaled value. The algorithm moves along the line in steps of equal size. At each step, the algorithm allocates a parent from the section it lands on.
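A minimal sketch of this line-and-equal-steps idea (illustrative only, not the toolbox implementation) might look like this:
expectation = [2.3 1.1 0.4 0.2];           % example scaled expectation values
nParents = 4;                              % number of parents to select
edges = [0 cumsum(expectation)];           % each individual owns one segment of the line
stepSize = edges(end) / nParents;          % equal step along the line
offset = rand * stepSize;                  % single random starting offset
positions = offset + (0:nParents-1) * stepSize;
parents = arrayfun(@(p) find(p < edges, 1) - 1, positions)   % selected parent indices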
A more deterministic selection option is Remainder, which performs two steps:
In the first step, the function selects parents deterministically according to the integer part of the scaled value for each individual. For example, if an individual's scaled value is 2.3, the function selects that individual twice as a parent.
In the second step, the selection function selects additional parents using the fractional parts of the scaled values, as in stochastic uniform selection. The function lays out a line in sections, whose lengths are proportional to the fractional part of the scaled value of the individuals, and moves along the line in equal steps to select the parents.
Reproduction options control how the genetic algorithm creates the next generation.
The options are:
Elite count — The number of individuals with the best fitness values in the current generation that are guaranteed to survive to the next generation. These individuals are called elite children. The default value of Elite count is 2.
When Elite count is at least 1, the best fitness value can only decrease from one generation to the next. This is what we want to happen, since the genetic algorithm minimizes the fitness function. Setting Elite count to a high value causes the fittest individuals to dominate the population, which can make the search less effective.
Crossover fraction — The fraction of individuals in the next generation, other than elite children, that are created by crossover. The value of Crossover fraction has a significant effect on the performance of the genetic algorithm.
The genetic algorithm uses the individuals in the current generation to create the children that make up the next generation. Besides elite children, which correspond to the individuals in the current generation with the best fitness values, the algorithm creates:
Crossover children by selecting vector entries, or genes, from a pair of individuals in the current generation and combining them to form a child.
Mutation children by applying random changes to a single individual in the current generation to create a child.
Both processes are essential to the genetic algorithm. Crossover enables the algorithm to extract the best genes from different individuals and recombine them into potentially superior children. Mutation adds to the diversity of a population and thereby increases the likelihood that the algorithm will generate individuals with better fitness values.
You can specify how many of each type of children the algorithm creates as follows:
Elite count, in Reproduction options, specifies the number of elite children.
Crossover fraction, in Reproduction options, specifies the fraction of the population, other than elite children, that are crossover children.
For example, if the Population size is 20, the Elite count is 2, and the Crossover fraction is 0.8, the numbers of each type of children in the next generation are as follows (a quick check in code follows this list):
There are two elite children.
There are 18 individuals other than elite children, so the algorithm rounds 0.8*18 = 14.4 to 14 to get the number of crossover children.
The remaining four individuals, other than elite children, are mutation children.
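The same bookkeeping can be checked in a couple of lines; PopulationSize, EliteCount, and CrossoverFraction below are standard ga options.
popSize = 20;  eliteCount = 2;  crossoverFraction = 0.8;
nCrossover = round(crossoverFraction * (popSize - eliteCount))   % 0.8*18 = 14.4, rounded to 14
nMutation  = popSize - eliteCount - nCrossover                   % the remaining 4 children
opts = optimoptions('ga', 'PopulationSize', popSize, ...
    'EliteCount', eliteCount, 'CrossoverFraction', crossoverFraction);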
Syntax of the ga function (a short call example follows these forms):
x = ga(fun,nvars) finds a local unconstrained minimum, x, to the objective function, fun. nvars is the dimension (number of design variables) of fun.
x = ga(fun,nvars,A,b) finds a local minimum x to fun, subject to the linear inequalities A*x ≤ b. ga evaluates the matrix product A*x as if x is transposed (A*x').
x = ga(fun,nvars,A,b,Aeq,beq) finds a local minimum x to fun, subject to the linear equalities Aeq*x = beq and A*x ≤ b. (Set A=[] and b=[] if no linear inequalities exist.)
x = ga(fun,nvars,A,b,Aeq,beq,lb,ub,nonlcon) defines lower and upper bounds, lb and ub, on the design variables and subjects the minimization to the constraints defined in nonlcon. The function nonlcon accepts x and returns vectors C and Ceq, representing the nonlinear inequalities and equalities respectively. ga minimizes fun such that C(x) ≤ 0 and Ceq(x) = 0. (Set lb=[] and ub=[] if no bounds exist.)
x = ga(fun,nvars,A,b,Aeq,beq,lb,ub,nonlcon,options) minimizes with the default optimization parameters replaced by values in options. (Set nonlcon=[] if no nonlinear constraints exist.) Create options using optimoptions.
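As a short example of the bounded call form, the following minimizes Rastrigin's function (rastriginsfcn ships with the Global Optimization Toolbox) inside the box -1 ≤ x(i) ≤ 1; the bounds are arbitrary illustrative values:
rng default                                   % for a reproducible run
lb = [-1 -1];  ub = [1 1];                    % lower and upper bounds
[x, fval] = ga(@rastriginsfcn, 2, [], [], [], [], lb, ub)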
The following MATLAB script demonstrates how the genetic algorithm works, using the stalagmite function defined at the end of the listing:
clear all
close all
clc
x = linspace(0,0.6,150);
y = linspace(0,0.6,150);
%Converting the 1D arrays into 2D grid arrays
[xx, yy] = meshgrid(x,y);
num_cases = 50;
tic;
f = zeros(length(y), length(x)); %preallocate the grid of function values
for i = 1:length(xx)
for j = 1:length(yy)
input_vector(1) = xx(i,j);
input_vector(2) = yy(i,j);
f(i,j) = stalagmite(input_vector); %evaluate the function at each grid point
end
end
study_time = toc %For calculating time taken for this study
surfc(xx,yy,-f)
shading interp
xlabel('x value')
ylabel('y value')
title('Stalagmite Function')
%study_1
tic
for i = 1:num_cases
[inputs,fopt(i)] = ga(@stalagmite,2);
xopt(i) = inputs(1);
yopt(i) = inputs(2);
end
study_1time = toc
figure(1)
subplot(2,1,1)
hold on
surfc(x,y,-f)
shading interp
plot3(xopt,yopt,-fopt,'marker','o','markersize',5,'markerfacecolor','r')
title('Unbounded inputs')
subplot(2,1,2)
plot(-fopt)
xlabel('Iterations')
ylabel('Function Maximum')
%study_2
tic
for i = 1:num_cases
[inputs,fopt(i)] = ga(@stalagmite,2,[],[],[],[],[0;0],[0.6;0.6]);
xopt(i) = inputs(1);
yopt(i) = inputs(2);
end
study_2time = toc
figure(2)
subplot(2,1,1)
hold on
surfc(x,y,-f)
shading interp
plot3(xopt,yopt,-fopt,'marker','o','markersize',5,'markerfacecolor','r');
title('Optimum solution')
subplot(2,1,2)
plot(-fopt)
xlabel('iterations')
ylabel('Function maximum')
%study_3
options = optimoptions(@ga);
options = optimoptions(options,'PopulationSize',170);
tic
for i = 1:num_cases
[inputs,fopt(i)] = ga(@stalagmite,2,[],[],[],[],[0;0],[1;1],[],[],options);
xopt(i) = inputs(1);
yopt(i) = inputs(2);
end
study_3time = toc
figure(3)
subplot(2,1,1)
hold on
surfc(x,y,-f)
shading interp
plot3(xopt,yopt,-fopt,'marker','o','markersize',5,'markerfacecolor','r');
title('Optimum solution')
subplot(2,1,2)
plot(-fopt)
xlabel('iterations')
ylabel('Function maximum')
Global_Maxima = max(-fopt) %largest function value found across the 50 runs
Function for Stalagmite:
function [f] = stalagmite(input_vector)
x = input_vector(1);
y = input_vector(2);
f1 = (sin(5.1*pi*x + 0.5)).^6;
f2 = exp(-4*log(2)*((x - 0.0667).^2)./0.64);
f3 = (sin(5.1*pi*y + 0.5)).^6;
f4 = exp(-4*log(2)*((y - 0.0667).^2)./0.64);
f = -(f1.*f2.*f3.*f4); %negated so that ga, which minimizes, finds the maximum of the stalagmite function
end
Result Report:
The output figures for Study 2 and Study 3, showing the optimum points found on the stalagmite surface, were attached here.