Master's Certification Program in Autonomous Vehicles

A comprehensive course on autonomous vehicles, covering AV system design and the key algorithms and techniques commonly used in the field. This course is well suited for beginners.

  • 0% EMI Option Available
  • Prerequisites: Basic coding

What is the course about?

The future is full of possibilities, and one of them is the driverless car, or “autonomous vehicle”. Prominent companies have increasingly supported the idea of cabs with no drivers, as it reduces the cost of employing a driver. Although no car on the road today is fully automated, tests are being carried out to build a car that requires no human intervention. The future is definitely making our favourite sci-fi movies a reality.

Even though one may question the reliability of autonomous vehicles, who would have thought that aeroplanes would become a reality, or that we would ever set foot on the moon?

Having said that, the evolution of autonomous vehicles is tracked through levels of automation defined by SAE, from Level 0 (completely manual) to Level 5 (completely automated). Building a Level 5, completely driverless car means giving the car the ability to analyze its surroundings and make decisions by itself on the road.

Similar to how a child's brain first needs to be moulded with the basics before it can grasp difficult concepts, an autonomous vehicle, too, is built through a step-by-step procedure before the car becomes fully automated.

The Master's Program in Autonomous Driving introduced by Skill Lync follows a step-by-step approach to understanding the complete process of building an autonomous vehicle.

The program is divided into 4 modules, each accompanied by projects that give students a better understanding of what they are taught.

The first module is on Applying Computer Vision for Autonomous Vehicles. Using computer vision, the vehicle avoids obstacles on the road. Computer vision technology uses cameras and sensors to gather information on traffic conditions, road conditions, and pedestrians. In an emergency, these inputs help the vehicle take a quick decision.

The second module will focus on Localisation, Mapping and SLAM. Here, the emphasis is on accurately detecting the real-time position of the vehicle. The ability to incorporate data into a map in real time takes the autonomous vehicle a step closer to keeping the passenger safe, thus increasing the dependability of autonomous vehicles.


In the third module, we will discuss Path Planning & Trajectory Optimization Using C++ & ROS. Using path planning, the vehicle decides which route to take from point A to point B, while trajectory optimization determines the timing and speed profile with which the vehicle moves along that path.


The fourth module of the program will introduce you to Autonomous Vehicle Controls using MATLAB and Simulink. While sensors sense the obstacles around the car, the control system of the car is what directs the car away from the obstacle. Incorporating this in a vehicle gives it the ability to make decisions on its own.






Download syllabus



Speak to our technical specialists to understand what is included in this program and how you can benefit from it.

Request a Demo Session

List of courses in this program

1. Applying Computer Vision for Autonomous Vehicles

Computer vision enables the detection and recognition of objects and has aided various sectors such as banking, surveillance, automotive, sports analytics, virtual/augmented reality, and medical imaging. In this course, students will learn about the different software tools that are used in computer vision. They will also learn the methodologies and algorithms that are used and how they are implemented in the industry. In the first module, the students will gain

  • A complete understanding of computer vision
  • Hands-on experience on projects for developing and implementing algorithms
  • Knowledge of tools and libraries such as TensorFlow, Python, OpenCV, etc.

2. Localisation, Mapping and SLAM

Autonomous vehicles use maps to plan the path ahead. In addition, using SLAM (Simultaneous Localization and Mapping), the system detects unknown paths and adds them to the map, which can later be used for path planning and obstacle avoidance. In this course, the students will learn how to

  • Produce estimates of unknown variables while on the road
  • Develop 3D models with camera and Lidar data fusion
  • Precisely identify locations on a map using grid mapping

3. Path Planning & Trajectory Optimization Using C++ & ROS

While robotics piques the interest of many people, developing a robot is no easy task. Every movement that a robot makes is programmed and involves path planning and trajectory optimisation. This course will help students gain insights into robot motion planning, which is used in autonomous vehicles, warehouse robots, etc. Here the students will learn

  • A step-by-step description of motion planning techniques
  • Practical implementation of the learned concepts using ROS, C++ and Python
  • How to develop software for robots

4. Autonomous Vehicle Controls using MATLAB and Simulink

Every company uses specific tools to build its own ADAS technologies. This course focuses on building a control system for driver-assist technology. The students will learn how to

  • Develop an ADAS system from Level 1 automation to Level 3
  • Build control systems using Simulink
  • Build a Level 2 adaptive cruise control project

1. Applying CV for Autonomous Vehicles

1. Introduction to Computer Vision

Computer vision aims to give a machine a high-level understanding of visual data, with the goal of making the machine capable of making its own decisions. In the first week, we will walk you through what computer vision is. The topics that will be covered this week are:

  • What is computer vision? 
  • Applications of computer vision 
  • Course details 
  • Software used in the course 
  • Understanding Images 
  • Loading and Displaying Images 
  • Basic Image modifications using Python/OpenCV 
  • Image representation modes: RGB, HSV, Greyscale etc. 
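
As a small illustration of the image representation topics above, the RGB-to-greyscale step can be sketched in a few lines. This is a minimal pure-Python sketch using the standard ITU-R BT.601 luma weights; in practice the course's tools (e.g. OpenCV's cvtColor) do this for you:

```python
def rgb_to_gray(pixel):
    """Convert one (R, G, B) pixel to greyscale using ITU-R BT.601 luma weights."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def image_to_gray(image):
    """Convert a nested-list RGB image to a greyscale image."""
    return [[rgb_to_gray(px) for px in row] for row in image]
```

The unequal weights reflect human perception: green contributes most to perceived brightness, blue the least.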

2. Image Processing Techniques – I

To extract information from an image, image processing techniques are applied to it. The extracted features, such as shapes and characters, help in recognizing objects. This week, we will look into the techniques that are used to do so.

The topics that will be discussed are:

  • Image Filters 
  • Gaussian Noise
  • Noise removal methods 
  • Convolution and Correlation operations
  • Mean and Median filters 
  • Template matching methods
  • Edge detection methods: Canny, Sobel, Prewitt, etc.
  • Image Gradients
  • Hough transforms – straight lines, circles and curves
  • Time/Spatial to frequency domain Image conversion
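
The convolution and edge-detection ideas above can be sketched with a Sobel kernel, which responds strongly where intensity changes horizontally (i.e. at vertical edges). A minimal pure-Python sketch (strictly, this applies correlation, since the kernel is not flipped; real code would use OpenCV):

```python
# Sobel kernel for the horizontal intensity gradient
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]

def convolve_at(img, kernel, r, c):
    """Correlate a 3x3 kernel with the image patch centred at (r, c)."""
    return sum(img[r + i - 1][c + j - 1] * kernel[i][j]
               for i in range(3) for j in range(3))

def sobel_x(img):
    """Horizontal-gradient map for the interior pixels of a greyscale image."""
    h, w = len(img), len(img[0])
    return [[convolve_at(img, SOBEL_X, r, c)
             for c in range(1, w - 1)] for r in range(1, h - 1)]
```

On an image with a sharp vertical step, the response is zero in flat regions and large at the edge columns.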

3. Image Geometries and Camera

Images are viewed on an x-y plane, and each of their pixels is traced to extract information, which is facilitated using image geometry and camera models. This week, we will learn about the different geometries that are used in computer vision:

  • Image coordinate system: Different systems 
  • Projective geometry 
  • Perspective Projection 
  • Multiview Geometry 
  • Stereography and Depth Imaging 
  • Stereo Correspondence 
  • Epipolar Geometry 
  • Rigid body transformations 
  • Geometric camera calibration 
  • Multiplane calibration 
  • Finding corners and matching feature points 
  • Performing transformations with intrinsic and extrinsic parameters 

4. Motion Models

A video is a sequence of images. To extract motion information from it, consecutive frames are compared to estimate how objects move between them. The topics that will be discussed under motion models are:

  • Applications of motion modelling 
  • Motion estimation and techniques 
  • Optical flow 
  • Lucas and Kanade methods: Hierarchical and Sparse methods
  • Full motion, general motion and affine motion models 
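
The Lucas–Kanade idea above reduces to a small least-squares problem: within a window, each pixel's spatial gradients (Ix, Iy) and temporal difference It are assumed to satisfy Ix·u + Iy·v + It = 0 for a single shared flow vector (u, v). A minimal sketch on synthetic gradient samples (the window values below are made up purely for illustration):

```python
def lk_flow(samples):
    """Estimate one optical-flow vector (u, v) from (Ix, Iy, It) samples
    in a window, by solving the 2x2 normal equations."""
    sxx = sum(ix * ix for ix, iy, it in samples)
    syy = sum(iy * iy for ix, iy, it in samples)
    sxy = sum(ix * iy for ix, iy, it in samples)
    sxt = sum(ix * it for ix, iy, it in samples)
    syt = sum(iy * it for ix, iy, it in samples)
    det = sxx * syy - sxy * sxy  # zero determinant = the aperture problem
    u = (-syy * sxt + sxy * syt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v

# Synthetic window: I(x, y) = x*y, so Ix = y and Iy = x; a pure
# horizontal shift of 0.5 px gives It = -0.5 * Ix at every pixel.
window = [(y, x, -0.5 * y) for x in (1, 2, 3) for y in (1, 2, 3)]
```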

5. Trackers / Filters in Computer Vision

Filters enhance a single image, but tracking in video means continuously following objects across frames, predicting where they will be next and correcting that prediction. The process that is carried out to do this will be discussed this week.

  • Introduction to Tracking 
  • Feature trackers
  • Steps in tracking, Prediction and Correction 
  • Constant velocity, acceleration models 
  • Kalman filter: Prediction, correction, Intuition and tracking with Kalman filters 
  • Bayes Filters 
  • Particle Filters and Localization using Particle Filters 
  • Real life tracking issues and comparison of different methods 
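
The Kalman filter's predict/correct cycle can be sketched in one dimension. This is a minimal illustrative version that assumes a "position stays put" motion model; the noise parameters q and r are arbitrary example values, not from the course:

```python
def kalman_step(x, p, z, q=0.01, r=1.0):
    """One predict/correct cycle of a 1-D Kalman filter.
    x, p: state estimate and its variance; z: new measurement;
    q: process-noise variance, r: measurement-noise variance."""
    # Predict: the state is assumed unchanged, so only uncertainty grows.
    p = p + q
    # Correct: blend prediction and measurement using the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p
```

Fed a steady measurement, the estimate converges to it and the variance shrinks toward a small steady-state value.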

6. Image Recognition and Classification

There are various types of objects around us, from cats and dogs to humans and trees. One of the applications of computer vision is to differentiate one object from another. Objects are first identified by image recognition and then assigned to categories by image classification. The topics discussed this week are:

  • Introduction to Recognition and classification 
  • Supervised and Unsupervised learning methods 
  • Dimensionality reduction, Principal Component Analysis, discriminative classifiers
  • Machine learning classifiers
  • Feature extractors: HOG, Haar Cascade and CNN models
  • Introduction to Convolutional Neural Networks 
  • Building blocks of Neural Network for computer vision 

7. Video Analysis and Image Segmentation

It is very easy for a human to differentiate one object from another just by looking at it. Computer vision, however, performs this task through a number of steps, and it becomes even harder in videos, where objects must be recognized and detected amid noise. An understanding of how this is done will be given this week.

  • Understanding background and background subtraction methods 
  • Filtering methods for videos 
  • Temporal templates 
  • Moments in images and video 
  • Hidden Markov model
  • Image segmentation 
  • Clustering methods for segmentation 
  • Mean shift algorithms for segmentation 
  • Segmentation by Graph partitioning 
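
The background-subtraction idea above can be sketched with a simple running-average background model (a pure-Python illustration; OpenCV's MOG2 subtractor is a far more robust version of the same idea, and the alpha and threshold values here are illustrative):

```python
def update_background(bg, frame, alpha=0.05):
    """Blend the new frame into a running-average background model."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=25):
    """Mark pixels that differ strongly from the background as foreground."""
    return [[abs(f - b) > thresh for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```

A bright object appearing against a static background lights up exactly its own pixels in the mask, while the background slowly absorbs gradual lighting changes.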

8. 3D Vision

In contrast to 2D vision, 3D vision introduces another dimension: depth. This makes the images we are looking at more realistic. 3D vision is enabled in a number of ways, and we will learn about these methods this week.

  • Introduction to depth imaging 
  • Depth estimation from stereo imaging
  • Role of geometry in depth imaging 
  • IR based depth estimation 
  • Applications of depth imaging 

9. Computer Vision Architectures and Frameworks

To make working with computer vision easier, a number of libraries exist in the form of frameworks. What these are and how they can be used is what we will be covering this week.

  • Introduction to deep learning architectures
  • AlexNet
  • YOLO
  • VGGNet
  • GoogLeNet
  • ResNet
  • Region-based CNN
  • Introduction to deep learning frameworks / libraries for computer vision
  • TensorFlow
  • Keras
  • PyTorch
  • Caffe
  • Constructing a model and training with the frameworks 
  • Inference graph / model generation

10. Data Collection and Synthetic Data Generation

A computer vision model does not learn by looking at an image from a single viewpoint. Instead, multiple augmented copies of each image are generated with the help of algorithms, which is one of the things that gives computer vision its intelligence.

  • Introduction to Data collection 
  • Data acquisition: discovery, augmentation and generation
  • Data labelling: labeling methods and tools used for labeling
  • Using existing data: relabeling and transfer learning methods
  • Introduction to Synthetic data generation 
  • Rendering methods 
  • Procedural randomization 
  • Manual Modeling 
  • 2D and 3D datasets generation 

11-12. Capstone Project

In the last two weeks, five research papers will be provided to the students, who can pick a topic of their choice from among them. This will be followed up with lectures on that topic.

  • 3D Object Detection from 2D Images – Monocular 3D Object Detection Leveraging Accurate Proposals and Shape Reconstruction
  • Human Pose Estimation – DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation
  • Panoptic Segmentation
  • COCO-GAN: Generation by Parts via Conditional Coordinating
  • Hierarchical Multi-Scale Attention for Semantic Segmentation

Projects Overview

Project 1


The objective of this project is to load and display an image, resize images using different methods, and convert images from one colour space to another. The students will also submit a report on their understanding of colour spaces and their advantages and disadvantages.

Project 2


The students will be presented with a vehicle scene containing a noisy image, collected from noisy or blurry environments. The students will have to perform noise removal on the images by applying the different methods covered in class.

Project 3


Lane detection using edge detection and the Hough transform method for curved, merging, and splitting lanes, including the lane-discontinuity challenge.

Project 4


An image sequence of vehicle motion will be provided. LK motion modelling is to be applied to the image sequence to estimate the motion of vehicles/objects in the image.

Project 5


The students will be provided with image sequences of vehicles in frames. The task is to estimate the position and speed of the vehicles in subsequent frames. Additional points will be given for estimating the motion plan of the vehicles.

Project 6


The students will be tasked with creating a dataset, choosing a standard CNN architecture, training, testing, and reporting the results.

Project 7


In this project, a video sequence will be provided for students to perform background subtraction and apply tracking. The video sequence will be of vehicles on the road recorded with a front-facing camera.

Project 8


Students will be presented with a sequence of Images. The objective of the project is to perform semantic segmentation of the images.

Project 9


A 2D vehicle driving dataset will be provided to the students. The task of the project is to apply data augmentation and synthetic data generation methods using the concepts covered in the lectures.

2. Localization, Mapping, and SLAM


1. Introduction to Localization, Mapping, and SLAM

To track the movement of a robot in an unknown environment, its location must be continuously updated while it moves. Tracing the path this way also lets the robot use the resulting map as a reference in the future.

In the first week, we will look into what Localization, Mapping, and SLAM actually are and how they are applied in real life. The topics that will be covered are:

  • Introduction to the course
  • Introduction to state estimation and localization
  • Real-life examples of localization
  • Introduction to mapping
  • Examples
  • SLAM – introduction and examples

2. Kalman Filters

Very often, when a car passes through a tunnel or a location with a lot of disturbance, determining its exact location becomes difficult. These disturbances are called noise, and one needs to overcome this noise to track the car's location continuously. To do this, Kalman filters are used.

Under Kalman filters in this week, we will be covering 

  • MLE and MAP 
  • Bayes Inference 
  • Gaussian Distribution 
  • Kalman filter
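
The Gaussian and Bayes-inference topics above meet in one small formula: fusing a Gaussian prior with a Gaussian measurement (the heart of the Kalman update) yields a new Gaussian whose variance is smaller than either input. A minimal sketch:

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Bayes update: multiply two Gaussian beliefs, e.g. prior x measurement.
    The fused mean weights each input by the other's variance, and the
    fused variance is always smaller than either input variance."""
    mu = (var2 * mu1 + var1 * mu2) / (var1 + var2)
    var = var1 * var2 / (var1 + var2)
    return mu, var
```

For equally uncertain beliefs the fused mean lands exactly halfway, with half the variance.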

3. Extended Kalman Filters and UKF

Nonlinear Kalman filtering comes in two main variants: the Extended Kalman Filter and the Unscented Kalman Filter. While the EKF linearizes the system around the current estimate, the UKF propagates a set of carefully chosen sigma points through the nonlinear model. How one compares with the other and why each is needed will be explained in detail in week 3.

The topics that will be covered in this week are:

  • Introduction to Nonlinear Kalman Filters 
  • EKF 
  • Solved example of EKF 
  • UKF

4. Particle Filter

The particle filter is another type of filter used for state estimation, and one of the most widely used filters after the Kalman filter.

Under particle filters, the topics that will be covered are:

  • Introduction to particle filter 
  • Examples of a particle filter 
  • Comparison between Gaussian and Nonparametric filters
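
A core step of the particle filter is resampling: particles survive in proportion to their weights. A minimal sketch of low-variance (systematic) resampling, with made-up particle values and weights in the test:

```python
import random

def systematic_resample(particles, weights):
    """Low-variance (systematic) resampling: draw one random offset and
    take evenly spaced picks through the cumulative weights, so each
    particle survives roughly in proportion to its weight."""
    total = sum(weights)
    n = len(particles)
    step = total / n
    offset = random.uniform(0, step)
    out, cum, i = [], weights[0], 0
    for m in range(n):
        u = offset + m * step
        while u > cum:
            i += 1
            cum += weights[i]
        out.append(particles[i])
    return out
```

After resampling, almost all copies come from the heavily weighted particle, which is exactly how the filter concentrates belief.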

5. Monte Carlo Localization

Using a range sensor, odometry, and a map of the environment, Monte Carlo Localization estimates the position and orientation of the object of interest.

In this week, the topics that will be covered are:

  • Explaining MCL with example 
  • Properties of MCL 
  • Discussion of problem solving using MCL

6. GNSS/INS Sensing for Pose Estimation

Just as an aircraft needs to be sure of its position before landing, a self-driving car needs to be aware of its path at all times. While driving, the car should stay on the road and remain alert to its environment. This is achieved using sensors such as GNSS and INS.

In this week, we will learn about:

  • Sensor fusion and the need for it
  • Introduction to pose estimation using GNSS and INS
  • Why and where we need sensor fusion
  • Demonstration

7. Camera and Lidar Data Fusion

Cameras and Lidar (Light Detection and Ranging) use two different approaches to detect an object. By fusing these two data sources, the distance of an object from the self-driving car can be estimated accurately.

In this week, we will cover:

  • Camera and LiDAR parameters 
  • Applications 
  • Demonstration

8. Introduction to SLAM/Mapping

During the course of this week, you will be learning about SLAM and mapping. Simultaneous Localization and Mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of the robot's location within it.

In this week we will cover, 

  • Introduction to Mapping and SLAM 
  • Real – Life examples 
  • Features like ORB

9. Occupancy Grid Mapping

During the course of this week, you will be learning about Occupancy Grid Mapping. Occupancy Grid Mapping refers to a family of computer algorithms that address the problem of generating maps from noisy and uncertain sensor measurement data for mobile robots.

In this week we will cover, 

  • Algorithm 
  • Demonstration 
  • Examples
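
The occupancy-grid update is usually done in log-odds form, so each sensor reading becomes a simple per-cell addition. A minimal sketch (the 0.7/0.3 sensor-model probabilities are illustrative assumptions, not values from the course):

```python
import math

L_OCC = math.log(0.7 / 0.3)   # log-odds increment when a cell is hit
L_FREE = math.log(0.3 / 0.7)  # log-odds decrement when a ray passes through

def update_cell(log_odds, hit):
    """Fold one sensor reading into a cell's log-odds occupancy belief."""
    return log_odds + (L_OCC if hit else L_FREE)

def occupancy(log_odds):
    """Convert log-odds back to an occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))
```

Repeated hits drive a cell's probability toward 1, repeated misses toward 0, and noisy contradictory readings simply cancel out.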


10. EKF SLAM

During the course of this week, you will be learning about EKF SLAM. EKF SLAM is a class of algorithms that uses the Extended Kalman Filter (EKF) for Simultaneous Localization and Mapping (SLAM), typically with maximum-likelihood data association.

In this week we will cover,

  • Comparison with EKF localization 
  • Application 
  • Advantages and Disadvantages


11. FastSLAM

FastSLAM is a method to estimate the position of an object as it travels by mapping all the landmarks it encounters along the way. If there are N landmarks in a particular space, the FastSLAM approach takes note of all of them to give the desired result.

In this week, we will be learning about:

  • Application  
  • Advantages and Disadvantages


12. Graph SLAM

One way of estimating the position of a self-driving car or robot is to combine an estimate of its movement with its distance from landmarks. This gives an idea of where the object is most likely to be next, making it easier to place its location on the map.

In the last week of this course, the students will learn about:

  • Graph SLAM 
  • Application 
  • Advantages and Disadvantages 

3. Path Planning & Trajectory Optimization Using C++ & ROS


Robots are programmable machines that influence every aspect of human work and have high potential to take over a range of tasks from humans. For example, it is becoming possible for computers to assist our daily driving; Tesla is one of the best examples.

In the past decades, autonomous vehicles have attracted dramatic attention. But for autonomous vehicles, or robots, to move around, they need commands. Every movement a robot makes is based on programs and involves path planning and trajectory optimization. This course will help the student gain insights into robot motion planning, which is used in autonomous vehicles, warehouse robots, etc.

1. Graph-Based Algorithms

This week, we will introduce you to the tools that will be used during this course. The topics you will be learning are:

  • What are graph-based algorithms?
  • Breadth-First Search Algorithm
  • Depth-First Search Algorithm
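
Breadth-first search, the first of the algorithms above, explores a graph level by level, so the first time it reaches the goal it has found a fewest-edges path. A minimal sketch on an adjacency-list graph:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: return a fewest-edges path from start to
    goal on an adjacency-list graph, or None if the goal is unreachable."""
    queue = deque([start])
    parent = {start: None}  # also serves as the visited set
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:  # walk parent links back to start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in graph.get(node, []):
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return None
```

Depth-first search has the same skeleton with a stack in place of the queue, but it does not guarantee a shortest path.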

2. Configuring Space for Motion Planning

During the course of this week, you will be learning about C-space, i.e., configuration space. The C-space is the set of all possible configurations (positions) the robot can occupy.

The topics you will be learning this week are:

  • How to use the Configuration Space
  • Representing Configuration space as a graph
  • Planning using Visibility Graph
  • Finding the shortest path
  • Dijkstra's Algorithm, A*, Bellman-Ford
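
Dijkstra's algorithm from the list above can be sketched with a priority queue. A minimal sketch for non-negative edge weights (the example graph in the test is made up):

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path distances from start over non-negative edge weights.
    graph: {node: [(neighbour, weight), ...]}"""
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return dist
```

A* adds a heuristic to the queue priority, and Bellman-Ford trades speed for support of negative edge weights.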

3. Random Sampling-Based Motion Planning

During the course of this week, you will be learning about sampling-based motion planning, which solves navigation queries. Instead of depending on an explicit map of the entire C-space, the robot relies on procedures that decide whether a given configuration is close to an obstacle or not.

The topics you will be learning this week are:

  • Various types of Rapidly-exploring Random Trees (RRT)
  • Application of RRTs
  • Path planning using the RRT algorithm
  • Setting up Ubuntu environment for the next week
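
A bare-bones RRT can be sketched as: sample a random configuration, find the nearest tree node, and step a fixed distance toward the sample. This illustrative version works in an obstacle-free square; a real planner would collision-check each extension:

```python
import math
import random

def rrt(start, goal, n_iter=300, step=0.5, lo=0.0, hi=10.0, seed=1):
    """Minimal RRT in an obstacle-free square: grow a tree from `start`
    by stepping toward random samples; stop when a node lands near `goal`.
    Returns the parent map, whose links can be walked back for a path."""
    random.seed(seed)
    parent = {start: None}
    nodes = [start]
    for _ in range(n_iter):
        sample = (random.uniform(lo, hi), random.uniform(lo, hi))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d < 1e-9:
            continue
        # step a fixed distance from the nearest node toward the sample
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) <= step:
            parent[goal] = new
            return parent
    return parent
```

The random sampling biases growth toward unexplored space, which is what makes RRTs effective in high-dimensional C-spaces.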

4. Robot Operating System

During the course of this week, you will be learning about ROS. ROS is a robotics middleware that manages the complexity and heterogeneity of hardware and applications. It also provides low-level device control, implementations of commonly used functionality, message passing between processes, and package management.

The topics you will be learning this week are:

  • Setting up ROS 
  • Following instructions on the ROS website
  • Adding ROS to the docker container
  • A basic introduction to CMake
  • Programming using ROS
  • Introduction to the 3D visualization tool RViz
  • Difference between 
    • ROS/RTOS
    • ROS1/ROS2
    • DDS
    • Middleware

5. Motion Planning with Non-Holonomic Robots

During the course of this week, you will be learning about motion planning with non-holonomic robots. Non-holonomic robots are subject to constraints on how they can move: a car-like robot, for example, can drive forward and backward and turn, but it cannot slide directly sideways.

During this week you will learn:

  • Path and speed planning
  • Trajectory representations
    • Splines
    • Clothoid
    • Bezier curves
    • Polynomials
  • Introduction to the Frenet frame
  • Planning in Frenet frame
  • Boundary value constraint problem and methods
  • Pointwise constraint problem and methods
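
The polynomial trajectory representations listed above boil down to fitting coefficients to boundary conditions. A minimal sketch for a cubic segment that matches position and velocity at both ends (quintics additionally match acceleration; the boundary values in the test are made up):

```python
def cubic_coeffs(x0, v0, x1, v1, T):
    """Coefficients (a0, a1, a2, a3) of x(t) = a0 + a1*t + a2*t^2 + a3*t^3
    meeting position/velocity boundary conditions at t = 0 and t = T."""
    dx = x1 - x0
    a2 = (3 * dx - (2 * v0 + v1) * T) / T**2
    a3 = (-2 * dx + (v0 + v1) * T) / T**3
    return x0, v0, a2, a3

def eval_cubic(coeffs, t):
    """Position and velocity of the cubic at time t."""
    a0, a1, a2, a3 = coeffs
    x = a0 + a1 * t + a2 * t**2 + a3 * t**3
    v = a1 + 2 * a2 * t + 3 * a3 * t**2
    return x, v
```

Chaining such segments, often expressed in the Frenet frame, yields a smooth trajectory with continuous velocity at the joints.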

6. Mobile Robot Collision Detection

During the course of this week, you will be learning about mobile robot collision detection. Here, the robot detects a potential collision and changes its trajectory to avoid the contact as quickly as possible and move away safely.

During this week you will learn:

  • Collision detection for static Obstacles
  • Motion prediction for dynamic obstacles
  • Motion prediction in Frenet frame with Kalman filters
  • Collision prediction for dynamic obstacles

7. Hierarchical Planning for Autonomous Robots

During the course of this week, you will be learning about hierarchical planning for autonomous robots. Hierarchical planning optimizes the global path while requiring only a modest amount of time for path replanning operations.

During this week you will learn:

  • Route planning, A*, D*, D* lite
  • HD Maps, SD Maps
  • Behavior planning - State Machines, Decision Tree, Behavior Tree, etc.
  • Behavior and Motion Planning integration

8. Trajectory Planning

During the course of this week, you will be learning about trajectory planning. Trajectory planning plays a major role in robotics and paves the way for autonomous vehicles. At its core, it is the movement of a robot from point A to point B over time while avoiding obstacles.

During this week you will learn:

  • Polynomial Planners
  • Lattice Planners
  • Collision checking
  • Trajectory selection (Cost functions)

9. Planning Algorithms

During the course of this week, you will be learning the theoretical concepts behind planning algorithms.

The topics you will be learning this week are:

  • Dynamic programming for path planning over the long horizon
  • MPC planner for the short horizon

10. Planning in Unstructured Environments

During the course of this week, you will be learning about planning in unstructured environments, such as off-road terrain and parking lots. In this type of environment, the robot should identify the optimal path between the start and the goal, which requires a suitable path planning algorithm.

During this week you will learn:

  • Path planning for a racing environment 
  • The Autonomous Racing Software Stack of the KIT19d
    • Covers essential modules like
      • Perception
      • Localization
      • Mapping
      • Motion planning and control

11. Reinforcement Learning for Planning

During the course of this week, you will be learning about reinforcement learning for planning. Reinforcement learning is a machine learning method with increasing application in robot path planning. The robot explores its surrounding environment and learns by trial and error. This method has advantages for path planning and requires little prior information.

The topics you will be learning this week are:

  • Markov process and Bellman’s principle of optimality
  • Value and Policy iteration
  • RL classifications: On/Off Policy, Model-based/free
  • TD learning/ SARSA, Monte Carlo, Dynamic programming, Q learning
  • DQN
  • Highway driving example
  • Parking lot example


12. Conclusion

Now that we have covered the major parts of the course, we will move on to the concluding week.

During this week we will be 

  • Recapping what we learned
  • Tips on how to stand out in a job search
  • Tips on how to pursue academics in this field

Projects Overview

Project 1


A project on the design and implementation of different graph-based trajectory planners in a partially known static environment. Students will design different graph-based algorithms and test their performance in a partially known static environment. In partially known static environments, only static obstacles are present, but the layout of the environment changes as the agent acquires new information.

Key Highlights:

  • Brings together the first 6 weeks of content.
  • Understanding of heuristic-based path planning.
  • Analyzing the complexity and computation time of each algorithm using different environments, e.g. small, medium and large.
  • Parametrization of the planners and analysis of the effect of each parameter on performance.


  • Detailed analysis of the search algorithms' behavior on a given sample navigation scenario.
  • Analysis of the effect of concave obstacles.
  • Detailed performance tests.

Project 2


A project on the design and implementation of a motion planner for an autonomous car in a realistic dynamic environment. The motion planner must plan a collision-free trajectory for the vehicle that leads it to a given destination while considering other road users (e.g. other vehicles on the traffic network).

Key Highlights:

  • Exposure to and hands-on experience with ROS.
  • Programming in a very realistic environment in which a data abstraction decouples the planning algorithm from the environment model.
  • Using a high-definition road-map representation.
  • Consideration of a vehicle dynamic model.


  • Detailed description of the data flow, based on different interfaces, to the motion planner.
  • Detailed description of the vehicle dynamic model and its role in the planning process.
  • Detailed documentation of the planner.

4. Autonomous Vehicle Controls using MATLAB and Simulink

1. Course Overview

Autonomous vehicles are one of the fastest-evolving fields of recent years. Research and development in the field is continuously increasing, and engineers are persistently striving to simplify and improve these systems. However, the control of autonomous vehicles remains one of the major challenges.

The first week of the course will give you an overview of the autonomous vehicle controls. The topics that we will cover in the first week include: 

  • Introduction
  • Overview and motivation behind autonomous vehicle controls 
  • Need for study 
  • Overview of autonomous systems engineering 
  • Program Management 
  • System Engineering
  • How do automotive companies work? 

2. Classical Controls Overview

Stability plays a crucial role in determining the safety and performance of vehicles.  In the case of autonomous vehicles, it deserves even more attention. To ensure stability and to perform all the required functions in an efficient manner, autonomous vehicles employ control systems.

The second week of the course will give you an overview of classical controls. The topics that will be covered here include: 

  • Stability
  • Pole-zero methods
  • Transient performance
  • Disturbance and tracking
  • Other classical control definitions
  • PID systems
  • Analysis and solving
  • Examples of P, I, PI & PID control
  • Gain selection
  • A brief explanation of solvers in Simulink
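
The PID structure listed above can be sketched directly. This is a minimal textbook discrete PID; the gains and the toy speed plant in the test are illustrative assumptions, not values from the course (which builds these models in Simulink):

```python
class PID:
    """Textbook discrete PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """Return the control output for the current tracking error."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

The integral term is what removes the steady-state error a pure P controller would leave against a constant disturbance.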

3. Start of Project: Adaptive Cruise Control

Adaptive cruise control ensures safety by keeping the vehicle at a safe distance from vehicles ahead. It works by automatically altering the vehicle's speed based on the circumstances.

The third week of the course includes a project that involves adaptive cruise control. The topics that will be covered in this week include:

  • Overview of adaptive and normal cruise control
  • Start of project with longitudinal model
  • Longitudinal dynamic modelling
  • Aerodynamic drag
  • Rolling resistance
  • Linearizing longitudinal dynamics
  • Longitudinal dynamics block diagram in Simulink
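
The longitudinal model above combines traction with aerodynamic drag and rolling resistance. A minimal flat-road sketch (all parameter values here are illustrative assumptions, not course data; the course builds the equivalent block diagram in Simulink):

```python
def longitudinal_accel(v, f_traction, m=1500.0, rho=1.225, cd=0.3,
                       area=2.2, crr=0.015, g=9.81):
    """Net longitudinal acceleration of a vehicle on a flat road:
    traction force minus aerodynamic drag (quadratic in speed) and
    rolling resistance (roughly constant)."""
    f_drag = 0.5 * rho * cd * area * v * v
    f_roll = crr * m * g
    return (f_traction - f_drag - f_roll) / m
```

Because drag grows with the square of speed, the same traction force produces less acceleration at highway speed than in town, which is why the dynamics are linearized around an operating point before controller design.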

4. Longitudinal Controller Design

A longitudinal controller regulates the cruise speed of the vehicle. It is a system of sensors, control computation and control actuation components. The fourth week of the course deals with the design of longitudinal controllers for autonomous vehicles. 

The topics of this week are: 

  • Transfer function modeling in Simulink
  • Modeling control system
  • Controller design 

5. Adaptive Vehicle Speed Control

Other than safety, adaptive cruise control offers convenience to drivers. It keeps the vehicle steady by adjusting its speed and also gives drivers the option to set their own preferences.

The fifth week of the course covers topics like:

  • Overview of the least-squares method
  • Adaptive control for vehicle speed control

6. Adaptive Cruise Control: ADAS Modeling

This part of the course deals with the ADAS modeling of Adaptive Cruise Control. Here, you will get to know about the sensors used, mathematical model, basics of Linear Quadratic Regulator, state model, etc. 

The topics of the week include 

  • Introduction to level 2 automation & sensors used
  • Mathematical model
  • Introduction to LQR basics
  • State model 

7. Adaptive Cruise Control (Continued)

This week also covers the modeling of Adaptive Cruise Control. You will get to know about the design method and modeling of  ACC. Topics of the seventh week include: 

  • Design method and modeling 
  • Simulink modeling for ACC

8. Improvements to Adaptive Cruise Control

Cooperative Adaptive Cruise Control (CACC) is an extension of Adaptive Cruise Control that connects autonomous vehicles to one another. In addition to regulating vehicle speed to maintain a safe distance, CACC lets vehicles cooperate by communicating with each other. 

The eighth week of the course deals with improvements to adaptive cruise control, covering the topics of: 

  • Cooperative adaptive cruise control 
  • Introduction and modeling
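A common way to sketch CACC is a constant time-gap spacing policy plus a feedforward term driven by the lead vehicle's acceleration, received over the vehicle-to-vehicle link. The Python fragment below is an illustration under that assumption; the gains and time gap are made-up values, not course parameters.

```python
def cacc_accel(gap, v_ego, v_lead, a_lead,
               t_gap=1.2, d0=5.0, kp=0.4, kd=0.8, kff=0.7):
    """Acceleration command (m/s^2) for a simple CACC follower.
    gap is the measured distance to the lead vehicle; a_lead is the
    lead's acceleration received over V2V.  All gains and the time
    gap are illustrative assumptions."""
    gap_des = d0 + t_gap * v_ego        # constant time-gap spacing policy
    e = gap - gap_des                   # spacing error (m)
    e_dot = v_lead - v_ego              # rate of change of the gap (m/s)
    return kp * e + kd * e_dot + kff * a_lead
```

The feedforward term is what distinguishes CACC from plain ACC: the follower reacts to the leader's braking as soon as it is broadcast, rather than waiting for the gap to shrink.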

9. Lateral Control Model

Proper navigation of an autonomous vehicle is achieved through longitudinal and lateral control. While longitudinal control regulates the vehicle's cruise speed, lateral control steers the wheels to keep the vehicle in its lane. In other words, it handles lane keeping and lane changing. 

The topics that will be discussed this week are: 

  • Introduction to lateral control 
  • Lateral model and tire model
  • Bicycle model
  • Lane centering logic discussion
  • Model lane centering logic
  • Assignment on controller design and tuning 
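The bicycle model in the list above collapses the four wheels onto a single front and rear wheel. Here is a minimal kinematic version in Python (the course builds this in Simulink; the wheelbase and step size are illustrative assumptions):

```python
import math

def bicycle_step(x, y, yaw, v, delta, L=2.7, dt=0.05):
    """One forward-Euler step of the kinematic bicycle model with the
    reference point at the rear axle.  delta is the front steering
    angle (rad); the wheelbase L is an illustrative value."""
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += (v / L) * math.tan(delta) * dt
    return x, y, yaw

# A constant steering angle traces out a circle of radius L / tan(delta)
x, y, yaw = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, yaw = bicycle_step(x, y, yaw, v=10.0, delta=0.1)
```

Despite its simplicity, this model is accurate enough at moderate speeds to design and tune lane keeping controllers against; tire models refine it when lateral forces matter.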

10. Continuation of Lane Centering Feature

Lane centering is a feature that maintains the vehicle's position at the centre of the lane by automatically steering it. This week continues the modeling of lateral control. 

Topics that will be covered this week include: 

  • Lane control modeling 
  • Lane centering feature 
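The lane centering idea can be sketched as a feedback law on two quantities the perception stack provides: the lateral offset from the lane centre and the heading error. The Python fragment below closes that loop on simplified small-angle lateral kinematics; the gains, speed, and saturation limit are illustrative assumptions, not the course's design.

```python
def lane_center_steer(lat_offset, heading_err,
                      k_y=0.05, k_psi=0.8, max_delta=0.5):
    """Steering angle (rad) that drives the lateral offset (m) and
    the heading error (rad) to zero, saturated at an actuator limit.
    All gains are illustrative assumptions."""
    delta = -k_y * lat_offset - k_psi * heading_err
    return max(-max_delta, min(max_delta, delta))

# Close the loop on small-angle lateral kinematics: starting 1 m off
# centre, the controller steers the vehicle back to the middle.
y, psi, v, L, dt = 1.0, 0.0, 15.0, 2.7, 0.02
for _ in range(2000):                        # 40 s of simulated driving
    d = lane_center_steer(y, psi)
    y += v * psi * dt                        # lateral offset
    psi += (v / L) * d * dt                  # heading error
```

The heading-error gain is what damps the response; with the offset term alone the vehicle would weave across the centre line.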

11. Lane Centering Feature Modification

The eleventh week also deals with lane centering, covering modifications to the lane centering feature. The topics of this week include:

  • Lane biasing
  • Introduction to logic
  • Feature modeling 
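Lane biasing shifts the lane centering target away from the exact centre, for example to give extra clearance to a vehicle in an adjacent lane. A tiny illustrative sketch follows; the occupancy flags, the left-positive sign convention, and the 0.3 m bias are all assumptions, not course values.

```python
def biased_target_offset(lane_width, left_occupied, right_occupied,
                         bias=0.3):
    """Target lateral offset (m) from the lane centre, positive to
    the left.  Shifts away from an occupied adjacent lane; all
    values are illustrative assumptions."""
    margin = lane_width / 2.0 - 1.0          # keep clear of the lane edge
    b = min(bias, max(margin, 0.0))
    if left_occupied and not right_occupied:
        return -b                            # bias to the right
    if right_occupied and not left_occupied:
        return b                             # bias to the left
    return 0.0                               # stay centred
```

The biased offset then simply replaces zero as the set point of the lane centering controller.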

12. Major Project

In the final week of this course, students will work on the major project: developing a Level 2 system with Adaptive Cruise Control and Lane Change Assist.

Projects Overview

Project 1


A project to implement a driver-assist feature for green wave traffic. A green wave occurs when a series of traffic lights is coordinated to allow continuous traffic flow through several intersections. Green wave traffic assist is a feature that recommends vehicle speeds so that the vehicle can ride the green wave and minimize its stop time in traffic.

Key Highlights:

  • Apply an algorithmic approach to problem solving
  • Use design thinking to define requirements and implement the algorithm
  • Use simple kinematics concepts to implement the algorithm
  • Gain hands-on experience in MATLAB scripting
  • Use MATLAB to generate simulation results


  • Show a flowchart diagram for the algorithm's pseudocode
  • Plot the green wave recommended velocity vs. distance to the traffic light
  • Comment on and compare the output graphs
  • Plot the relevant flags and messages for each scenario
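The kinematics behind the recommendation is simple: a constant speed v reaches a light at distance d after d/v seconds, so the feasible speeds are those whose arrival time falls inside the green window. A hedged Python sketch of that calculation follows (the speed limits and the choice of the fastest feasible speed are assumptions for illustration; the project itself is done in MATLAB).

```python
def green_wave_speed(dist_m, t_green_start, t_green_end,
                     v_min=5.0, v_max=16.7):
    """Recommend a constant speed (m/s) that arrives at the next
    light within its green window [t_green_start, t_green_end]
    (seconds from now), or None if no admissible speed exists.
    The speed limits are illustrative assumptions."""
    if t_green_end <= 0:
        return None                          # window already over
    v_hi = v_max if t_green_start <= 0 else dist_m / t_green_start
    v_lo = dist_m / t_green_end              # slowest speed that still makes it
    lo, hi = max(v_lo, v_min), min(v_hi, v_max)
    if lo > hi:
        return None                          # cannot ride this green wave
    return hi                                # fastest admissible speed
```

Plotting this recommendation against the distance to the traffic light, as the deliverables ask, shows the speed band narrowing as the vehicle approaches the intersection.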

Project 2


In this project, students will integrate the various features to develop an integrated automated driving model. The model includes the previously discussed highway assist (ACC + LCA), auto lane change, predictive speed assist, and intelligent speed assist. Students will then test scenarios that cover all the control functionalities of every feature and provide plots showing the working of the model.
Students will also develop a new feature model for a Minimum Risk Maneuver (MRM), as covered in week 11. The flow chart and pseudocode for the MRM will be provided. Students first implement a function and perform unit testing on the model to show that all the states are working. Finally, a scenario for the MRM will be given, in which the student integrates the MRM block.

Key Highlights:

  • Complete lateral and longitudinal feature design implementation
  • Feature model integration using MATLAB and Simulink
  • Unit testing for functional blocks
  • New feature design requirements and implementation
  • Complete simulation for lateral, longitudinal, and MRM features


  • Model implementation and executable
  • Show suitable plots and simulation results for functional testing
  • Show results for defined scenarios
  • Flow chart for new feature implementation and simulink implementation
  • Feature simulation
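Since the MRM flow chart and pseudocode are provided in the course, the Python sketch below is only a generic illustration of what such a state machine can look like: the states, transition conditions, and timeout are invented for this example, not taken from the course material.

```python
# Illustrative MRM states - not the course's actual flow chart
NORMAL, WARN, SLOW_DOWN, STOPPED = "NORMAL", "WARN", "SLOW_DOWN", "STOPPED"

def mrm_step(state, driver_responsive, speed, t_in_warn,
             warn_timeout=5.0):
    """One transition of a simplified minimum-risk-maneuver state
    machine.  All states and thresholds are illustrative assumptions."""
    if state == NORMAL:
        return NORMAL if driver_responsive else WARN
    if state == WARN:
        if driver_responsive:
            return NORMAL                    # driver took back control
        return SLOW_DOWN if t_in_warn >= warn_timeout else WARN
    if state == SLOW_DOWN:
        return STOPPED if speed <= 0.1 else SLOW_DOWN
    return STOPPED                           # terminal safe state
```

Unit testing the block, as the project asks, then amounts to asserting that every state reacts correctly to each input combination.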

Flexible Course Fees

Choose the Master’s plan that’s right for you


12 Months Access


Per month for 10 months

  • Access Duration : 12 Months
  • Mode of Delivery : Online
  • Project Portfolio : Available
  • Certification : Available
  • Individual Video Support : 8/Month
  • Group Video Support : 8/Month
  • Email Support : Available
  • Forum Support : Available

Lifetime Access


Per month for 10 months

  • Access Duration : Lifetime
  • Mode of Delivery : Online
  • Project Portfolio : Available
  • Certification : Available
  • Individual Video Support : 24x7
  • Group Video Support : 24x7
  • Email Support : Available
  • Forum Support : Available
  • Telephone Support : Available
  • Dedicated Support Engineer : Available


Companies hire from us



  • Top 5% of the class will get a merit certificate
  • Course completion certificates will be provided to all students
  • Build a professional portfolio
  • Automatically link your technical projects
  • E-verified profile that can be shared on LinkedIn



The Skill-Lync Advantage
