Master's Certification Program in Sensor Fusion

This program offers deep insight into the sensors and localization systems of autonomous vehicles. The course begins with the math behind ML and AI, dives deeper into sensors such as camera, LiDAR, and radar, and explores localization, giving the student greater insight into how an autonomous vehicle works.

  • Domain: CSE, AUTONOMOUS

Program Outcomes

Humans have five senses that help them make accurate decisions in any situation. As technology develops, the main aim today is to make machines capable of making such decisions when faced with a situation.

While you can train a machine to operate in a fixed way, say, to pick up an object and put it in the same place over and over again, how would you train a machine to adapt to a continuously changing environment and make an accurate decision within seconds, when lives depend on it?

This is where sensor fusion steps in. Sensor fusion, as the name suggests, takes input from multiple sensors, for example LiDAR, radar, and camera, and uses this combined information to produce accurate results. In a typical scenario involving autonomous cars, the radar senses the surroundings to measure the distance to an object, while the cameras determine what the object is. LiDAR also helps in detecting objects; however, on its own it lacks the accuracy that radar and camera together provide.
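
To make the idea concrete, independent range estimates from two sensors can be combined by weighting each estimate by its confidence. Below is a minimal, hypothetical Python sketch of inverse-variance weighting, one of the simplest fusion rules; the readings and noise figures are invented for illustration:

    import numpy as np

    # Hypothetical range estimates (meters) and their noise variances
    radar_range, radar_var = 20.4, 0.25    # radar: precise range measurement
    camera_range, camera_var = 19.8, 1.00  # camera: noisier range, but identifies the object

    # Inverse-variance weighting: trust the less noisy sensor more
    w_radar, w_camera = 1.0 / radar_var, 1.0 / camera_var
    fused_range = (w_radar * radar_range + w_camera * camera_range) / (w_radar + w_camera)
    fused_var = 1.0 / (w_radar + w_camera)

    print(f"fused range: {fused_range:.2f} m, variance: {fused_var:.2f}")  # 20.28 m, 0.20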

These sensors depend heavily on their algorithms, and accuracy is continually improved through techniques such as clustering and segmentation. The more adept a sensor fusion system is, the better the performance of any technology that depends on it. Thus, at an individual level, you would be working on a technology that helps make human life easier.

If you have an interest in ADAS technology, IoT, and related fields, then we have more to offer you. So far we have covered what sensor fusion does. If you wish to learn in depth how you can contribute to this technology by building your knowledge of it, enroll in this program.

This course on sensor fusion is divided into 5 modules, each of which is a step-by-step, in-depth guide to understanding a phase of sensor fusion better. The modules in this course are:

  1. Math behind Machine Learning & Artificial Intelligence using Python
  2. Introduction to Camera Systems Using C++
  3. LiDAR Sensor and Data Processing Using ROS/Linux
  4. Radar Sensor Processing Using MATLAB
  5. Localization, Mapping & SLAM Using ROS, C++ & Python

Each of these modules is supported with weekly challenges and 2 projects per module to help students better understand the course material while gaining practical, industry-oriented knowledge.


GET COURSE COUNSELLING TODAY

Get a 1-on-1 demo from an experienced consultant to understand what is included in the course and how it can benefit you. The demo session will help you enroll in this course with a clear vision and confidence.

Request a Demo Session

Course-wise syllabus

1. Math Behind Machine Learning & Artificial Intelligence Using Python

The first module gives students insight into drawing the most favorable results from the sensors. Students will essentially create algorithms to get the best results for a given situation. In addition, students will be given sensor data to process, increasing their practical knowledge of the subject.

2. Introduction to Camera Systems Using C++

This module is a detailed study of cameras and the camera systems used in today's ADAS/autonomous driving vehicles. We study how a camera is constructed, and we examine its different parts and their effect on an image. We also cover image formation and different types of camera models, along with processing of the resulting images, including an introduction to basic image processing techniques and algorithms.

3. LiDAR Sensor and Data Processing Using ROS/Linux

This module gives the student a practical course on LiDAR as a sensor and on using LiDAR in sensor fusion and localization problems. The course covers the hardware along with the various algorithms and principles needed to use that hardware for robotics and autonomous driving applications.

4. Radar Sensor Processing Using MATLAB

The students will gain in-depth knowledge of the RADAR system, along with an understanding of Advanced Driver-Assistance Systems and Autonomous Driving.

The students will get hands-on experience with the embedded device and develop their skills in programming a sensor. Their knowledge will grow beyond electronics design; using what they learn in this course, students can find job opportunities in the core industry or as freelancers, and they will be well placed to move into an embedded systems environment.

5. Localization, Mapping & SLAM Using ROS, C++ & Python

This course is about the techniques a modern robotic system uses to infer its position from its sensors, and the algorithms a robot uses to build an understanding of the environment around itself. The course includes topics from probabilistic robotics, specifically Bayesian filtering techniques, covering both the theory and its practical application as part of the current state of the art for autonomous robotic systems.


1. Math behind Machine Learning & Artificial Intelligence using Python - Syllabus

1. Basic concepts

  • Sets
  • Subsets
  • Power set
  • Venn Diagrams
  • Trigonometric functions
  • Straight lines
  • A.M., G.M. and H.M.
  • Concepts of Vectors

2. Permutations & Combinations

  • Introduction & basics
  • Fundamental principle of counting
  • Permutations
  • Combinations

3. Statistics - I

  • First business moment
  • Second business moment
  • Third business moment
  • Fourth business moment

4. Probability - I

  • Introduction
  • Random experiments
  • Conditional probability
  • Joint probability

5. Statistics - II

  • Z Scores
  • Confidence interval
  • Correlation
  • Covariance

6. Probability - II

  • Introduction
  • Uniform Distribution
  • Normal Distribution
  • Binomial Distribution
  • Poisson Distribution

7. Likelihood (for Logistic regression)

  • Introduction
  • Odds
  • Log odds
  • Maximum likelihood vs probability
  • Logistic regression
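
To make the odds and log-odds topics above concrete, here is a minimal Python sketch (the numbers are invented for illustration) of how logistic regression models log odds with a linear function and inverts them back to a probability with the sigmoid:

    import numpy as np

    # Hypothetical probability of an event
    p = 0.8
    odds = p / (1 - p)       # 4.0, i.e. "4 to 1 in favor"
    log_odds = np.log(odds)  # ~1.386

    # Logistic regression predicts log odds as w*x + b,
    # then recovers a probability via the sigmoid.
    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    w, b = 2.0, -1.0   # hypothetical weight and bias
    x = 1.2            # hypothetical feature value
    z = w * x + b      # predicted log odds
    print(sigmoid(z))  # predicted probability, ~0.80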

8. Gradient descent (for Linear & Logistic regression)

  • Loss function
  • Cost function
  • Gradient descent for linear regression
  • Gradient descent for logistic regression
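
As a minimal sketch of the topics above (the data and learning rate are invented for illustration), here is gradient descent for linear regression with a mean-squared-error cost; the same loop serves logistic regression once the sigmoid and log-loss are swapped in:

    import numpy as np

    # Hypothetical data from y = 3x + 2 plus a little noise
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 100)
    y = 3 * x + 2 + rng.normal(0, 0.1, 100)

    w, b = 0.0, 0.0  # initial parameters
    lr = 0.5         # learning rate

    for _ in range(1000):
        error = (w * x + b) - y      # prediction error
        dw = 2 * np.mean(error * x)  # dCost/dw for the MSE cost
        db = 2 * np.mean(error)      # dCost/db
        w -= lr * dw                 # step against the gradient
        b -= lr * db

    print(w, b)  # approaches 3 and 2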

9. Linear Algebra (for PCA)

  • Matrices
  • Types of matrices
  • Operation on matrices
  • Eigenvalues
  • Eigenvectors

10. Derivatives (for Neural networks)

  • Derivatives
  • Intuitive idea of derivatives
  • Increasing & decreasing function

11. Backpropagation (for Deep learning)

  • Chain rule
  • Maxima & minima
  • Backpropagation
  • Cost function for deep learning

12. Python

  • Basics of Python
  • If else
  • For loop
  • Data types


Projects Overview

Project 1

Highlights

Perform the logistic regression algorithm taught in the course with both simple random sampling and stratified sampling, and identify the differences in the results (data will be given in class).

Project 2

Highlights

Perform gradient descent for an ANN in Python (data will be given in class).


2. Introduction to Camera Systems Using C++ - Syllabus

1. Camera Construction

  • Introduction to geometrical construction
  • Introduction to optical construction 
  • Introduction to Camera Types
  • Camera Sensor types – CCD, CMOS 
  • Camera Sensor types – RGGB, RCCB, RCCC
  • Different Lens Types – Normal vs Fisheye
  • Optical Parameters – Exposure time, Shutter, White Balance, Gain

2. Camera Calibration

  • Introduction to Camera Calibration
  • Introduction to Camera Parameters
  • Calibration Techniques 
  • Calibration for intrinsic vs Extrinsic 
  • Image Undistortion
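
Although this module works in C++, the calibration workflow above can be sketched compactly with OpenCV's Python bindings. This is a minimal sketch; the chessboard size and file paths are assumptions made for illustration:

    import glob
    import cv2
    import numpy as np

    # Assumed 9x6 inner-corner chessboard; its 3D points lie on the z = 0 plane
    pattern = (9, 6)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for path in glob.glob("calib/*.png"):  # assumed folder of calibration views
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Intrinsic matrix K and distortion coefficients estimated from all views
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    # Undistort any frame from the same camera
    frame = cv2.imread("frame.png")  # assumed test image
    cv2.imwrite("undistorted.png", cv2.undistort(frame, K, dist))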

3. Camera Models

  • Different Camera Models
  • Pinhole model, Perspective model, fisheye model
  • Lens Distortion – Barrel/Radial, Pincushion
  • Depth of Field, Field of View
  • Effects on changing aperture

4. Projective Geometry

  • Dimensionality Reduction
  • What is lost / Preserved?
  • Vanishing Lines & Points
  • World to Image Projection
  • Rotation and Translation
  • Orthographic Projection

5. Stereo Vision

  • Basic Idea of Stereo
  • Components of Stereo Vision
  • Stereo Correspondence
  • Image Normalization vs Histogram Warping
  • Pixel Based vs Edge Based vs Segmentation Based Stereo
  • Stereo Matching with Dynamic Programming
  • Disparity Maps and their uses.
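
As a minimal sketch of the disparity-to-depth idea above (OpenCV in Python; the file names and calibration values are assumptions), block matching yields a disparity map from a rectified stereo pair, and depth follows from Z = f * B / d:

    import cv2
    import numpy as np

    # Assumed rectified stereo pair
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block-matching stereo correspondence
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    # Depth from disparity: Z = f * B / d (f in pixels, baseline B in meters)
    f, baseline = 700.0, 0.12  # assumed calibration values
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = f * baseline / disparity[valid]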

6. Camera Systems

  • FLIR Camera
  • Fisheye Camera – Continental
  • Stereo Camera
  • Low-FOV Long-range Cameras
  • Resolution (Megapixel…)
  • Different Uses for each of them

7. Image Pre-Processing

  • Image Color Spaces
  • Color Space Conversions (RAW -> RGB, RGB -> Grayscale, RGB -> YUV, …)
  • Image Formation, Sampling, Sub-Sampling, Quantization
  • Image Interpolation, Extrapolation
  • Image Normalization
  • Image Noise – Salt-and-Pepper Noise, Gaussian Noise, Impulse Noise
  • Image Erosion/Dilation

8. Image Processing - 1 (Transformations)

  • Basic Transformations and Filtering
    • Domain Transformations
    • Noise Reduction
    • Filtering as Cross-Correlation
    • Convolution
    • NonLinear Filtering

9. Image Processing - 2

  • Basic Image Filtering and Detection techniques
    • Corners Detection
    • Edge Detection
    • Contour Detection
    • Image Thresholding
    • Histogram Binning Technique
    • Histogram Equalization

10. Image Processing - 3

  • Features and Image Matching
    • Image Features, Invariant Features (Geometrical, Photometric Invariance) 
    • Harris Detector, Eigen based Features
    • Image Descriptors
    • SIFT, SURF, SSD, … 
    • Mosaics and Image Stitching

11. Image Processing - 4

  • Introduction to Structure from Motion (SFM)
    • Camera Model and Coordinate systems
    • Epipolar Constraint
    • Deriving the Essential Matrix
    • 3D Reconstruction
    • Bundle Adjustment
    • SLAM example 
    • SVD approach to SFM

12. Introduction to Embedded Systems

  • Camera Interfaces, e.g. GMSL, LVDS
  • Communication Protocol – I2C 
  • Camera Initialization Sequence 
  • Automated Exposure Gain (AEG) Control 
  • Vision Processing Units (VPU) 
  • Graphics Processing Units


Projects Overview

Project 1

Highlights

A project on calibrating the camera to produce an undistorted video for use in different applications. The project also focuses on using stereo vision data recorded while driving on roads to create a depth map and identify the different parts of the scene.

Key Highlights:

  • Consolidation of the first 6 weeks of content.
  • Practical implementation of Camera Calibration concepts
  • Understanding the data type for the calibration matrix.  
  • Understanding the analysis of Stereo Images 
  • Understanding the concept of the disparity map
  • Understanding the depth map

Deliverables:

  • Camera matrix saved to a file so that it is accessible to other programs.
  • Undistorted video stream from the webcam (Recorded video in this case)  
  • Recorded Video of the disparity map
  • Recorded Video of the depth map
  • Recorded Video of generated regions on the scenes.

Project 2

Highlights

A project to detect parking spaces from video obtained from fisheye cameras installed on the car. As the car drives through the parking lot, frames from each of the cameras are stitched into a bird's-eye view, providing a top-down view of the parking lot. Using the classical vision techniques we have come across in this course, we should be able to identify the parking spaces. This is a real-world problem being worked on in industry.

Key Highlights:

  • Understanding the data from the bird's-eye view perspective.
  • Understanding the best preprocessing techniques to get a better image from the data.
  • Practical implementation of image feature extraction.
  • Understanding the contour extraction feature.
  • Understanding lane detection and extraction.
  • Understanding clustering of detected lanes and defining parking spaces.

 

Deliverables:

  • A Video showing detected lanes of the parking spaces. 
  • Another video showing the parking spaces appropriately marked with an 'x' or similar indicator on the scene.


3. LiDAR Sensor and Data Processing Using ROS/Linux - Syllabus

1. Introduction to LiDAR

  • Introduction to the course
  • History of LiDAR
  • Understanding LiDARs and their principle.
  • Evaluating various types of LiDARs.
  • Understanding packet and point cloud formats of LiDAR
  • Working principle of Velodyne LiDAR

2. LiDAR Point Cloud & PCL Point Cloud

  • 3D perception - the overall process involved in 3D perception
  • Point cloud formats: ASCII & binary, and how to interpret them
  • Overview of tools: PCL, Open3D
  • LiDAR point cloud and 3D point cloud

3. Operations on LiDAR Data

  • Basics of Transformations: Rotation, Translation, Homogeneous Coordinates (see the sketch after this list)
  • Quaternions; roll, pitch, yaw; Gimbal lock; inter-conversions
  • Clustering Algorithms: Euclidean Clustering, various types of distances, norms
  • Getting started with PCL (Point Cloud Library); PCL with ROS
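
To make the transformation topics above concrete, here is a minimal Python sketch (using scipy as an assumed stand-in for the C++/ROS tooling used in the module) that converts roll-pitch-yaw to a quaternion and applies a rigid-body transform to a point, both directly and as a 4x4 homogeneous matrix:

    import numpy as np
    from scipy.spatial.transform import Rotation

    # Roll, pitch, yaw (radians) -> quaternion and back
    rot = Rotation.from_euler("xyz", [0.1, 0.0, np.pi / 2])
    quat = rot.as_quat()        # [x, y, z, w]
    print(rot.as_euler("xyz"))  # recovers roll, pitch, yaw

    # Rigid-body transform: rotate, then translate a LiDAR point
    point = np.array([1.0, 0.0, 0.0])
    translation = np.array([0.5, 0.0, 0.2])
    transformed = rot.apply(point) + translation

    # The same transform written as a 4x4 homogeneous matrix
    T = np.eye(4)
    T[:3, :3] = rot.as_matrix()
    T[:3, 3] = translation
    print(T @ np.append(point, 1.0))  # matches `transformed` (plus the trailing 1)
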
4. LiDAR Point Cloud Registration

  • LiDAR point cloud registration: what is registration and why is it important?
  • Registration methods: ICP, NDT (see the sketch after this list)
  • What is ICP and how is it tuned?
  • What is NDT and how is it tuned?
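
As referenced in the list above, here is a minimal ICP sketch using Open3D's Python bindings (Open3D appears earlier in this module; the file names and correspondence threshold are assumptions for illustration):

    import numpy as np
    import open3d as o3d

    # Assumed point cloud files
    source = o3d.io.read_point_cloud("scan_1.pcd")
    target = o3d.io.read_point_cloud("scan_2.pcd")

    threshold = 0.5   # max correspondence distance in meters (a tuning knob)
    init = np.eye(4)  # initial guess: identity transform

    # Point-to-point ICP: repeatedly match closest points, then solve for the pose
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    print(result.transformation)  # 4x4 pose aligning source to target
    source.transform(result.transformation)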

5. LiDARs and Their Calibration

  • Calibration: explaining extrinsic calibration & intrinsic calibration
  • Extrinsic calibration between two LiDARs: why is it needed?
  • Automated extrinsic calibration using registration
  • Intrinsic calibration

6. Different Interpretations of LiDAR Data - 1

  • A different interpretation of LiDAR data: the surfel representation
  • Surfels explained
  • In-depth analysis of surfels and their applications
  • Surfel-based LiDAR point cloud registration

7. Different Interpretations of LiDAR Data - 2

  • Occupancy Grid
  • Drivable space
  • In-depth analysis of Occupancy grid and its relation to Drivable space
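
As a minimal illustration of the occupancy grid idea above (pure numpy; the points and grid resolution are invented), LiDAR returns can be binned into a 2D grid whose still-empty cells are candidates for drivable space:

    import numpy as np

    # Hypothetical LiDAR hits in the vehicle frame (x forward, y left), in meters
    hits = np.array([[4.2, 1.0], [4.3, 1.1], [7.9, -2.0], [12.5, 0.3]])

    size, resolution = 20.0, 0.5  # 20 m x 20 m grid with 0.5 m cells
    cells = int(size / resolution)
    grid = np.zeros((cells, cells), dtype=np.uint8)  # 0 = free, 1 = occupied

    # Shift the origin to the grid center and mark the cells containing hits
    idx = np.floor((hits + size / 2) / resolution).astype(int)
    idx = idx[(idx >= 0).all(axis=1) & (idx < cells).all(axis=1)]
    grid[idx[:, 1], idx[:, 0]] = 1  # row = y, column = x

    print(grid.sum(), "occupied cells out of", grid.size)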

8. Different Interpretations of LiDAR Data - 3

  • Scan Context
  • Scan Context explained
  • In-depth analysis of Scan Context and association computation

9. LiDAR Data Applications

  • LiDAR-based SLAM
  • Point cloud registration can be extended to SLAM and used for the map-building process
  • Discussion of LOAM
  • Implementation of SLAM from LiDAR scans alone (KITTI data from chapter 1)

10. LiDAR Data and Deep Learning - 1

  • Deep learning: PointNet, PointPillars
  • Discussion of the networks and how to adapt LiDAR data for them.

11. LiDAR Data and Deep Learning - 2

  • Deep learning: point upsampling network (PU-Net), PU-GAN
  • Discussion of the networks and how to adapt LiDAR data for them.

12. LiDAR Data and Deep Learning - 3

  • Deep learning: PF-Net
  • Discussion of the network.


Projects Overview

Project 1

Highlights

A project on fusing LiDAR data with color from an RGB camera and removing the ground plane from the LiDAR data.

Key Highlights:

  • Knowledge from the last 6 weeks.
  • Coordinate transforms using the calibration data.
  • Understanding sensor fusion basics.
  • ROS application usage.
  • ROS tooling usage.
  • PCL usage for filtering out the ground plane.

Deliverables:

  • Output of fusion of camera and LiDAR as rosbag.
  • Code implementation.
  • Document about the approach.

4. Radar Sensor Processing Using MATLAB - Syllabus

1. Introduction to ADAS

  • Introduction to ADAS
  • Different types of ADAS
  • Applications of ADAS

2. Autonomous Driving (AD)

  • Introduction to Autonomous Driving (AD)
  • Levels of Autonomous Driving
  • The overall architecture of AD
  • How is Computer Vision used in AD?
  • What is Deep Learning's role in AD?

3. Understanding the Basic Sensors Used in AD

  • Sensor roles in Autonomous Driving
  • How different sensors work in AD
    • Camera Sensors
    • Radar
    • LiDAR
  • Sensor Fusion - Camera, LiDAR, Radar
  • How safety is achieved with multiple sensors

4. Introduction to the RADAR Sensor

  • What is Radar?
  • Automotive Radar
  • How a Radar Sensor Looks
  • Radar Sensors on a Vehicle

5. Radar Signal Processing

  • Introduction
  • Components of Radar Signal processing
  • Range Equation
  • FMCW
  • FMCW – Terms and Definitions
  • Measurement of Range (Distance)
  • Measurement of Doppler Velocity
  • Measurement of Angle/Angle of Arrival
  • Measurement of RCS
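
To make the range measurement above concrete, here is a minimal numpy sketch (the module itself uses MATLAB; the chirp parameters and target range are invented for illustration). In FMCW radar the round-trip delay to a target produces a beat frequency proportional to its range, and the range FFT recovers it:

    import numpy as np

    c = 3e8          # speed of light (m/s)
    B = 150e6        # chirp bandwidth (Hz), assumed
    T_chirp = 50e-6  # chirp duration (s), assumed
    fs = 10e6        # ADC sample rate (Hz), assumed
    slope = B / T_chirp

    # Simulate the beat signal for a target at 40 m
    R_true = 40.0
    f_beat = 2 * slope * R_true / c  # beat frequency from the round-trip delay
    t = np.arange(0, T_chirp, 1 / fs)
    signal = np.cos(2 * np.pi * f_beat * t)

    # Range FFT: the peak bin maps back to the target range
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    f_est = freqs[np.argmax(spectrum)]
    print(c * f_est / (2 * slope))  # estimated range, ~40 m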

6. Advanced Radar Signal Processing

  • Introduction
  • Range FFT and Doppler FFT
  • Angle FFT and RD Map
  • Clutter Removal and CFAR
  • Final Detection List

7. Radar Technical Details

  • Introduction
  • Pulse Repetition Frequency
  • Duty Cycle
  • Dwell Time/ Hits per Scan
  • The RADAR Equation
  • Free-Space Path Loss
  • Derivation of the Radar Equation
  • Radar Cross-Section
  • Losses

8. Radar Devices and Roles

  • Classification of Radar System
  • Radar Devices and their functionalities
  • Role of Radar Sensors
  • Importance of Radar
  • Automotive Radar
  • Radar Signal Processing in automotive systems

9. Radar Data Processing

  • Introduction
  • Data processing

10. Environment Setup

  • Setting up the prerequisites for coding and preparing the environment
  • Getting Git Ready
  • Git basics
  • Creating a repo
  • Hands-on with the Git setup
  • Installation of Compilers
  • Device Setup

11. Radar + AI

  • Deep learning: PointNet, PointPillars
  • Discussion of the networks and how to adapt radar point cloud data for them.


Projects Overview


5. Localization, Mapping & SLAM Using ROS, C++ & Python - Syllabus

1. Introduction

  • Introduction to the course 
  • Introduction to State Estimation & Localization 
  • Introduction to Mapping   
  • Introduction to SLAM

2. Kalman Filters

  • MLE and MAP
  • Bayesian Inference
  • Gaussian Distribution
  • Kalman filter
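
A minimal 1D Kalman filter sketch in Python (the motion model and noise values are invented for illustration), showing the predict/update cycle that the rest of this module builds on:

    import numpy as np

    x, P = 0.0, 1.0  # state estimate and its variance
    Q = 0.01         # process noise variance, assumed
    R = 0.5          # measurement noise variance, assumed

    rng = np.random.default_rng(1)
    true_position = 5.0
    for _ in range(50):
        z = true_position + rng.normal(0, np.sqrt(R))  # noisy measurement

        # Predict: a static state keeps its estimate, but uncertainty grows
        P = P + Q

        # Update: blend prediction and measurement via the Kalman gain
        K = P / (P + R)
        x = x + K * (z - x)
        P = (1 - K) * P

    print(x)  # converges near 5.0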

3. Extended Kalman Filters, UKF

  • Introduction to Nonlinear Kalman Filters
  • EKF
  • Solved example of EKF
  • UKF

4. Particle Filter

  • Introduction to particle filter 
  • Examples of particle filter 
  • Comparison between Gaussian and Nonparametric filters.

5. Monte Carlo Localization

  • Sensor fusion and the need for it
  • Introduction to pose estimation using GNSS and INS
  • Why and where do we need sensor fusion?
  • Examples
  • Demonstration

6. GNSS/INS Sensing for Pose

  • Sensor fusion and the need for it
  • Introduction to pose estimation using GNSS and INS
  • Why and where do we need sensor fusion?
  • Examples
  • Demonstration

7. Camera and LiDAR Data Fusion

  • Camera and LiDAR parameters
  • Applications
  • Demonstration

8. EKF SLAM

  • EKF SLAM
  • Comparison with EKF localization
  • Application
  • Advantages and Disadvantages

9. FastSLAM

  • FastSLAM
  • Application
  • Advantages and Disadvantages

10. Graph SLAM

  • Graph SLAM
  • Application
  • Advantages and Disadvantages
  • Project 2 explanation/discussion

11. RADAR + AI

  • Deep learning: PointNet, PointPillars
  • Discussion of the networks and how to adapt radar point cloud data for them.


Projects Overview

Project 1

Highlights

  • The project aims at examining a localization module and observing the following:
    • It uses an Extended Kalman Filter to localize the robot.
    • The localization is based on a landmark-based approach, where the sensor is capable of sensing the range and bearing of a particular landmark. The measurement model, because of the trigonometry involved, is non-linear.
    • The automobile uses a bicycle model for state transition.
  • The students are to do a comparative study of the two filters (EKF and UKF) for the underlying models and present a case for whether the UKF is a better choice for this localization approach.




Flexible Course Fees

Choose the Master’s plan that’s right for you

Basic

9 Months Access

15000

Per month for 10 months

  • Access Duration : 9 Months
  • Mode of Delivery : Online
  • Project Portfolio : Available
  • Certification : Available
  • Individual Video Support : 8/Month
  • Group Video Support : 8/Month
  • Email Support : Available
  • Forum Support : Available
  • Telephone Support : Available

Premium

Lifetime Access

25000

Per month for 10 months

  • Job Assistance : Maximum of 10 opportunities
  • Master's Assistance : Lifetime
  • Access Duration : Lifetime
  • Mode of Delivery : Online
  • Project Portfolio : Available
  • Certification : Available
  • Individual Video Support : 24x7
  • Group Video Support : 24x7
  • Email Support : Available
  • Forum Support : Available
  • Telephone Support : Available
  • Dedicated Support Engineer : Available

Testimonials

Companies hire from us


CERTIFICATION

  • Top 5% of the class will get a merit certificate
  • Course completion certificates will be provided to all students
  • Build a professional portfolio
  • Automatically link your technical projects
  • E-verified profile that can be shared on LinkedIn

SKILL LYNC WORKS TO GET YOU A JOB


The Skill-Lync Advantage
