PointPainting

A new perspective on sensor fusion

< Introduction >

Sensor fusion is one of the important options for 3D data analysis. In this post I introduce a sensor fusion method called PointPainting, which combines a monocular camera with a lidar sensor. The concept is very simple.

Step 1. Run image semantic segmentation with a network such as DeepLabV3 or SqueezeSegV2.

Step 2. Project the lidar points (point cloud) onto the image plane, and concatenate the image segmentation scores to each lidar point.

Step 3. Feed the painted point cloud into a 3D object detection network such as PointRCNN or PointPillars.

That is the entire fusion process. Viewed from the input side, each lidar point simply grows from 3 channels (x, y, z) to 3 + (number of segmentation classes) channels.
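To make the projection and concatenation concrete, here is a minimal NumPy sketch of the painting step. The function name `paint_points` and its arguments are my own placeholders, not names from the paper's code; it assumes the segmentation network outputs a per-pixel score map and that the lidar-to-camera calibration is known.

```python
import numpy as np

def paint_points(points, seg_scores, lidar_to_cam, cam_intrinsic):
    """Append per-pixel segmentation scores to each lidar point.

    points:        (N, 3) lidar points (x, y, z) in lidar coordinates
    seg_scores:    (H, W, C) softmax scores from the image segmentation network
    lidar_to_cam:  (4, 4) extrinsic transform from lidar to camera frame
    cam_intrinsic: (3, 3) camera intrinsic matrix
    """
    H, W, C = seg_scores.shape

    # 1. Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])   # (N, 4)
    pts_cam = (lidar_to_cam @ pts_h.T).T[:, :3]                  # (N, 3)

    # 2. Project onto the image plane.
    uvw = (cam_intrinsic @ pts_cam.T).T                          # (N, 3)
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]

    # 3. Keep only points in front of the camera that land inside the image.
    valid = (uvw[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # 4. Gather the class scores at each projected pixel; points outside
    #    the image keep all-zero scores.
    painted = np.zeros((points.shape[0], C), dtype=seg_scores.dtype)
    painted[valid] = seg_scores[v[valid].astype(int), u[valid].astype(int)]

    # 5. Concatenate: each point grows from 3 to 3 + C channels.
    return np.hstack([points, painted])                          # (N, 3 + C)
```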

However, it is hard to call this a deeply integrated fusion technique, because the image segmentation result only acts as an assistant to the 3D object detector. This is why the authors call it a 'sequential method' rather than a 'parallel method'.

Additionally, the 'sequential method' has a critical problem: speed. No matter how fast the 3D detector is, the whole pipeline is bottlenecked by a slow 2D image segmentation network. The authors therefore suggest a solution, 'pipelining', which means the current frame uses the segmentation result of the previous (consecutive) frame.
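A rough sketch of how such pipelining could look is below; `segment_fn`, `paint_fn`, and `detect_fn` are hypothetical callables (not from the paper's code), and in a real system the segmentation would run asynchronously in its own thread or process.

```python
class PipelinedPainter:
    """Toy illustration of the pipelining idea: the 3D detector for frame t
    is fed with segmentation scores computed on frame t-1, so the slower 2D
    network stays off the critical path."""

    def __init__(self, segment_fn, paint_fn, detect_fn):
        self.segment_fn = segment_fn
        self.paint_fn = paint_fn
        self.detect_fn = detect_fn
        self.prev_scores = None   # segmentation result of the previous frame

    def process(self, image, points):
        if self.prev_scores is not None:
            # Paint the current sweep with the slightly stale scores from
            # frame t-1 and run 3D detection right away.
            detections = self.detect_fn(self.paint_fn(points, self.prev_scores))
        else:
            # Very first frame: nothing cached yet, fall back to raw points.
            detections = self.detect_fn(points)

        # Run segmentation on the current image; its result will be
        # consumed when the next frame arrives.
        self.prev_scores = self.segment_fn(image)
        return detections
```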

< Code Analysis >

1. Import modules
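As a rough sketch, a PointPainting-style pipeline built on PyTorch would typically start with imports along these lines; the exact modules depend on which segmentation and detection frameworks are used (for example, a pretrained DeepLabV3 from torchvision for the image branch, and a PointPillars implementation such as the one in OpenPCDet for the 3D branch).

```python
import numpy as np
import torch
import torchvision

# Image segmentation branch: torchvision ships a pretrained DeepLabV3.
# (Older torchvision versions take `pretrained=True` instead of `weights=`.)
seg_model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
seg_model.eval()
```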
