Point Clouds: 3D Perception with Open3D
A FRIENDLY TUTORIAL ON POINT CLOUDS
In the realm of drone delivery, navigating precisely and avoiding obstacles are significant challenges. While cameras are useful in certain situations, they have limitations such as reduced visibility in bad weather or low light. Monocular cameras in particular struggle to accurately perceive complex 3D environments. Furthermore, identifying the variety of obstacles in the environment, from trees to power lines, poses a challenge of its own.
No obstacle-detection dataset can cover every object a drone might encounter. Working with 3D data instead lets us focus on the positions and velocities of objects relative to the drone, bypassing the need to recognize individual object classes and making the obstacle-avoidance system more efficient.
This project delves into practical point cloud analysis using the KITTI dataset. We start by visualizing the data with Open3D and downsampling it with a voxel grid filter. Next, we employ the RANSAC algorithm to segment the road surface from the obstacles above it. We then group the remaining points into individual obstacles with DBSCAN clustering to gain more accurate spatial insight. To facilitate tracking, we create 3D bounding boxes around each obstacle. Finally, we apply surface reconstruction to a custom point cloud captured with an iPhone's LiDAR. The aim of this project is to showcase how Machine Learning algorithms can be applied to point cloud data in 3D space.
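To give a feel for how these steps fit together, here is a minimal sketch of the pipeline in Open3D. The file path, voxel size, RANSAC thresholds, and DBSCAN parameters are illustrative assumptions, not the tuned values used in the project, and KITTI's `.bin` scans are loaded through NumPy before being wrapped in an Open3D point cloud.

```python
import numpy as np
import open3d as o3d

# Load a KITTI LiDAR scan (hypothetical path); each row is (x, y, z, reflectance).
points = np.fromfile("kitti_scan.bin", dtype=np.float32).reshape(-1, 4)[:, :3]
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# 1. Voxel grid downsampling to reduce the point count.
down = pcd.voxel_down_sample(voxel_size=0.2)

# 2. RANSAC plane segmentation: separate the road surface from obstacles.
plane_model, inliers = down.segment_plane(distance_threshold=0.3,
                                          ransac_n=3,
                                          num_iterations=100)
road = down.select_by_index(inliers)
obstacles = down.select_by_index(inliers, invert=True)

# 3. DBSCAN clustering: group obstacle points into individual objects
#    (label -1 marks noise points).
labels = np.array(obstacles.cluster_dbscan(eps=0.7, min_points=10))

# 4. Axis-aligned 3D bounding box around each cluster, ready for tracking.
boxes = []
for label in range(labels.max() + 1):
    idx = np.where(labels == label)[0].tolist()
    cluster = obstacles.select_by_index(idx)
    boxes.append(cluster.get_axis_aligned_bounding_box())

# Visualize the road, the obstacle points, and their bounding boxes.
o3d.visualization.draw_geometries([road, obstacles, *boxes])
```

In practice each of these parameters (voxel size, plane distance threshold, DBSCAN `eps` and `min_points`) is tuned to the sensor's resolution and the scale of the scene; the values above are only reasonable starting points for street-level LiDAR data.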