Algorithm development and implementation for computer vision and graphics applications

Multiple topics are available in this project.

I. Targetless Multi-Sensor Extrinsic Calibration  

Develop an algorithm to determine the spatial transformation between a 3D Lidar and an RGB-D camera without using specialized calibration targets. The approach relies on “features of opportunity”, such as building edges or lamp posts, together with motion-based trajectory alignment to bring the two coordinate systems into correspondence. By matching the visual trajectory from the camera with the point-cloud motion from the Lidar, the system can self-calibrate during active movement. The research focuses on the minimum motion diversity required for convergence and on whether temporal synchronization lag significantly degrades the extrinsic estimate compared to static methods.

Software: OpenCV (feature extraction), Ceres Solver (factor graph optimization), and Open3D (point cloud registration). 
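
As an illustration of the trajectory-alignment step, the following minimal sketch (plain NumPy/SciPy, with hypothetical lidar_poses / cam_poses inputs given as 4x4 world-from-sensor matrices at synchronized timestamps) solves the classic hand-eye relation A X = X B from pairs of relative Lidar and camera motions. It is a starting point only, not the full self-calibration pipeline.

import numpy as np
from scipy.spatial.transform import Rotation as R


def relative_motions(poses):
    # Consecutive relative transforms T_i^-1 @ T_{i+1} along one trajectory.
    return [np.linalg.inv(poses[i]) @ poses[i + 1] for i in range(len(poses) - 1)]


def hand_eye_extrinsic(lidar_poses, cam_poses):
    # Solve A_i X = X B_i, where A_i are relative Lidar motions, B_i relative
    # camera motions, and X is the pose of the camera in the Lidar frame.
    A = relative_motions(lidar_poses)
    B = relative_motions(cam_poses)

    # Rotation: rotvec(A_i) = R_X rotvec(B_i), so R_X is the rotation that best
    # aligns the two sets of rotation vectors (Kabsch / SVD alignment).
    a = np.stack([R.from_matrix(T[:3, :3]).as_rotvec() for T in A])
    b = np.stack([R.from_matrix(T[:3, :3]).as_rotvec() for T in B])
    U, _, Vt = np.linalg.svd(b.T @ a)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R_x = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    # Translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked over all motion
    # pairs and solved by linear least squares.
    M = np.vstack([Ta[:3, :3] - np.eye(3) for Ta in A])
    v = np.concatenate([R_x @ Tb[:3, 3] - Ta[:3, 3] for Ta, Tb in zip(A, B)])
    t_x, *_ = np.linalg.lstsq(M, v, rcond=None)

    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_x, t_x
    return X

Note that the rotation step degenerates when all relative motions have (nearly) parallel rotation axes, which is precisely the motion-diversity question raised above.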

II. Comparative Photogrammetric and Lidar Mapping 

Conduct a large-scale 3D reconstruction of an environment using high-resolution RGB frames processed through the COLMAP Structure-from-Motion (SfM) pipeline. This dense photogrammetric point cloud is compared directly against a ground-truth point cloud generated by a 3D Lidar to evaluate geometric accuracy. The project aims to identify the strengths of passive vision versus active laser sensing in diverse lighting and structural conditions. 

The research focuses on “geometric fidelity,” specifically analyzing which modality better captures high-frequency details like thin wires or foliage and how COLMAP handles monocular scale ambiguity.  

Software: COLMAP, Open3D or PCL (point cloud processing), and CloudCompare (mesh-to-mesh distance analysis). 
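
A possible evaluation skeleton is sketched below with Open3D and placeholder file names: a scale-aware ICP refinement resolves the monocular scale ambiguity (assuming a rough manual pre-alignment, e.g., in CloudCompare), after which cloud-to-cloud distance statistics are reported.

import numpy as np
import open3d as o3d

sfm = o3d.io.read_point_cloud("colmap_dense.ply")    # photogrammetric cloud (arbitrary scale)
lidar = o3d.io.read_point_cloud("lidar_map.ply")     # Lidar reference cloud (metric)

# Scale-aware point-to-point ICP; the 0.5 m correspondence threshold and the
# identity initialization are placeholders and assume rough pre-alignment.
result = o3d.pipelines.registration.registration_icp(
    sfm, lidar, 0.5, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint(with_scaling=True))
sfm.transform(result.transformation)

# Geometric fidelity: per-point nearest-neighbour distance to the reference cloud.
dist = np.asarray(sfm.compute_point_cloud_distance(lidar))
print("RMSE [m]:", np.sqrt(np.mean(dist ** 2)),
      "median [m]:", np.median(dist),
      "95th percentile [m]:", np.quantile(dist, 0.95))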

III. Lidar-based SLAM in Unstructured Terrain  

Deploy Simultaneous Localization and Mapping (SLAM) in outdoor environments characterized by steep slopes and uneven ground using only 3D Lidar data. Without an IMU to provide orientation priors, the system must rely on advanced scan-to-map matching and ground-plane extraction to compensate for the sensor’s pitch and roll. The objective is to maintain a globally consistent 3D map by identifying stable geometric primitives in unstructured areas where traditional planar surfaces (like walls) are absent. 

The research focuses on “Motion Distortion Compensation” and “Geometric Degeneracy,” investigating how pure Lidar odometry handles rapid angular changes and whether the absence of inertial data leads to significant “z-drift” or vertical misalignment in non-flat terrains.
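
The sketch below illustrates a single odometry step under these constraints, using Open3D with placeholder file names and thresholds: RANSAC ground-plane extraction to constrain roll, pitch, and height, followed by point-to-plane scan-to-map ICP. Motion-distortion compensation is deliberately left out.

import numpy as np
import open3d as o3d

scan = o3d.io.read_point_cloud("scan.pcd")
local_map = o3d.io.read_point_cloud("local_map.pcd")

# RANSAC plane segmentation: on sloped terrain the dominant inlier plane
# approximates the local ground patch and constrains roll, pitch, and z.
plane, inliers = scan.segment_plane(distance_threshold=0.1,
                                    ransac_n=3,
                                    num_iterations=500)
a, b, c, d = plane
print("ground normal:", (a, b, c))

# Point-to-plane ICP needs normals on the target (map) cloud.
local_map.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))

init = np.eye(4)  # e.g., a constant-velocity prediction from the previous step
reg = o3d.pipelines.registration.registration_icp(
    scan, local_map, 1.0, init,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print("scan-to-map pose:\n", reg.transformation)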

IV. Algorithm development for minimal fitting of a torus

Similarly to the minimal fitting of a sphere [2], one topic is to analyze the torus fitting problem in 3D. The motivation comes from signed distance function research in computer graphics. An important property of the torus is that it is one of the simplest surfaces beyond the second-order (quadric) surfaces that still has an exact signed distance function.

Besides the 3D coordinates, normal vectors of the points are also provided as input to the algorithm. The initial task is to implement the method of Eberly [5] for the overdetermined case. However, we would like to replace the algebraic approach with a geometric one. Therefore, we attempt to find other minimal geometric fitting methods, both for the purely position-based problem and for the problem based on 3D positions and normal vectors. We also have to determine the minimum number of inputs for both cases.
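
For orientation, the sketch below shows the exact signed distance function of a torus and a plain geometric least-squares fit for the overdetermined, position-only case (SciPy, with an assumed centre / axis-angles / radii parameterization). This is not Eberly's algebraic method, and residuals on the normal vectors could be appended in the same way.

import numpy as np
from scipy.optimize import least_squares


def torus_sdf(p, c, n, R, r):
    # Exact signed distance of points p (N, 3) to the torus (centre c, unit
    # axis n, major radius R, minor radius r).
    q = p - c
    h = q @ n                                           # height along the axis
    rho = np.linalg.norm(q - np.outer(h, n), axis=1)    # distance from the axis
    return np.sqrt((rho - R) ** 2 + h ** 2) - r


def axis_from_angles(theta, phi):
    # Unit axis from two spherical angles (avoids a separate unit-norm constraint).
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])


def fit_torus(points, x0):
    # Parameter vector x = (cx, cy, cz, theta, phi, R, r); residuals are the
    # signed distances of the sample points.
    def residuals(x):
        return torus_sdf(points, x[:3], axis_from_angles(x[3], x[4]), x[5], x[6])
    return least_squares(residuals, x0)


# Synthetic check: noisy samples of a torus with R = 2, r = 0.5, axis = z.
u, v = np.random.uniform(0, 2 * np.pi, (2, 500))
pts = np.c_[(2 + 0.5 * np.cos(v)) * np.cos(u),
            (2 + 0.5 * np.cos(v)) * np.sin(u),
            0.5 * np.sin(v)] + 0.005 * np.random.randn(500, 3)
sol = fit_torus(pts, x0=[0, 0, 0, 0.1, 0.0, 1.5, 0.3])
print("centre:", sol.x[:3], "R, r:", sol.x[5:])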


C++ implementation of LiDAR – camera calibration algorithms

Several sensors are mounted on the test vehicle at the university. Digital cameras and a Velodyne VLP-16 device are used for data gathering. To combine sensors of different modalities, we apply calibration algorithms. These methods find the transformation between the devices’ coordinate systems. The topic is the implementation of various target-based methods in Python or C++. The feature points or contour lines of the target object help to find the sought calibration parameters.

a. Camera calibration 

One task is to implement a minimal solution for an image-only approach based on edge points and their directions.
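
As a starting point, the sketch below extracts edge points and their tangent directions with OpenCV (Canny edges plus Sobel gradients); the thresholds and the image path are placeholders, and the minimal solver itself is not shown.

import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# Image gradients; the edge (tangent) direction is perpendicular to the gradient.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

ys, xs = np.nonzero(edges)
grad_angle = np.arctan2(gy[ys, xs], gx[ys, xs])
tangent = grad_angle + np.pi / 2

edge_points = np.stack([xs, ys], axis=1)                           # (N, 2) pixel coordinates
edge_dirs = np.stack([np.cos(tangent), np.sin(tangent)], axis=1)   # (N, 2) unit tangents
print(edge_points.shape, edge_dirs.shape)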

b. Bundle adjustment

Another task is to implement an improved spherical calibration process [2] for multiple sensors. A new algorithm can be developed for the case where the scale parameters are unknown during the image-processing step, providing a more general solution to the problem.
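
One possible refinement block is sketched below with SciPy, under assumed variable names (Lidar-frame sphere centres C_lidar, detected image centres c_img, camera intrinsics K): it minimizes the reprojection error of the sphere centres over the extrinsic rotation and translation. The unknown-scale variant would simply add per-view scale parameters to the state vector.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R


def residuals(x, C_lidar, c_img, K):
    # x = (rx, ry, rz, tx, ty, tz): extrinsic rotation vector and translation.
    rot, t = R.from_rotvec(x[:3]), x[3:6]
    P_cam = C_lidar @ rot.as_matrix().T + t      # sphere centres in the camera frame
    uv = P_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division
    return (uv - c_img).ravel()


def refine_extrinsics(C_lidar, c_img, K, x0=np.zeros(6)):
    # C_lidar: (N, 3) sphere centres from the point cloud, c_img: (N, 2)
    # detected projections of those centres, K: (3, 3) camera matrix.
    sol = least_squares(residuals, x0, args=(C_lidar, c_img, K))
    return R.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:6]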

Mesh generation from registered LiDAR – camera data

Colorized point clouds can be generated if the camera and LiDAR data are aligned. Unfortunately, the point cloud of the Velodyne VLP-16 LiDAR is very sparse. However, a 3D mesh can be generated from the point cloud to densify the visible data, and the images can be used as texture information for the triangles. The first task is to implement general algorithms, such as Delaunay triangulation. Then we attempt to identify the special characteristics of the provided LiDAR data and find the most effective triangulation process for it. During texturing, occlusion problems may occur because of the different sensor positions; these must be handled based on the work of Vechersky et al. [3].
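
A minimal triangulation sketch is given below (Open3D and SciPy, placeholder file name): the points are Delaunay-triangulated in a simple x-y projection, which is only a rough, terrain-style assumption for the sparse Lidar cloud, and the triangles are then lifted back to 3D. Texturing and occlusion handling are not included.

import numpy as np
import open3d as o3d
from scipy.spatial import Delaunay

pcd = o3d.io.read_point_cloud("velodyne_scan.pcd")    # placeholder file name
pts = np.asarray(pcd.points)

# Triangulate in the x-y projection, then reuse the original 3D vertices.
tri = Delaunay(pts[:, :2])

mesh = o3d.geometry.TriangleMesh()
mesh.vertices = o3d.utility.Vector3dVector(pts)
mesh.triangles = o3d.utility.Vector3iVector(tri.simplices)
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("mesh.ply", mesh)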

Algorithm development for cylinder-based single-beam LiDAR and camera calibration

Single-beam (2D) LiDARs are practical for sensing smaller objects with robots in both indoor and outdoor environments. We want to align the 2D LiDAR to a camera, i.e., the extrinsic parameters are required. This is a rigid transformation composed of a rotation and a translation. Cylinders are practical target objects for this purpose: they can be easily separated in the LiDAR data, and two contour lines define the cylinder in the image. During this project, we will attempt to find automatic or semi-automatic methods to solve this problem. Cylinder fitting in 3D data is solved by Eberly [4], and this method can be built into the pipeline. Data generation in Blender using the Blensor package helps to validate the developed algorithms, and real-world data can also be provided.
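
One building block of such a pipeline is sketched below: an algebraic (Kåsa) circle fit to the 2D Lidar returns hitting the cylinder. Under the simplifying assumption that the scan plane is roughly perpendicular to the cylinder axis, the fitted centre is a point of the axis in the Lidar frame; the image-side constraint from the two contour lines is not shown.

import numpy as np


def fit_circle_2d(points):
    # Kåsa fit: x^2 + y^2 + D x + E y + F = 0 solved as a linear least-squares problem.
    x, y = points[:, 0], points[:, 1]
    A = np.c_[x, y, np.ones_like(x)]
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = -0.5 * np.array([D, E])
    radius = np.sqrt(center @ center - F)
    return center, radius


# Synthetic check: a noisy arc of a 0.1 m radius cylinder seen from one side.
angles = np.linspace(np.pi / 3, 2 * np.pi / 3, 60)
arc = np.array([1.5, 0.4]) + 0.1 * np.c_[np.cos(angles), np.sin(angles)]
arc = arc + 0.003 * np.random.randn(*arc.shape)
print(fit_circle_2d(arc))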


References

[1] Pusztai et al.: Accurate calibration of multi-lidar-multi-camera systems, Sensors, 2018
[2] Tóth et al.: Automatic LiDAR-Camera Calibration of Extrinsic Parameters Using a Spherical Target, IEEE International Conference on Robotics and Automation (ICRA), 2020
[3] Vechersky et al.: Colourising Point Clouds Using Independent Cameras, IEEE Robotics and Automation Letters, 2018
[4] David Eberly: Least Squares Fitting of Data by Linear or Quadratic Structures, 1999
[5] David Eberly: Fitting 3D Data with a Torus, 2018