Brucebot
@ the Biped Lab
Abstract— Point clouds can offer a wealth of information, such as distances, planes, corners, and even semantics, provided they are dense enough. This information can be used to render spaces for virtual-reality walkthroughs, or on a website where a user navigates with the mouse and keyboard between perspectives to get a feel for the size of a room. Unfortunately, a point cloud is usually captured from a single perspective, so some areas are occluded and others fall entirely outside the sensor's field of view.
I aim to develop a method for merging point clouds captured from entirely different perspectives of a room, so that a complete point cloud, and from it a 3D rendering of an apartment or building, can be constructed from several partial scans.
The pipeline breaks down into the following steps. First, each LiDAR ray is traced through a voxel grid with the 3D Bresenham line algorithm, so the cells a beam passes through can be updated along with the cell it hits.
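A minimal sketch of that traversal, assuming an integer voxel grid (the function name and endpoints are illustrative, not from the original):

```python
def bresenham_3d(p0, p1):
    """All voxels on the grid segment from p0 to p1 (inclusive).

    p0, p1 are integer (x, y, z) cells, e.g. the sensor cell and the
    cell of a LiDAR return; the cells in between can be marked free.
    """
    p, goal = list(p0), tuple(p1)
    d = [abs(b - a) for a, b in zip(p0, p1)]
    s = [1 if b >= a else -1 for a, b in zip(p0, p1)]
    axis = max(range(3), key=lambda i: d[i])        # driving axis
    others = [i for i in range(3) if i != axis]
    err = [2 * d[i] - d[axis] for i in others]      # error terms for the other axes
    cells = [tuple(p)]
    while p[axis] != goal[axis]:
        p[axis] += s[axis]                          # always step the driving axis
        for k, i in enumerate(others):
            if err[k] >= 0:                         # error says: step this axis too
                p[i] += s[i]
                err[k] -= 2 * d[axis]
            err[k] += 2 * d[i]
        cells.append(tuple(p))
    return cells
```

For example, bresenham_3d((0, 0, 0), (5, 2, 1)) returns six cells, one per step along the driving x axis.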
Next, the center of mass (centroid) of each cloud is computed; the centroids are subtracted in the SVD step below.
Data association then matches each point in one cloud to its closest counterpart in the other.
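A common way to do this, and the one I assume in this sketch (the gating threshold max_dist and the function name are illustrative), is nearest-neighbour matching with a k-d tree:

```python
import numpy as np
from scipy.spatial import cKDTree

def associate(source, target, max_dist=0.5):
    """Nearest-neighbour data association between two (N, 3) clouds.

    Returns index arrays (i, j) such that target[j] is the closest
    point to source[i]; pairs farther apart than max_dist are dropped
    to reject points visible in only one scan.
    """
    tree = cKDTree(target)
    dist, j = tree.query(source)        # closest target point per source point
    keep = dist < max_dist
    return np.nonzero(keep)[0], j[keep]
```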
From the matched pairs, an SVD yields the update matrices: the rotation and translation that best align the two clouds.
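A sketch of that step, assuming the standard Kabsch/Procrustes construction (matched rows, centroids subtracted as in the center-of-mass step above):

```python
import numpy as np

def rigid_update(source, target):
    """Least-squares rigid transform (R, t) such that target ≈ R @ source + t.

    source[i] and target[i] are matched points from data association.
    """
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)  # centers of mass
    H = (source - mu_s).T @ (target - mu_t)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                 # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

Iterating associate and rigid_update, applying the transform to source each round, gives the usual ICP loop for merging two overlapping scans.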
Each relative-pose constraint is then stacked into the adjacency matrix A (the whitened measurement Jacobian) and the vector b of a sparse least-squares problem.
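A simplified sketch of the assembly, under the assumption that poses are reduced to positions and each constraint is a relative translation z_ij with covariance cov, so the residual x_j - x_i - z_ij is already linear; the function and argument names are mine:

```python
import numpy as np
import scipy.sparse as sp

def build_system(num_poses, constraints, dim=3):
    """Stack whitened block rows into the sparse A, b of the SAM problem.

    constraints: list of (i, j, z_ij, cov) relative-translation measurements.
    """
    rows, rhs = [], []
    for i, j, z, cov in constraints:
        W = np.linalg.cholesky(np.linalg.inv(cov)).T  # whitening: W.T @ W = inv(cov)
        row = sp.lil_matrix((dim, dim * num_poses))
        row[:, dim * i:dim * (i + 1)] = -W            # residual is x_j - x_i - z
        row[:, dim * j:dim * (j + 1)] = W
        rows.append(row)
        rhs.append(W @ z)
    anchor = sp.lil_matrix((dim, dim * num_poses))    # prior pinning pose 0,
    anchor[:, :dim] = np.eye(dim)                     # otherwise A is rank-deficient
    rows.append(anchor)
    rhs.append(np.zeros(dim))
    return sp.vstack(rows).tocsr(), np.concatenate(rhs)
```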
On the hardware side, the system uses an Arduino, motors, and a LiDAR.
The covariance matrices M and Q are tweaked by hand; in the sketch below they play the role of the per-constraint cov used for whitening.
Finally, the map is updated via an adapted SAM (Smoothing and Mapping) back end: the stacked system is solved and the poses, and with them the merged cloud, are refreshed.
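Continuing the sketch above, under the same simplifying assumptions (with linear residuals a single solve suffices; in the full nonlinear problem this would be one Gauss-Newton iteration, repeated until convergence):

```python
import scipy.sparse.linalg as spla

def sam_update(num_poses, constraints, dim=3):
    """Solve the normal equations of the stacked system and return poses."""
    A, b = build_system(num_poses, constraints, dim)
    x = spla.spsolve((A.T @ A).tocsc(), A.T @ b)  # A^T A x = A^T b
    return x.reshape(num_poses, dim)              # updated pose positions
```

With the updated poses, each scan can be re-projected into the world frame and the merged cloud re-rendered.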