My ROS 2 Humble robot "Wali" is based on the iRobot Create 3 platform, which provides odometry fused from wheel encoders, an IMU, and an optical mouse sensor.
I have mounted an OAK-D-Lite camera and tested the depthai-ros example launch files. I see that stereo.launch.py publishes /stereo/points with type sensor_msgs/msg/PointCloud2, which I believe will be the input for vSLAM, but for some reason I haven't managed to display the points in rviz2. (I also tried to display the depth cloud on /stereo/depth from rgb_stereo_node.launch.py, to no avail.)
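As a sanity check (a minimal sketch, assuming the /stereo/points topic name above), something like the rclpy subscriber below should confirm whether clouds are actually arriving and what frame_id they carry, since an empty rviz2 display is often a QoS mismatch or a missing TF for the chosen Fixed Frame rather than a missing publisher:

```python
# Minimal sketch (not from the depthai-ros docs): check whether /stereo/points
# is actually flowing and print its frame_id. Topic name is taken from the
# launch file output above; the best-effort QoS is an assumption, since camera
# drivers commonly publish sensor data best-effort and a "Reliable" subscriber
# (rviz2's usual default) will silently receive nothing from such a publisher.
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import PointCloud2


class CloudCheck(Node):
    def __init__(self):
        super().__init__('cloud_check')
        # Subscribe with best-effort sensor-data QoS so the subscription is
        # compatible with either a reliable or a best-effort publisher.
        self.sub = self.create_subscription(
            PointCloud2, '/stereo/points', self.cb, qos_profile_sensor_data)

    def cb(self, msg: PointCloud2):
        # If this prints, the data is there; the rviz2 problem is then likely
        # the display's QoS Reliability setting or the Fixed Frame / TF tree.
        self.get_logger().info(
            f'cloud in frame "{msg.header.frame_id}": '
            f'{msg.width}x{msg.height} points')


def main():
    rclpy.init()
    rclpy.spin(CloudCheck())


if __name__ == '__main__':
    main()
```

If the callback fires but rviz2 still shows nothing, my next step would be to set the PointCloud2 display's Reliability to "Best Effort" and the Fixed Frame to the printed frame_id (or a TF parent of it).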
The TurtleBot 4 standard image appears to have chosen RTAB-Map for vSLAM, but the robot also has a LIDAR, for which it uses SLAM Toolbox. I don't believe the TurtleBot 4 has integrated the LIDAR SLAM with the stereo-depth vSLAM as yet.
After reading "A Comparison of Modern General-Purpose Visual SLAM Approaches" (Merzlyakov A., Macenski S., 2021), I am left with the question of RTAB-Map vs. OpenVSLAM for my home robot, and with the fear that constantly changing lighting, coming primarily from sunlight through the windows of my home, may severely impact the ability of either package.
RTAB-Map seems to be the package more commonly chosen for hobby-class robots, but as I am at the critical juncture of deciding where to focus my efforts, I feel it is important to get any input that will help my robot function to the best of its sensor suite and the "state of the art in vSLAM".
RTAB-Map or OpenVSLAM for a "no LIDAR" home robot?