Hello, this is the RAE development team. We would like to wish you a happy holiday season and share some details about the current state of the project, upcoming features, and our plans for the future.
RAE, a small wheeled robot controllable remotely over the internet, is an innovative tool for learning and exploration. It's equipped with cameras, speakers, a microphone, a small LCD screen, and LEDs, and is designed to provide an interactive learning experience in robotics and programming. RAE is an excellent resource for teaching the basics of robotics, programming, and electronics, as well as more advanced topics like computer vision, machine learning, and artificial intelligence.
The robot operates on the RVC3 chip, differing from the RVC2-based OAK camera lineup, and supports neural network-based computer vision tasks. Furthermore, RAE integrates with the RobotHub cloud platform, allowing remote control and programming capabilities. RobotHub, currently under active development, aims for increased user-friendliness and a richer feature set, including support for additional devices.
You're invited to create a free account on RobotHub to explore its features. Your feedback and suggestions are highly valued. Additionally, RAE's drivers and libraries are open source and available on GitHub, opening the door to community-driven improvements.
Progress Update
We are constantly working to improve the robot and add new features. Until recently, the robot's software was primarily based on ROS. While ROS is an excellent framework for robotics, it can be challenging for beginners. To simplify the process of creating custom applications for the robot, we have decided to develop a new Python library. This library will facilitate easier control of the robot and the development of custom applications. More details will be provided in the following section.
Additionally, we are working on integrating SLAM (Simultaneous Localization and Mapping) support into the robot. SLAM is a technique that enables a robot to map its environment and pinpoint its location within that map. This significant feature will allow the robot to navigate autonomously and perform tasks such as object detection and recognition. We plan to release a new robot version with SLAM support in the near future.
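While the integration is still in progress, ROS2 conventions give a preview of what consuming SLAM output could look like: mapping stacks typically publish the map as a nav_msgs/OccupancyGrid on a /map topic. Here is a minimal rclpy sketch that listens for it; the topic name and message type are standard ROS2 conventions, not confirmed details of our release:
import rclpy
from rclpy.node import Node
from nav_msgs.msg import OccupancyGrid
rclpy.init()
node = Node('map_listener')
def on_map(msg):
    # width/height are in cells; resolution is meters per cell
    print(f"Got a {msg.info.width}x{msg.info.height} map at {msg.info.resolution:.2f} m/cell")
node.create_subscription(OccupancyGrid, '/map', on_map, 10) # 10 is the QoS queue depth
rclpy.spin(node)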
We also recognize that developing for the platform has not been as user-friendly as we would like. To address this, we are enhancing our offerings with more comprehensive documentation and tutorials, making it easier for users to get started. Furthermore, we will introduce additional example applications to demonstrate how to effectively use the platform and develop custom applications for the robot.
Python Library
As mentioned earlier, we are developing a new Python library designed to simplify controlling the robot and creating custom applications. Although still in development, we plan to release it soon. The entire codebase is open source and available on GitHub. The library uses ROS2 to communicate with the robot and offers a range of classes and functions for controlling it and building custom applications. To demonstrate how the library works, consider this simple example, which makes the robot move forward for 5 seconds before stopping:
from robot_py.robot import Robot
robot = Robot()
robot.start()
robot.movement_controller.move(0.5, 0.0, 5.0) # arguments are linear speed, angular speed, and duration
robot.stop()
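Assuming the same move() signature, turning in place should simply be a matter of swapping the linear and angular arguments; the numbers here are illustrative, not calibrated values:
robot.movement_controller.move(0.0, 1.0, 3.0) # rotate in place for 3 seconds (illustrative values)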
Here's another example, showing how to display an image on the robot's screen:
from robot_py.robot import Robot
import cv2
robot = Robot()
robot.start()
image = cv2.imread('image.jpg') # path to the image
robot.display_controller.display_image(image)
robot.stop()
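Building on the example above: if the image doesn't match the LCD's resolution, it can be resized with OpenCV first. The 160x80 target below is only a placeholder; check the screen's actual resolution before relying on it:
small = cv2.resize(image, (160, 80)) # (width, height) in pixels; placeholder values
robot.display_controller.display_image(small)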
And here's how to play a sound on the robot's speakers:
from robot_py.robot import Robot
robot = Robot()
robot.start()
robot.audio_controller.play_sound('sound.mp3') # argument is the path to the sound file
robot.stop()
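As these snippets show, each peripheral is exposed through its own controller on the Robot object (movement_controller, display_controller, audio_controller, and so on), so an application only needs to touch the parts of the robot it actually uses.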
Under the hood, the library uses ROS2 topics, services, actions, and timers to communicate with the robot. These interfaces can also be used directly if needed, as in the following example, which publishes a string message on a custom topic:
from robot_py.robot import Robot
from std_msgs.msg import String
robot = Robot()
robot.start()
robot.ros_interface.create_publisher("/string_message", String) # register a publisher on a custom topic
robot.ros_interface.publish("/string_message", String(data="Hello World!")) # publish a std_msgs String to it
robot.stop()
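Subscribing works along the same lines. Here is a rough sketch assuming a create_subscriber helper that mirrors create_publisher; the exact name and signature may differ in the released library, so check the source on GitHub:
from robot_py.robot import Robot
from std_msgs.msg import String
robot = Robot()
robot.start()
def on_message(msg):
    print(f"Received: {msg.data}")
# create_subscriber is assumed here to mirror create_publisher; verify the
# actual method name and signature in the library source
robot.ros_interface.create_subscriber("/string_message", String, on_message)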
Example Apps
Currently, the default application for the robot is the "Follow Me" App, which showcases using a Yolo detection network to detect and follow a person. This App also has a frontend that lets you control the robot remotely with a joystick, play horn sounds, and play with other peripherals. In the near future we will also provide apps showcasing use cases such as emotion recognition, hand tracking, ChatGPT integration, mapping, and many others. In the video below you can also see the Christmas App that we created for the holidays; it showcases controlling several robot peripherals at the same time.
Future Plans
For the upcoming year we have planned many features and improvements for the robot, including:
- SLAM support
- Simulation support for testing robot Apps
- Multi-robot support
- Improved documentation and tutorials
- More example Apps
If you have any suggestions or ideas for new features, please let us know in the comments below. We would love to hear from you!
As always, if you need help or have any questions, feel free to post them on our discussion forum.