@Dominus Thanks for your post. The current state of the software stack on the RAE is definitely not final. For most of our developers, RAE was their first encounter with ROS, and it allowed us to test how DepthAI integrates with ROS and how it can be used with SLAM. We realized that this setup is too complicated, so we are now working on integrating ROS support directly into the DepthAI library, along with SLAM/VIO. Once that is done, we will release sample applications showing how to use it. This should significantly simplify writing applications for the RAE and, at the same time, simplify using ROS and SLAM with OAK cameras.
@JyothiGarubilli Currently you can't use the RobotHub library to stream video from the camera to your application, but you can make your own implementation of WebRTC or HLS streaming (possibly with the help of ffmpeg) in Python and send the video frames to your application.
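As a rough sketch of the HLS route (assuming you already receive OpenCV/numpy BGR frames from your DepthAI pipeline and that ffmpeg is installed on the robot; the resolution, frame rate, and output path below are placeholders), you could pipe frames into an ffmpeg subprocess that produces an HLS playlist:

import subprocess
import numpy as np

WIDTH, HEIGHT, FPS = 640, 400, 30  # assumed frame geometry

# Spawn ffmpeg reading raw BGR frames from stdin and writing HLS segments.
ffmpeg = subprocess.Popen(
    [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "bgr24",
        "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS), "-i", "-",
        "-c:v", "libx264", "-preset", "veryfast", "-g", str(FPS * 2),
        "-f", "hls", "-hls_time", "2", "-hls_list_size", "5",
        "-hls_flags", "delete_segments",
        "stream.m3u8",
    ],
    stdin=subprocess.PIPE,
)

def on_frame(frame: np.ndarray) -> None:
    # Call this with each BGR frame you get from your DepthAI pipeline.
    ffmpeg.stdin.write(frame.tobytes())

You would then serve stream.m3u8 and its segment files over HTTP so your application can play the stream.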
@JyothiGarubilli Do you have any requirements for the stream format/codecs? Could you please share more details about what you want to achieve?
Hello, this is the RAE development team. We would like to wish you a happy holiday season and share some details about the current state of the project, upcoming features, and our plans for the future.
RAE, a small wheeled robot controllable remotely over the internet, is an innovative tool for learning and exploration. It's equipped with cameras, speakers, a microphone, a small LCD screen, LEDs, and is designed to provide an interactive learning experience in robotics and programming. RAE is an excellent resource for teaching the basics of robotics, programming, electronics, and more advanced topics like computer vision, machine learning, and artificial intelligence.
The robot operates on the RVC3 chip, differing from the RVC2-based OAK camera lineup, and supports neural network-based computer vision tasks. Furthermore, RAE integrates with the RobotHub cloud platform, allowing remote control and programming capabilities. RobotHub, currently under active development, aims for increased user-friendliness and a richer feature set, including support for additional devices.
You're invited to create a free account on RobotHub to explore its features. Your feedback and suggestions are highly valued. Additionally, RAE's drivers and libraries are open-source, accessible on GitHub, enhancing its accessibility and potential for community-driven improvements.
Progress Update
We are constantly working to improve the robot and add new features. Until recently, the robot's software was primarily based on ROS. While ROS is an excellent framework for robotics, it can be challenging for beginners. To simplify the process of creating custom applications for the robot, we have decided to develop a new Python library. This library will facilitate easier control of the robot and the development of custom applications. More details will be provided in the following section.
Additionally, we are working on integrating SLAM (Simultaneous Localization and Mapping) support into the robot. SLAM is a technique that enables a robot to map its environment and pinpoint its location within that map. This significant feature will allow the robot to navigate autonomously and perform tasks such as object detection and recognition. We plan to release a new robot version with SLAM support in the near future.
We also recognize that developing for the platform has not been as user-friendly as we would like. To address this, we are enhancing our offerings with more comprehensive documentation and tutorials, making it easier for users to get started. Furthermore, we will introduce additional example applications to demonstrate how to effectively use the platform and develop custom applications for the robot.
Python Library
As mentioned earlier, we are developing a new Python library designed to simplify controlling the robot and creating custom applications. Although it is still in development, we plan to release it soon. The entire codebase is open-source and accessible on GitHub. The library, written in Python, uses ROS for communication with the robot. It offers a range of classes and functions for controlling the robot and developing custom applications. To demonstrate how the library works, consider this simple example: it makes the robot move forward for 5 seconds before stopping.
from robot_py.robot import Robot

robot = Robot()
robot.start()
robot.movement_controller.move(0.5, 0.0, 5.0)  # arguments are linear speed, angular speed, and duration
robot.stop()
Another example of how to display an image on the robot's screen:
from robot_py.robot import Robot
import cv2

robot = Robot()
robot.start()
image = cv2.imread('image.jpg')  # path to the image
robot.display_controller.display_image(image)
robot.stop()
Another example of how to play a sound on the robot's speakers:
from robot_py.robot import Robot

robot = Robot()
robot.start()
robot.audio_controller.play_sound('sound.mp3')  # argument is the path to the sound file
robot.stop()
Under the hood, the library uses ROS2 topics, services, actions, and timers to communicate with the robot. Those interfaces can also be used directly if needed.
from robot_py.robot import Robot
from std_msgs.msg import String

robot = Robot()
robot.start()
robot.ros_interface.create_publisher("/string_message", String)
robot.ros_interface.publish("/string_message", String(data="Hello World!"))
robot.stop()
Example Apps
Currently, the default application for the robot is the "Follow Me" App, which showcases using a Yolo detection network to detect and follow a person. This App also has a frontend that lets you control the robot remotely via a joystick, play horn sounds, and play with other peripherals. In the near future we will also provide apps showcasing use cases such as emotion recognition, hand tracking, ChatGPT integration, mapping, and many others. In the video below you can also see the Christmas App that we created for the holidays. It showcases controlling several robot peripherals at the same time.
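For reference, here is a minimal sketch of the person-detection side of such an app, written against the generic DepthAI (v2) API as used on OAK devices. The actual Follow Me App may be structured differently; the blob path, thresholds, person class index, and the commented-out robot_py call are assumptions for illustration only.

import depthai as dai

pipeline = dai.Pipeline()

# Color camera feeds 416x416 preview frames to the neural network
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(416, 416)
cam.setInterleaved(False)

# Yolo detection network; blob path and decoding parameters depend on the model
nn = pipeline.create(dai.node.YoloDetectionNetwork)
nn.setBlobPath("yolo_person.blob")  # placeholder path
nn.setConfidenceThreshold(0.5)
nn.setNumClasses(80)
nn.setCoordinateSize(4)
nn.setIouThreshold(0.5)
cam.preview.link(nn.input)

# Send detections back to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        detections = q.get().detections
        people = [d for d in detections if d.label == 0]  # class 0 = "person" in COCO
        if people:
            # Steer toward the horizontal center of the first person;
            # xmin/xmax are normalized to [0, 1].
            cx = (people[0].xmin + people[0].xmax) / 2
            angular = (0.5 - cx) * 2.0
            # robot.movement_controller.move(0.3, angular, 0.1)  # hypothetical robot_py call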
Future Plans
For the upcoming year we have planned many features and improvements for the robot, including:
- SLAM support
- Simulation support for testing robot Apps
- Multi-robot support
- Improved documentation and tutorials
- More example Apps
If you have any suggestions or ideas for new features, please let us know in the comments below. We would love to hear your feedback!
As always, if you need help or have any questions, feel free to post them on our discussion forum.
The username is root and there is no password.
Yes, it's possible. You can use DepthAI code in RobotHub apps and get video frames - once you have the frames, you can do whatever you want with them - for example, stream them to some other service.
A good starting point for writing a RobotHub app with DepthAI code is here:
@ianC It's because the robot runs some application and that application "owns" the camera - if the application provides a preview, you will find it on the robot's app detail page.
It’s an incredibly exciting time at Luxonis as we are witnessing entirely new markets being created with spatial artificial intelligence (AI) cameras. One of the most rewarding aspects of working at Luxonis is learning about the various use cases that our customers are tackling. Every week feels like participating in a startup incubator as our customers invent new solutions to old problems. Not a week goes by without learning about an entirely new capability we didn't realize our customers could achieve with the Luxonis platform. One of the most common questions we receive is which industries Luxonis targets and why.
The reality is that industries have found Luxonis rather than Luxonis finding industries. This is because our early efforts to target specific industries were met with mixed results. For example, in early 2020 we were targeting the energy industry, only to see investment in oil & gas completely dry up by April 2020. To seed the market, Luxonis decided to target the most technical engineers on the planet with our Kickstarter campaigns. We figured that if we could convince hardcore hobbyists that we are the best solution, then we had the hard part out of the way. The slightly easier part is enabling those hobbyists to convince their business leadership that Luxonis can tackle their scaled enterprise use cases.
Starting in 2021, Luxonis doubled down on our approach of following the money, and that has led us to our current target list of ten key industries. These industries found us, and now Luxonis is putting in the time, talent, and treasure to expand rapidly in each area. Our target industries are:
- Transportation
- Agriculture & industrial equipment
- Warehousing
- Retail
- Construction
- Manufacturing
- Consumer Robotics
- Energy & Utilities
- Healthcare
- Sports
Each industry has numerous unique applications that keep our growing team of 64 engineers busy. Let’s take a moment to highlight a few industries in particular:
Construction
Common applications in construction include detecting whether workers are wearing personal protective equipment (PPE), people counting, and monitoring equipment operation. An example customer that we work with is CobraVision, who is using OAK cameras and RobotHub to monitor and improve safety at construction sites. Take a look at the video to see how CobraVision checks whether workers are wearing PPE and tracks whether vehicles are being operated safely near obstacles.
Warehousing
Warehouse operations are one of the areas with the most obvious opportunities to automate repetitive manual human labor. We are already witnessing 100% “lights out” warehouses that require no human labor to receive, sort, store, pack, and ship packages. Our customers are seeking solutions that automate different types of warehouse steps such as pick and place, autonomous mobile robots moving goods, and various tracking systems. An example customer that we work with is Rapyuta, which is developing a fully autonomous forklift.
Transportation
Transportation is a massive industry that has a lot of room for improvement in terms of both safety and efficiency. American commercial airlines are the gold standard example of what can be achieved with advanced sensing and automation systems. The data backs this up: in 2020, 38,824 Americans died in car crashes, 743 in train accidents, 838 in boating accidents, and 0 on commercial airlines. And zero fatalities for commercial airlines in the United States in 2020 is no fluke, as that result has been achieved most years over the last decade. One of the customers tackling the transportation industry in a big way is Hivemapper, who is building a decentralized road map with automotive users generating the data. Luxonis is helping Hivemapper deliver the dashcams needed for users to collect map data. The Hivemapper dashcam can also collect video data to quickly determine who is at fault when an accident occurs.
We love learning more about our customers' applications. If you’d like your use case to be featured on Luxonis.com, please reach out to us at support@luxonis.com. And if you ever want to chat with us about your use case, please feel free to schedule a video call with us here: https://meetings-eu1.hubspot.com/bradley1.
NineSigma, representing QRDI and Sidra Medicine, invites participants to submit proposals for the "Monitoring and Evaluation of Shaken Baby Syndrome" prize challenge ($100K).
QRDI and Sidra are driving healthcare innovation within Qatar with the goal of addressing major global health challenges.
Sidra invites proposals from startups, SMEs, and corporates for technologies that provide a diagnostic monitoring device for child abuse via "Shaken Baby Syndrome" events. Sidra is looking for easy-to-use devices with software support to enable rapid patient assessment and recommended interventions to reduce long-term complications related to Shaken Baby Syndrome.
Sidra and QRDI are looking for a non-invasive diagnostic device that can monitor and evaluate the forces generated during a potential Shaken Baby Syndrome incident. The software should be able to record the measurement parameters described below and ideally notify authorities via wireless networks, enabling rapid emergency response.
Measurement Parameters:
1. Velocity of movement
2. Acceleration/deceleration of the baby's head
3. Distance moved / time
4. Rotation of movement / angle of displacement
5. Flex/extension of the head/neck during the shaken episode
6. The device should be designed for use on patients under five years old
QRDI will invest up to $100,000 for pilot trial technologies, which are expected to be at a minimum of TRL 3. Full commercial rollout will be discussed in phase 2.
I would appreciate a short reply if you are considering submitting a proposal that highlights your technology, or if you would like more information on the Prize Challenge.
If you have any further questions or require more information on the specifications, feel free to address them to me (via telephone or email) and I will try to respond as soon as possible.
If interested, please reach out to worsfold@ninesigma.com.