Recent blog posts
DepthAI ROS Release 3.0
Clearpath Robotics + Luxonis: Making Robotics More Accessible
Luxonis OAK cameras power Clearpath Robotics’ TurtleBot and Husky platforms, showcasing how Luxonis’ flexible, developer-friendly vision systems make scalable, real-world autonomy more accessible across education, research, and industrial robotics worldwide.
New ToF Filters: Major Improvements in Depth Perception and Point Clouds
Autonabit + Luxonis: Relying on OAK for autonomous navigation in Agriculture
How one small startup uses OAK cameras in its autonomy stack to navigate vineyards, following rows and avoiding obstacles
Dynamic Calibration: Keeping Stereo Depth Accurate
DepthAI v3.0.0 Release
Power Profiles Benchmark
We tested five neural networks on a Luxonis OAK-4D camera using three power profiles under full CPU load. Results show:
- sustained_nn_heavy gives the highest NN performance
- sustained_cpu_heavy favors CPU efficiency
- best_effort balances both
All tests ran with stress-ng at 100% CPU load and stable ambient conditions. Reproduction steps and benchmark script included.
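The background CPU load described above can be sketched with a single stress-ng invocation. This is an illustrative fragment, not the benchmark script from the post; the duration value is an assumption.

```shell
# Sketch of the background load used during the benchmark runs.
# --cpu 0 starts one stressor per online core; --cpu-load sets the
# target per-core utilization in percent. The timeout is illustrative.
CMD="stress-ng --cpu 0 --cpu-load 100 --timeout 300s"
echo "$CMD"   # run this alongside the NN benchmark, e.g. "$CMD &"
```

Running the load in the background (with `&`) lets the neural-network benchmark execute concurrently under full CPU pressure, matching the test conditions above.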
RVC4 1.18 Release Announcement
Farmwave + Luxonis: Using AI in Agriculture to Boost Harvest Yield
How one agriculture AI innovator uses Luxonis computer vision to prevent harvest loss and boost overall yield
Vision Buddy + Luxonis: Restoring sight with CV and Edge AI
Vision Buddy X Luxonis: How one MedTech startup uses Luxonis AI cameras and edge compute to restore vision to the visually impaired.
Vention + Luxonis: Smarter Bin Picking with edge AI in action
Vention X Luxonis: How Luxonis depth AI cameras and edge compute are used in Vention's bin picking robot 'AI Operator'
Real-Time Hand Pose with Luxonis OAK 4
Real-Time Hand Pose Estimation with Luxonis OAK 4: Fingertip Precision at the Edge
Unlock the power of real-time hand pose estimation with the Luxonis OAK 4 (RVC4) camera, the latest innovation in edge AI vision. Perfect for developers building gesture-based interfaces, sign language recognition, or augmented reality (AR) applications, this solution delivers low-latency performance entirely on-device, no cloud required.
- Real-time 2D hand landmark detection (21 points per hand)
- Live visualization in the browser via a local web server
- Zero cloud latency: all AI inference runs on the OAK 4
- Easily embeddable into custom apps and UI overlays
- Ideal for edge AI applications like gesture control, robotics, AR/VR, and more
AI-Powered Soldering Defect Detection with OAK4-S and a Microscope
Dynamic YOLO-World with Luxonis OAK 4
Unlock real-time object detection on the edge using YOLO-World-L and the Luxonis OAK 4 (RVC4) PoE camera. This powerful demo features a dynamic, browser-based frontend for live AI inference, no backend coding required. Users can:
• Detect objects from live video streams or uploaded images
• Dynamically add or remove detection classes
• Adjust confidence thresholds in real time
• View annotated bounding boxes updated instantly
Built on top of open-vocabulary models and the Luxonis DepthAI platform, this project offers developers a plug-and-play solution for deploying vision AI to the edge.
Luxonis Hub is SOC 2 Type II compliant!
Luxonis Hub is now SOC 2 Type II compliant, demonstrating our commitment to data security, operational transparency, and system reliability. This independent audit validates that our controls and processes meet industry-leading standards, giving developers and businesses confidence in using Luxonis Hub for secure device management, AI deployment, and data handling at scale.
Visual SLAM Using Wheel Odometry and Luxonis OAK-D Pro on ROS 2
This guide details how to build a Visual SLAM system using the Luxonis OAK-D Pro and iRobot Create 3, relying entirely on wheel odometry (disabling visual odometry) for motion estimation. Running on ROS 2 Humble with a Jetson Xavier NX, the setup is optimized for 2D indoor mapping using stereo RGB-D data and RTAB-Map.
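For readers who want a feel for the wheel-odometry-only configuration, here is a launch sketch assuming the rtabmap_launch entry point from rtabmap_ros. Argument and topic names here are assumptions and may differ by version and camera driver setup; this is a configuration fragment, not a verified command.

```shell
# Hypothetical sketch: disable RTAB-Map's visual odometry and feed the
# Create 3 wheel odometry instead. All topic names are assumptions.
ros2 launch rtabmap_launch rtabmap.launch.py \
    visual_odometry:=false \
    odom_topic:=/odom \
    frame_id:=base_link \
    rgb_topic:=/oak/rgb/image_raw \
    depth_topic:=/oak/stereo/image_raw \
    camera_info_topic:=/oak/rgb/camera_info \
    approx_sync:=true
```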
DepthAI v3 Release Announcement!
A next-generation AI vision stack, now with a fully native Model Zoo, a HubAI conversion pipeline, a refreshed cross-platform Viewer, and a streamlined core API that runs seamlessly on both RVC2 and RVC4.