Announcing DepthAI v3
A next‑generation AI vision stack—now with a fully native Model ZOO, HubAI conversion pipeline, a refreshed cross‑platform Viewer, and a streamlined core API that runs seamlessly on both RVC2 and RVC4.
A fully native Model ZOO
Today we’re unveiling the Model ZOO, a curated collection of more than 60 production‑ready neural networks bundled directly into DepthAI v3.
Breadth & depth – Object detection, pose estimation, segmentation, neural depth, super‑resolution, and more.
Rich model cards – Training data, evaluation metrics, expected FPS on RVC2/RVC4, and drop‑in code snippets.
Multiple variants – Select the exact balance of accuracy vs. throughput for your application.
One‑click deployment – The ZOO is natively understood by DepthAI v3; no more hunting for blobs or manual post‑processing hooks (see the sketch after this list).
Open to the community – Publish your own models and vote on what we add next.
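To make that concrete, here is a minimal sketch of a v3 pipeline that pulls a detector from the ZOO by name. The yolov6-nano slug is illustrative; substitute any model from models.luxonis.com:

```python
import depthai as dai

# Build a detection pipeline; the model is fetched from the ZOO by name.
with dai.Pipeline() as pipeline:
    camera = pipeline.create(dai.node.Camera).build()
    nn = pipeline.create(dai.node.DetectionNetwork).build(
        camera, dai.NNModelDescription(model="yolov6-nano")
    )
    detections = nn.out.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        # Post-processing is handled by the node; we get parsed detections.
        for det in detections.get().detections:
            print(det.label, det.confidence)
```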
HubAI replaces legacy conversion tools
We’ve consolidated blobconverter and tools.luxonis.com into HubAI, a single interface for model conversion, versioning, and lifecycle management. Migrating a Python or CLI workflow? The porting guide will get you there in minutes, and RVC2 users now benefit from the new .superblob format, which lets you choose the ideal SHAVE allocation at runtime.
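On RVC2, runtime SHAVE selection looks roughly like the sketch below (API names per our reading of the DepthAI docs; the path and SHAVE count are illustrative):

```python
import depthai as dai

# Load a .superblob and resolve a blob with the SHAVE count you want at runtime.
superblob = dai.SuperBlob("yolov6-nano.superblob")  # illustrative path
blob = superblob.getBlobWithNumShaves(6)  # tune the SHAVE allocation per pipeline load
# nn.setBlob(blob)  # hand the resolved blob to your NeuralNetwork node
```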
oak‑examples & depthai‑nodes
Our former depthai‑experiments repo has been reborn as oak‑examples: a clean, well‑documented catalog of end‑to‑end projects ranging from basic depth sensing to full multi‑camera AR pipelines. Pair it with depthai‑nodes, a growing Python contrib library of high‑level, reusable host‑side nodes that eliminate boilerplate while encouraging rapid experimentation.
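As a taste of depthai‑nodes, here is a sketch using its ParsingNeuralNetwork node, which bundles model‑specific output decoding so you receive parsed results instead of raw tensors; the model slug is illustrative:

```python
import depthai as dai
from depthai_nodes.node import ParsingNeuralNetwork

with dai.Pipeline() as pipeline:
    camera = pipeline.create(dai.node.Camera).build()
    # Output decoding is handled by the node; no hand-written tensor parsing.
    nn = pipeline.create(ParsingNeuralNetwork).build(
        camera, "luxonis/yolov6-nano:r2-coco-512x288"
    )
    parsed = nn.out.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        print(parsed.get())  # already-decoded detections/keypoints
```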
Meet the new web‑based Viewer
Native installers for Windows, macOS, Linux—plus an on‑device OAK4 app.
Auto‑detects RVC2 & RVC4, streams multiple cameras, and visualizes disparity, depth maps, or point clouds in real time.
Tight ZOO integration: pick a model, click Load, and see live inference overlays instantly.
Built‑in calibration manager for importing, exporting, or extracting camera calibrations without scripts.
DepthAI v3 core: simpler, faster, unified
Unified & simplified API – Fewer lines of code to spin up cameras, neural nets, and spatial pipelines.
Custom host nodes – Run Python or C++ logic on the host inside the same graph; perfect for post‑processing or third‑party library calls (see the sketch after this list).
RVC2 ⇄ RVC4 parity – Prototype on one, deploy on the other. No code changes in 99% of cases.
Standardized coordinates (RDF) – One consistent right‑down‑forward frame shared by cameras, IMU, and depth outputs.
High‑level RGBD blocks – Synchronized color + depth without manual graph wiring.
Integrated visualizer – Inspect streams, NN outputs, and your entire pipeline graph—even when everything is running on‑device.
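To make the custom host node idea concrete, here is a minimal Python sketch, assuming v3’s ThreadedHostNode interface; the FrameLogger name and the 640×400 output request are illustrative:

```python
import time
import depthai as dai

# A custom host node: plain Python running on the host, wired into the same graph.
class FrameLogger(dai.node.ThreadedHostNode):
    def __init__(self):
        super().__init__()
        self.input = self.createInput()

    def run(self):
        # Consume frames as they arrive and log their metadata.
        while self.isRunning():
            frame = self.input.get()
            print("frame", frame.getSequenceNum(), frame.getTimestamp())

with dai.Pipeline() as pipeline:
    camera = pipeline.create(dai.node.Camera).build()
    logger = pipeline.create(FrameLogger)
    camera.requestOutput((640, 400)).link(logger.input)

    pipeline.start()
    while pipeline.isRunning():
        time.sleep(1)
```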
Coming from v2? Grab the v2 → v3 porting guide and feed it to your favorite LLM; most migration work is copy‑paste.
Get started today
Upgrade DepthAI: pip install "depthai==3.*"
Browse the ZOO: models.luxonis.com
Clone the examples: git clone https://github.com/luxonis/oak-examples
Launch the Viewer: download the installer or run it on your OAK4.
Convert a custom model with HubAI and drop the .superblob straight into your pipeline.
We can’t wait to see what you build. Jump into the forums, share feedback, and help shape the next wave of AI‑powered perception. DepthAI v3 is here—let’s create something extraordinary.