Hello. I am using depthai-core via C++. Following the "RGB video" code example (https://docs.luxonis.com/projects/api/en/latest/samples/ColorCamera/rgb_video/#rgb-video) I'm seeing unexpected behavior: I get images back from the camera, but they are greyscale. This is with an OAK-D Pro PoE. Thoughts? I have considered that maybe the camera is in infrared mode, but I'm not sure how to test this. Is there a way to force the camera into RGB mode and ensure that isn't happening?
Greyscale images returned from an RGB camera/socket
The only difference is that I'm not using OpenCV. Instead I get an ImgFrame object and return the byte[], which then gets saved as a JPG.
```cpp
using namespace std;
try {
    // Create pipeline
    dai::Pipeline pipeline;

    // Define source and output
    auto camRgb = pipeline.create<dai::node::ColorCamera>();
    auto xoutRgb = pipeline.create<dai::node::XLinkOut>();

    xoutRgb->setStreamName("rgb");
    xoutRgb->input.setBlocking(false);
    xoutRgb->input.setQueueSize(1);

    // Properties
    camRgb->setBoardSocket(dai::CameraBoardSocket::RGB);
    camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
    camRgb->setInterleaved(false);
    camRgb->setColorOrder(dai::ColorCameraProperties::ColorOrder::BGR);
    //camRgb->setPreviewSize(300, 300);
    //camRgb->setVideoSize(4056, 3040);
    camRgb->setVideoSize(1920, 1080);

    // Linking
    //camRgb->preview.link(xoutRgb->input);
    camRgb->video.link(xoutRgb->input);

    // Connect to device and start pipeline
    dai::Device device(pipeline);
    cout << "Connected cameras: " << device.getConnectedCameraFeatures() << endl;

    // Print USB speed
    // device already closed or disconnected exception
    //cout << "Usb speed: " << device.getUsbSpeed() << endl;

    // Bootloader version
    if(device.getBootloaderVersion()) {
        cout << "Bootloader version: " << device.getBootloaderVersion()->toString() << endl;
    }

    // Device name
    cout << "Device name: " << device.getDeviceName() << endl;

    // Output queue will be used to get the rgb frames from the output defined above
    //auto qRgb = device.getOutputQueue("rgb", 4, false);
    auto qRgb = device.getOutputQueue("rgb");

    while(true) {
        auto inRgb = qRgb->get<dai::ImgFrame>();
        auto type = inRgb->getType();
        auto data = inRgb->getData();
        return data;
    }
} catch(const std::exception &ex) {
    int stophere = 7;
} catch(...) {
    int stophere = 7;
}
return {};
```
Trying to break this down. One issue I'm seeing: dai::ImgFrame::getData() is returning an array of 3,110,400 bytes. At 3 bytes per pixel that would be 1,036,800 pixels, which is not 1080p. It fits with what we're seeing, though. A 1920x1080 BGR frame should be 6,220,800 bytes (1920 x 1080 x 3), and this is exactly half of that, which is what we see in the above image: interpreted as packed 3-channel data, the buffer comes out as a 1920x540 image, or three 640x540 images side by side. Notably, 3,110,400 is also exactly 1920 x 1080 x 1.5, the size of a single 1080p NV12 (YUV420 semi-planar) frame.
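The arithmetic above can be sanity-checked without any hardware. A minimal sketch (the helper names here are my own, not part of the depthai API):

```cpp
#include <cstddef>

// Sketch: expected buffer sizes for a W x H frame in two common pixel formats.
// Pure arithmetic, no camera or depthai headers required.
constexpr std::size_t bgr888Size(int w, int h) {
    return static_cast<std::size_t>(w) * h * 3;      // packed BGR, 3 bytes per pixel
}
constexpr std::size_t nv12Size(int w, int h) {
    return static_cast<std::size_t>(w) * h * 3 / 2;  // full-res Y plane + half-res interleaved UV
}
// For 1920x1080: bgr888Size -> 6,220,800 and nv12Size -> 3,110,400 -- exactly the
// observed getData() length, i.e. one full NV12 frame rather than half a BGR frame.
```

Viewing only the first w*h bytes of such a buffer shows the luma (Y) plane, which is why the frame renders as a plausible but greyscale image.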
That's an idea, not one I like, but it is an idea. The implication is that the NV12toBGR24 routine I implemented is faulty, which is definitely possible. Are there no helper routines built into the API for situations like this?
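For cross-checking a hand-rolled converter, here is a dependency-free reference sketch (my own code, not a depthai helper) that converts NV12 to packed BGR24 using BT.601 video-range coefficients:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Reference NV12 -> BGR24 conversion (BT.601, video range).
// NV12 layout: w*h bytes of Y, followed by (w*h)/2 bytes of interleaved UV
// at half resolution in both dimensions (U first, then V).
std::vector<uint8_t> nv12ToBgr24(const uint8_t* src, int w, int h) {
    std::vector<uint8_t> bgr(static_cast<std::size_t>(w) * h * 3);
    const uint8_t* yPlane  = src;
    const uint8_t* uvPlane = src + static_cast<std::size_t>(w) * h;

    auto clamp8 = [](int v) { return static_cast<uint8_t>(std::min(255, std::max(0, v))); };

    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int C = yPlane[y * w + x] - 16;                       // luma, video range 16..235
            int U = uvPlane[(y / 2) * w + (x / 2) * 2]     - 128; // shared by a 2x2 pixel block
            int V = uvPlane[(y / 2) * w + (x / 2) * 2 + 1] - 128;

            std::size_t i = (static_cast<std::size_t>(y) * w + x) * 3;
            bgr[i + 0] = clamp8((298 * C + 516 * U + 128) >> 8);            // B
            bgr[i + 1] = clamp8((298 * C - 100 * U - 208 * V + 128) >> 8);  // G
            bgr[i + 2] = clamp8((298 * C + 409 * V + 128) >> 8);            // R
        }
    }
    return bgr;
}
```

As for built-in helpers: if your depthai-core build includes OpenCV support, I believe dai::ImgFrame::getCvFrame() returns an already-converted cv::Mat, and plain OpenCV can do the conversion via cv::cvtColor with cv::COLOR_YUV2BGR_NV12 — though availability depends on how the library was built, so treat that as something to verify.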