Hi alexv,

I just tried the same, and I can't see the artifact:

Code here: [embedded code snippet not shown]

Could you try the same code as well on your side?

    erik I've tried the same code, but it did not work properly on my side: it only showed the first frame and then the image feed stopped. However, when I tried the depthai-python examples/ColorCamera/rgb_camera_control.py, the captures did seem to avoid this effect. The thing is that my application is in C++, and I would like to verify that the images obtained from the feed do not have this artifact. I will adapt the saving method that uses the still frame to C++ to see if that resolves the issue for image capture, but the artifact still appears even without saving: reading the image directly in the pipeline produces the mentioned effect.

    To create my camera pipeline I use this code:

    [embedded code snippet not shown]

    and this is the code used to get the image frame:

    [embedded code snippet not shown]

    My application uses the images from the camera in real time to estimate the pose of ArUco markers. Even if I solve the saving issue, I still have the artifact while running the detection and pose estimation algorithms in real time; since the positions of my markers' corners are displaced, my pose will be displaced as well, increasing the pose estimation error.

    I have been using this camera since January 2022, and this artifact was first noticed last month. The only thing I have changed since then is an update from depthai version 2.17.4 to 2.20.2. Do you think this is related to the hardware of the camera itself?

    • erik replied to this.

      Hi alexv,

      was not working properly since on my side it only showed the first frame and then the image feed stopped

      You have to press c to capture (display & save) a new 4K image. You could try installing an older version of depthai; it might be that the camera tuning changed. Please share minimal Python repro code, as I am not very familiar with C++.

      Thanks, Erik

        6 days later

        Hello erik,

        It took a while but I've managed to replicate the artifact with the following code.

        [embedded code snippet not shown]

        You should look at the two images saved at line 171 and line 189. This time I was using depthai version 2.20.2. I use ImageManip to avoid loading all 3 image channels into the buffer, for frame-rate reasons. This step is key: I use the OAK-1 camera connected to an Android device, and if I don't do this, the frame rate drops from 20 FPS (taking only one channel from the ImageManip node) to 10 FPS (taking the 3-channel image from isp).
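A rough sanity check on the bandwidth difference behind this (a sketch of my own arithmetic, assuming 1 byte/pixel for YUV400/GRAY8 and 3 bytes/pixel for the 3-channel frame described above; the byte counts are assumptions, not from alexv's code):

```python
# Back-of-the-envelope frame sizes for a 4K (3840x2160) stream.
width, height = 3840, 2160

gray_bytes = width * height        # single-plane YUV400 / GRAY8
bgr_bytes = width * height * 3     # 3-channel image

print(gray_bytes)                  # 8294400  (~7.9 MiB per frame)
print(bgr_bytes)                   # 24883200 (~23.7 MiB per frame)
print(bgr_bytes // gray_bytes)     # 3x the data per frame
```

Tripling the per-frame payload is consistent with the reported drop from 20 FPS to 10 FPS on a constrained link.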


        I was not able to use version 2.17.4 in Python, but I've done the same experiment with 2.17.4 in C++, and the artifact is less evident.

        This is the complete image, in case the scale is not clear in the first one. As you can see, the artifact normally appears on the right side of the image.

        I hope that you can help me with this issue. Thank you for your attention.

        • erik replied to this.

          Hi erik, here is the same code without the unnecessary controls, keeping only the one that saves the image when the 'C' key is pressed. This is the minimum I can do, since removing anything else from the code would not be representative or enough to reproduce the issue. This time you should look at the images saved at lines 92 and 108 to verify the presence of the artifact when using the ImageManip node. The description given in my last message still applies.

          [embedded code snippet not shown]

          I hope that you can help me.

          • erik replied to this.

            Hi alexv, so which output (of the two) produces artifacts on your side?

              erik The one from line 108, which comes from the ImageManip node and is called 'gframe'. The image from line 92 is the one that comes directly from the still encoder.

              • erik replied to this.

                Hi @alexv, I simplified this to 55 LOC and can still confirm the issue. I have sent it over to the FW engineers.

                #!/usr/bin/env python3
                
                import depthai as dai
                import cv2
                import numpy as np
                
                # Create pipeline
                pipeline = dai.Pipeline()
                
                # Define sources and outputs
                camRgb = pipeline.create(dai.node.ColorCamera)
                camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)
                camRgb.setInterleaved(False)
                #camRgb.setIspScale(3,3) # 1080P -> 720P
                camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
                camRgb.setImageOrientation(dai.CameraImageOrientation.NORMAL)
                
                manipVideo = pipeline.create(dai.node.ImageManip)
                manipVideo.setMaxOutputFrameSize(3840*2160)
                manipVideo.initialConfig.setFrameType(dai.ImgFrame.Type.YUV400p)
                
                
                ispOut = pipeline.create(dai.node.XLinkOut)
                outVideo = pipeline.create(dai.node.XLinkOut) #########
                
                outVideo.setStreamName('gray')  #########
                ispOut.setStreamName('isp')
                
                camRgb.isp.link(ispOut.input)
                camRgb.isp.link(manipVideo.inputImage)   #########
                manipVideo.out.link(outVideo.input)      #########
                
                # Connect to device and start pipeline
                with dai.Device(pipeline) as device:
                
                    ispQueue = device.getOutputQueue('isp')
                    grayQueue = device.getOutputQueue('gray',3,False) #########
                
                    gframe = None  # last grayscale frame from the ImageManip node
                    while True:
                        ispFrames = ispQueue.tryGetAll()
                        for ispFrame in ispFrames:
                            f = ispFrame.getCvFrame()
                            cv2.imshow('isp', cv2.pyrDown(cv2.pyrDown(f)))
                
                        grayFrames = grayQueue.tryGetAll()
                        for grayFrame in grayFrames:
                            gframe = np.array(grayFrame.getData(),np.uint8).reshape(2160,3840)
                
                        # Update screen (1ms polling rate)
                        key = cv2.waitKey(1)
                        if key == ord('q'):
                            break
                        elif key == ord('c') and gframe is not None:
                            cv2.imwrite("2174_img_manip.png",gframe)
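One way to localize the artifact offline (a sketch, not part of the repro above; `artifact_map` is a hypothetical helper, not depthai API): compare the grayscale plane from ImageManip against the luma recomputed from the corresponding isp frame, since the two should agree wherever ImageManip behaves correctly.

```python
import numpy as np

def artifact_map(isp_bgr: np.ndarray, manip_gray: np.ndarray) -> np.ndarray:
    """Absolute difference between ImageManip's grayscale plane and the
    luma recomputed from the isp frame; large values localize the shift.
    Hypothetical offline helper, not depthai API."""
    # BT.601 luma weights, matching cv2.cvtColor(..., COLOR_BGR2GRAY).
    b = isp_bgr[..., 0].astype(np.float32)
    g = isp_bgr[..., 1].astype(np.float32)
    r = isp_bgr[..., 2].astype(np.float32)
    luma = 0.114 * b + 0.587 * g + 0.299 * r
    return np.abs(luma - manip_gray.astype(np.float32))

# Synthetic check: identical content should diff to (near) zero.
img = np.full((4, 4, 3), 128, np.uint8)
diff = artifact_map(img, np.full((4, 4), 128, np.uint8))
print(diff.max())  # ~0.0
```

On real captures, a band of large differences on the right side of the map would match the location alexv reports for the artifact.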

                alexv It seems like only ImageManip converting to YUV400 produces these artifacts, not the still frame from the encoder. Could you confirm? Note also ImageManip's limitations: YUV400 isn't officially supported by ImageManip. Could you try to reproduce with a supported type as well?

                  erik Yes, the artifact is only produced when using the ImageManip node. Regarding ImageManip's limitations, I do not exceed the maximum image width (3840 pixels, which is also a multiple of 16). I've also tried with the GRAY8 image type, and the artifact appears with that configuration as well.

                  I've tested with the following versions of depthai-core:

                  • 2.17.4
                  • 2.18.0
                  • 2.19.1
                  • 2.20.2
                  • 2.21.2

                  The artifact appears in all of these versions except 2.17.4. In that version the artifact is almost imperceptible, but it does appear when looking carefully (it is more evident in an image of a checkerboard).

                  To give a little background, I was working with 2.17.4 because at the time it was the available release version, which is why I had not noticed this artifact. I intended to update depthai-core to a newer version to be able to recover the metadata from each image (lensPosition, ISO, exposureTime). It may be related that from version 2.18.0 to 2.21.2 it is possible to obtain the image metadata, but the artifact is also more evident in those versions.

                  2 months later

                  Hello @erik ,

                  I would like to know if there has been an attempt to solve this issue. Do you think it would be possible to correct it?

                  Thank you, Alexander

                  • erik replied to this.

                    Hi alexv, to get updates on bug fixes I'd suggest posting an issue for it on the depthai-core repo on GitHub.

                      erik You are right, I should have done that. I was waiting for an answer from your side and had just thought it was not possible.

                      4 months later

                      I can confirm that I can still reproduce these artifacts in the current version of depthai. It is easiest to see if you set up a "frame forwarding" application that uses a Script node to oscillate between an ImageManip node converting to YUV400 and an ImageManip node converting to YUV420, keeping everything else as similar as possible. To rule out any smoothing effect my eyes/brain might apply (my old eyes often play tricks on me), I reduced the FPS to 1 and used an even/odd modulo on the frame counter to determine which ImageManip to use.
                      If you do this, you will see that the offset shift does not appear to be linearly distributed throughout the image. Some objects in view are offset more, and some hardly seem to be offset at all. My colleague and I were looking at this a couple of weeks ago, and I can't recall whether we determined that it was dark-edged objects that appeared to shift, or something else. At my next opportunity, I'll set up an MRE with frame forwarding.
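A minimal reconstruction of the oscillation described above (a sketch under assumptions: the stream names and exact wiring are mine, not robotaiguy's actual code). A Script node forwards each incoming frame to one of two ImageManip nodes based on an even/odd counter:

```python
# Device-side Script node source (runs on the OAK): alternate frames
# between two outputs using an even/odd counter. Stream names 'in',
# 'to400', 'to420' are assumptions for this sketch.
FORWARD_SCRIPT = """
i = 0
while True:
    frame = node.io['in'].get()
    if i % 2 == 0:
        node.io['to400'].send(frame)
    else:
        node.io['to420'].send(frame)
    i += 1
"""

def pick_output(frame_num: int) -> str:
    """Host-side mirror of the even/odd dispatch, for clarity."""
    return 'to400' if frame_num % 2 == 0 else 'to420'

# Host side (not shown): camRgb.isp -> script.inputs['in'];
# script.outputs['to400'] -> ImageManip with setFrameType(YUV400p);
# script.outputs['to420'] -> ImageManip with setFrameType(YUV420p).
print(pick_output(0), pick_output(1))  # to400 to420
```

With FPS at 1, alternating frames then come from the YUV400 and YUV420 paths in turn, making the shift between the two easy to spot by eye.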

                        robotaiguy Hello, in my particular use case it is not possible to combine both ImageManip nodes, since the hardware configuration I use does not support YUV420, at least not at 20 FPS; that is the reason I use YUV400 or GRAY8.

                        In my use case I identify the corners of an object in the image to perform high-precision pose estimation (or at least as high as the image allows). A shift in the pixel positions (even if it is not always in the same place) creates an error that should not exist, degrading the overall quality of the system, given that an image without this artifact is known to achieve the expected precision when estimating the pose of an object.
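To make that sensitivity concrete (a back-of-the-envelope sketch with hypothetical numbers; the focal length and shift below are assumptions, not measured values): under a pinhole model, a corner displaced by d pixels corresponds to a bearing error of roughly atan(d/f).

```python
import math

# Hypothetical numbers: a 4K stream with an assumed focal length of
# 3000 px and a 2 px corner displacement caused by the artifact.
focal_px = 3000.0
shift_px = 2.0

angular_err = math.atan(shift_px / focal_px)         # bearing error, radians
print(round(math.degrees(angular_err), 4))           # 0.0382 (degrees)

# For a marker 1 m away, that bearing error alone moves the observed
# corner ray by roughly 0.67 mm in the marker plane.
print(round(1.0 * math.tan(angular_err) * 1000, 2))  # 0.67
```

Sub-millimeter per corner may sound small, but corner errors compound across the four marker corners in the pose solver, so a non-random shift of this size can dominate the error budget of a high-precision setup.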

                          6 months later