Hi ramkunchur,

I don't think this is currently supported. The other rotations, I believe, are done by the image sensor itself (by changing the readout, etc.).

That said, I'm not sure whether the ImageManip node can be used for this. Looking into it.

Thanks,
Brandon

On the subject of ImageManip - we will be adding quite a bit of functionality soon-ish, with the effort below likely starting at the end of this month:
https://github.com/luxonis/depthai/issues/376

That said, it looks like you can use Image Manip now to rotate an image 90 degrees:
https://docs.luxonis.com/projects/api/en/latest/components/nodes/image_manip/

However, I don't immediately see the function in the docs. Asking @erik .

Thanks,
Brandon

    Hello ramkunchur,
    here's a minimal demo that rotates the frame with ImageManip. You could also use colorCamera instead of a mono camera - you would just need to adjust the dai.RotatedRect values (center and size) to the resolution. I hope this helps!
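    Something along these lines (a sketch, assuming a mono camera at 400P; adjust node names and resolution as needed):

        import cv2
        import depthai as dai

        pipeline = dai.Pipeline()

        # Mono camera; THE_400_P outputs 640x400 frames
        monoCam = pipeline.createMonoCamera()
        monoCam.setBoardSocket(dai.CameraBoardSocket.LEFT)
        monoCam.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)

        # Rotate the full frame 90 degrees via a rotated-rect crop
        manip = pipeline.createImageManip()
        rr = dai.RotatedRect()
        rr.center.x, rr.center.y = monoCam.getResolutionWidth() // 2, monoCam.getResolutionHeight() // 2
        rr.size.width, rr.size.height = monoCam.getResolutionHeight(), monoCam.getResolutionWidth()
        rr.angle = 90
        manip.initialConfig.setCropRotatedRect(rr, False)  # False = pixel (not normalized) coordinates
        monoCam.out.link(manip.inputImage)

        xout = pipeline.createXLinkOut()
        xout.setStreamName("rotated")
        manip.out.link(xout.input)

        with dai.Device(pipeline) as device:
            q = device.getOutputQueue("rotated", maxSize=4, blocking=False)
            while True:
                cv2.imshow("rotated", q.get().getCvFrame())
                if cv2.waitKey(1) == ord('q'):
                    break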
    Thanks, Erik

      Hi Brandon and erik,

      Thanks so much for the information...

      I'm trying to rotate the camera output by 90 degrees with colorCamera, using the method above.

      Below is my code snippet; I get an error - it seems there is an unknown-attribute problem.

      pipeline = depthai.Pipeline()
      if camera:
          print("Creating Color Camera...")
          cam = pipeline.createColorCamera()

          # code to rotate camera by 90 degrees - start
          manip90 = pipeline.createImageManip()
          rr = depthai.RotatedRect()
          rr.center.x, rr.center.y = cam.getResolutionWidth() // 2, cam.getResolutionHeight() // 2
          rr.size.width, rr.size.height = cam.getResolutionHeight(), cam.getResolutionWidth()
          rr.angle = 90

          manip90.initialConfig.setCropRotatedRect(rr, False)
          cam.out.link(manip90.inputImage)  # problem with this line...
          # code to rotate camera by 90 degrees - end

          cam.setPreviewSize(300, 300)
          cam.setResolution(depthai.ColorCameraProperties.SensorResolution.THE_1080_P)
          cam.setInterleaved(False)
          cam.setBoardSocket(depthai.CameraBoardSocket.RGB)
          cam_xout = pipeline.createXLinkOut()
          cam_xout.setStreamName("cam_out")
          cam.preview.link(cam_xout.input)

      The error message is:
      cam.out.link(manip90.inputImage) # problem with this line...
      AttributeError: 'depthai.ColorCamera' object has no attribute 'out'

      Requesting your kind help here...
      Please let me know whether my implementation is correct, and how to resolve this error.

      I simply need to implement the following:
      The camera is placed horizontally, so I need to rotate the image by 90 degrees so that an upright image is captured...
      This needs to happen both for the image sent for inference and for the one we finally display with imshow.

      Thanks & Best Regards,
      Ram


        Hello ramkunchur,
        the colorCamera node doesn't have an out output; you should use preview instead. Please refer to our documentation for more information. I just tried it with the color camera instead of mono, but it doesn't seem to be working correctly (code here). We are investigating further and it should be fixed this week; we apologize for the inconvenience.
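        With the node names from your snippet, that line becomes:

            cam.preview.link(manip90.inputImage)  # preview output, instead of the non-existent cam.out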
        Thanks, Erik

          Hi erik

          Thanks a lot for your quick response, really appreciate it.

          I'll wait for an update about the fix.

          Thanks & Best Regards,
          Ram

            Hello ramkunchur,
            sorry for the trouble; it turns out I was incorrect - rotating color frames is already supported, the width of the (input) frame just has to be a multiple of 16. Here's a minimal example.
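            For example (640x400 is just one preview size whose width satisfies the constraint):

                camRgb.setPreviewSize(640, 400)  # width 640 = 40 * 16, so rotation works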
            Thanks, Erik

              Hi erik,

              Thanks so much.

              I have come across another issue now. Since the width has to be a multiple of 16, I changed the preview size from the original (300, 300):
              cam.setPreviewSize(300,300)
              to:
              cam.setPreviewSize(640,400)

              Inference no longer works after this change; I also tried replacing (300, 300) with (640, 400) everywhere else, but inference still doesn't work.

              Can you please tell me what else I should adjust after changing the preview size to (640, 400)?

              I also tried changing it to (640, 640) everywhere, and I get the following error:

              [NeuralNetwork(2)] [warning] Input image (640x640) does not match NN (300x300)

              Need your help with this last step.

              Thanks & Best Regards,
              Ram


                Hello ramkunchur,
                here is a minimal solution I quickly prepared to achieve that. It uses another ImageManip to crop the frame to 300x300. You could also crop on the first ImageManip, but for some reason the output frame then doesn't keep its aspect ratio (it gets stretched).
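                Sketched out (assuming the rotation manip is named manipRgb and the detection network node is named nn):

                    # second ImageManip: resize the rotated frames to the NN input size
                    cropManip = pipeline.createImageManip()
                    cropManip.initialConfig.setResize(300, 300)
                    manipRgb.out.link(cropManip.inputImage)  # rotated frames in
                    cropManip.out.link(nn.input)             # 300x300 frames out to the NN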
                Thanks, Erik

                  Hi erik
                  Thanks so much for the detailed help.
                  The program doesn't seem to run... nothing happens.

                  Could you please check once?

                  Thanks & Best Regards,
                  Ram


                    ramkunchur I just tried it again; it works as expected. I have depthai version 2.8 (in case you have an older version where it potentially doesn't work). What is the output of the program?
                    Thanks, Erik

                      Hi erik

                      I am using an OAK-1 and had depthai 2.1 installed.

                      I have now installed depthai 2.8...

                      I'm now getting the rotated output, but without inference results.

                      Below is the error:
                      [NeuralNetwork(4)] [warning] Input image (640x400) does not match NN (300x300)
                      FPS:0.00

                      Thanks & Best Regards,
                      Ram


                        Hi erik

                        I will wait for your response...

                        Requesting your help in this regard.

                        Thanks & Best Regards,
                        Ram

                          ramkunchur

                          It looks like you'll need to use ImageManip to crop and/or "squeeze" your 640x400 frames down to the neural network's 300x300 resolution.

                          Thoughts?

                          Thanks,
                          Brandon

                            ramkunchur you are linking the wrong ImageManip output to the NN node. You should link the cropManip ImageManip node - the one that crops the 640x400 frames to 300x300 - to the NN input.
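                            That is (assuming your detection network node is named nn):

                                # wrong: manipRgb.out.link(nn.input)  # sends 640x400 frames to a 300x300 NN
                                cropManip.out.link(nn.input)          # correct: 300x300 frames to the NN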
                            Thanks, Erik

                              Hi erik, Brandon

                              Thanks for your reply...

                              I've used the same code from your example, as below:

                              	camRgb = pipeline.createColorCamera()
                              	camRgb.setPreviewSize(640, 400)
                              	camRgb.setResolution(depthai.ColorCameraProperties.SensorResolution.THE_1080_P)
                              	camRgb.setInterleaved(False)
                              
                              	manipRgb = pipeline.createImageManip()
                              	rgbRr = depthai.RotatedRect()
                              	rgbRr.center.x, rgbRr.center.y = camRgb.getPreviewWidth() // 2, camRgb.getPreviewHeight() // 2
                              	rgbRr.size.width, rgbRr.size.height = camRgb.getPreviewHeight(), camRgb.getPreviewWidth()
                              	rgbRr.angle = 90
                              	manipRgb.initialConfig.setCropRotatedRect(rgbRr, False)
                              	camRgb.preview.link(manipRgb.inputImage)
                              
                              	cropManip = pipeline.createImageManip()
                              	cropManip.initialConfig.setResize(300, 300)
                              	manipRgb.out.link(cropManip.inputImage)
                              
                              	manipRgbOut = pipeline.createXLinkOut()
                              	manipRgbOut.setStreamName("cam_out")
                              	cropManip.out.link(manipRgbOut.input)

                              I still get the same error:
                              [14442C1021FB92CD00] [140.524] [NeuralNetwork(4)] [warning] Input image (640x400) does not match NN (300x300)

                              I do get the output frame, but without inference results...

                              I really need help understanding what I am doing wrong.
                              Alternatively, can you please provide an updated gen2-fatigue-detection script with the rotate option, using the rotation code suggested above?

                              I am using this as an example... I want to run inference with the camera placed horizontally.

                              This is kind of important, thanks in advance for your time and help.

                              Thanks & Best Regards,
                              Ram


                                ramkunchur yes, that's the correct code. I have created another demo that links the 300x300 rotated frames to MobileNet. You will need to place this script into depthai-python/examples, as it requires the MobileNet blob. Unfortunately, I don't have time to update the script you mentioned, but I am sure you will be able to update it yourself with the help of the demo script I have just created - it should be straightforward.
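                                An illustrative sketch of that kind of pipeline (not the exact attached script; fetching the mobilenet-ssd blob via blobconverter is an assumption here - the original demo loads the blob from the examples folder):

                                    import cv2
                                    import depthai as dai
                                    import blobconverter

                                    pipeline = dai.Pipeline()

                                    camRgb = pipeline.createColorCamera()
                                    camRgb.setPreviewSize(640, 400)  # width is a multiple of 16
                                    camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
                                    camRgb.setInterleaved(False)

                                    # Rotate the 640x400 preview frames by 90 degrees
                                    manipRgb = pipeline.createImageManip()
                                    rr = dai.RotatedRect()
                                    rr.center.x, rr.center.y = camRgb.getPreviewWidth() // 2, camRgb.getPreviewHeight() // 2
                                    rr.size.width, rr.size.height = camRgb.getPreviewHeight(), camRgb.getPreviewWidth()
                                    rr.angle = 90
                                    manipRgb.initialConfig.setCropRotatedRect(rr, False)
                                    camRgb.preview.link(manipRgb.inputImage)

                                    # Resize the rotated frames to the 300x300 NN input
                                    cropManip = pipeline.createImageManip()
                                    cropManip.initialConfig.setResize(300, 300)
                                    manipRgb.out.link(cropManip.inputImage)

                                    # MobileNet SSD detection network, fed by the 300x300 frames
                                    nn = pipeline.createMobileNetDetectionNetwork()
                                    nn.setBlobPath(blobconverter.from_zoo(name="mobilenet-ssd", shaves=6))
                                    nn.setConfidenceThreshold(0.5)
                                    cropManip.out.link(nn.input)

                                    xoutRgb = pipeline.createXLinkOut()
                                    xoutRgb.setStreamName("rgb")
                                    cropManip.out.link(xoutRgb.input)

                                    xoutNn = pipeline.createXLinkOut()
                                    xoutNn.setStreamName("nn")
                                    nn.out.link(xoutNn.input)

                                    with dai.Device(pipeline) as device:
                                        qRgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
                                        qNn = device.getOutputQueue("nn", maxSize=4, blocking=False)
                                        while True:
                                            frame = qRgb.get().getCvFrame()
                                            inNn = qNn.tryGet()
                                            if inNn is not None:
                                                for det in inNn.detections:
                                                    # detection coordinates are normalized; scale to the 300x300 frame
                                                    x1, y1 = int(det.xmin * 300), int(det.ymin * 300)
                                                    x2, y2 = int(det.xmax * 300), int(det.ymax * 300)
                                                    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
                                            cv2.imshow("rotated", frame)
                                            if cv2.waitKey(1) == ord('q'):
                                                break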
                                Thanks, Erik

                                  Hi erik...

                                  Thanks, I'm able to get it right this time.

                                  However, my full-screen mode doesn't work with this, probably because the output resolution needs to be a multiple of 16...

                                  Not sure how to resolve this, as having full-screen output would have been nice.

                                  Thanks so much for your time and help... 🙂

                                  Thanks & Best Regards,
                                  Ram


                                    ramkunchur You could just use the cv2.resize() function to upscale the 300x300 frame to the desired size. You could also stream the 1080P video output from the device and display the detections on the video frames instead of the 300x300 preview frame. So something similar to this example.
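                                    For the resize route, something like this on the host (the window name and target size are arbitrary):

                                        # upscale the 300x300 frame; the display size isn't tied to multiples of 16
                                        big = cv2.resize(frame, (1080, 1080), interpolation=cv2.INTER_LINEAR)
                                        cv2.namedWindow("preview", cv2.WINDOW_NORMAL)
                                        cv2.setWindowProperty("preview", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
                                        cv2.imshow("preview", big)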
                                    Thanks, Erik

                                    a year later

                                    Hello All,

                                     I am trying to rotate my camera, but I am confused by the links and syntax of this API, and I need this done very soon for production. Here is my code:

                                    def get_pipeline():
                                        pipeline = dai.Pipeline()
                                    
                                         # Define a source - color camera
                                        cam = pipeline.createColorCamera()
                                        cam.setBoardSocket(dai.CameraBoardSocket.RGB)
                                        # cam.setInterleaved(False)
                                        cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_48_MP)
                                        cam.setVideoSize(1920, 1080)
                                        cam.initialControl.setSceneMode(dai.CameraControl.SceneMode.FACE_PRIORITY)
                                    
                                        # Create MobileNet detection network
                                        mobilenet = pipeline.create(dai.node.MobileNetDetectionNetwork)
                                        mobilenet.setBlobPath(
                                            blobconverter.from_zoo(name="face-detection-retail-0004", shaves=3)
                                        )
                                        mobilenet.setConfidenceThreshold(0.7)
                                    
                                        crop_manip = pipeline.create(dai.node.ImageManip)
                                        crop_manip.initialConfig.setResize(300, 300)
                                        crop_manip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)
                                        cam.isp.link(crop_manip.inputImage)
                                        crop_manip.out.link(mobilenet.input)
                                    
                                         # Create a UVC (USB Video Class) output node. It needs 1920x1080 NV12 input
                                        uvc = pipeline.createUVC()
                                        cam.video.link(uvc.input)