Hello,

I have a segmentation model trained on images from wide-lens cameras. It does not work on the other cameras. Any suggestions other than retraining the model?

Thanks

    LcicC
    What does your model input look like? If it was trained with distortion, you could probably just mock those distortions for normal lenses as well. Though it is probably best to always undistort the stream first so the model works on all lenses.
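    A minimal sketch of that undistortion route with OpenCV, assuming you have the lens intrinsics and distortion coefficients from calibration (the K and dist values below are placeholders):

    ```python
    import cv2
    import numpy as np

    # Placeholder calibration data; in practice, read these from your
    # device's calibration (e.g. a calibration file or the camera EEPROM).
    K = np.array([[570.0, 0.0, 160.0],
                  [0.0, 570.0, 120.0],
                  [0.0, 0.0, 1.0]])              # 3x3 intrinsic matrix
    dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])  # distortion coefficients

    frame = cv2.imread("frame.png")  # stand-in for a live camera frame
    h, w = frame.shape[:2]

    # alpha=0 crops to valid pixels only, matching the "alpha parameter 0"
    # setting used for the wide stream in this thread.
    new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0)
    undistorted = cv2.undistort(frame, K, dist, None, new_K)

    # Crop to the valid region and resize to the model's input size.
    x, y, rw, rh = roi
    undistorted = undistorted[y:y + rh, x:x + rw]
    model_input = cv2.resize(undistorted, (320, 240))
    ```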

    Thanks,
    Jaka

    The wide stream is undistorted ("flattened") with alpha parameter 0.
    I tried reducing the size of the non-wide picture, that is, shrinking the apparent size of the objects to match their scale in the wide camera. The model works better, but maybe a better fix exists.

    Thanks


    LcicC
    Can you share some images of what you did? I am not sure I fully understand the transformations you are applying. As long as the images are undistorted, the model should work regardless of the object size.

    Thanks,
    Jaka


    Unfortunately I can't share the pictures. However, let me try to give you a better overall description.

    Training

    The model is trained on wide-lens pictures with alpha parameter 0 and RGB-depth alignment. The images are 320x240.

    First Fix for Not-Wide lens

    1. Read a frame from the camera and resize it to 320x240 => img
    2. Create a 4:3, 1280x960 black image => mask
    3. Overlay img onto mask => mask_overlap
    4. Resize mask_overlap to 320x240
    5. Feed the result to the model

    This way the model seems to work better, but I'm wondering if there is a better way to fix it; a sketch of these steps follows below.
    The resolution of the black image (point 2) was found empirically by comparing the size of the objects in the wide and non-wide images.
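    In OpenCV terms, the steps above could look roughly like this; the 1280x960 canvas size is the empirically found value, and placing the image at the center of the canvas is an assumption, since the thread does not say where it was overlaid:

    ```python
    import cv2
    import numpy as np

    def shrink_to_wide_scale(frame, canvas_size=(1280, 960), model_size=(320, 240)):
        """Pad a non-wide frame onto a larger black canvas so that, after the
        final resize, objects appear at roughly the scale the wide-lens model
        was trained on. canvas_size was found empirically (see above)."""
        img = cv2.resize(frame, model_size)  # step 1
        canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)  # step 2
        # Step 3: overlay img onto the black canvas (center placement assumed).
        x = (canvas_size[0] - model_size[0]) // 2
        y = (canvas_size[1] - model_size[1]) // 2
        canvas[y:y + model_size[1], x:x + model_size[0]] = img
        return cv2.resize(canvas, model_size)  # step 4; feed this to the model
    ```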

      LcicC
      If your model is sensitive to object scale, then letter-boxing is probably the only way to do it. It would be better to train a multiscale model, but this should do fine if the objects are at a constant distance.
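      In case it helps later: multiscale training usually just means adding random scale jitter as an augmentation. A minimal sketch, assuming 320x240 inputs and single-channel label masks (the scale range is an arbitrary choice):

      ```python
      import random

      import cv2
      import numpy as np

      def random_scale_augment(image, mask, out_size=(320, 240), scale_range=(0.25, 1.0)):
          """Randomly shrink an image/mask pair onto a black canvas so the
          model sees objects at many scales during training. The same
          transform is applied to the mask to keep labels aligned."""
          s = random.uniform(*scale_range)
          w, h = int(out_size[0] * s), int(out_size[1] * s)
          small_img = cv2.resize(image, (w, h))
          # Nearest-neighbor keeps label values intact.
          small_mask = cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST)
          canvas_img = np.zeros((out_size[1], out_size[0], 3), dtype=image.dtype)
          canvas_mask = np.zeros((out_size[1], out_size[0]), dtype=mask.dtype)
          # Random placement on the canvas adds translation jitter as well.
          x = random.randint(0, out_size[0] - w)
          y = random.randint(0, out_size[1] - h)
          canvas_img[y:y + h, x:x + w] = small_img
          canvas_mask[y:y + h, x:x + w] = small_mask
          return canvas_img, canvas_mask
      ```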

      Thanks,
      Jaka