I have a working camera and real-time application.
For my use case the camera stays in a fixed position; only its distance from the area it observes varies, and that distance is set by the user. When the camera is physically further away, the area it observes naturally takes up a smaller pixel area of the resulting preview-size image.
The application has a calibration step that I'd like to update so that, when the camera is positioned further away like this, the user can effectively zoom in and the observed area takes up more pixels in the preview-size image. The goal is for the preview image to contain more "meaningful pixels" at inference time.
This is a real-time application with a small preview size, which is why I'm exploring this idea of more "meaningful pixels".
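
To make the idea concrete, here is a minimal sketch of the crop-and-resize ("digital zoom") approach I have in mind, assuming a Python/OpenCV capture loop; the ROI values, preview size, and overall pipeline are placeholders rather than my actual code:

```python
import cv2

# Placeholder values: the small preview fed to the model, and the
# region of interest (ROI) the user would set during calibration.
PREVIEW_W, PREVIEW_H = 320, 240
roi_x, roi_y, roi_w, roi_h = 400, 300, 640, 480

cap = cv2.VideoCapture(0)  # full-resolution camera stream

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Crop to the observed area so it fills the preview instead of
    # occupying only a fraction of it.
    crop = frame[roi_y:roi_y + roi_h, roi_x:roi_x + roi_w]

    # Resize the crop down to the preview size used for inference.
    preview = cv2.resize(crop, (PREVIEW_W, PREVIEW_H),
                         interpolation=cv2.INTER_AREA)

    # ... run inference on `preview` here ...
    cv2.imshow("preview", preview)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```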
My questions are:
- Does this idea of more "meaningful pixels" make sense? Is there a better term I should be using?
- What options and APIs do I have to achieve this?
- Are there any existing code examples that would help guide me?