OskarSonc Hi Oskar,
I have a question in the meantime (although I would like to reiterate that better docs are really vital for smooth development on our side!).
I have changed my setup from a dynamic to a semi-static one. However, I'm now experiencing the opposite issue to the one Simon reported: my temporal filtering does not seem to work at all. Moreover, I get a number of 'random' noise spots which each last only a few frames. See the attached video. I'm specifically referring to the (typically red-coloured) 'blobs' that appear near the conveyor belt.
I have tried varying the temporal settings, with alpha values of 0.0, 0.1, 0.4 and 0.7, but to no avail. It also doesn't look like any temporal filtering is being applied at all, as I observe no 'blur' effect. My initialisation code is attached below:
import depthai as dai
from typing import List


class Camera:
    """
    Camera class: overarching vision class, which connects to the camera and analyses frames for oriented boxes
    """
    def __init__(self):
        """Constructor method"""
        self.camAttached: bool = True
        self.newConfig: bool = True
        devIP = "172.31.1.183"
        deviceInfo = dai.DeviceInfo(devIP)
        self.device = dai.Device(deviceInfo)
        print(f"DepthAI v{dai.__version__}")
        print(f"Connected to camera {self.device.getDeviceName()} at {devIP}")
        print(f"Sensors: {self.device.getCameraSensorNames()}")
        self.camResolution: List[int] = [640, 480]
        self.pipeline = dai.Pipeline(defaultDevice=self.device)
        self.remoteConnector = dai.RemoteConnection(address="0.0.0.0", webSocketPort=8765, serveFrontend=True, httpPort=8080)
        self.tofCam = self.pipeline.create(dai.node.ToF).build(dai.CameraBoardSocket.CAM_A)
        self.tofConfig = dai.ToFConfig()
        self.tofConfig.enablePhaseShuffleTemporalFilter = True
        self.tofConfig.setMedianFilter(dai.MedianFilter.KERNEL_7x7)
        self.tofConfig.enablePhaseUnwrapping = True
        self.tofConfig.phaseUnwrappingLevel = 4  # Up to ~7.5 m
        self.tofConfig.phaseUnwrapErrorThreshold = 1000  # [mm]
        self.tofConfig.enableOpticalCorrection = True
        self.tofConfig.enableBurstMode = False
        self.tofCam.setInitialConfig(self.tofConfig)
        self.depthConfigQ = self.tofCam.imageFiltersInputConfig.createInputQueue()  # ToF node has built-in post-processing
        self.depthProcessConfig = dai.ImageFiltersConfig()
        temporal = dai.node.ImageFilters.TemporalFilterParams()
        temporal.alpha = 0.1  # Tried various values here, e.g. 0.0, 0.4, 0.7, […]
        temporal.delta = 0  # Auto
        temporal.persistencyMode = dai.node.ImageFilters.TemporalFilterParams.PersistencyMode.VALID_8_OUT_OF_8  # Tried various modes here…
        temporal.enable = True
        self.depthProcessConfig.filterParams = [temporal]
        self.rgbCam = self.pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
        self.rgbConfig = dai.CameraControl()
        self.rgbConfig.setAntiBandingMode(dai.CameraControl.AntiBandingMode.MAINS_50_HZ)
        self.rgbConfig.setSaturation(3)
        self.rgbConfig.setManualExposure(10000, 100)
        self.rgbConfigQ = self.rgbCam.inputControl.createInputQueue()
        self.rgbStream = self.rgbCam.requestOutput(tuple(self.camResolution), dai.ImgFrame.Type.BGR888p,
                                                   dai.ImgResizeMode.CROP, 15, True)
        self.rgbQ = self.rgbStream.createOutputQueue(maxSize=20, blocking=False)
        self.aligner = self.pipeline.create(dai.node.ImageAlign)
        self.aligner.setRunOnHost(True)
        self.tofCam.depth.link(self.aligner.input)
        self.rgbStream.link(self.aligner.inputAlignTo)
        self.depthQ = self.aligner.outputAligned.createOutputQueue(maxSize=20, blocking=False)
        self.remoteRGBQ = self.remoteConnector.addTopic(topicName="RGB", group="common", maxSize=2, blocking=False, useVisualizationIfAvailable=True)
        self.remoteRGBDQ = self.remoteConnector.addTopic(topicName="RGBD", group="common", maxSize=2, blocking=False, useVisualizationIfAvailable=True)
        self.remoteDepthQ = self.remoteConnector.addTopic(topicName="Depth", group="common", maxSize=2, blocking=False, useVisualizationIfAvailable=True)
        self.pipeline.start()
        self.remoteConnector.registerPipeline(self.pipeline)
And then at the start of my main thread:
cam = Camera()
try:
    while cam.pipeline.isRunning():
        if cam.newConfig:
            cam.rgbConfigQ.send(cam.rgbConfig)
            cam.depthConfigQ.send(cam.depthProcessConfig)
            cam.newConfig = False
        newRGBFrame = cam.GrabRGBFrame()
        newDepthFrame = cam.GrabDepthFrame()
        […]
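For context, GrabRGBFrame() and GrabDepthFrame() aren't shown in the snippet; they are essentially thin non-blocking reads of self.rgbQ and self.depthQ. A minimal sketch of such a helper (the name grab_latest and the exact read strategy are illustrative assumptions, not the code from my project):

```python
def grab_latest(queue):
    """Non-blocking read of a DepthAI output queue.

    Returns the decoded frame of the newest available message,
    or None when no new message has arrived yet.
    """
    msg = queue.tryGet()  # non-blocking; returns None if the queue is empty
    if msg is None:
        return None
    return msg.getCvFrame()  # ImgFrame -> numpy array (BGR or depth)
```

The main loop then simply skips processing on None, so a missing depth frame never blocks the RGB path.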
Any idea what I might be doing wrong?