I have finally found two Python scripts that actually generate point cloud data while streaming. The linked repo is exactly the code I am running: luxonis/depthai-experiments/tree/gen2-rgb-depth-align/gen2-camera-demo
main.py and projector_3d.py are the scripts I am running. The output is displayed in grayscale. My question is: how can I retrieve all of the point cloud nodes so that I get a better image with more point cloud data?
The attached image shows the current state of my point cloud.
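
For context, here is a minimal sketch (assuming depthai 2.x) of how the StereoDepth node can be configured for denser output: the HIGH_DENSITY preset together with left-right check and subpixel mode usually yields more valid depth points, and therefore a fuller cloud, than default settings. The resolution and preset choices below are illustrative, not necessarily what gen2-camera-demo ships with.

```python
# Minimal sketch (depthai 2.x assumed): configure StereoDepth for a denser
# depth map, which is what the point cloud is built from. The preset and
# resolution here are illustrative, not the demo's exact settings.
import depthai as dai

pipeline = dai.Pipeline()

mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")

mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
mono_left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
mono_right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)

# HIGH_DENSITY trades some accuracy for more valid disparity pixels;
# left-right check removes occlusion artifacts, subpixel smooths far surfaces.
stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
stereo.setLeftRightCheck(True)
stereo.setSubpixel(True)

mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("depth", maxSize=4, blocking=False)
    depth_frame = q.get().getFrame()  # np.uint16 depth map in millimeters
```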

There might be a problem with the calibration of your device. If you give us the MXID of your device, we can check whether something went wrong in the calibration process.

Thanks,
Filip
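
For reference, the MXID of any connected device can be printed with a couple of lines of depthai 2.x:

```python
# Print the MXID (unique serial) of each connected DepthAI device.
import depthai as dai

for info in dai.Device.getAllAvailableDevices():
    print(info.getMxId(), info.state)
```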

    FilipJenko
    Also, the goal of my task is to get the camera to capture live footage and generate point clouds in one window, while a second window displays the actual RGB image itself. I am very new to DepthAI and Luxonis products; this is a project for a company I am currently interning at.
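
    A rough sketch of one way to get that two-window setup (assumptions: depthai 2.x, open3d installed, depth aligned to the right mono camera and reported in millimeters, which is the depthai default): the RGB preview goes to an OpenCV window and the live point cloud to a separate Open3D window.

```python
# Rough sketch: RGB preview in an OpenCV window, live point cloud in a
# separate Open3D window. Assumes depthai 2.x and open3d.
import cv2
import numpy as np
import depthai as dai
import open3d as o3d

W, H = 640, 400  # 400p mono resolution; depth frames come out this size

pipeline = dai.Pipeline()

# RGB preview stream
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(W, H)
xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")
cam.preview.link(xout_rgb.input)

# Stereo depth stream
left = pipeline.create(dai.node.MonoCamera)
right = pipeline.create(dai.node.MonoCamera)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
left.out.link(stereo.left)
right.out.link(stereo.right)
xout_depth = pipeline.create(dai.node.XLinkOut)
xout_depth.setStreamName("depth")
stereo.depth.link(xout_depth.input)

with dai.Device(pipeline) as device:
    q_rgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    q_depth = device.getOutputQueue("depth", maxSize=4, blocking=False)

    # Depth is aligned to the right mono camera by default, so use its
    # intrinsics (stored on the device) to project depth pixels to 3D.
    calib = device.readCalibration()
    M = np.array(calib.getCameraIntrinsics(dai.CameraBoardSocket.RIGHT, W, H))
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        W, H, M[0, 0], M[1, 1], M[0, 2], M[1, 2])

    vis = o3d.visualization.Visualizer()
    vis.create_window("point cloud")
    pcd = o3d.geometry.PointCloud()
    geometry_added = False

    while True:
        rgb = q_rgb.get().getCvFrame()
        depth = q_depth.get().getFrame()  # np.uint16, millimeters

        cv2.imshow("rgb", rgb)  # window 1: the actual image

        # Window 2: point cloud projected from the depth frame
        new_pcd = o3d.geometry.PointCloud.create_from_depth_image(
            o3d.geometry.Image(depth), intrinsic, depth_scale=1000.0)
        pcd.points = new_pcd.points
        if not geometry_added:
            vis.add_geometry(pcd)
            geometry_added = True
        vis.update_geometry(pcd)
        vis.poll_events()
        vis.update_renderer()

        if cv2.waitKey(1) == ord("q"):
            break
```

    If the demo's cloud looks grayscale, that is likely because it textures the cloud with a rectified mono frame; building an Open3D RGBDImage from the aligned color frame instead would give a colored cloud.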

    FilipJenko I tried that a few times before and still ended up with similar results, though some point clouds were more visible than the previous attempt.

    Hi @gdeanrexroth
    This looks OK, though the scene you are showing is really simple.
    I suspect this is just a poor experiment example, so I'd urge you to use the examples in the depthai-python repo instead (once the issue in the other thread is fixed).

    Thanks,
    Jaka