Currently, the DepthAI API & SDK are not supported on iOS. What is the reason for this? What would be needed to add support for it?

I'm aware that connecting an 'accessory' to the iPad requires certification by Apple within the Made-for-iPad program and using the External Accessory Framework. However, using a Lightning/USB-C to LAN adapter and powering the OAK-D-PoE through the USB-C port using a power bank would work. Yes, this is a messy setup, but still, it would work for our use case.


    Hello pawi,
    The main reason we only support Android (and not iOS) is that we have an engineer on our team who knows how to build Android apps. As you mentioned, it's also a bit easier to develop a DepthAI app for Android, since you don't need any certification from Apple. Even the Android integration sees little use, so we never added iOS support, as it would most likely be even less popular. Thoughts?
    Thanks, Erik

    Hey Erik
    I admit our use case is quite specific. We need to get the poses of 30 ArUco markers, each 18 mm in size, with millimeter (ideally sub-millimeter) precision at a distance of around 30 cm. Our current implementation using Unity/ARFoundation (with Apple ARKit underneath) requires the user to scan each marker from at least 3 different angles. This process is not easy for the user to understand, and very time-consuming (3 to 5 minutes). We hope that the OAK-D stereo camera provides us with two poses per marker per frame, making the result more accurate/precise. Furthermore, the fixed focal length will also help us, since ARKit constantly loses focus.
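As an illustrative aside (not DepthAI code): assuming the two camera views give independent, roughly zero-mean noisy estimates of a marker's position, averaging the two per-frame poses shrinks the error by about 1/√2. A toy Monte Carlo sketch with made-up noise numbers:

```python
import random
import statistics

random.seed(0)

TRUE_Z = 300.0   # true marker distance in mm (the ~30 cm working distance)
NOISE = 2.0      # per-camera measurement noise, std dev in mm (assumed)

def measure():
    """One noisy single-camera depth estimate for the marker."""
    return random.gauss(TRUE_Z, NOISE)

# Single-camera estimates vs. averaging the two per-frame stereo estimates.
single = [measure() for _ in range(10_000)]
fused = [(measure() + measure()) / 2 for _ in range(10_000)]

err_single = statistics.pstdev(single)
err_fused = statistics.pstdev(fused)
print(f"single-view error: {err_single:.2f} mm")
print(f"fused error:       {err_fused:.2f} mm")  # roughly NOISE / sqrt(2)
```

This ignores correlated error sources (calibration bias, marker print tolerance), which averaging cannot remove, so treat it as an upper bound on the expected gain.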
    From your answer, I gather that it should be technically possible to run this on iOS. What do you think would be the biggest challenge going this route? Any other ideas (e.g. running a script/HTTP server to get the frames and avoiding the DepthAI API entirely)? The last resort would be to put a Raspberry Pi in between (and then go for the OAK-D-CM4), right?
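To make the script/HTTP-server fallback concrete: a small host process (e.g. the Raspberry Pi sitting next to the camera) could expose the latest frame at an HTTP endpoint, and the iPad side fetches it with any plain HTTP client. The sketch below is stdlib-only; `FrameHandler`, the `/frame` path, and the placeholder JPEG bytes are all assumptions for illustration, not DepthAI API:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder payload standing in for a real encoded camera frame.
FAKE_FRAME = b"\xff\xd8fake-jpeg-bytes\xff\xd9"

class FrameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/frame":
            self.send_response(200)
            self.send_header("Content-Type", "image/jpeg")
            self.send_header("Content-Length", str(len(FAKE_FRAME)))
            self.end_headers()
            self.wfile.write(FAKE_FRAME)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Port 0: let the OS pick a free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), FrameHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# "iPad side": one GET per frame.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/frame") as resp:
    frame = resp.read()
server.shutdown()
```

Polling one frame per request is simple but adds per-request latency; for higher frame rates a streaming transport (MJPEG or raw TCP) would fit better.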


      Hi pawi,
      If you go the DepthAI route, you would probably need to add language bindings (C++ -> Swift) so you could develop your mobile app.
      Another option would be to use an OAK PoE model and stream TCP packets directly to a server (hosted on the iOS device?), but I'm not 100% sure that would work as expected, as I don't know the iOS/iPhone limitations.
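To make the TCP option concrete, the wire format could be as simple as a 4-byte length header per frame. The sketch below is stdlib-only Python and uses a toy in-process "camera" server in place of the OAK PoE; the framing (big-endian uint32 + payload) is an assumption for illustration, not an established DepthAI protocol:

```python
import socket
import struct
import threading

def recv_exact(sock, n):
    """Read exactly n bytes (a single recv() may return fewer)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf

def serve_frames(frames, host="127.0.0.1"):
    """Toy stand-in for the camera side: each frame is sent as a
    4-byte big-endian length header followed by the payload."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))            # port 0: let the OS pick a free port
    srv.listen(1)

    def run():
        conn, _ = srv.accept()
        with conn:
            for payload in frames:
                conn.sendall(struct.pack(">I", len(payload)) + payload)
        srv.close()

    threading.Thread(target=run, daemon=True).start()
    return srv.getsockname()[1]

def read_frames(host, port):
    """Receiver side: rebuild complete frames from the byte stream."""
    frames = []
    with socket.create_connection((host, port)) as sock:
        while True:
            first = sock.recv(1)   # empty read == clean end of stream
            if not first:
                break
            (length,) = struct.unpack(">I", first + recv_exact(sock, 3))
            frames.append(recv_exact(sock, length))
    return frames

port = serve_frames([b"frame-one", b"x" * 5000])
received = read_frames("127.0.0.1", port)
print([len(f) for f in received])
```

The `recv_exact` loop matters: TCP is a byte stream, so a large frame routinely arrives split across several `recv()` calls and must be reassembled against the declared length.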
      Thanks, Erik

      a year later

      Waking up this thread. I would really like to run an OAK (Type-C) directly against an iPad.

      Has anyone got this up and running? Would Luxonis be able to support this with a lightweight SDK?
      I believe step 1 is just getting the camera feed in and allowing a choice between DepthAI and Apple's VisionKit for processing.


        Hi diddy-cam,
        I believe we haven't pursued this because it involves paying Apple for a license (the MFi program, and they charge per inquiry) so that our HW would be allowed to talk to the iPhone. If you have a commercial product behind it, perhaps it'd make sense to pay for that license, but it wouldn't make sense for us (as you can see in this thread, there isn't a ton of interest). Thoughts?

          erik hi, actually most MFi programs now have the royalty waived, i.e. no per-unit fee.

          Also, I'm primarily looking at iPads (USB-C instead of Lightning), as they provide a good UX for annotating, including with the pens.

          I'm pretty sure you can run both TCP and libusb under iOS on an iPad without MFi today.

          Interesting, that's good to know. That said, since Android support is community-supported only, I doubt we will develop iOS support anytime soon.