Journey into AR — Part 3, User Interaction

--

Pivoting — From Core ML to Standalone ARKit

After three days of research and working through demos, the harsh reality was that a pivot was needed. Or, more accurately, we needed to revise our initial goals for user interaction.

As a recap, we are building a Jenga game, and the intention was to use Core ML with Vision to track a user’s fingers so they could interact with the tower of blocks. However, that does not currently appear feasible with what ARKit and Core ML offer out of the box.

The Challenge

ARKit and Core ML are great libraries, but there is currently a gap when it comes to the kind of 3D tracking we had intended. Vision will track an identified feature, but only in two dimensions. ARKit can convert a screen location into real-world space by projecting a ray through that point down to a detected plane, but the result is tracking along the plane’s X and Z axes, computed from Vision’s X and Y coordinates; the hand’s actual height above the plane is never captured.
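To make the limitation concrete, here is a minimal sketch of what Vision’s tracking gives you. The seed rectangle is an assumption for illustration; in practice it would come from an initial detection. Note that the result is only a normalized 2D bounding box:

```swift
import Vision

// Sketch: Vision's object tracker follows a feature frame to frame,
// but its output is a normalized 2D bounding box — there is no depth.
// The initial box below is a placeholder seed for illustration.
let initialBox = CGRect(x: 0.4, y: 0.4, width: 0.2, height: 0.2)
var observation = VNDetectedObjectObservation(boundingBox: initialBox)
let handler = VNSequenceRequestHandler()

func track(in pixelBuffer: CVPixelBuffer) {
    let request = VNTrackObjectRequest(detectedObjectObservation: observation)
    try? handler.perform([request], on: pixelBuffer)
    if let updated = request.results?.first as? VNDetectedObjectObservation {
        observation = updated
        // updated.boundingBox is 2D only: x, y, width, height in [0, 1]
    }
}
```

From there, the only route back to 3D is a hit test through that 2D point onto a plane, which is exactly the X/Z-only tracking described above.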

Two peas in a pod

We may be missing something, but based on the available demos and tutorials on the subject, and the fact that only one company exists whose sole business is tracking hands in 3D, it seems the only way to achieve this is to use an external machine learning model trained specifically for hands and hand gestures. Given our one-week timeline, that feels unrealistic.

The Alternative Solution

As an alternative, we plan to implement user interaction using gestures on the phone. Gestures provide CGPoints that can be translated into 3D space through hit tests against the playing board, which is a detected plane, and this still achieves the larger goal of deep interaction with AR-generated objects.
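A rough sketch of that translation, assuming a tap gesture wired to an `ARSCNView` (the `sceneView` property and handler name are ours, not from any framework):

```swift
import ARKit

// Sketch: a tap yields a CGPoint; a hit test against a detected plane
// (the playing board) converts it into a 3D world position.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: sceneView)
    guard let result = sceneView.hitTest(point,
                                         types: .existingPlaneUsingExtent).first
    else { return }
    let t = result.worldTransform
    let position = SCNVector3(t.columns.3.x, t.columns.3.y, t.columns.3.z)
    // e.g. select the block under the tap, or place one at `position`
}
```

The translation column of the hit result’s `worldTransform` is the point on the board in world coordinates, which is all we need to select and move blocks.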

Amazing Things that Happened (Yesterday)

Basic user interaction — using UIGestureRecognizers, we added pinch and rotation handlers that let users zoom and rotate a block, respectively. We still need to implement panning for lateral movement of a block, but that is today’s goal.
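The pinch and rotate handlers look roughly like this; `selectedNode` is an assumed property holding the block the user is manipulating:

```swift
import SceneKit
import UIKit

// Sketch: incremental zoom and rotation of the selected block.
// `selectedNode` is assumed to be set when the user picks a block.
@objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
    guard let node = selectedNode, gesture.state == .changed else { return }
    let s = Float(gesture.scale)
    node.scale = SCNVector3(node.scale.x * s,
                            node.scale.y * s,
                            node.scale.z * s)
    gesture.scale = 1 // reset so each callback applies a small delta
}

@objc func handleRotate(_ gesture: UIRotationGestureRecognizer) {
    guard let node = selectedNode, gesture.state == .changed else { return }
    // spin about the vertical (Y) axis; gesture.rotation is in radians
    node.eulerAngles.y -= Float(gesture.rotation)
    gesture.rotation = 0
}
```

Resetting `scale` and `rotation` to their identity values each time makes the gestures incremental, which avoids the block snapping when a second gesture begins.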

Add textures to blocks — it is a small change, but it makes a huge difference in visual appeal. We applied three different wood-grain textures, and it’s hard not to have flashbacks to playing Jenga at family reunions when I see how realistic the blocks look.
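Applying a texture is a one-liner on the material; a sketch, assuming an image named "woodGrain1" exists in the asset catalog (the name and the block dimensions, which approximate a real Jenga block, are our choices):

```swift
import SceneKit

// Sketch: a block geometry with a wood-grain texture as its diffuse map.
// "woodGrain1" is an assumed asset-catalog image name.
let block = SCNBox(width: 0.075, height: 0.015, length: 0.025,
                   chamferRadius: 0.001)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "woodGrain1")
block.materials = [material]
let blockNode = SCNNode(geometry: block)
```

Rotating between the three texture images when building the tower keeps adjacent blocks from looking identical.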

Getting a good night’s sleep — not technical at all, but after going hard for almost three days straight, it was getting to the point where the code was starting to blur together.

Well, that’s all for now. It was frustrating to pivot away from our original user-interaction plan, but it felt good to take on such a new challenge, even if it didn’t work out. I plan to provide a small walkthrough once we are moving blocks, since there are not many tutorials out there on moving created objects / nodes.
