I was able to run a capture device while running the Visualiser that came with the developer kit on my Ultrabook.
It impressively and accurately captures the movements of my fingers in real time while showing a visual representation of the touch points on the screen. I also ran my Ultrabook's built-in webcam in an attempt to capture what my fingers were doing, again all in real time. The Leap Motion is a pretty impressive device, but it was also a pretty impressive display of power from my Intel Ultrabook to run and display the unit, the webcam, and the video capture all at the same time!
Leap Motion / Intel Ultrabook
NB: part way through the demo my other hand is dragging the Visualiser around in 3D space.
Now that my Leap Motion Developer kit finally looks to be winging its way to me, I thought I would share some of the thoughts and ideas I hope to incorporate into my existing and future apps.
My understanding is that normal Windows 8 gestures are automatically mapped onto standard Leap gestures, so my thinking is about going beyond what might be considered the norm for a Windows 8 app and more fully utilising the opportunities afforded by gesture recognition, while ensuring that touch- and mouse-led interaction remains optimal.
While the kind of interaction shown in MR could certainly be visualised using a Leap Motion device, my own feeling is that what might be lacking is the unique context of what it is like to hold a document. Holding a document, while a necessary task to facilitate reading, also has other connotations: that document is uniquely identified as being current and selected in a way that is not quite met by the standard Windows paradigms of selection and ownership. What is lacking is the tactile response that holding a document uniquely gives. Other feedback will be absent too; for example, holding a large document is a different experience to holding a short one, or a single piece of paper. I am not sure how important this is, but if nothing else it is interesting to explore and speculate on what it might mean in this context.
Suitability for Purpose
While there is obviously going to be some initial interest in attempting to control everything using the device, I suspect that the novelty of doing so will soon wear off! For the device to be truly successful it needs to find its place within the existing control ecosystem. The mouse and keyboard are natural partners; touch is still the new kid in town (and, some might say, has the advantage of some 'built-in' tactile feedback). So it is going to be an interesting time. The Leap Motion is going to need some new apps that are both indispensable and unequivocally bonded to the device (a device-specific app store will ship with the product at launch).
Relationship with Kinect
One might immediately draw parallels between this device and Microsoft's Kinect. At the moment, however, they appear to be sufficiently differentiated: the Kinect is largely interested in whole-body movements ("macro movements"), while the Leap is interested in "micro movements", largely wanting input from hand- and finger-based gestures.
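To give a flavour of what "micro movement" input means in practice, here is a minimal sketch of classifying a horizontal swipe from a stream of fingertip x positions. To be clear, this is not the Leap Motion SDK; the function name and the idea of feeding it sampled positions (in millimetres) are my own illustrative assumptions about how finger-level gesture detection works in principle:

```python
# Minimal sketch of finger-level ("micro movement") gesture classification.
# NOT the Leap Motion SDK - just an illustration of deriving a gesture
# from a stream of sampled fingertip x positions (assumed to be in mm).

def classify_swipe(xs, min_travel=100.0):
    """Classify a horizontal swipe from fingertip x positions.

    Returns 'right', 'left', or None if the total travel is too small
    to count as a deliberate gesture.
    """
    if len(xs) < 2:
        return None
    travel = xs[-1] - xs[0]
    if travel >= min_travel:
        return "right"
    if travel <= -min_travel:
        return "left"
    return None

# A fingertip moving steadily rightwards across ~150 mm reads as a swipe.
print(classify_swipe([0, 30, 70, 110, 150]))  # right
print(classify_swipe([150, 100, 40, -10]))    # left
print(classify_swipe([0, 5, 8]))              # None (too little travel)
```

The `min_travel` threshold is the interesting design choice: set it too low and ordinary hand tremor fires gestures; too high and users have to exaggerate every movement.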
Control of 3D space
While I have no doubt that developers will come up with interesting and novel uses for the device within the predominantly 2D systems in use today, I cannot help wondering if it is within 3D space that it might really come into its own. One can imagine one's hand becoming virtually elongated, such that it is operating both in front of and behind the screen itself. That opens up a whole world of possibilities for exploration that can only be hinted at without a 3D monitor and associated environment.
Imagine sculpting something in 3D space with your hands, then sending the result to a 3D printer.
I look forward to receiving and playing with the kit and reporting back my progress through this blog.
Leap Motion Developer Link