Tell a lie, I actually have done some work with Kinect since 1.5. While doing bounding-box area changes, I came to the conclusion that grabbing an arbitrary frame or UI element on a general Win7 desktop isn't a good idea (probably why Win8 touts Metro and touch/gesture so much).
Using skeleton wrist positions, a custom WinForm, some buttons, and position-notification objects, I threw together a sample for testing. It runs off a main hand for primary interaction and a secondary hand for manipulation. A few notes on how it behaves (a rough sketch of the logic follows the list):
1: Either hand hovering over a UI element changes its visuals as an indication.
2: Hovering the main hand over an element for X seconds causes it to be "picked up" and centered on the main hand; it can then be dragged around the usable area.
3: A sharp movement of the main hand breaks the pick-up calculation and releases the UI element.
4: With a picked-up element, moving the secondary hand into its area enters a resize mode: main-hand left/right changes the width, secondary-hand up/down changes the height.
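For the curious, here's a minimal sketch of how that per-frame logic could hang together. It assumes the wrist joints have already been mapped from skeleton space into form client coordinates elsewhere, and names like HandManipulator, DwellSeconds and ReleaseSpeed are mine, not anything from the SDK; the thresholds are illustrative guesses.

using System;
using System.Drawing;
using System.Windows.Forms;

class HandManipulator
{
    readonly Form form;
    Control hovered;        // control currently under the main hand
    Control held;           // control that has been "picked up"
    DateTime hoverStart;    // when the current hover began
    PointF lastMain;        // main-hand position on the previous frame
    bool resizing;

    const double DwellSeconds = 2.0;  // hover time before pick-up
    const float ReleaseSpeed = 40f;   // px/frame treated as a "sharp" move

    public HandManipulator(Form form) { this.form = form; }

    // Call once per skeleton frame with both wrists in client coordinates.
    public void Update(PointF mainHand, PointF secondHand)
    {
        float speed = Distance(mainHand, lastMain);
        lastMain = mainHand;

        if (held != null)
        {
            // 3: a sharp main-hand movement drops the control.
            if (speed > ReleaseSpeed)
            {
                held = null;
                resizing = false;
                hoverStart = DateTime.Now; // avoid an instant re-grab
                return;
            }

            // 4: second hand inside the held control toggles resize mode.
            resizing = held.Bounds.Contains(Point.Round(secondHand));
            if (resizing)
            {
                held.Width  = Math.Max(20, (int)Math.Abs(mainHand.X - held.Left) * 2);
                held.Height = Math.Max(20, (int)Math.Abs(secondHand.Y - held.Top) * 2);
            }
            else
            {
                // 2 (continued): drag, keeping the control centered on the hand.
                held.Location = new Point((int)mainHand.X - held.Width / 2,
                                          (int)mainHand.Y - held.Height / 2);
            }
            return;
        }

        // 1: highlight whichever control the main hand is over.
        Control underMain = form.GetChildAtPoint(Point.Round(mainHand));
        if (underMain != hovered)
        {
            if (hovered != null) hovered.BackColor = SystemColors.Control;
            hovered = underMain;
            hoverStart = DateTime.Now;
            if (hovered != null) hovered.BackColor = Color.LightSteelBlue;
        }

        // 2: dwell on the same control long enough and it gets picked up.
        if (hovered != null &&
            (DateTime.Now - hoverStart).TotalSeconds >= DwellSeconds)
        {
            held = hovered;
        }
    }

    static float Distance(PointF a, PointF b)
    {
        float dx = a.X - b.X, dy = a.Y - b.Y;
        return (float)Math.Sqrt(dx * dx + dy * dy);
    }
}

The real thing also highlights for the secondary hand and runs the release check against framerate, but the state machine above is the shape of it.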
It's just a test setup, but it has already pointed me to areas that need thought: you end up crossing hands too much during resizing, "dropping" controls is really just a side effect of frame rates, and WinForms isn't a great way to go for the UI here. Something in OpenTK might be the better target, but that brings a fun issue: if this is to be a presentation system, I'd end up writing my own readers for document formats and so on. Maybe I'll have to look at WPF eventually.
And I'm still kicking around ideas for managing the Z axis (where a "proper" 3D environment for the UI would really shine).
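The skeleton data does already carry the third axis, so one idea I'm toying with is treating the wrist's depth relative to the shoulder as a push/pull gesture. A tiny sketch, assuming the 1.x SDK's Skeleton type (the helper name and threshold are mine):

using Microsoft.Kinect;

static class DepthGesture
{
    // SkeletonPoint.Z is metres from the sensor, so a smaller wrist Z
    // means the hand is extended forward, toward the screen.
    public static float ForwardReach(Skeleton s)
    {
        SkeletonPoint wrist = s.Joints[JointType.WristRight].Position;
        SkeletonPoint shoulder = s.Joints[JointType.ShoulderRight].Position;
        return shoulder.Z - wrist.Z; // metres; positive when reaching forward
    }

    // Illustrative: treat a 15 cm reach past the shoulder as a "push".
    public static bool IsPushing(Skeleton s) => ForwardReach(s) > 0.15f;
}

That would map naturally onto pushing elements "into" or pulling them "out of" a 3D scene, which is where the proper 3D UI would earn its keep.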