John Underkoffler’s TED talk and MS Kinect previews have been making the rounds, but the following links feel more prescient…
Fujitsu’s motion sensing laptop interface makes no sense
Why talk to a computer? Surely talking to a human is traumatic enough?
Though I’d still be interested in trying Kinect. The motion controls seem to be direct: real motion = onscreen motion (hopefully without much UI to bog you down). I also hear tell of some kinda DDR killer. I’m just not sure it’s the answer to input devices, or even to gaming (especially if you live above someone).
As for g-speak, I haven’t heard of or seen any concrete applications. Swimming through abstract 3D data or editing video via sign language sounds like a lot of extra work – physically and conceptually. I’m starting to think the future peaked a couple years ago and tech is getting dramatic, fussy, and made simply for the sake of making more tech. I wouldn’t throw away my mouse just yet.