[Glass] Creating a UI
When creating a UI, you need to think about how the user is going to input data into the system. A device like Google Glass does not have (nor want) a physical keyboard. Ideally, you want the UI to be as unintrusive to the user's workflow as possible. This means using technologies like speech-to-text recognition, cameras and natural gestures.
One of the major downfalls of technologies like speech-to-text recognition is the lack of feedback. Without instant feedback, users are left feeling like they're talking to an inanimate object, and this can lead to users not being able to fully utilise the technology at hand, or feeling like it got it wrong after waiting for it to process the speech.
Since Google Chrome (version 25), and in the Google Search app, you have been able to do continuous speech recognition, which gives you near-instantaneous feedback on what was said. This is thanks to Google's undocumented "full duplex" Speech API. More to come on this in a later post.
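To show the browser side of this, here is a minimal sketch using Chrome's prefixed `webkitSpeechRecognition` object (the front end Chrome 25 exposes). The interim results are what give the near-instant feedback; the console logging and the `as any` cast are illustrative choices of mine, not part of the API.

```typescript
// Minimal sketch, assuming Chrome's prefixed webkitSpeechRecognition
// (shipped in Chrome 25). Interim results provide the near-instant feedback.
const recognition = new (window as any).webkitSpeechRecognition();
recognition.continuous = true;      // keep listening across pauses
recognition.interimResults = true;  // emit partial transcripts while the user speaks

recognition.onresult = (event: any) => {
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const result = event.results[i];
    const transcript: string = result[0].transcript;
    // Show interim text immediately; swap it for the stable text once final.
    console.log(result.isFinal ? `final:   ${transcript}` : `interim: ${transcript}`);
  }
};

recognition.start(); // asks for microphone access, then streams results
```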
The next issue with human-computer interfaces is gestures. Getting someone to perform a gesture command using a limb, finger or otherwise can be a bit embarrassing if you're wearing something like Google Glass, which is meant to be seamless and integrate into your life. A better approach is to use what the person is already doing, and to make a judgement on whether they are taking an interest in the subject matter on the device.
The Samsung Galaxy S3 highlights this ability by recognising when you look at the screen, and dimming it when you look away. On the Samsung it is primarily a security and power-saving feature, but in applications such as Google Glass it could be genuinely useful to see whether the person is actually utilising the device (or wishes to) and to silently disappear when not needed. Another practical use would be to use the device's camera to judge whether it's appropriate to display information to the user.
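To make the idea concrete, here is a hypothetical sketch of attention-aware dimming in the browser, using the experimental Shape Detection API's `FaceDetector` (availability varies; in Chrome it sits behind a flag). This illustrates the technique only, not Samsung's or Glass's actual implementation; the one-second polling interval and the "face present means attending" heuristic are assumptions of mine.

```typescript
// Hedged sketch: dim the UI when no face is detected by the camera.
// FaceDetector is part of the experimental Shape Detection API; it is not
// universally available, so we declare its minimal shape ourselves.
declare class FaceDetector {
  detect(image: ImageBitmapSource): Promise<Array<{ boundingBox: DOMRectReadOnly }>>;
}

async function isUserLooking(detector: FaceDetector, video: HTMLVideoElement): Promise<boolean> {
  // Crude assumption: any detected face means the user is paying attention.
  const faces = await detector.detect(video);
  return faces.length > 0;
}

async function attentionLoop(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const detector = new FaceDetector();
  setInterval(async () => {
    // Fade the interface out when unattended, back in when a face returns.
    document.body.style.opacity = (await isUserLooking(detector, video)) ? "1" : "0.2";
  }, 1000); // assumed polling interval; tune for power vs responsiveness
}

attentionLoop();
```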