[Glass] Anatomy of digitally overlaid vision
When looking at the way Google Glass works, we run into several stumbling blocks. How do you show a screen to the user without obscuring their view?
Looking to the Google patents for clues, the Google Glass unit appears to use a miniature projector which bounces an image (via a polarising filter) back to the user, reportedly onto their retina. The difficulty is that very few people have actually used a Glass unit, so much of the detail of how it works is still unknown.
Near-eye displays are, of course, nothing new. Video glasses have existed for years, and a quick search of Instructables turns up many implementations (some better than others) that aim to give the user a seamless computer/real-world interaction.
Unfortunately, a lot of these systems rely on either blocking or partially obscuring one eye with a viewfinder-style screen. Other implementations block natural vision entirely and instead use a camera, mixing the real world and the digital world together with overlays on a screen.
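That camera-based "video see-through" approach is really just image compositing: the digital overlay is alpha-blended onto each camera frame before it reaches the eye. As a rough sketch (the function and parameter names here are my own, not from any real headset's software):

```python
import numpy as np

def composite_overlay(camera_frame, overlay_rgb, overlay_alpha):
    """Blend a digital overlay onto a camera frame (video see-through).

    camera_frame : (H, W, 3) floats in [0, 1] -- the real-world view
    overlay_rgb  : (H, W, 3) floats in [0, 1] -- rendered digital content
    overlay_alpha: (H, W, 1) floats in [0, 1] -- per-pixel opacity
    """
    # Standard alpha blend: opaque pixels show the overlay,
    # transparent pixels pass the camera view through unchanged.
    return overlay_alpha * overlay_rgb + (1.0 - overlay_alpha) * camera_frame

# Tiny 2x2 example: opaque overlay in the left column, transparent right.
camera = np.zeros((2, 2, 3))          # black "real world"
overlay = np.ones((2, 2, 3))          # white digital content
alpha = np.array([[[1.0], [0.0]],
                  [[1.0], [0.0]]])

out = composite_overlay(camera, overlay, alpha)
print(out[0, 0])  # left pixel shows the overlay -> [1. 1. 1.]
print(out[0, 1])  # right pixel shows the camera -> [0. 0. 0.]
```

Because everything the user sees is a rendered frame, this style of display can do anything with the image, including darkening it, at the cost of putting a camera and a screen between the user and the world.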
Other options for letting the user still see the real world with a digital overlay include using small screens and mirrors, but all these solutions rely on one thing: the digital data being optically overlaid onto the real world without blocking or obscuring natural vision.
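One consequence of optically overlaying data is worth spelling out: a combiner (mirror, prism, or filter) can only add the display's light to the light already arriving from the scene. A toy model makes this clear (a deliberate simplification; the reflectance value and names below are my own assumptions, not Glass specifications):

```python
import numpy as np

def optical_see_through(scene, display, combiner_reflectance=0.3):
    """Toy model of an optical combiner in front of the eye.

    scene   : RGB light from the real world, floats in [0, 1]
    display : RGB light emitted by the micro-display, floats in [0, 1]

    The eye receives scene light transmitted through the combiner
    PLUS display light reflected off it -- light is only ever added.
    """
    transmittance = 1.0 - combiner_reflectance
    return np.clip(transmittance * scene + combiner_reflectance * display,
                   0.0, 1.0)

bright_scene = np.full(3, 0.9)   # a bright real-world pixel
black_overlay = np.zeros(3)      # the display tries to draw black

perceived = optical_see_through(bright_scene, black_overlay)
print(perceived)  # still bright -- the overlay cannot darken the scene
```

This is why optical see-through displays can draw glowing text and icons over the world but cannot render true black, and why their overlays wash out in bright sunlight.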