I was thinking about the next big leap forward in personal technology on my run today, because that’s how I roll. Personal computing has basically gone from desktop to laptop to mobile, including both tablets and phones. What’s next? The limiting factor is primarily the user interface. Anything smaller than a phone will be too small to have a screen (no output) and too small to have any kind of keyboard (no input). Some kind of direct mental control might work for the input, but there are big privacy concerns. Google Glass is supposed to be the next leap forward in displays, but I’m skeptical. What we needed, I thought, was a contact lens with a screen in it. I figured that was still a long way off, but the Internet is generous and always provides. That’s right, folks: an LCD display on a contact lens is already a reality, although it’s just a simple monochrome display at this point.
So the next stage of personal computing will really be wearables, with the device distributed into smaller segments: perhaps a CPU built into your watch, driving a screen in your contact lenses.
Makes me think of the book “Feed” by M.T. Anderson, a YA novel from a few years back.
Too bad the LCD screen described in the article isn’t for a user interface; it’s only for others to see. Because the display sits so close to the eye, the wearer can’t focus on it, so it can only show colors or symbols on the surface of the eye for other people to look at. From what I gather, a Virtual Retinal Display (http://en.wikipedia.org/wiki/Virtual_retinal_display) is more along the lines of what may be up and coming. As a bonus, it can send a different image to each eye to create very realistic 3-D images.