To Visually Process New Tech, Old Tech Might Need An Upgrade
The best upgrade your eyes could ask for.

by Curtis Silver

Another Consumer Electronics Show has come and gone, and while I wasn’t there, following the stories about all the new tech took little effort. One thing can be said when looking at the continuing advancements in portable and consumer technology: the future is here. But are we physically prepared for it? Human evolution moves considerably slower than the recent evolution of technology (however much that technology owes to our evolved intelligence). We can build it and imagine it, but can we use it?

Obviously we can use it. Even someone without a driver’s license can drive a car. That’s not the general argument. A more precise question would be: can we process the data being visually presented by new technology? While new technology is on the cutting edge, our eyes are still the same light-filtering, image-processing organs they’ve been for centuries. With the advent of glasses, then contacts, and now corrective surgery, our vision can be altered, but only to the point of human perfection.

At the speed technology is advancing, human perfection in vision might not be enough. Besides, it’s only a matter of time before our eyes, rather than our hands, become the controlling device for technology. That means our vision and the processing centers in the brain need to be able to keep up. But will they? For that matter, can they keep up even with current technology?

Let’s start with an oft-discussed visual issue: frame rate. Our eyes don’t capture motion as single frames; they deliver a constant, analog stream of movement data. There is a common misconception that we cannot discern any frame rate over 30 FPS. While it does become difficult to tell the difference above that point, the human brain, when trained properly, can still detect the change. The thing is, frame rates are constantly being adjusted to compensate for the technology we’re viewing them on.
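To put concrete numbers on that, a frame rate simply determines how long each individual frame persists on screen. A quick back-of-the-envelope calculation (these are just arithmetic, not measurements of human perception):

```python
# Illustrative arithmetic: how long a single frame stays on screen
# at common frame rates. 30 FPS means each frame persists roughly
# twice as long as at 60 FPS.
for fps in (24, 30, 60, 120):
    frame_ms = 1000 / fps  # milliseconds each frame is displayed
    print(f"{fps:>3} FPS -> {frame_ms:.1f} ms per frame")
```

So the gap a trained eye might notice between 30 and 60 FPS is a difference of about 17 milliseconds per frame.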

For instance, TV and movies use motion blur, but note that you are not controlling the action. This gives the images an almost constant blurred motion, which is easier for our eyes to translate into data. Video games, especially on the PC, have no motion blur (though some games on higher-end hardware are incorporating it), so if you were to pause a game, you’d get a crisp frame rather than a blurred image.

Consider watching live television, a sporting event for instance. Since the camera has to move with the action, no matter the frame rate of your television there is going to be some sort of pixelation or clipping when it’s translated into your vision. That’s because neither you nor the production crew is controlling the action. In the physical world we wouldn’t see that clipping, but because we’re viewing through a screen, that’s how our brains translate it.

What all this means is that current visual tech runs at frame rates our eyes can translate, most of it created and filmed specifically so we can. But that’s entertainment technology; what about mobile and interactive technology? First, we have to understand visual perception.

Not only do we need to be able to sense (in this case, see) something, but we need to be able to process it and derive meaning from it. Visual perception is how different regions of the brain take all those small things we see in the world, then mash them all together to create the final sighted result. In the world of web design, visual perception is a huge factor in designing anything from layouts to the GUI on the screen.

Just by glancing, with minimal awareness, we notice motion, shapes, color, and contrast, and all the while our brain is putting these pieces together like a jigsaw puzzle to deliver a final image. This is all held in our working memory, like a Post-it pad in our brains taking constant notes. So it’s important in design, especially GUI design, to present that contrast, color, motion, and shape in the most efficient way possible for us to translate.

Think about how your current iPad or other device is laid out. No matter the background image (it’s My Little Pony, don’t lie), think about how the icons stand out. How there is a faint outline around them; how the image, once memorized, becomes immediately recognizable. Think about how the windows open and close, and how your eyes follow the motion after you touch your finger to the screen. Be aware of something as simple as swiping to unlock: it’s more than just a finger swipe, because you are watching that motion.

Now, take away the background. Suddenly you can see through your iPad. There are your shoes shining through where Rainbow Dash once was. You can see the cracks in the tile on the floor, but the GUI on the screen suddenly isn’t as easy to read. Now you have to focus on what you are swiping, your eyes constantly struggling for focus against the transparent background. This is all hyperbole of course, supposing the GUI wouldn’t keep up with the rest of the device’s technology, but it’s far from science fiction.

Take a look at this prototype for a transparent LED screen from HiSense (due to copyright, you’ll have to follow the link). That bowl of apples in the center? That’s not on the screen, that’s behind the screen. No matter what you do, you can’t focus them out. There’s no way your brain will let you do such a thing. Obviously this technology has a way to go, but it’s coming. As with the progression of any technology, it’ll start big and eventually translate into mobile.

There is another, deeper psychological factor to consider when designing future technology with more GUI on the screen than our eyes know what to do with. Psychologists are split on how our perception works, whether it relies directly on the information present (“bottom-up” theory), or that perception depends on your expectations and previous knowledge as well as the stimulus itself (“top-down” theory). For more on both, check out Simply Psychology.

The top-down theory is especially intriguing, suggesting that our prior knowledge might keep us from immediately seeing what is there, showing us instead what we think is there. This is an interesting factor to consider when developing a future GUI: not only does it have to be discernible from the background and device, it also can’t simply match what we already expect. Designers would also have to consider the cognitive abilities of their audience, and whether they even have the processing capacity to interpret an advanced GUI.

Another thing to consider is our field of vision and how our eyes focus. There are products on the market that bring the viewing surface to less than an inch from your face. While innovative, in a relaxed state your eyes tend to focus out rather than in. Viewing something that close would put continued strain on your eyes and the muscles controlling them. Something else to consider is what repeated close viewing would do to your long-term vision.

Remember when your mom told you not to sit so close to the damn television? It’s like that. Our eyes and our brains are not built for long-term processing of data in that fashion. Perhaps someday science will catch up, but one certain flaw of devices like those linked above is that you can’t wear your Gunnars with them.

Something you can wear your Gunnars with is Eye Asteroids. This cabinet game, from the Swedish company Tobii, is exactly what you think it is based on the name. The cabinet is full of cameras that track your eyes. You stand in front of it, aim your eyes at the asteroids flying around, and your laser base shoots them out of the sky. Tobii is showing the game as a demonstration of what they are working on: putting that kind of eye-tracking technology into laptops, computers, and tablets to dispense with the mouse.

Consider this: if your eyes control the action on the screen, you’d have to focus on one particular GUI function and then perform that function, with your eyes, without removing your gaze from it. Think about your desktop. Now imagine trying to open one of those documents with your eyes. Can you think of a time when you kept your eyes focused on one spot without blinking, moving, or switching focus? No, you can’t, because it doesn’t happen unless you are pretty much brain dead.
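In practice, gaze-driven interfaces work around exactly this problem with dwell-time selection: a target fires only after the gaze has stayed inside it for some threshold, so blinks and natural jitter don’t trigger anything. Here is a minimal sketch of that idea; the function name, thresholds, and data shapes are hypothetical, not Tobii’s actual API:

```python
# Hypothetical dwell-time selection: a gaze-driven UI "clicks" a
# target only after the gaze point stays inside it continuously.
# Names and thresholds here are illustrative, not any real SDK.

def dwell_select(samples, target, dwell_ms, sample_ms=10):
    """samples: list of (x, y) gaze points, one every sample_ms.
    target: (x0, y0, x1, y1) rectangle on screen.
    Returns True if the gaze dwells inside the target for dwell_ms."""
    x0, y0, x1, y1 = target
    needed = dwell_ms // sample_ms  # consecutive in-target samples required
    streak = 0
    for x, y in samples:
        if x0 <= x <= x1 and y0 <= y <= y1:
            streak += 1
            if streak >= needed:
                return True
        else:
            streak = 0  # gaze left the target; the dwell timer resets
    return False
```

With a threshold of around half a second, a quick flick of the eyes across an icon does nothing, while a deliberate, steady gaze opens it, which is how such systems separate "looking" from "clicking."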

Yet, Stephen Hawking speaks with his eyes. That being his only option, I’m sure that it has taken him years to perfect the process and considering his situation, the side effects to his eyes are the least of his concerns.

So in the end, and possibly in the future, we have to consider that our eyes will be the controller, something far more complex than a Kinect-type device tracking our motions. While motions beyond eye movement might come into play, the key will be the eyes, and therefore protecting them and training them like muscles rather than treating them as a mere ancillary sense organ. Because the eyes are driven by muscles, and future technology will challenge them to no end.

Protection is key, and it’s why many eye-focused devices have failed: they do not take into consideration the damage or strain they cause to the eyes. Take the Nintendo 3DS game system. While lauded for its casual use of glasses-free 3D technology, many people have complained that it causes headaches due to eye strain. The same complaint has been made about conventional 3D; perhaps they just need the right glasses.

What exactly does the future hold? Will new technology really challenge our vision? Will tech companies take into consideration the possible damage being caused? Will our eyes simply evolve to handle the new tech? Well, all puns aside – we’ll just have to wait – and see.

Image: Bryant Figueroa
