January 3rd 2014 - The WAN Show Document

LinusTech

Hello everyone, this is my first post. Let me apologize in advance for my lack of prowess in the English language.

 

I wanted to share my opinion on one of the questions that came up during the show: what would you like to see this new eye-tracking technology used for?

 

I think the two main advantages of this technology are the following, both related to the fact that display technology is not optimized for the way we humans see.

 

 

1. Fix 3D. The big problem with 3D is that it only produces the 3D effect if you are looking exactly where the programmer expects you to look at any given moment. This is because we focus on an object depending on its distance: if there are two objects, one near you and one farther away, and the programmer expects you to look at the near one, that object appears in focus while the other one appears blurred. If you look at the other one instead, the 3D effect is completely ruined. With this technology the software will know where you are looking and can create the 3D effect specifically for that moment. This could make 3D more than a trend that comes and goes because it never truly fools your sense of sight.
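Just to make the idea concrete, here is a minimal sketch in Python of gaze-driven focus: sample the depth buffer around the gaze point to find what the viewer is looking at, then blur everything else in proportion to its distance from that focal plane. All of the function names, the window size and the blur strength are my own assumptions for illustration, not any real engine's API.

import numpy as np

def focal_depth_from_gaze(depth_buffer, gaze_x, gaze_y, window=5):
    # Sample a small window around the gaze point and take the median
    # depth, i.e. the distance of the object the viewer is looking at.
    h, w = depth_buffer.shape
    x0, x1 = max(0, gaze_x - window), min(w, gaze_x + window + 1)
    y0, y1 = max(0, gaze_y - window), min(h, gaze_y + window + 1)
    return float(np.median(depth_buffer[y0:y1, x0:x1]))

def per_pixel_blur(depth_buffer, focal_depth, strength=2.0):
    # Blur radius grows with the depth difference from the focal plane,
    # so only the object under the gaze stays sharp.
    return strength * np.abs(depth_buffer - focal_depth)

# Example: a fake 100x100 depth buffer with a near object in one corner.
depth = np.full((100, 100), 10.0)   # far background at 10 m
depth[20:40, 20:40] = 1.5           # near object at 1.5 m
focal = focal_depth_from_gaze(depth, gaze_x=30, gaze_y=30)
blur = per_pixel_blur(depth, focal)
print(focal)          # ~1.5: the near object the viewer looks at is in focus
print(blur[80, 80])   # large blur for the far background

A real renderer would feed the resulting focal depth into its depth-of-field and stereo convergence settings every frame, but the core computation is this simple.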

 

2. Improve performance/efficiency. We humans take almost all of our visual information from the point we are looking at, and the greater the angular distance from that point, the less information our brains actually receive; the optic nerve does not have enough bandwidth to carry, at full detail, everything our eyes are able to receive. Furthermore, our brains process the signal and make some things we see disappear: the nose, for example, is irrelevant and is filtered out unless we focus on it, as are glasses for people who wear them. This means we don't need the same amount of detail, and therefore resolution, at every point of the frame. With this technology the software will know where we are looking and can reduce the detail and resolution of each area as its angular distance from that point increases. This makes even more sense with curved ultrawide panoramic displays, improving immersion while saving computing power. I know that to take advantage of this, the way images are rendered would have to change, with the frame divided into sectors, but it would be a far more efficient way of generating images than the one we use now, where every point is rendered at the same detail simply because the software does not know where you are looking in that picture.
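Here is a minimal sketch of that "sectors" idea in Python: split the screen into tiles and give each tile a resolution scale based on its angular distance from the tile under the gaze. The tile grid, the degrees-per-tile figure and the specific thresholds are assumptions of mine for illustration, not measured values.

import math

def resolution_scale(angle_deg):
    # Full resolution within roughly the fovea, progressively coarser
    # shading further out in the periphery (assumed thresholds).
    if angle_deg < 5:
        return 1.0    # fovea: full detail
    elif angle_deg < 20:
        return 0.5    # near periphery: half resolution
    else:
        return 0.25   # far periphery: quarter resolution

def tile_scales(tiles_x, tiles_y, gaze_tx, gaze_ty, deg_per_tile=3.0):
    # Assign a scale to every screen tile from its angular distance
    # to the tile the viewer is currently looking at.
    scales = []
    for ty in range(tiles_y):
        row = []
        for tx in range(tiles_x):
            dist_tiles = math.hypot(tx - gaze_tx, ty - gaze_ty)
            row.append(resolution_scale(dist_tiles * deg_per_tile))
        scales.append(row)
    return scales

# Example: a 16x9 tile grid with the viewer looking near the centre.
for row in tile_scales(16, 9, gaze_tx=8, gaze_ty=4):
    print(row)

The renderer would then shade each tile at its own scale and upsample, so only the tiles around the gaze point cost full resolution.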

 

 

I am curious what people think about these two things; I am sure I will learn a lot in this community. Again, apologies for my English.

 

Regards
