2024/02/02
[As you alluded to, I think the most worrisome thing out of these reviews is the mention of friction with using eye tracking as the main input method. I suspect our eyes happen to move more with our thoughts than with our actions, and we may have to align our thoughts and actions (and thus slow our thinking) to work with this interface. To be fair, this is kind of how existing mouse interaction works: you can’t really click without checking that your pointer is where you expect it to be. But notably, proficient computer users try to stay on the keyboard as much as possible so they can have the comfort of separating actions from where their eyes are looking, which is more efficient. Perhaps the solution will lie in changing the interface (something other than buttons and text fields that’s not just voice), or in some new version of a mouse that lets us move in 3D and use tactility to navigate the interface without having to look too deliberately.](graph/2024-02-01/66af8035-8883-453e-994f-00543684e7c9/66af8035-7913-4d4b-820e-5a28976697c9/66af8035-ca62-4cfd-b938-a011c86e2a62/65bd3221-af9a-4187-9c77-3dab3843f182)
Eric Welander mentioned that he felt it would be nice to be able to reach out and tap on UI elements.
Is it better to have a TV on your wall that blends seamlessly with your room, or to have a device on your face that can generate a TV whenever you want?
I’m looking forward to trying different activities with the headset on and thinking about how those activities could be improved.
I have a bit of a worry that we won’t ever be able to get to Optical AR, i.e. actually transparent displays. I have to admit trepidation toward a future where we all have to work with displays on our faces.