2024/02/02

12:37

some thoughts

3D vs 2D

Three dimensions are natural, two dimensions are efficient.

Being able to compress information into two dimensions (writing) is an important superpower

3D is not superior to 2D in all ways

3D does allow for the maximum number of visual dimensions to be utilized.

There could be a future where we get as good at reading 3D graphs as we are at reading 2D graphs, and where we come up with terms for 3D phenomena the way we have bell curves, long tails, and other phenomena common in 2D representations

Spatial is not the same as 3D

The important components of spatial computing

15:40

on the train to Boston

thinking more about Vision Pro

Nilay Patel’s two doubts

Camera passthrough is a dead end

some devs said “overloading an input channel with output”

eyes are what we use for input

devs do believe in hand tracking

this fits my thoughts

[Like you alluded, I think the most worrisome thing out of these reviews is the mention of the friction with using eye tracking as the main input method. I suspect our eyes happen to move more with our thoughts than with our actions, and we may have to align the thoughts and actions (and thus slow our thinking) to work with this interface. To be fair, this is kind of how the existing mouse interaction works, you can’t really click without checking that your pointer is where you expect it to be. But notably, proficient computer users try to stay on the keyboard as much as possible so they can have the comfort of separating actions from where their eyes are looking, which is more efficient. Perhaps the solution will lie in changing the interface (something other than buttons and text fields that’s not just voice), or in some new version of a mouse that allows us to move in 3D and use tactility to navigate the interface without having to look too concertedly.](graph/2024-02-01/6616aafd-062e-4b66-b390-c0d309c78741/6616aafd-bfa7-4875-9ce8-fbecd613b951/6616aafd-db3c-4922-aac5-869c6d97a238/65bd3221-af9a-4187-9c77-3dab3843f182)

I think I buy this, essentially

Eric Welander mentioned that he felt it would be nice to be able to reach out and tap on UI elements.

This seems like something that makes sense as we start placing apps more in physical space

Violet Whitney on the spatial podcast

referencing the Niantic Labs CEO article on Vision Pro

about using devices that connect us to the real world

goal is not to be isolated

I agree with this; I think MIX is definitely on this path

points out that the eye tracking is not as good as more peripheral ways of interacting

wants the technology to be in the shadows, quieter, more sensory deprivation

book on Japanese culture changes, in the shadows?

but I think she doesn’t give Apple enough credit: compared to existing tech, this is the best version of a step in that direction

19:01

back in Cambridge

Violet Whitney brings up a good point about the two options for what spatial computing can be

augmented is where we augment our senses to add layers to the environment

distributed is where we add layers to the environment directly

->

I’ve always leaned towards the augmented approach (if only because it seems much more feasible), but more thinking is needed about which is better.

The risk is probably in handing full sensory control to external devices. The downsides of that need to be managed very carefully.

22:07

Picked up the Vision Pro! Haven’t tried it yet, will try it on in a couple minutes

Is it better to have a TV on your wall that blends seamlessly with your room, or is it better to have a device on our faces that can generate a TV whenever we want?

I’m not sure on this yet. I’ve always loved the idea of hidden technology, like the Frame TV or the LG G-series OLED, which lies flat against the wall like a piece of artwork.

I’m looking forward to trying different activities with the headset on, and thinking about how those activities could be improved

washing the dishes, cooking, working

I have a bit of a worry that we won’t ever be able to get to optical AR, i.e. actually transparent displays. I have to admit trepidation towards a future where we all have to work with displays strapped to our faces.

I want to note this before I try it on, to give myself the freedom to enjoy this device without necessarily subscribing to a future that relies on this specific direction of the technology.

It will be an interesting future…