Thoughts before trying the Apple Vision Pro

A few fundamental thoughts first

Spatial interfaces make sense. Before we mastered electricity, then film, then computers and the 2D screen, all our interfaces were three-dimensional.

Writing has always been two-dimensional.

Spatial interfaces will feel hollow without tactility

The idiom of digital object permanence will need to be fleshed out

Apple Vision Pro

I heard someone online call this device an “emulator” of what Apple really wants to build one day. I think that’s essentially right, and the framing lets us think about what is actually possible without getting too stuck in what this generation of the technology can do.

Vision Pro as emulator of the future

Ways to think about the Apple Vision Pro. When I speak about the Vision Pro here, I’m really speaking about the idea of what it can become: lightweight, with perfect, display-free passthrough, and as affordable as an iPhone.

At home, alone

At home, with other non-wearers

At home, with other wearers

At home, remotely with other wearers

On the go, alone

On the go, with other non-wearers

On the go, with other wearers

Vision Pro today

Questions about the Apple Vision Pro specifically. Here I’m curious what we can learn from this generation about which of the scenarios above will be easier or harder (or impossible) to achieve.

Space is going to be at a premium. Apps will compete not so much for attention as for space in your room or home.

Two people wearing Vision Pros in the same room, watching a show together: will that feel natural or not?

Do we expect it ever will feel natural if passthrough isn’t perfect?

Even if we get perfect, display-free passthrough, how much confidence will we need that we’re sharing the same experience before we feel comfortable? If my virtual screen is not exactly where your virtual screen is, will that feel uncomfortable? This relates to the idea that spatial interfaces need to allow us to point.

Will just the idea that something could go wrong on your device, or that you can’t be sure you’re seeing exactly the same thing as someone else, add a baseline level of discomfort to the social experience?
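The closest thing today’s iOS ARKit has to an answer is the collaborative session, in which devices continuously exchange map and anchor data so that shared content lands in the same physical spot for everyone. A rough sketch follows; the peer transport is stubbed out, and visionOS itself routes shared experiences through SharePlay, so read this as an illustration of the alignment problem rather than as the Vision Pro API.

```swift
import ARKit

// Sketch of an ARKit collaborative session (iOS): each device streams
// world-map and anchor updates to its peers so that a shared anchor
// resolves to the same physical location on every device.
final class SharedSessionCoordinator: NSObject, ARSessionDelegate {
    let session = ARSession()
    var sendToPeers: ((Data) -> Void)?   // wire this to MCSession.send or similar

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.isCollaborationEnabled = true   // emit collaboration data as the map evolves
        session.delegate = self
        session.run(config)
    }

    // ARKit calls this whenever there is new map/anchor data worth sharing.
    func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
        if let encoded = try? NSKeyedArchiver.archivedData(withRootObject: data,
                                                           requiringSecureCoding: true) {
            sendToPeers?(encoded)
        }
    }

    // Feed data received from a peer back into the local session.
    func receive(_ encoded: Data) {
        if let data = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARSession.CollaborationData.self,
                                                              from: encoded) {
            session.update(with: data)
        }
    }
}
```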

When I grab a virtual object and move around, does the virtual object track properly? I expect this to work pretty well, given my experience with ARKit on MIX.
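For reference, this is what that pinning looks like in iPhone-class ARKit with RealityKit, where a world anchor holds a virtual object at a fixed physical position while you move around it. A minimal sketch; the anchor name and placement are just illustrative.

```swift
import ARKit
import RealityKit

// A minimal sketch: pin a virtual cube 1 m in front of the starting camera
// pose. ARKit's world tracking is responsible for keeping it locked to that
// physical spot as the wearer walks around.
let arView = ARView(frame: .zero)
arView.session.run(ARWorldTrackingConfiguration())

var transform = matrix_identity_float4x4
transform.columns.3 = SIMD4<Float>(0, 0, -1, 1)   // 1 m forward in world space
let worldAnchor = ARAnchor(name: "pinned-cube", transform: transform)
arView.session.add(anchor: worldAnchor)

// Attach renderable content to the anchor; RealityKit keeps the two in sync.
let anchorEntity = AnchorEntity(anchor: worldAnchor)
anchorEntity.addChild(ModelEntity(mesh: .generateBox(size: 0.1)))
arView.scene.addAnchor(anchorEntity)
```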

No surface anchors in the Shared Space: Apple probably doesn’t feel confident about this working well yet. Surface anchoring will be the big breakthrough if it happens, but maybe there’s a reason we don’t want to start replacing our existing surfaces with digital layers. If digital objects anchor to physical objects, do we risk feeling unsettled in our own space because we no longer know where our physical surfaces actually begin and end? We still need to know where to sit, and where to lean.
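For contrast, iPhone-class ARKit does expose surface anchoring via plane detection. A minimal RealityKit sketch of pinning a panel to a wall; the dimensions are arbitrary.

```swift
import ARKit
import RealityKit

// A minimal sketch of surface anchoring as it exists in iOS ARKit today:
// detect planes, then glue a virtual panel to the first vertical one found.
let arView = ARView(frame: .zero)
let config = ARWorldTrackingConfiguration()
config.planeDetection = [.horizontal, .vertical]
arView.session.run(config)

// AnchorEntity(plane:) waits for a matching detected plane, then attaches
// its children to that physical surface.
let wallAnchor = AnchorEntity(plane: .vertical, minimumBounds: [0.5, 0.5])
let panel = ModelEntity(mesh: .generatePlane(width: 1.0, height: 0.6))
wallAnchor.addChild(panel)
arView.scene.addAnchor(wallAnchor)
```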

Is the occlusion dynamic? Not just humans and body parts, but driven by LiDAR too? If a ball is thrown in front of the screen, does it break my immersion?
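On iPhone-class ARKit, both kinds of occlusion are opt-in, which hints that the answer today is “only if the developer and the hardware cooperate.” A minimal sketch, assuming a LiDAR-equipped device:

```swift
import ARKit
import RealityKit

// A minimal sketch of dynamic occlusion in iOS ARKit: people occlusion
// uses segmentation plus depth, and the LiDAR scene mesh lets physical
// objects pass in front of virtual content (a thrown ball only if the
// mesh updates fast enough).
let arView = ARView(frame: .zero)
let config = ARWorldTrackingConfiguration()

if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    config.frameSemantics.insert(.personSegmentationWithDepth)   // occlude behind people
}
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh                           // LiDAR world mesh
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
}

arView.session.run(config)
```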