Low res or no res?

I sometimes get asked about Croquet for computing devices with lower graphics capability, such as today’s phone/PDA/iPods. I think the train of thought is that there’s so much in Croquet that could be valuable independently of the immersive 3D environment, so shouldn’t that part be available on lesser machines?

I feel it is only worthwhile to initially build Croquet – all of Croquet and only one Croquet – on machines with the best commonly available graphics capability and also on those with no visual capability whatsoever!

There’s a lot in Croquet that has nothing to do with graphics: the shared simulation model of collaboration, edge-network scalability, and transparent persistence. Surely we could build applications that made use of these alone. However, I feel that the combination of these capabilities gives rise to a whole that is greater than the sum of its parts, and that this is best embodied in a spatially oriented user interface. Even small, cheap, complex devices will soon have good graphics capabilities. I call these “nextPods.” Hardware capabilities tend to develop faster than our collective ability to develop appropriate user interface models that make effective use of them, so let’s aim high and let the hardware catch up.

Besides, community is crucial in the adoption of a new disruptive technology. I don’t want to create a low-res Croquet network that doesn’t play well with the high-res Croquet network.

However, while nextPods will someday have better graphics, many users will always be visually impaired. But the beautiful thing about a really “good idea” is that it is useful outside the limits of its original implementation. Genius is in the generalities, not the details. A spatially oriented, direct-manipulation model of the world is A Good Thing™ regardless of whether it’s shown visually. Here are three levels at which that could be realized:

How did people play computer games before there were video games? When I was a kid, I used to play a game called “adventure” (aka “Zork” or “Wumpus”).
“You are in a maze of twisty passages all different. There is a wall in front of you.”
> Look right.
“There is a wall in front of you.”
> Look left.
“There is a passage there.”
> Turn left. Walk.
“There is a mailbox here.”
> Look inside.
“There is a map here!”
I’m undoubtedly misreporting the details. (I was pretty bad at the game.) But the point is that all the high-level objects can be reported to the user audibly as a descriptive video service. (Our Brie objects already have recordable labels.) The user can then focus their attention on one of them and the whole process can repeat. This is an aural version of the Zoom User Interface. It all works because there are human-centered objects to be manipulated, not arbitrary computer stuff like “files,” “windows” and “applications.” Creating an immersive 3D user interface happens to create just the right sort of model. We should take a cue from literature and “serious games” and create learning environments, toys, and other applications as a narrative, so this fits right in.
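To make that concrete, here is a minimal sketch of what such an aural zoom loop might look like. It is in Python rather than Croquet’s Squeak, and the Place class, speak() function, and label/contents/parent attributes are hypothetical stand-ins for illustration, not the actual Brie API.

    # Minimal sketch of an aural zoom loop. Place and speak() are made-up
    # stand-ins; real Brie objects already carry recordable labels, and real
    # output would be speech or recorded audio, not print().
    class Place:
        def __init__(self, label, contents=None, parent=None):
            self.label = label
            self.contents = contents or []
            self.parent = parent
            for item in self.contents:
                item.parent = self

    def speak(text):
        print(text)  # stand-in for text-to-speech or playing a recorded label

    def aural_zoom(focus):
        """Describe the current focus aloud, then let the user zoom in or back out."""
        while True:
            speak("You are at: " + focus.label)
            for i, item in enumerate(focus.contents, 1):
                speak(str(i) + ": " + item.label)
            choice = input("> ").strip()              # an item number, or 'back'
            if choice == "back" and focus.parent is not None:
                focus = focus.parent                  # zoom back out one level
            elif choice.isdigit() and 1 <= int(choice) <= len(focus.contents):
                focus = focus.contents[int(choice) - 1]  # zoom in on that object
            else:
                speak("There is nothing like that here.")

    # Example: a room containing a mailbox containing a map.
    room = Place("a maze of twisty passages", [Place("a mailbox", [Place("a map")])])
    # aural_zoom(room)   # interactive; uncomment to try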

Now, a good musician can hit an arbitrary note at the end of the piano keyboard without looking, and many visually impaired people are very good at acting on their own internal map of the space around them. If properly registered and oriented, many should be able to use a mouse if hovering over an object emitted a consistent and unique tone (and perhaps a spoken label). This will save time versus waiting for a spoken descriptive menu of choices. This is actually already built into Brie. The tones are stereo-located and distance-attenuated to the objects that produce them. We did it to provide a richer and more natural experience for all users, but I suspect that it’s A Good Thing™ for the visually impaired as well.
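For the curious, the stereo location and distance attenuation amount to something like the following Python sketch. The 1/(1 + distance) falloff and the simple pan law are illustrative assumptions, not the actual Croquet/Brie sound code.

    # Rough sketch of stereo placement and distance attenuation for a hover
    # tone. Positions are (x, y, z) tuples; the falloff curve and pan law are
    # illustrative choices only.
    import math

    def hover_tone_gains(listener_pos, object_pos, base_gain=1.0):
        """Return (left_gain, right_gain) for an object's tone as heard by the listener."""
        dx = object_pos[0] - listener_pos[0]      # positive: object is to the listener's right
        dz = object_pos[2] - listener_pos[2]
        distance = math.hypot(dx, dz)
        gain = base_gain / (1.0 + distance)                  # farther away -> quieter
        pan = max(-1.0, min(1.0, dx / (distance + 1e-6)))    # -1 hard left, +1 hard right
        return gain * (1.0 - pan) / 2.0, gain * (1.0 + pan) / 2.0

    # e.g. an object two meters to the right and one ahead:
    # hover_tone_gains((0, 0, 0), (2.0, 0, 1.0))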

Finally, we can use the spatial orientation to help the user find things. As in any population, some blind folks have motor-coordination difficulties, and it wouldn’t do to have them waving a mouse around trying to hit an object. While we currently play sounds when the mouse enters or leaves an object, we could also play an attenuated sound as the mouse approaches an object. It would get louder as you got closer as a sort of “warmer… colder… warmer… getting hot… hot… ding” guide. Each object in the current field of attention could produce a chord. A scan of the room would produce a series of chords: each space is a composition!
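Here is a rough sketch of what that “warmer… colder” guide might compute, again in illustrative Python with made-up attributes (tone_hz, position) rather than real Croquet objects.

    # Hedged sketch of the proximity guide: the closer the pointer is to an
    # object, the louder that object's tone; sounding all the gains at once
    # gives the room's "chord." tone_hz and position are made-up attributes.
    def proximity_gain(pointer_pos, object_pos, radius=200.0):
        """Gain ramps from 0 at `radius` away up to 1 when the pointer is on the object."""
        dx = pointer_pos[0] - object_pos[0]
        dy = pointer_pos[1] - object_pos[1]
        distance = (dx * dx + dy * dy) ** 0.5
        return max(0.0, 1.0 - distance / radius)

    def room_chord(pointer_pos, objects_in_focus):
        """Pair each object's tone with its current loudness; played together, it's a chord."""
        return [(obj.tone_hz, proximity_gain(pointer_pos, obj.position))
                for obj in objects_in_focus]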

Clearly, the aural zoom interface is useful to all of us: in the car, as we walk down the street with our nextPod mic/earbud in, or as we take a bath. It’s really no different than a personal computer interface from any ordinary science fiction story. I do think this is worth doing, particularly as it can work for all Croquet spaces and thus allows more participation in a single world-wide Croquet network rather than segregating folks into a low-res and a high-res network.

About Stearns

Howard Stearns works at High Fidelity, Inc., creating the metaverse. Mr. Stearns has a quarter century of experience in systems engineering, applications consulting, and management of advanced software technologies. He was the technical lead of University of Wisconsin's Croquet project, an ambitious project convened by computing pioneer Alan Kay to transform collaboration through 3D graphics and real-time, persistent shared spaces. The CAD integration products Mr. Stearns created for expert system pioneer ICAD set the market standard through IPO and acquisition by Oracle. The embedded systems he wrote helped transform the industrial diamond market. In the early 2000s, Mr. Stearns was named Technology Strategist for Curl, the only startup founded by WWW pioneer Tim Berners-Lee. An expert on programming languages and operating systems, Mr. Stearns created the Eclipse commercial Common Lisp programming implementation. Mr. Stearns has two degrees from M.I.T., and has directed family businesses in early childhood education and publishing.

3 Comments

  1. There are some interesting similarities (and differences) in Craig Latta’s ‘quoth’ system (http://www.netjam.org/quoth…).

  2. While your points about a no res Croquet are very good, I found the question that prompted this confusing. Both 3D UIs and ZUIs (which you mentioned) scale just fine to lower resolutions. It is the traditional 2D GUIs that break down. Just compare using Windows XP or KDE on a QVGA (320×240) display with using Croquet on it to see what I mean. In fact, it is easy to resize the Croquet Morph inside a normal display to see just how usable it would be.

  3. Quite right. I’m taking some literary license and lumping all the graphics capabilities into one category, somewhat inappropriately labeled “low res.”

    In fact, the issue for devices with lesser capability is really more about stencil buffers and VRAM and other esoteric things. But I didn’t want to get into a technical discussion about how much resolution is necessary, how many stencil buffers are required, and so forth. I don’t want to let the main point get parsed into oblivion: that I feel there is little point in putting resources into creating an alternate user interface model for display devices with lower capability, but rather there is value in creating a general user interface model that also works without any display capability at all.
