Testing 1, 2, 3. Check. Check. …. Waiter?!

I’ve been working with some test harnesses for our Croquet worlds. It’s been a real pain working outside of Croquet: getting things to happen across multiple platforms. Moving data around. It’s all so much easier in a virtual space that automatically replicates everything.

Anyway, we finally got it working enough that there are several machines in Qwaq’s Palo Alto office that are all running around as robots in a virtual world, doing various user activities to see what breaks. Being (still!) in Wisconsin, I have to peek in on these machines remotely. I’m currently using Virtual Network Computing (VNC), but there’s also Windows Remote Desktop (RDP). These programs basically scrape the screen at some level and send the pictures to me. So when these robots are buzzing around in-world, I get a screen repaint, and then another, and then another. And that’s just one machine. If I want to monitor what they’re all doing, I have to have a VNC window open for each, scraping and repainting away. Yuck. If only there were a better way….

Well, these robots are all in a dedicated “Organization” called “Tests”. All I have to do is sign in to Tests with the normal Qwaq Forums client directly from my own home machine. I take a birds-eye position in the sky (with one press of the “End” key) and can watch all the robots do their thing in real time. There’s even a panel that lists all the (robot) users in the org and tells me if they are in a different space in the Organization, and puts up a little exclamation mark if they are being unresponsive. And of course, the graphics are all smooth.

How cool is that?

This is a nice illustration of the difference between two approaches to distance collaboration:

VNC: I run a dumb client to see a low-frame-rate view of the whole desktop of each of the N individual remote machines. I have to manage N 2D windows, and each shows what that machine shows. I can’t change my observational perspective, nor participate in the test itself. There are N pairwise traffic connections. Each observed robot has to run additional software (a VNC server), and I have access to the whole machine on which it runs.

Virtual worlds: I run a smart client with one low-traffic connection that uses my graphics card to give a high interactive frame rate. I can independently choose my observational viewpoint and can interact with the robot participants. There’s nothing else needed on the robot machines, and I don’t get access to those machines – just access to the shared, common virtual world. (A rough sketch of the traffic difference follows below.)
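To make the traffic difference concrete, here’s a minimal back-of-the-envelope sketch in Python. Every number in it (robot count, frame sizes, update and event rates) is an assumption picked for illustration, not a measurement from our setup; the point is only the shape of the comparison.

```python
# Rough, illustrative comparison of monitoring traffic:
# N screen-scraping (VNC-style) connections vs. one replicated-world connection.
# All numbers below are assumptions for illustration only.

N_ROBOTS = 10                  # hypothetical number of robot machines

# VNC-style: each remote desktop streams compressed frame updates to me.
VNC_FPS = 5                    # assumed (slow) screen-update rate per machine
VNC_BYTES_PER_FRAME = 50_000   # assumed size of a compressed screen update

vnc_total_bps = N_ROBOTS * VNC_FPS * VNC_BYTES_PER_FRAME * 8

# Virtual-world style: my client renders locally; the wire carries only
# replicated events (avatar movement, state changes), not pixels.
EVENTS_PER_SEC = 20 * N_ROBOTS  # assumed total event rate across all robots
BYTES_PER_EVENT = 100           # assumed size of one replicated event

world_total_bps = EVENTS_PER_SEC * BYTES_PER_EVENT * 8

print(f"VNC-style, {N_ROBOTS} connections:   ~{vnc_total_bps / 1e6:.1f} Mbit/s")
print(f"Replicated world, 1 connection: ~{world_total_bps / 1e6:.2f} Mbit/s")
```

Whatever the particular numbers, the picture-per-frame approach scales with N screens’ worth of pixels, while the replicated world scales with a much smaller stream of events and lets my own graphics card do the drawing.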

About Stearns

Howard Stearns works at High Fidelity, Inc., creating the metaverse. Mr. Stearns has a quarter century of experience in systems engineering, applications consulting, and management of advanced software technologies. He was the technical lead of the University of Wisconsin's Croquet project, an ambitious project convened by computing pioneer Alan Kay to transform collaboration through 3D graphics and real-time, persistent shared spaces. The CAD integration products Mr. Stearns created for expert system pioneer ICAD set the market standard through IPO and acquisition by Oracle. The embedded systems he wrote helped transform the industrial diamond market. In the early 2000s, Mr. Stearns was named Technology Strategist for Curl, the only startup founded by WWW pioneer Tim Berners-Lee. An expert on programming languages and operating systems, Mr. Stearns created the Eclipse commercial Common Lisp implementation. Mr. Stearns has two degrees from M.I.T., and has directed family businesses in early childhood education and publishing.

3 Comments

  1. Wow.

    This is almost, like, um, *computer science*.

    Cool beans.

    jrs

  2. It was kind of cute when I showed this to our QA team. Because I was in Wisconsin and they were in California, I joined them in-world. We spoke by voice and video as I showed them the conventional Web-based UI for starting tests and examining the archived results. The Web browser was running on a wall of the in-world test center.

    But when I pushed the button to start the tests, the robots started operating all around us. We were standing and talking _IN_ the test.

  3. On reflection six months later…

    There are a lot of things I do as a programmer that I don’t use as a user. The test environment is an example where I have found it beneficial to be a user of a virtual world as a convenient way to experience what I had developed.

    I wonder if it’s a general property of virtual worlds that they tend to push developers into being actual users.
