Howard Stearns

Howard Stearns works at Teleplace, Inc., creating business collaboration technologies and products. Mr. Stearns has a quarter century of experience in systems engineering, applications consulting, and management of advanced software technologies. He was the technical lead of the University of Wisconsin's Croquet project, an ambitious effort convened by computing pioneer Alan Kay to transform collaboration through 3D graphics and real-time, persistent shared spaces. The CAD integration products Mr. Stearns created for expert system pioneer ICAD set the market standard through IPO and acquisition by Oracle. The embedded systems he wrote helped transform the industrial diamond market. In the early 2000s, Mr. Stearns was named Technology Strategist for Curl, the only startup founded by WWW pioneer Tim Berners-Lee. An expert on programming languages and operating systems, Mr. Stearns created the Eclipse commercial Common Lisp implementation. Mr. Stearns has two degrees from M.I.T., and has directed family businesses in early childhood education and publishing.

Hackers and Painters

I went to an exhibition last weekend.

Billed as the World’s First Virtual Reality Painting Exhibition, it featured:

  • artwork one could view individually, using a Head Mounted Display (HMD) with single-camera tracking;
  • artists at work wearing HMD with dual lighthouse tracking (and the results displayed live on a big monoscopic screen).

The work was done with a tool that appears to come from a couple of guys who got bought by Google. The project – I’m not sure that it’s actually available for sale – appears to be evolving along several dimensions:

  1. 3D Model Definition: covering stroke capture (including symmetric stroke duplication; a small sketch of that idea follows this list), “dry” and “wet” brushes (oil paint-mixing/brushwork effects), texture patterns, volumetric patterns, emission, and particles.
  2. Interactive Model Creation: tool palette in one hand and brush in the other.
    1. Videos suggest an additional movable flat virtual canvas (a “tilt canvas”?) that one can hold, move, and paint against. The art on display was clearly made this way: it all felt like a sort of rotational 2.5D — the brush strokes were thin layers of (sometimes curved) surfaces.
    2. The artists last night appeared to be working directly in 3D, without the tilt canvas.
    3. The site mentions an Android app for creation. I don’t know whether it uses one of these techniques or a third.
  3. Viewing: HMD, static snapshots, animated .gifs that oscillate between several rotated viewpoints (like the range of an old-fashioned lenticular display).
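
The symmetric stroke duplication mentioned in point 1 is easy to picture in code: mirror each captured brush point across a symmetry plane and draw the stroke twice. Here is a minimal, hypothetical sketch of that idea (not the tool's actual implementation), assuming strokes are captured as arrays of 3D points and the symmetry plane is x = 0:

```javascript
// Hypothetical sketch of symmetric stroke duplication: mirror every captured
// brush point across the x = 0 plane, so painting one stroke yields two.
function mirrorAcrossX(point) {
  return { x: -point.x, y: point.y, z: point.z };
}

// A stroke is an ordered array of points sampled from the controller.
function duplicateStrokeSymmetrically(stroke) {
  const mirrored = stroke.map(mirrorAcrossX);
  return [stroke, mirrored]; // render both with the same brush settings
}

// Example: a short diagonal stroke and its mirror image.
const stroke = [
  { x: 0.1, y: 1.0, z: -0.5 },
  { x: 0.2, y: 1.1, z: -0.5 },
  { x: 0.3, y: 1.2, z: -0.5 },
];
console.log(duplicateStrokeSymmetrically(stroke));
```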

  I haven’t seen any “drive around within a 3D scene on your desktop” displays (like standard-desktop/non-HMD versions of High Fidelity).

  The displays were all designed so that you observed from a pretty limited spot: really more “sitting”/“video game” HMD than “standing”/“cave” exploration.

My reactions to the art:

  • Emission and particle effects are fun in the hands of an artist.
  • “Fire… Light… It’s so Promethean!”
  • With the limited movement during display, the art was mostly “around you” like a sky box, rather than something you wandered around in. In this context, the effect of layering (e.g., a star field) – as if a Russian-doll set of sky boxes (though presumably not implemented that way) – was very appealing.

Tip for using caves: Put down a sculpted rug and go barefoot!

Posted in history: external milestones and context, Inventing the Future | Comments closed

Two Great Summaries, and “Where do ideas live?”


Philip gave a terrific quick demo and future roadmap at MIT Technology Review’s conference last week. See the video at their EmTech Digital site.

Today we put up a progress report (with even more short videos!) about the accomplishments of the past half year. Check it out.


I wonder if we’ll find it useful to have such material in-world. Of course, the material referenced above is intended for people who are not yet participating in our open Alpha, so requiring a download to view it is a non-starter. But what about discussion and milestone artifacts as we proceed? At High Fidelity we all run a feed that shows us some of what is being discussed on the interwebs, and there are various old-school IRC and other discussions. It’s great to jump on, but it kind of sucks trying to have an engaging, media-rich discussion with someone in real time via Twitter. Or Facebook. OR ANYTHING ELSE in popular use today.

William Gibson said that Cyberspace is the place where a phone call takes place. I have always viewed virtual worlds as a meta-medium in which people could come together, introduce any other media they want, arrange it, alter it, and discuss it. Like any good museum on a subject, each virtual forum would be a dynamic place not only for individuals to view what others had collected there, but to discuss and share in the viewing. The WWW allows for all of this, but it doesn’t combine it in a way that lets you do it all at once. Years ago I made this video about how we were then using Qwaq/Croquet forums in this way. It worked well for the big enterprises we were selling to, but they weren’t distractible consumers. High Fidelity could be developed to do this, but should we? When virtual worlds are ubiquitous, I’m certain that they’ll be used for this purpose as well as other uses. But I’m not sure whether this capability is powerful enough to be the thing that makes them ubiquitous. Thoughts?

Posted in history: external milestones and context, Inventing the Future | Comments closed

Are You In My Game? part 2


When an artist paints en plein air, they expect people to look over their shoulder and see what they’re doing. As an engineer, I almost never have anyone looking over mine. I’ve had the delight of deliberately sharing my work with my children, and recently I got to share it with unexpected visitors.

In High Fidelity, running your own virtual world is trivially easy, and I often do development work using a mostly empty workspace on my laptop. Nine years after playing hide-and-seek with my son (first link above), I played air hockey with him over the network. I was thrilled that this now jaded teenager was still able to giggle at the unexpected realtime correctness of the experience. But we had set out to get together online, and this time he wasn’t surprised to find me there.

It turns out that my online visibility had been set to “visible to everyone”. The next weekend I was online and someone clicked on my name and ended up in my world. I was startled to not only see another avatar, but to hear such a clear voice saying, “Hi!” Despite the cartoon avatar, it was as though she were in the room with me. I explained that I was “just working on some development” and that this space would be going up and down a lot, and her voice sounded crushed as she said, “Oh. Ok. Bye…”

An hour later, another visitor came. When I told him the same thing, he left immediately without saying goodbye. Then I was the one who felt crushed.

Of course, one can control access to one’s own domain — I can’t quite explain why I don’t feel like doing that. But I have turned my online presence visibility to “friends only”.

Posted in Inventing the Future, workflow | Comments closed

What Goes Wrong?

Beware of initial content loading.

We’ve just started our open Alpha metaverse at High Fidelity. It works! It’s sure not feature-complete, and just about everything needs improvement, but there’s enough to see what the heck this is all about. It’s pretty darn amazing.

It’s all open source and we do take developer submissions. There’s already a great alpha community of both users and developers — and lots of developer-users. We even contract out to the community for paid improvements — some of which are proposed directly by the community.

So, suppose you’re reading about High Fidelity and seeing videos, and you jump right in. What is the experience like? To participate, you need one medium-sized download called “interface”. Getting and using that is not difficult. To run your own world from your own machine, accessible to others, you need a second download called “stack manager”, which is also easy to get and use.  It’s really easy to add content, change your avatar, use voice and lip-sync’d facial capture with your laptop camera, etc. (Make sure you’ve got plenty of light on your face, and that you don’t have your whole family sticking their heads in front of yours as they look at what your laptop is doing. Just saying.)

The biggest problem I encountered — and this is a biggie — is that the initial content is not optimized. You jump in-world and the first thing the system does is download a lot of content. While it’s doing that, it isn’t responsive. Sound is bad. You can’t tell what the heck is going on or what you should be seeing. We’ve got to do a better job of that initial experience. However, once you’ve visited a place, your machine will cache the content, and subsequent visits should be much smoother.

Also, your home bandwidth is probably plenty, but your home wifi might not be. If your family are all on Skype, YouTube, and WoW on the same wifi while you’re doing this, things could get a bit glitchy.

Posted in General, Inventing the Future | Comments closed

What is the Metaverse?

Our Philip Rosedale gave a talk this week at the Silicon Valley Virtual Reality Conference. The talk was on what the Metaverse will be, covering roughly the following points. Each point was illustrated with what we have so far in the Alpha version of High Fidelity today. There are a couple of bugs, but it’s pretty cool to be able to illustrate the future with your laptop and an ad-hoc network on your cell phone. It’ll blow you away.

The Metaverse subsumes the Web — includes it, but with personal presence and a shared experience.

The Metaverse has user-generated content, like the Web. Moreover, it’s editable while you’re in it, and persistent. This is a consequence of being a shared experience, unlike the Web.

A successful metaverse is likely to be all open source, and use open content standards.

Different spaces link to each other, like hyperlinks on the Web.

Everyone runs their own servers, with typable names.

The internet now supports low latency, and the Metaverse has low latency audio and matching capture of lip sync, facial expressions, and body movement.

The Metaverse will be huge: huge spaces, with lots of simultaneous, rich, interactive content. The apps are written and shared by the participants in standard, approachable languages like Javascript.

The Metaverse will change education. Online videos have great content, but the Metaverse has the content AND the student AND the teacher, and the students and teachers can actually look at each other. The teacher/student gaze is a crucial aspect of learning.

The Metaverse scales by using participating machines. There are a thousand times more desktop machines on the Web than there are servers in the cloud.
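
As a flavor of that “approachable languages” point, a participant-written app can be just a few lines of JavaScript run by the interface. The sketch below is from memory and should be treated as an approximation: the Entities/MyAvatar/Vec3/Quat/print globals are provided by High Fidelity’s scripting environment, but the exact property names here are assumptions to check against the current API documentation. It simply drops a small box a couple of meters in front of your avatar.

```javascript
// Hedged sketch of a participant-written High Fidelity script.
// Entities, MyAvatar, Vec3, Quat, and print are globals supplied by the
// interface's scripting environment; exact property names are from memory.
var inFront = Vec3.sum(MyAvatar.position,
                       Vec3.multiply(2, Quat.getFront(MyAvatar.orientation)));

var boxId = Entities.addEntity({
  type: "Box",
  position: inFront,
  dimensions: { x: 0.5, y: 0.5, z: 0.5 },
  color: { red: 200, green: 80, blue: 80 },
  lifetime: 600   // seconds, so the test box cleans itself up
});

print("Created entity " + boxId);
```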

The talk starts at about 2:42 into the stream.

Posted in history: external milestones and context, Inventing the Future | Comments closed

What do you want it to be?

We were featured at last week’s NeuroGaming Conference in San Francisco. Philip’s presentation is the first 30 minutes of this, and right from the start it pulls you in with the same kind of fact-based insight-and-demonstration as Alan Kay’s talks. (Alas, the 100ms lag demo doesn’t quite work on video-of-video.)

But everyone has their own ideas of what the metaverse is all about. This Chinese News coverage (in English) emphasized a bunch of “full dive” sorts of things that we don’t do at all. The editor also chose to end Philip’s interview on a scary note, which is the opposite of his presentation comments (link above) in which he shared his experiences in VR serving to better one’s real-life nature.

Posted in Inventing the Future, Memology | Comments closed

Inventing the Future: Act II

Eleven and a half years ago, I started to blog.

I had just joined an open-source computer science project that aimed to use virtual worlds to allow people to work, learn and play together, and I thought it would be fun to narrate our progress.

Let’s try that again.


Today I started work at High Fidelity, a nice summary (and video) of which is in this Technology Review piece.

Posted in history: external milestones and context, Inventing the Future | Comments closed


I continue to find myself thinking about this photo shoot. Something about it keeps compelling that thought, and so I feel that one way to think about it is as art.

There are technical issues that can be thought of in artistic terms. For example, I seem to be upset about the variation in paint schemes. I like my aerospace to be engineered. Isn’t there A Right Answer(TM)? How can there be several best paint schemes? (I have the same objection to BMW’s line about “We only make one thing: the Ultimate Driving Machine.”) And yet my favorite paintings are not photographic. If the display were “too perfect,” I would be instantly distracted by whether or not it had been Photoshopped or computer generated. But how can one create a wabi-sabi aesthetic on an aircraft? Maybe the answer is variations.

Hmm. Not satisfying. If the variations were created as deliberate imperfection, I think a much better choice would be to have an artist deliberately create visual asperity in the same way that game artists make a flat glass screen look like rough and rugged material.

Maybe the variation is symbolic? After all, Airbus is uniquely a product of multiple countries. Maybe the variation gives one a feel for laborers of many countries coming together to put these great birds in the air.  Indeed, the making-of film does give a sense of this. Hmm, again, I think other designs could have achieved that better.

Another consequence of an artistic perspective is that it leaves a lot of room for the enormous sums of money involved. How much is art worth? There is something stirring about the sight of these planes, so who am I to say they did it wrong in some way? How much did this shot cost, and how much is it worth?


Posted in Inventing the Future, Memology | Comments closed

Billion Dollar Program Management

This picture is from petapixel. I’m still trying to wrap my head around this.
The cost of this photo shoot (fuel, chase planes, pilots, etc.) is estimated north of $75k.
The planes themselves are $300M each, or $1.5B for the five. BILLION.
But of the five planes, there are at least four different paint schemes. This is pre-release of the plane.
Think of the direct cost to produce a new paint scheme for one of these, and the implications for schedule and coordination (e.g., getting the five planes in the same place at the same time with the paint dry), and that, nonetheless, some number of project/program managers approved the changes. Nay, demanded the changes.
  • If it was right to do so, there is a staggering amount of cost at stake for what most engineers would mistakenly think is silly. (Heck, I thought it was wrong. But I’m not sure I was right!)
  • If it was wrong to do so, there is a staggering amount of money and time being mis-applied.
Am I nuts to wonder about such things? Have I been too long at startups, where all I can think of is what it takes to get it out the door and not muck with the schedule and the dependencies?
(I’ll promise to post more with some reflection, respecting what people have already shared with me out-of-band.)
Posted in Inventing the Future, workflow | Comments closed

What town contains Apple’s flagship Palo Alto store?

The answer is not Menlo Park. We’ve all made products that don’t work as well in the field as we’d like, but the Apple Maps folks really have to get out more.

Apple Maps labeling downtown Palo Alto as Downtown Menlo Park.

And Safari froze when uploading this image to WordPress. Had to use Chrome…


Posted in General, Inventing the Future, Software, workflow | Comments closed
