Beware of initial content loading.
We’ve just started our open Alpha metaverse at High Fidelity. It works! It’s sure not feature-complete, and just about everything needs improvement, but there’s enough to see what the heck this is all about. It’s pretty darn amazing.
It’s all open source and we do take developer submissions. There’s already a great alpha community of both users and developers — and lots of developer-users. We even contract out to the community for paid improvements — some of which are proposed directly by the community.
So, suppose you’re reading about High Fidelity and seeing videos, and you jump right in. What is the experience like? To participate, you need one medium-sized download called “interface”. Getting and using that is not difficult. To run your own world from your own machine, accessible to others, you need a second download called “stack manager”, which is also easy to get and use. It’s really easy to add content, change your avatar, use voice and lip-sync’d facial capture with your laptop camera, etc. (Make sure you’ve got plenty of light on your face, and that you don’t have your whole family sticking their heads in front of yours as they look at what your laptop is doing. Just saying.)
The biggest problem I encountered — and this is a biggie — is that the initial content is not optimized. You jump in world and the first thing it starts doing is downloading a lot of content. While it’s doing that, the system isn’t responsive. Sound is bad. You can’t tell what the heck is going on or what you should be seeing. We’ve got to do a better job of that initial experience. However, once you’ve visited a place, your machine will cache the content and subsequent visits should be much smoother.
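The cache-after-first-visit behavior is the standard pattern: download once, stash a copy on disk, and serve later visits from the local copy. A toy sketch of the idea (hypothetical names and structure, not High Fidelity's actual code):

```python
import hashlib
import os
import tempfile

class AssetCache:
    """Toy content cache: fetch an asset once over the network,
    then serve subsequent visits from local disk."""

    def __init__(self, cache_dir, fetch):
        self.cache_dir = cache_dir   # where cached copies live
        self.fetch = fetch           # callable: url -> bytes (the slow path)
        self.hits = 0
        self.misses = 0

    def _path_for(self, url):
        # Key the file by a hash of the URL so any URL maps to a safe filename.
        name = hashlib.sha256(url.encode("utf-8")).hexdigest()
        return os.path.join(self.cache_dir, name)

    def get(self, url):
        path = self._path_for(url)
        if os.path.exists(path):     # subsequent visit: cheap local read
            self.hits += 1
            with open(path, "rb") as f:
                return f.read()
        self.misses += 1             # first visit: slow download
        data = self.fetch(url)
        with open(path, "wb") as f:
            f.write(data)
        return data

# Demo: a fake fetcher that records how often the network is actually hit.
calls = []
def fake_fetch(url):
    calls.append(url)
    return b"mesh-data"

with tempfile.TemporaryDirectory() as d:
    cache = AssetCache(d, fake_fetch)
    cache.get("hifi://somewhere/chair.fbx")
    cache.get("hifi://somewhere/chair.fbx")
```

The second `get` never touches the network, which is why a return visit to a place feels so much smoother than the first one.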
Also, your home bandwidth is probably plenty, but your home wifi might not be. If your family are all on Skype, YouTube, and WoW on the same wifi while you’re doing this, it could make things a bit glitchy.
The answer is not Menlo Park. We’ve all made products that don’t work as well in the field as we’d like, but the Apple Maps folks really have to get out more.
Apple Maps labeling downtown Palo Alto as Downtown Menlo Park.
And Safari froze when uploading this image to WordPress. Had to use Chrome…
No, I’m not talking about IBM’s Watson. The Guardian has this short plug for SXSW this month. Among such uber-geeks, note Wetmachine’s own John Sundman heading this panel. I’m very pleased to see so many women on the panel.
I’ll just be sore for a bit. There was no slow motion fear or skid to replay over and over — just a flash of brown in the headlamp, and then I was crumpled on the street. It must have come down through the wooded hillside to the right, or that black hole of a driveway in the apex of the turn. Not the creek on the left, or I would have seen it. The witnesses said it took off into the woods, just leaving behind some fur in the shattered plastic.
I guess I’ll have to take some time off to get the scooter fixed, and a new helmet. Or maybe a small Hummer.
Ouch. Maybe a really big Hummer.
I love this tweet. Sounds like some grumpy old man using the Twitter:
“You know what looks cute but sounds worse than fingernails on chalkboard? Baby red-tailed hawk. #beenscreechingallsummershutupalready”
Julian Lombardi makes some terrific points about asset risk for virtual worlds on his blog. I think the issue is a pretty fertile area for exploration as we all continue to invent new ways of working together, but Blogspot simply doesn’t allow that much content in discussion, so I’ll have to fork it here.
I see the asset risk issue-space as breaking out into at least two dimensions:
* Bit storage vs bit usage
* Point assets vs context
I remember how I felt when the media announced the fait accompli that W would be the next president. Twice. This is worse.
I’ve been bumming about my postings (or lack thereof) lately. I want to write about cool possibilities and what they might mean, but most of what I do can’t be talked about until it is released. It seems like it shouldn’t matter whether you write about what you’re doing versus what you’ve done, but I think it does. I feel like everything I write about the latest cool thing my colleagues or I did ends up sounding like an ad. Not an effective and entertaining thing, but just that it sounds like I’m trying to sell something.
Sorry about that. As far as I am aware, I write to sort out ideas. I was taught that if I can’t name something or talk about it effectively, then I don’t understand it. And I write to document my journey. In both cases, I should be discussing work in progress. But even the entries I made while working at the University of Wisconsin all seem to be about actual working results, rather than projects that I was still designing. And I’m not sure why, but it feels like the out-of-sync aspect is getting worse. There is a commercial relevance. For example, way more than a year ago I had been very happy when a new reader told me what a delight it was to find my blog, and he offered some interesting comments. But it turns out that this fellow was from a ginormous company that is now a (hopefully) happy repeat customer. While I don’t clear anything I write with anyone at work, I can’t pretend that I am unaware of any potential commercial impact. Not sure what to do about all that.
I don’t remember hearing the phrase “time shifting” before VCRs and DVRs. I now appreciate the value in being able to capture something while I’m doing something else and then view the capture later when I think I’ll have more time. With digital photography I can easily and sloppily capture my world and shift the difficult task of composition and editing to a later time. (Like, after I’m dead maybe.) I thought I learned in economics that land was the one universally limited resource, but I think that finite time is far more significant. Any tool that helps me shift time is valuable.
From an announcement one second before its time, by Google:
“Research group switches on world’s first ‘artificial intelligence’ tasked-array system.
For several years now a small research group has been working on some challenging problems in the areas of neural networking, natural language and autonomous problem-solving. Last fall this group achieved a significant breakthrough: a powerful new technique for solving reinforcement learning problems, resulting in the first functional global-scale neuro-evolutionary learning cluster.
Since then progress has been rapid, and tonight we’re pleased to announce that just moments ago, the world’s first Cognitive Autoheuristic Distributed-Intelligence Entity (CADIE) was switched on and began performing some initial functions. It’s an exciting moment that we’re determined to build upon by coming to understand more fully what CADIE’s emergence might mean, for Google and for our users. So although CADIE technology will be rolled out with the caution befitting any advance of this magnitude, in the months to come users can expect to notice her influence on various google.com properties. Earlier today, for instance, CADIE deduced from a quick scan of the visual segment of the social web a set of online design principles from which she derived this intriguing homepage.
These are merely the first steps onto what will doubtless prove a long and difficult road. Considerable bugs remain in CADIE’s programming, and considerable development clearly is called for. But we can’t imagine a more important journey for Google to have undertaken.”
Follow the links through to the autoblog. Right out of Cheap Complex Devices.