Because You Can

Ever since Shelley’s “Frankenstein”, the distinguishing characteristic of science fiction (as opposed to fantasy and other literature) has been the postulation that beings can change the circumstances of the world in which they live. We can alter the human condition, for better or worse. An idea of the last few decades has been that we can create an alternative reality for ourselves that is better than the one we inhabit in the flesh. For example, the movie “Avatar” has its characters access an improved natural world through a virtualized experience.

This terrific short blog applies this idea wonderfully to learning and collaboration. “The real power of a virtual immersive environment is the ability to transport the learner or collaborators into an environment that is ideally suited for the learning or collaborating that needs to take place and this usually requires an altering of the spaces.”

In principle, we can abstractly virtualize such an experience with 2D photographs, or even 1D text, but that doesn’t tend to cross the threshold of immersion necessary for deep learning and deep collaboration. As this commenter on the above puts it, “In most 2-D meeting tools, the data is the center of focus, not the human. Think about a Web meeting. The leader is simply showing participants slides. But the participants are not interacting with the information, nor one another.” Simply reading about nature or viewing it from a helicopter was not enough for the characters in Avatar; they had to “be” there and interact with it.

A Fatal Exception Has Occurred In Your White Spaces Sensing Device

It would be funny were it not so easy for NAB to exploit.

The Microsoft prototype shut itself down last week and would not restart. Users familiar with MS products that are scheduled for release, never mind pre-beta versions, will find this so unremarkable as to wonder at the sensation. It ranks right up there with “Apple denies latest i-rumor.”

Unsurprisingly, however, the folks opposed to the use of white spaces (primarily the broadcasters and the wireless microphone folks, with a dash of the cable folks thrown in for good measure) will spin this as proof that the entire technology for sensing whether a channel is occupied has “failed.” This ignores the other prototypes, of course (Phillips and Google), and it ignores the fact that the failure had nothing to do with the sensing (the thing being tested). Finally, of course, it ignores the fact that this is a proof-of-concept prototype.

The fact is that the FCC testing shows “sensing” as a technology works at levels that easily detect operating television channels and even wireless microphones. In fact, it is too bloody sensitive. In a foolish effort to appease the unappeasable, the companies submitting prototypes keep pushing the level of sensitivity to the point where the biggest problem in recent rounds appears to be “false positives,” i.e., treating adjacent channels as “occupied.”

As a proof of concept, that should be a success. The testing demonstrates that you can detect signals well below the threshold needed to protect existing licensees. Logically, the next step would be to determine the appropriate level of sensitivity to accurately protect services, set rules, and move on to actual device certification based on a description of a real device.
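To make the false-positive problem concrete, here is a minimal, purely illustrative sketch of threshold-based energy detection. Every number below (signal levels, noise floor, thresholds) is hypothetical and not drawn from the FCC test data; the point is only that pushing the detection threshold ever lower eventually flags adjacent-channel leakage as an “occupied” channel.

```python
# Illustrative only: a toy energy detector for white spaces sensing.
# All power levels and thresholds are hypothetical, not FCC test values.

def channel_occupied(measured_dbm: float, threshold_dbm: float) -> bool:
    """Declare a channel occupied if measured power meets or exceeds the threshold."""
    return measured_dbm >= threshold_dbm

# Hypothetical measurements (dBm) for three channels:
channels = {
    "occupied TV channel":  -60.0,   # strong broadcast signal
    "adjacent channel":    -105.0,   # only leakage from the neighbor
    "truly vacant channel": -115.0,  # noise floor
}

for threshold in (-90.0, -110.0):  # a moderate vs. an aggressive threshold
    print(f"\nThreshold = {threshold} dBm")
    for name, power in channels.items():
        status = "occupied" if channel_occupied(power, threshold) else "vacant"
        print(f"  {name:22s}: {status}")

# With the aggressive -110 dBm threshold, the adjacent channel is flagged as
# "occupied" -- a false positive -- even though only the TV channel is in use.
```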

But that is not how it works in NAB-spin land. Instead, NAB keeps moving the bar and inventing all sorts of new tests for the devices to “fail.” For example, the initial Public Notice called for prototypes for “laboratory testing.” MS and Phillips submitted prototypes that performed 100% in the lab. But then, the MS people did something very foolish, but very typical — they decided their laboratory device was good enough for field testing. No surprise, it did not work as well in the field as in the lab. As this was a laboratory prototype, the failure to perform flawlessly in the field should have been a shrug — it would have been astounding beyond belief if a prototype designed for the lab had worked perfectly the first time in the field. But the fact that the prototype did not work in the field was widely declared a “failure” by NAB, which unsurprisingly gave itself lots of free advertising time to spin the results this way.

So the FCC went to round two, and again the NAB and white spaces opponents have managed to move the bar so they can again declare a “failure.” Back in 2004, when the FCC first proposed opening the white spaces to unlicensed use, it concluded that operation of white spaces devices would not interfere with licensed wireless microphone users. The FCC has never reversed that determination. Unsurprisingly, businesses developing prototypes according to the FCC’s proposed rules have not taken particular care to address wireless microphones. Because the FCC explicitly said “don’t worry about them.”

But suddenly, if the devices can’t accurately sense and detect wireless microphones, they will be “failures.” It doesn’t matter that the devices have proven they can protect wireless microphones. It doesn’t matter that Google has proposed additional ways of protecting wireless microphones besides sensing. As long as NAB can frame what defines “failure” (rest assured, there will never be any successes if NAB gets to call the tune), and can keep changing that definition at will, the political environment will ensure that the actual engineering is irrelevant.

Which is why the companies need to stop trying to placate the NAB by agreeing to an endless series of tests with ever-shifting criteria. And OET needs to write up a report that does what the initial notices promised: use the data collected from the prototypes to determine whether the concept works and, if so, set appropriate technical standards. The prototypes have proven they can detect signals with a sensitivity better than an actual digital television set or wireless microphone receiver, so the “proof of concept” aspect stands proven. Rather than buy NAB spin, the next step should be to determine what level of sensitivity to set as the standard.

Hopefully, the Office of Engineering and Technology, which is conducting the tests, will not suffer the fate of the Microsoft prototype and shut down under pressure.

Stay tuned . . . .

Definitely Not Smarter Than the Average Bear

Much of the press surrounding the first two days of the FCC’s 700 MHz auction has been like this Information Week story. I confess to being both amazed at the shallowness of the reporting and amused at its gloom and doom tone. To hear the press tell it, it’s time to be very bearish on this auction.

A look at historical precedent is salutary. The FCC’s Integrated Spectrum Auction System files for Auction 66 and Auction 73 are the places to start.

At the end of round four in Auction 66 (AWS-1), the high bids for the EAs (Economic Areas), CMAs (Cellular Market Areas), and REAGs (Regional Economic Area Groupings) were, respectively, 4.15%, 7.09%, and 12.03% of the final net PWB (provisionally winning bid) prices, with 47.84% of licenses receiving at least one bid. At the end of round four in Auction 73 (700 MHz Band), the high bids for the EAs (A and E Blocks), CMAs (B Block), REAGs (C Block), and the nationwide D Block license were, respectively, 31.87%, 43.03%, 39.06%, and 26.99% of reserve price, with 83.80% of licenses receiving at least one bid.

Auction 66 netted $13.7 billion. Auction 73 has a reserve price threshold of $10,386,011,520. By any objective criteria, Auction 73 is off to a much better start than Auction 66 was. The fact that the D Block has had only one bid in the first four rounds isn’t terribly unusual; several licenses that eventually went for very substantial sums in Auction 66 had very little early-round action. It’s important to point out that auctions with relatively high reserve prices tend to exhibit slow convergence of bidding toward the reserve price and give bidders significant incentive to try to obtain the license for as little over the reserve price as possible. When this tendency is coupled with the FCC’s bidding increment rules, it is clear that the auction is going to take some serious time, and rather impressive how close to the reserve price the bidding already is at so early a stage.
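For readers who want to reproduce this kind of round-by-round comparison, here is a minimal sketch of the two statistics cited above: aggregate high bids as a percentage of a benchmark (final net prices for Auction 66, reserve prices for Auction 73), and the share of licenses that have drawn at least one bid. The per-license figures shown are hypothetical placeholders, not actual auction data; real numbers would come from the FCC’s Integrated Spectrum Auction System downloads.

```python
# Sketch of the two round-summary statistics discussed above.
# License data below is hypothetical; real figures come from the FCC's
# Integrated Spectrum Auction System (ISAS) result files.

def round_summary(licenses):
    """licenses: list of dicts with 'high_bid' (0 if no bid yet) and
    'benchmark' (reserve price or final net price, depending on the comparison)."""
    total_high = sum(l["high_bid"] for l in licenses)
    total_benchmark = sum(l["benchmark"] for l in licenses)
    with_bids = sum(1 for l in licenses if l["high_bid"] > 0)
    return {
        "pct_of_benchmark": 100.0 * total_high / total_benchmark,
        "pct_with_bids": 100.0 * with_bids / len(licenses),
    }

# Hypothetical round-4 snapshot for a handful of licenses (dollar amounts made up):
snapshot = [
    {"high_bid": 1_200_000, "benchmark": 3_000_000},
    {"high_bid":         0, "benchmark": 5_000_000},
    {"high_bid":   750_000, "benchmark": 2_000_000},
    {"high_bid":   400_000, "benchmark": 1_000_000},
]

print(round_summary(snapshot))
# -> roughly 21.4% of benchmark prices, with 75% of licenses receiving a bid
```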

Auction 66 ran 161 rounds. I expect Auction 73 to run at least 100 rounds, and probably significantly longer. It is much too early to announce that the results of Auction 73 are disappointing… unless you know as little about how FCC spectrum auctions actually work as much of the press appears to.

The 77% Solution, or Even with Three Different Methods You Still Get a Take Rate Greater than 70%

There has long been reason to suspect the data on U.S. cable coverage and subscribers that the cable industry provides to reporting services like Warren Communications News, Kagan Research, and Nielsen Media Research, precisely because the cable industry has considerable incentive to lie about it. Specifically, the industry has incentive to under-report both coverage and subscribers so as to avoid a finding that the 70/70 limit – that seventy percent of American homes are passed by cable and that seventy percent of the homes passed subscribe – has been reached, thus triggering additional FCC regulation of the industry. The numbers reported in these sources have danced around the mid- to upper-60% range since 2004, only tipping over in Warren Communications News’ Television and Cable Factbook, which recently reported a 71.4% take rate to the FCC.[1] When it became clear that the FCC was prepared to take action to invoke the 70/70 rule on the basis of the Warren data, the managing editor of the Factbook immediately called its data into question in an interview in Communications Daily:

The figures from the Television and Cable Factbook aren’t well suited to determining whether the threshold has been met, said Managing Editor Michael Taliaferro. Taliaferro said Factbook figures understate the number of homes passed by cable systems — and the number of subscribers — because not all operators participate in its survey. “More and more operators are just not giving up” those numbers, he said. “We could go with two dozen footnotes when we start to report this data.” Cable operators participating in the Factbook survey said they passed 94.2 million homes and had 67.2 million subscribers.

The FCC official who asked him for the cumulative figure didn’t say how it would be used, Taliaferro said. If he had known, he would have provided a list of caveats, he said. “It would have been a very lengthy email,” he said. Taliaferro said he did point out the shortcomings in a phone conversation with the FCC official but didn’t put it in writing because he wasn’t asked to. “I had no idea what they were doing with it.”[2]
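The arithmetic behind the take rate is straightforward given the Factbook survey figures quoted above. This is a minimal sketch, assuming “take rate” here means subscribers divided by homes passed, which is how the quoted figures line up with the roughly 71% number reported to the FCC:

```python
# Take rate implied by the Television and Cable Factbook survey figures quoted above.
# Assumes take rate = subscribers / homes passed (by participating operators).

homes_passed = 94.2e6   # homes passed by cable operators in the Factbook survey
subscribers  = 67.2e6   # cable subscribers reported by those operators

take_rate = subscribers / homes_passed
print(f"Take rate: {take_rate:.1%}")   # ~71.3%, in line with the ~71.4% reported
```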

Evocative Performance vs. Information Transmission

An interesting thing happens when a medium has enough bandwidth to be a “rich medium.” It crosses a threshold from merely being an informational medium to being an evocative medium.

Consider radio, which was initially used to carry Morse code over the wireless tracts between ships at sea and shore. The entire communications payload of a message could be perfectly preserved by notating the discrete dots and dashes. As with digital media, the informational content was completely preserved regardless of whether it was carried by radio, telegraph, or paper. But when radio started carrying voice, there was a communications payload that could not be completely preserved in other media. The human voice conveys more subtlety than mere words.
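As a small illustration of that point about discrete notation, here is a toy sketch: once a message is reduced to dots and dashes, the symbol sequence carries the entire payload no matter what physical channel (radio, telegraph wire, paper) it travels over. The code table is abbreviated for brevity.

```python
# Toy illustration: a message reduced to Morse symbols is carried losslessly
# by any medium that can convey the discrete dots and dashes.
# (Abbreviated code table; a few International Morse letters only.)

MORSE = {"S": "...", "O": "---", "E": ".", "T": "-", "A": ".-", "N": "-."}
REVERSE = {code: letter for letter, code in MORSE.items()}

def encode(text: str) -> str:
    return " ".join(MORSE[ch] for ch in text.upper())

def decode(signal: str) -> str:
    return "".join(REVERSE[code] for code in signal.split())

message = "SOS"
over_any_medium = encode(message)            # "... --- ..."
assert decode(over_any_medium) == message    # payload fully preserved
print(over_any_medium)
```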

Thus far, the Internet has been mostly informational. We do use it to transmit individual sound and video presentational work, but the Internet platforms in these situations are merely the road on which these travel rather than the medium itself. (My kids say they are listening to a song or watching a video, rather than that they are using the Internet or that they are on-line. The medium is the music and video.)

So, what happens when an Internet platform supports voice and video, both live and prerecorded, and allows individual works to be combined and recombined and annotated and added to, and for the whole process to be observed? Do “sites” become evocative? Do presentations on them become a performance art? Do we lose veracity or perspicuity as the focus shifts to how things are said rather than what is said? Here’s a radio performance musing on some of this and more.

I think maybe this is the point where the medium becomes the message. If a technology doesn’t matter because everything is preserved in other forms, then the technology isn’t really a distinct medium in McLuhan’s sense.