Broadcasters Try To Embed Denial of Service Trojan Horse In White Spaces Rules

The official agenda for the FCC’s September Open Meeting on Thursday lists the broadcast white spaces as one of the items. This Order will resolve the details left hanging from the 2008 Order (although it now appears that it will not select the database operator), finally allowing development of this technology and forming the foundation for the next generation of unlicensed wireless technology.

Or maybe not. Even more than usual, this Order relies on getting all the details right. The limitations and interference mitigation mechanisms have left very little in the way of usable spectrum in the largest urban markets most attractive to manufacturers. Lose what’s left and you lose the national markets necessary to interest developers and achieve economies of scale. Do anything further to drive up the cost of manufacture or add a new layer of uncertainty, and would-be developers – who have already been at this for 8 years and poured millions of dollars into prototypes and pilot projects – will likely pull the plug and walk away. Anyone who remembers such promising technologies as ultrawideband should recognize the death-by-a-thousand-cuts approach favored by incumbents.

[We’re having some technical issues here at Wetmachine, so I can’t link back to my previous posts on White Spaces. Sorry about that. Hopefully it will get resolved soon.]


Allow me to select one example out of many to illustrate how a seemingly harmless little change in some small technical rule makes the difference between a viable service and something too expensive to develop profitably. Under the rules established in 2008, we have a database that keeps track of what channels are available. A low-power mobile device can either operate in “Mode 2,” which means that it can independently contact the database, or it can operate in “Mode 1,” which means that it checks with a Mode 2 device or fixed base station to find out what channels are available. I refer to this as a “ping,” because it is reminiscent of the ping utility used to check whether another machine on a network is reachable.

Right now, the rules require a Mode 2 device (the kind that accesses the database directly) to ping the database every 24 hours. Mode 1 devices “listen” to Mode 2 devices, according to the 2008 Order (which I understand to mean “get told when an actual change occurs”). The broadcasters want the Mode 1 devices to ping the Mode 2s every 60 seconds and want Mode 2 to ping the database every 15 minutes, if not more frequently. Since television broadcast towers are big stationary things, not Ents marching on Isengard, one may ask why devices need to check more than once a day. In response, broadcasters explain that if, some day, some news team somewhere might possibly be running down the street after some hot news lead, and if they ran into someone using a smart phone with white spaces capability, it might, possibly, cause some sort of interference with the mobile news crew’s wireless microphone system.
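To make the two modes concrete, here is a minimal sketch of the arrangement – the class and method names are my own illustration, not anything drawn from the FCC rules or a real implementation:

```python
# Purely illustrative sketch of the 2008 architecture; all names
# and data here are hypothetical, not from the FCC rules.

class ChannelDatabase:
    """The TV bands database: maps a location to open channels."""
    def available_channels(self, location):
        # In reality, a geo-location query against registered
        # broadcasters and licensed wireless microphone operations.
        return {21, 27, 41}  # placeholder answer

class Mode2Device:
    """Geo-locates itself and queries the database directly."""
    def __init__(self, db, location):
        self.db = db
        self.location = location
        self.channels = set()

    def refresh(self):
        # 2008 rules: required once every 24 hours.
        # Broadcaster proposal: once every 15 minutes.
        self.channels = self.db.available_channels(self.location)

class Mode1Device:
    """Cannot reach the database; relies on a Mode 2 device."""
    def __init__(self, mode2):
        self.mode2 = mode2

    def channels(self):
        # 2008 rules: "listen" -- updates arrive when something
        # actually changes. Broadcaster proposal: actively ping
        # the Mode 2 device every 60 seconds.
        return self.mode2.channels
```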

That’s a heck of a lot of “ifs” and “possiblies” — especially in light of the fact that the actual engineers at the FCC have examined this thing to death over the last 8 years and determined that (a) that’s kind of unlikely, and (b) even if it did happen, it’s not clear you’d get any interference worth noticing. Still, argue the broadcasters to the non-engineers on the 8th floor, what is the harm in guarding against such an unlikely scenario by mandating check-ins once a minute for Mode 1 devices and every 15 minutes for Mode 2 devices? After all, how hard can it be to send a little ping every minute?

Let’s work it out. There are 1,440 minutes in a day. So every Mode 1 device will go from “listening” (the current rule) to a Mode 2 device to pinging the Mode 2 device 1,440 times a day. Meanwhile, Mode 2 devices will go from pinging the database once every 24 hours to 96 times every 24 hours.
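The arithmetic is simple enough to check in a few lines of Python:

```python
MINUTES_PER_DAY = 24 * 60            # 1,440

mode1_pings_per_day = MINUTES_PER_DAY // 1    # one ping per minute
mode2_pings_per_day = MINUTES_PER_DAY // 15   # one ping per 15 minutes

print(mode1_pings_per_day)   # 1440 -- up from effectively zero ("listening")
print(mode2_pings_per_day)   # 96   -- up from 1 per day
```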

That’s quite a load change. Sure, it’s a low-power ping. But multiply something low-power by 1,440 and it starts to add up. For the Mode 1 devices, it becomes a serious battery killer. For the Mode 2 “server” devices, however, the burden multiplies with every Mode 1 “client.” On the front end, rather than letting Mode 1 devices listen, the Mode 2 device must devote processing time and power to answering the same question 1,440 times a day, and every new Mode 1 device added piles on another 1,440 pings a day. On the back end, the Mode 2 device must also ping the database every 15 minutes, increasing the total number of database pings from 1 every 24 hours to 96 every 24 hours.
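A toy calculation shows how the daily load on a single Mode 2 device grows with each Mode 1 client under the proposed intervals – my own back-of-the-envelope function, not anything from the record:

```python
def mode2_daily_load(num_mode1_clients,
                     client_interval_min=1,    # proposed: ping every minute
                     db_interval_min=15):      # proposed: ping every 15 min
    """Total pings one Mode 2 device must handle per day.
    Hypothetical arithmetic under the broadcaster proposal."""
    minutes = 24 * 60
    client_pings = num_mode1_clients * (minutes // client_interval_min)
    db_pings = minutes // db_interval_min
    return client_pings + db_pings

for n in (1, 10, 100):
    print(n, mode2_daily_load(n))
# 1 1536
# 10 14496
# 100 144096
```

The growth is linear rather than exponential, but on a battery-powered consumer device, another 1,440 pings per day for every client is a real cost.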

At best, this increases costs throughout the entire system to the point where it threatens the viability of the technology. That’s a heck of a cost to pay “just to be safe” from the off chance that some hypothetical news crew of the future might experience a minor burst of interference and deprive us of Lindsay Lohan’s next brilliant utterance. But more likely this “minor change” causes the system to collapse under the increased demand. Remember, for the Mode 2 devices this means pings from every Mode 1 device arriving at 60-second intervals – another 1,440 pings per day for every new device that turns on.

Those familiar with network security may recognize this as strikingly similar to a “distributed denial of service” (DDoS) attack, in which an attacker floods a server with requests from an ever-growing number of machines until the system becomes overloaded. The same dynamic comes into play here. Except that instead of being planted in the system by a malicious hacker, this gets planted in the network by well-meaning FCC Commissioners at the insistence of broadcasters who have spent the last 8 years trying to convince anyone gullible enough that white spaces means the end of broadcast television. Lest you think I exaggerate in the heat of advocacy rhetoric, I invite you to review this “educational” film, Your Neighbor’s Static, pushed out by the broadcasters to persuade regulators and the public that if you approve white spaces devices, poor Grandma will not be able to watch her programs because of the interference from mean old Mr. Google and Ms. Microsoft running their nasty interfering white spaces devices next door.

The FCC absolutely needs to get this right, and get it right now. Developers have hung on for 8 years, in the face of endless testing designed to address the criticisms from incumbents with every interest in remaining unsatisfied. For the last two years, developers, the FCC, and the NTIA have looked to the white spaces architecture as providing the next generation of innovation for unlicensed technology. If we blow it here, odds are good that the companies looking to develop this technology in the United States will roll their eyes and head for other countries, like China and Brazil, that want to get serious about new wireless technologies.

Stay tuned . . . .

One Comment

  1. There are several aspects to your scenarios that are quite lacking (and misleading) as they apply to the real world …

    First, as far as the reasons to contact the database more frequently, there are additional considerations:

    a) It’s more than just ENG wireless mics that are highly dynamic; location shoots for movies and TV (yes, Part 74 licensed operations) are not always scheduled firmly in advance, and last minute wireless equipment gets added to scheduled live broadcast events. (In case you forgot, all this is the audio for most of that content you want people to be able to access more readily through their TVBDs.)

    b) Since spectrum sensing never did work well for Part 74 devices, it’s no longer a requirement, which means there’s no backup for a failed or untimely database look up, which

    c) as it stands now, is required only once every 24 hours with a 24 hour grace period: Should the TVBD fail to establish a link to the database, it’s granted a 24 hour grace period before its next look up. Since the look ups only need to occur by midnight, if the failed attempt occurred at 12:01am, the TVBD is not required to look up until midnight of the following night – almost 48 hours later. In our highly dynamic and mobile society, that’s far too long a wait.
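    A quick way to check that worst case, assuming look ups are due by midnight (the dates here are arbitrary placeholders):

    ```python
    from datetime import datetime

    # Hypothetical worst case: the lookup fails just after midnight,
    # and the next lookup isn't required until midnight of the
    # *following* night (24 hour cycle + 24 hour grace period).
    failed_attempt = datetime(2010, 9, 20, 0, 1)   # 12:01 am
    next_required  = datetime(2010, 9, 22, 0, 0)   # midnight, following night
    print(next_required - failed_attempt)   # 1 day, 23:59:00 -- almost 48 hours
    ```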

    Second, your presumption that increased look ups will overload the system is also misguided; all one has to do is look to cellular/PCS architectures (both GSM and CDMA) to understand that constant polling and validation in subscriber units – where more than just a few bits of data are exchanged – along with rather substantial battery life, has been a reality for a number of years (and that’s while you’re making a phone call or surfing the web). Further, as the FCC is looking at permitting multiple databases (another factor that is likely to be a problem area, BTW), all those TVBD inquiries will be divided up among several database providers, just as there are multiple cellular/PCS providers in each market. Incidentally, there’s nothing to say the individual base station sites couldn’t buffer geo-location information, for say an hour’s worth, thereby further offloading demands on the databases and backbone connections (something cellular/PCS architectures can’t do).
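    That buffering could be as simple as a time-to-live cache at the base station – a hypothetical sketch reusing the illustrative ChannelDatabase from the post above, not anything specified in the rules:

    ```python
    import time

    class BaseStationCache:
        """Hypothetical one-hour buffer of database answers at a base
        station, so repeated TVBD inquiries don't all reach the database."""
        def __init__(self, db, ttl_seconds=3600):
            self.db = db                # the upstream TV bands database
            self.ttl = ttl_seconds
            self._cache = {}            # location -> (timestamp, channels)

        def available_channels(self, location):
            now = time.time()
            hit = self._cache.get(location)
            if hit and now - hit[0] < self.ttl:
                return hit[1]           # serve the buffered answer
            channels = self.db.available_channels(location)
            self._cache[location] = (now, channels)
            return channels
    ```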

    I’ll grant you that look ups every 60 seconds for fixed bases may be somewhat excessive, but 24-48 hours is ridiculous. Far too much can occur in that time, rendering the database ineffective. Even personal/portable TVBDs probably don’t need a look up every 60 seconds, as long as they must look up whenever they move 100 meters and the 24 hour grace period is eliminated. Something along the lines of once every 30 minutes is likely sufficient if not moving.

    “The FCC absolutely needs to get this right, and get it right now.” With this I agree wholeheartedly.
