The T-Mobile Data Breach and Your Basic Primer on CPNI – Part I: The Major Background You Need to Know for This to Make Sense.

T-Mobile announced recently that it experienced a major cybersecurity breach, exposing personal information (including credit card numbers) for at least 53 million customers and former customers. Because T-Mobile is a Title II mobile phone provider, this automatically raises the question of whether T-Mobile violated the FCC’s Customer Proprietary Network Information (CPNI) rules. These rules govern, among other things, the obligation of telecommunications service providers to protect CPNI and how to respond to a data breach when one occurs. The FCC has confirmed it is conducting an investigation into the matter.


It’s been a long time since we’ve had to think about CPNI, largely because former FCC Chair Ajit Pai made it abundantly clear that he thought the FCC should not enforce privacy rules. Getting the FCC to crack down on even the most egregious violations, such as selling super-accurate geolocation data to bounty hunters, was like pulling teeth. But back in the Wheeler days, CPNI was a big deal, with Enforcement Bureau Chief Travis LeBlanc terrorizing incumbents by actually enforcing the law with real fines and stuff (much to the outrage of Republican Commissioners Ajit Pai and Mike O’Rielly). Given that Jessica Rosenworcel is now running the Commission, and that both she and Democratic Commissioner Geoffrey Starks are strong on consumer protection generally and privacy protection in particular, it seems like a good time to fire up the long-disused CPNI neurons with a review of how CPNI works and what might or might not happen in the T-Mo investigation.


Before diving in, I want to stress that getting hacked and suffering a data breach is not, in and of itself, proof of a rule violation or cause for any sort of fine or punishment. You can do everything right and still get hacked. But the CPNI rules impose obligations on carriers to take suitable precautions to protect CPNI, as well as obligations on what to do when a carrier discovers a breach. If the FCC finds that T-Mobile acted negligently in its data storage practices, or failed to follow appropriate procedures, it could face a substantial fine, in addition to an FCC requirement that it come up with a plan to prevent this sort of hack going forward.


Assuming, of course, that the breach involved CPNI at all. One of the fights during the Wheeler FCC involved what I will call the “broad” view of CPNI v. the “narrow” view of CPNI. Needless to say, I am an advocate of the “broad” view, and think that’s the proper reading of the law. But I wouldn’t be providing an accurate primer if I didn’t also cover the “narrow” view advanced by the carriers and by Pai and O’Rielly.


Because (as usual) actually understanding what is going on and its implications requires a lot of background, I’ve broken this up into two parts. Part I gives the basic history and background of CPNI, and explains why this provides the first test of how the Biden FCC will treat CPNI enforcement. Part II will look at the application of the FCC’s rules to the T-Mobile breach and what issues are likely to emerge along the way.


More below . . .


A Tax on Silicon Valley Is A Dumb Way to Solve the Digital Divide, But Might Be A Smart Way To Protect Privacy.

Everyone talks about the need to provide affordable broadband to all Americans. This includes not only finding ways to get networks deployed in rural areas on par with those in urban areas, but also making broadband affordable where networks already exist. As a recent study showed, more urban folks are locked out of home broadband by factors such as price than do without broadband because of the lack of a local access network. The simplest answer would be to include broadband (both residential and commercial) in the existing Universal Service Fund. Indeed, Rep. Doris Matsui has been trying to do this for about a decade. But, of course, no one wants to impose a (gasp!) tax on broadband, so this goes nowhere.


Following the Washington maxim “don’t tax you, don’t tax me, tax that fellow behind the tree,” lots of people come up with ideas of how to tax folks they hate or compete against. This usually includes streaming services such as Netflix, but these days is more likely to include social media — particularly Facebook. The theory being that “we want to tax our competitors,” or “we hates Facebook precious!” Um, I mean “these services consume more bandwidth or otherwise disproportionately benefit from the Internet.” While this particular idea is both highly ridiculous (we all benefit from the Internet, and things like cloud storage take up more bandwidth than streaming services like Netflix) and somewhat difficult — if not impossible — to implement in any way related to network usage (which is the justification), it did get me thinking about what sort of a tax on Silicon Valley (and others) might make sense from a social policy perspective.


What about a tax on the sale of personal information, including the use of personal information for ad placement? To be clear, I’m not talking about a tax on collecting information or on using the information collected. I’m talking about a tax on two types of commercial transactions: selling information about individuals to third parties, and indirectly selling information to third parties via targeted advertising. It would be sort of a carbon tax for privacy pollution. We could even give “credits” to companies that reduce the amount of personal information they collect (although I’m not sure we want to allow firms to trade them). We could have additional fines for data breaches, the way we do for toxic waste spills that require cleanup.
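
Since the mechanics may be easier to see in arithmetic than in prose, here is a minimal Python sketch of how such a levy might compute out. Everything in it is my own invention for illustration — the 5% rate, the privacy_tax function, the revenue categories, and the credit mechanism — and nothing here comes from an actual legislative proposal.

    # Toy model of the "privacy pollution tax" floated above. The 5% rate,
    # the two covered revenue categories, and the credit formula are all
    # hypothetical assumptions for illustration, not an actual proposal.

    TAX_RATE = 0.05  # assumed levy on covered transactions


    def privacy_tax(data_sale_revenue: float,
                    targeted_ad_revenue: float,
                    data_minimization_credit: float = 0.0) -> float:
        """Tax on the two covered transaction types: direct sales of personal
        information to third parties, and indirect sales via targeted ad
        placement. A credit (which cannot push the tax below zero) rewards
        companies that collect less in the first place."""
        gross = TAX_RATE * (data_sale_revenue + targeted_ad_revenue)
        return max(gross - data_minimization_credit, 0.0)


    # Example: $200M in direct data sales, $1B in targeted ads, $10M credit.
    print(privacy_tax(200e6, 1e9, 10e6))  # -> 50000000.0

As with a carbon tax, the work is done by the incentive at the margin: collecting and selling less personal information directly lowers the bill.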


Update: I’m apparently not the first person to think of something like this, although I’ve expanded it a bit to address privacy generally and not just targeted advertising. As Tim Karr pointed out in the comments, Free Press got here ahead of me back in February — although with a more limited proposed tax on targeted advertising. Also, Paul Romer wrote an op-ed on this in the NYT last May. I have some real problems with the Romer piece, since he seems to think that an even more limited tax on targeted advertising is enough to address all the social problems, and that we should forget about either regulation or antitrust. Sorry, but just as no one serious about global climate change thinks a carbon tax alone will do the trick, no one serious about consumer protection and competition should imagine that a privacy pollution tax alone is going to force these companies to change their business models. This is a push in the right direction, not the silver bullet.


I elaborate below. . . .


Information Fiduciaries: Good Framework, Bad Solution.

By and large, human beings reason by analogy. We learn a basic rule, usually from a specific experience, and then generalize it to any new experience we encounter that seems similar. Even in the relatively abstract area of policy, human beings depend on reasoning by analogy. As a result, when looking at various social problems, the first thing many people do is ask “what is this like?” The answer we collectively come up with then tends to drive the way we approach the problem and what solutions we think address it. Consider the differences in policy, for example, between thinking of spectrum as a “public resource” v. “private property” v. “public commons” — although none of these actually describes what happens when we send a message via radio transmission.


As with all human things, this is neither good nor bad in itself. But it does mean that bad analogies drive really bad policy outcomes. By contrast, good analogies and good intellectual frameworks often lead to much better policy results. Nevertheless, most people in policy tend to ignore the impact of our policy frameworks. Indeed, those who mistake cynicism for wisdom have a tendency to dismiss these intellectual frameworks as mere post hoc rationalizations for foregone conclusions. And, in fact, sometimes they are. But even in these cases, the analogies still end up subtly influencing how the policies get developed and implemented. Because law and policy get implemented by human beings, and human beings think in terms of frameworks and analogies.


I like to think of these frameworks and analogies as “deep structures” of the law. Like the way the features of geography impact the formation and course of rivers over time, the way we think about law and policy shapes how it flows in the real world. You can bulldoze through it, forcibly change it, or otherwise ignore these deep structures, but they continue to exert influence over time.


Case in point, the idea that personal information is “property.” I will confess to using this as a shorthand myself since 2016, when I started on the ISP privacy proceeding. In my 2017 white paper on privacy legislative principles, I traced the evolution of this analogy from Brandeis to the modern day, similar to other intangibles such as the ‘right of publicity.’ But as I also tried to explain, this was not meant as actual, real property, but as shorthand for the idea of a general, continuing interest. Unfortunately, as my Public Knowledge colleague Dylan Gilbert explains here, too many people have now taken this framework as meaning ‘treat personal information like physical property that can be bought and sold with exclusive ownership.’ This leads to lots of problems and bad policies, since (as Dylan explains) data is not actually like physical property or even other forms of intangible property.


Which brings me to Professor Jack Balkin of Yale Law School and his “information fiduciaries” theory. (Professor Balkin has co-written pieces about this with several different co-authors, but it’s generally regarded as his theory.) Briefly (since I get into a bit more detail with links below), Balkin proposes that judges can (and should) recognize that the relationship between companies that collect personal information in exchange for services and the people whose information they collect is similar to professional relationships such as doctor-patient or lawyer-client, where the law imposes limitations on your ability to use the information you collect over the course of the relationship.


This theory has become popular in recent years as a possible way to move forward on privacy. As with all theories that become popular, Balkin’s information fiduciary theory has started to get some skeptical feedback. The Law and Political Economy blog held a symposium for information fiduciary skeptics and invited me to submit an article. As usual, my first draft ended up being twice as long as what they wanted. So I am now running the full-length version below.


You can find the version they published here. You can find the rest of the articles from the symposium here. Briefly, I think relying on information fiduciaries for privacy doesn’t do nearly enough, and has no advantage over passing strong privacy legislation at the state and federal levels. OTOH, I do think the idea of a fiduciary relationship between the companies that collect and use personal information and the individuals whose information gets collected provides a good framework for how to think about the relationships between the parties, and therefore what sort of legal rights should govern the relationship.


More below . . .


The Market for Privacy Lemons. Why “The Market” Can’t Solve The Privacy Problem Without Regulation.

Practically every week, it seems, we get some new revelation about the mishandling of user information that makes people very upset. Indeed, people have become so upset that they are actually talking about, dare we say it, “legislating” some new privacy protections. And no, I don’t mean “codifying existing crap while preempting the states.” For those interested, I have a whitepaper outlining principles for moving forward on effective privacy legislation (which you can read here). My colleagues at my employer, Public Knowledge, have a few blog posts on how Congress ought to respond to the whole Facebook/Cambridge Analytica thing and analyzing some of the privacy bills introduced this Congress.


Unsurprisingly, we still have folks who insist that we don’t need any regulation, and that if the market doesn’t provide people with privacy protection, it must be because people don’t value privacy protection. After all, the argument goes, if people valued privacy, businesses would offer services that protect privacy. So if we don’t see such services in the market, people must not want them. Q.E.D. Indeed, these folks will argue, we find that — at least for some services — there are privacy-friendly alternatives. Often these cost money, since you aren’t paying with your personal information. This leads some to argue that it’s simply that people like “free stuff.” As a result, the current Administration continues to focus on finding “market based solutions” rather than figuring out what regulations would actually give people greater control over their personal information and prevent the worst abuses.


But an increasing number of people are wising up to the reality that this isn’t the case. What folks lack is a vocabulary to explain why these “market approaches” don’t work. Fortunately, a Nobel Prize-winning economist named George Akerlof figured this out back in 1970 in a paper called “The Market for Lemons.” Akerlof’s later work on cognitive dissonance in economics is also relevant and valuable. (You can read what amounts to a high-level book report on Akerlof & Dickens’ “The Economic Consequences of Cognitive Dissonance” here.) To summarize: everyone knows that they can’t do anything real to protect their privacy, so they either admit defeat and resent it, or lie to themselves that they don’t care. A few believe they can protect themselves via some combination of services and avoidance I will call the “magic privacy dance,” and therefore blame everyone else for not caring enough to do their own magic privacy dance. This ignores that (a) the magic privacy dance requires specialized knowledge; (b) the magic privacy dance imposes lots of costs, ranging from a monthly subscription to a virtual private network (VPN), to the opportunity cost of forgoing services like Facebook, to the fact that Amazon and Google are so embedded in the structure of the internet at this point that blocking them literally causes large parts of the internet to become inaccessible or slow down to the point of uselessness; and (c) nothing helps anyway! No matter how careful you are, a data breach by a company like Equifax, or a decision by a company you’ve invested in to change its policy, means all your magic privacy dancing amounted to an expensive waste of time.
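
For those who want to watch the lemons logic run, here is a minimal Python simulation of the unraveling. Every number in it is invented for illustration: the ten services, the cost of providing each level of privacy protection, and the assumed willingness to pay of $25 per unit of expected quality. The key premise comes straight from Akerlof: buyers cannot verify privacy claims, so they pay only for the average protection they expect, and the services that actually spend money on privacy get priced out first.

    # A toy run of Akerlof's "Market for Lemons" applied to privacy claims.
    # All numbers are invented. Premise: users cannot verify a service's
    # actual privacy practices, so they pay only for the *average*
    # protection they expect across the whole market.

    # Each service: (true privacy quality on a 0-1 scale, cost to provide it)
    services = [(q / 10, q * 2.0) for q in range(1, 11)]

    WILLINGNESS = 25.0  # assumed payment per unit of *expected* quality

    while True:
        avg_quality = sum(q for q, _ in services) / len(services)
        price = WILLINGNESS * avg_quality  # buyers price in the average only
        remaining = [(q, c) for q, c in services if c <= price]
        if len(remaining) == len(services):
            break  # no one else is priced out; the market stabilizes
        services = remaining  # costly high-privacy providers exit first

    print("surviving privacy qualities:", [q for q, _ in services])
    # -> surviving privacy qualities: [0.1]

Each round, the price falls to match the degraded average, until only the cheapest, least protective service survives. That survivor is the privacy lemon.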


Accordingly, the rational consumer gives up. Unless you are willing to become a hermit, “go off the grid,” pay cash for everything, and do other stuff limited to retired spies in movies, you simply cannot realistically expect to protect your privacy in any meaningful way. Hence, as predicted by Akerlof, rational consumers don’t trust “market alternatives” promising to protect privacy. Heck, thanks to Congress repealing the FCC’s privacy rules in 2017, you can’t even get on to the internet without exposing your personal information to your broadband provider. Even the happy VPN dance won’t protect all your information from leaking out. So if you are screwed from the moment you go online, why bother to try at all?


I explore this more fully below . . .


CPNI Is More Than Just Consumer Privacy — How To Apply It To Digital Platforms.

This is the fourth blog post in a series on regulating digital platforms. A substantially similar version of this was published by my employer Public Knowledge. You can view the full series here. You can find the previous post in this series on Wetmachine here.


“Customer proprietary network information,” usually abbreviated as “CPNI,” refers to a very specific set of privacy regulations governing telecommunications providers (codified at 47 U.S.C. §222) and enforced by the Federal Communications Commission (FCC). But while CPNI provides some of the strongest consumer privacy protections in federal law, it also does much more than that. CPNI plays an important role in promoting competition for telecommunications services and for services that require access to the underlying telecommunications network — such as alarm services. To be clear, CPNI is neither a replacement for general privacy nor a substitute for competition policy. Rather, these rules prohibit telecommunications providers from taking advantage of their position as a two-sided platform. As explained below, CPNI prevents telecommunications carriers from exploiting, for their own advantage, the data that customers and competitors must disclose to the carrier for the system to work.

All of which brings us to our first concrete regulatory proposal for digital platforms. As I discuss below, the same concerns that prompted the FCC to invent CPNI rules in the 1980s and Congress to expand them in the 1990s apply to digital platforms today. First, because providers of potentially competing services must expose proprietary information to the platform for the service to work, platform operators can use their rivals’ proprietary information to offer competing services. If someone sells novelty toothbrushes through Amazon, Amazon can track if the product is selling well, and use that information to make its own competing toothbrushes.
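
To put the competitive half of this in engineering terms, here is a minimal Python sketch of the kind of internal firewall a CPNI-style rule might demand of a platform. Every name in it — the classes, the business units, the permitted-unit list — is a hypothetical illustration of the principle, not a description of any actual system.

    # Sketch of a CPNI-style firewall for a platform: data that merchants
    # must disclose for the platform to function is queryable only by units
    # that operate the platform, never by units that compete with the
    # merchants. All names here are hypothetical illustrations.

    from dataclasses import dataclass


    @dataclass(frozen=True)
    class SalesRecord:
        seller: str       # the third-party merchant who generated the data
        product: str
        units_sold: int


    class MarketplaceDataStore:
        # Units allowed to read seller data: operations, never retail.
        PERMITTED_UNITS = {"fulfillment", "fraud_prevention"}

        def __init__(self) -> None:
            self._records: list[SalesRecord] = []

        def ingest(self, record: SalesRecord) -> None:
            self._records.append(record)

        def query(self, requesting_unit: str) -> list[SalesRecord]:
            if requesting_unit not in self.PERMITTED_UNITS:
                raise PermissionError(
                    f"unit '{requesting_unit}' may not use seller data")
            return list(self._records)


    store = MarketplaceDataStore()
    store.ingest(SalesRecord("ToothCo", "novelty toothbrush", 9000))
    store.query("fulfillment")  # fine: needed to operate the platform
    try:
        store.query("private_label_retail")  # the platform's own retail arm
    except PermissionError as err:
        print(err)  # blocked from mining rivals' sales to copy their products

The design choice mirrors Section 222: the question is never whether the platform holds the data (it must, for the service to work), but which side of the house may use it, and for what.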


Second, the platform operator can compromise consumer privacy without access to the content of the communication by harvesting all sorts of information about the communication and the customer generally. For example, if I’m a mobile phone platform or service, I can tell if you are calling your mother every day like a good child should, or if you are letting her sit all alone in the dark, and whether you are having a long conversation or just blowing her off with a 30-second call. Because while I know you are so busy up in college with all your important gaming and fraternity business, would it kill you to call the woman who carried you for nine months and nearly died giving birth to you? And no, a text does not count. What, you can’t actually take the time to call and have a real conversation? I can see by tracking your iPhone that you clearly have time to hang out at your fraternity with your friends and go see Teen Titans Go To The Movies five times this week, but you don’t have time to call your mother?
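
In case the mom guilt obscures the technical point, here is a toy Python example, with invented call records, showing what a carrier or platform can compute from bare metadata without ever hearing a word of any call:

    # Toy illustration: bare metadata, no call content. Records are invented.

    from datetime import date

    # Each record: (caller, callee, day, duration in seconds)
    call_log = [
        ("you", "mom", date(2018, 8, 1), 30),
        ("you", "pizza place", date(2018, 8, 2), 95),
        ("you", "frat house", date(2018, 8, 2), 1800),
        ("you", "frat house", date(2018, 8, 3), 2400),
    ]

    mom_calls = [c for c in call_log if c[1] == "mom"]
    last_seen = max(c[2] for c in call_log)   # most recent activity overall
    last_mom = max(c[2] for c in mom_calls)   # most recent call to mom
    avg_secs = sum(c[3] for c in mom_calls) / len(mom_calls)

    print(f"calls to mom: {len(mom_calls)}, average length: {avg_secs:.0f}s, "
          f"last one: {(last_seen - last_mom).days} day(s) ago")
    # -> calls to mom: 1, average length: 30s, last one: 2 day(s) ago

Scale that trivial query up to every call, text, app, and location ping, and you have the consumer privacy problem the CPNI rules were written to police.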


As you can see, both to protect consumer privacy and to promote competition and protect innovation, we should adopt a version of CPNI for digital platforms. And call your mother more often. I’m just saying.

Once again, before I dig into the substance, I warn readers that I do not intend to address either whether the regulation should apply exclusively to dominant platforms or what federal agency (if any) should enforce these regulations. Instead, in an utterly unheard-of approach for Policyland, I want to delve into the substance of why we need real CPNI for digital platforms and what that would look like.


Better Privacy Protections Won’t Kill Free Facebook.

Once upon a time, some people developed a new technology for freely communicating with people around the world. While initially the purview of techies and hobbyists, it didn’t take long for commercial interests to notice the insanely popular new medium and rapidly move to displace the amateur stuff with professional content. But these companies had a problem. For years, people had gotten used to the idea that if you paid for the equipment to access the content, you could receive the content for free. No one wanted to pay for this new, high quality (and expensive to make) content. How could private enterprise possibly make money (other than selling equipment) in a market where people insisted on getting new content every day — heck, every minute! — for free?


Finally, a young techie turned entrepreneur came up with a crazy idea. Advertising! This fellow realized that if he could attract a big enough audience, he could get people to pay him so much for advertising that it would more than cover the cost of creating the content. Heck, he even seeded the business by paying people to take his content, just so he could sell more advertising. Everyone thought he was crazy. What? Give away content for free? How the heck can you make money giving it away for free? From advertising? Ha! Crazy kids with their whacky technology. But over the course of a decade, this young genius built one of the most lucrative and influential industries in the history of the world.


I am talking, of course, about William Paley, who built the CBS broadcast network and figured out how to make radio broadcasting an extremely profitable business. Not only did Paley prove that you could make a very nice living giving away content supported by advertising, he also demonstrated that you didn’t need to know anything about your audience beyond the most basic raw numbers and aggregate information to do it. For the first 80 or so years of its existence, broadcast advertising depended on extrapolated guesses about the total aggregate viewing audience and only the most general information about the demographics of viewership. Until the recent development of real-time information collection via set-top boxes, broadcast advertising (and cable advertising) depended on survey sampling and such broad categories as “18-25 year old males” to sell targeted advertising — and made a fortune while doing it.


We should remember this history when evaluating claims by Facebook and others that any changes to enhance user privacy will bring the digital world crashing down on us and force everyone to start paying for content. Setting aside that some people might actually like the option of paying for services in exchange for enhanced privacy protection (I will deal with why this doesn’t happen on its own in a separate blog post), history tells us that advertising can support free content just fine without needing to know every detail of our lives to serve us unique ads tailored to an algorithm’s best guess about our likes and dislikes based on multi-year, detailed surveillance of our every eye-muscle twitch. Despite the unfortunate tendency of social media to drive toward the most extreme arguments even at the best of times, “privacy regulation” is hardly an all-or-nothing proposition. We have a lot of room to address the truly awful problems with the collection and storage of personal information before we start significantly eating into the potential revenue of Facebook and other advertising-supported media.


Mind you, I’m not promising that solid and effective privacy regulation would have no impact on the future revenue-earning power of advertising. Sometimes, and again I recognize this will sound like heresy to a bunch of folks, we find that the overall public interest actually requires that we impose limits on profit-making activities to protect people. But again, and as I find myself explaining every time we debate possible regulation in any context, we don’t face some Manichean choice between libertarian utopia and a blasted regulatory Hellscape where no business may offer a service without filling out 20 forms in triplicate. We have a lot of ways we can strike a reasonable balance that provides users with real, honest-to-God enforceable personal privacy, while keeping the advertising-supported digital economy profitable enough to thrive. My Public Knowledge colleague Allie Bohm has some concrete suggestions in this blog post here. I explore some broader possible theoretical dimensions of this balance below . . . .


Is Net Neutrality (And Everything Else) Not Dead Yet or Pining For the Fjords? Contemplating Trump’s Telecom Policy.

The election of Donald Trump has prompted great speculation over the direction of telecom policy in the near future. Not surprisingly, everyone assumes that the primary Republican goal will be to completely roll back net neutrality and just about every other rule or policy adopted by the Wheeler FCC — perhaps even eliminating the FCC altogether or scaling back its authority to virtual non-existence. Why not? In addition to controlling the White House, Republicans have majorities in the Senate and the House. Jeff Eisenach, the head of Trump’s FCC transition team (now called “Landing Teams”), has been one of the harshest critics of the FCC under both Wheeler and Genachowski. So it is unsurprising to see a spate of articles and blog posts on the upcoming death of net neutrality, broadband privacy, and unlicensed spectrum.


As it happens, I have now been through two transitions where the party with the White House has controlled Congress. In neither case have things worked out as expected. Oh, I’m not going to pretend that everything will be hunky-dory in the land of telecom (at least not from my perspective). But having won things during the Bush years (expanding unlicensed spectrum, for example), and lost things in the Obama years (net neutrality 2010), I am not prepared to lie down and die, either.


Telecom policy — and particularly net neutrality, Title II and privacy — now exists in an unusual, quantum state that can best be defined with reference to Monty Python. On the one hand, I will assert that net neutrality is not dead yet. On the other hand, it may be that I am fooling myself that net neutrality is simply pining for the fjords when, in fact, it is deceased, passed on, has run down the curtain and joined the choir invisible.


I give my reasons for coming down on the “not dead yet” side — although we will need to work our butts off to keep from getting clopped on the head and thrown into the dead cart. I expect the usual folks will call me delusional. However, as I have said a great deal over the years: “If I am delusional, I find it a very functional delusion.”


More below . . . .


Broadband Privacy Can Prevent Discrimination: The Case of Cable One and FICO Scores.

The FCC has an ongoing proceeding to apply Section 222 (47 U.S.C. §222) to broadband. For those unfamiliar with the statute, Section 222 prohibits a provider of a “telecommunications service” from either disclosing information collected from a customer without the customer’s consent, or from using the information for something other than providing the telecom service. While most of us think this generally means advertising, it means a heck of a lot more than that — as illustrated by this tidbit from Cable One.



Phone Industry To The Poor: “No Privacy For You!”

Back in June, the FCC released a major Order on the Lifeline program. Lifeline, for those not familiar with it by that name, is the federal program started in the Reagan era to make sure poor people could have basic phone service by providing them with a federal subsidy. Congress enshrined Lifeline (along with subsidy programs for rural areas) in 1996 as Section 254 of the Communications Act. While most of the item dealt with a proposal to expand Lifeline to broadband, a portion of the Order dealt with the traditional FCC Lifeline program.

As a result, the wireless industry trade association, CTIA, has asked the FCC to declare that poor people applying for Lifeline have no enforceable privacy protections when they provide things like their social security number, home address, full name, date of birth, and anything else an identity thief would need to make their lives miserable. Meanwhile, the US Telecom Association, the trade association for landline carriers, has actually sued the FCC for the right to behave utterly irresponsibly with any information poor people turn over about themselves — including the right to sell that information to third parties.


Not that the wireless carriers would ever want to do anything like that, of course! As CTIA, USTA, and all their members constantly assure us, protecting customer privacy is a number one priority. Unless, of course, they’re running some secret experiments on tracking without notifying customers that accidentally expose customer information to third parties. Oh, and it might take longer than promised to actually let you opt out once you discover it. And in their lawsuit against the FCC’s Net Neutrality rules, they explicitly cite the inability to use customer information for marketing, the inability to sell this information to third parties, and the requirement to protect this information generally as among the biggest burdens of classifying broadband as Title II. But other than that, there is no reason to think that CTIA’s members or USTA’s members would fail to respect and protect your privacy.


So how did the Lifeline Reform Order, which most people assumed was all about expanding Lifeline to broadband, become the vehicle for the phone industry to tell poor people they have no privacy protections when they apply for a federal aid program? I explain below . . .
