Friday, April 15, 2022

Data Competition Won’t Protect Your Privacy


Regulators propose democratizing data and encouraging competition to rein in Big Tech. But such moves won’t go far enough in protecting user privacy.

With the bustle of policy proposals and antitrust enforcement, it looks like the tech giants Google, Apple, Meta, and Amazon will finally be reined in. The New York Times, for example, recently heralded Europe’s Digital Markets Act (DMA) as “the most sweeping legislation to regulate tech since a European privacy law was passed in 2018.” As Thierry Breton, one of the top digital officials in the European Commission, said in the article, “We are putting an end to the so-called Wild West dominating our information space. A new framework that can become a reference for democracies worldwide.”

So, will the DMA, along with all the other policies proposed in the United States, Europe, Australia, and Asia make the digital economy more contestable? Perhaps. But will they promote our privacy, autonomy, and well-being? Not necessarily, as my latest book Breaking Away: How to Regain Control Over Our Data, Privacy, and Autonomy explores.

Today a handful of powerful tech firms – or data-opolies – hoard our personal data. We lose out in several significant ways. For example, our privacy and autonomy are threatened when the data-opolies steer the path of innovation toward their interests, not ours (such as research on artificial neural networks that can better predict and manipulate our behavior). Deep learning algorithms currently require lots of data, which only a few firms possess. A data divide can lead to an AI divide, since access to large datasets and computing power is needed to train algorithms; that, in turn, can lead to an innovation divide. As one 2020 research paper found: “AI is increasingly being shaped by a few actors, and these actors are mostly affiliated with either large technology firms or elite universities.” The “haves” are the data-opolies, with their large datasets, and the top-ranked universities with whom they collaborate; the “have nots” are the remaining universities and everyone else. This divide is not due to differences in industriousness. Instead, it is attributable, in part, to whether the university has access to the large tech firms’ voluminous datasets and computing power. Unless these datasets are “democratized” by providing a “national research cloud,” the authors warn, our innovation and research will be shaped by a handful of powerful tech firms and the elite universities they happen to support.

When data is non-rivalrous, that is, when one party’s use does not deplete it, many more firms can glean insights from the data without affecting its value. As the European Commission has observed, most data are either unused or concentrated in the hands of a few relatively large companies.

Consequently, recent policies, such as Europe’s DMA and Data Act and the U.S.’s American Choice and Innovation Online Act, seek to improve interoperability and data portability and to reduce the data-opolies’ ability to hoard data. By democratizing the data, many more firms and non-profit organizations can glean insights and derive value from it.

Let us assume that data sharing can increase the value for the recipients. The critical questions are how we define value and for whom. Suppose one’s geolocation data is non-rivalrous. Its value does not diminish if it is used for multiple, non-competing purposes:

  • Apple could use geolocation data to track the user’s lost iPhone.
  • The navigation app could use the iPhone’s location for traffic conditions.
  • The health department could use the geolocation data for contact tracing (to assess whether the user came into contact with someone with COVID-19).
  • The police could use the data for surveillance.
  • The behavioral advertiser could use the geolocation data to profile the individual, influence her consumption, and assess the advertisement’s success.
  • The stalker could use the geolocation data to terrorize the user.

Although each could derive value from the geolocation data, the individual and society would not necessarily benefit from all of these uses. Take surveillance. In a 2019 survey, over 70% of Americans were not convinced that they benefited from this level of tracking and data collection.

Over 80% of Americans in the 2019 survey and over half of Europeans in a 2016 survey were concerned about the amount of data collected for behavioral advertising. Even if the government, behavioral advertisers, and stalkers derive value from our geolocation data, the welfare-optimizing solution is not necessarily to share the data with them and with anyone else who derives value from it.

Nor is the welfare-optimizing solution, as Breaking Away explores, to encourage competition for one’s data. The fact that personal data is non-rivalrous does not necessarily point to the optimal policy outcome. It does not suggest that data should be priced at zero. Indeed, “free” granular personal datasets can make us worse off.

In looking at the proposals to date, policymakers and scholars have not fully addressed three fundamental issues:

  • First, will more competition necessarily promote our privacy and well-being?
  • Second, who owns the personal data, and is that even the right question?
  • Third, what are the policy implications if personal data is non-rivalrous?

As for the first question, the belief is that we just need more competition. Although Google’s and Meta’s business models differ from Amazon’s, which differs from Apple’s, all four companies have been accused of abusing their dominant positions using similar tactics, and all four derive substantial revenues from behavioral advertising, either directly or (for Apple) indirectly.

So, the cure is more competition. But as Breaking Away explores, more competition will not help when the competition itself is toxic. Here rivals compete to exploit us by discovering better ways to addict us, degrade our privacy, manipulate our behavior, and capture the surplus.

As for the second question, there has been a long debate about whether to frame privacy as a fundamental, inalienable right or in terms of market-based solutions (relying on property, contract, or licensing principles). Some argue for laws that provide us with an ownership interest in our data. Others argue for ramping up California’s privacy law, which the real estate developer Alastair Mactaggart spearheaded, or for adopting regulations similar to Europe’s General Data Protection Regulation. But as my book explains, we should reorient the debate from “Who owns the data?” to “How can we better control our data, privacy, and autonomy?” Easy labels do not provide ready answers. Providing individuals with an ownership interest in their data doesn’t address the privacy and antitrust risks posed by the data-opolies; nor will it give individuals greater control over their data and autonomy. Even if we view privacy as a fundamental human right and rely on well-recognized data minimization principles, data-opolies will still game the system. To illustrate, the book explores the significant shortcomings of the California Consumer Privacy Act of 2018 and Europe’s GDPR in curbing the data-opolies’ privacy and competition violations.

For the third question, policymakers currently propose a win-win: promoting both privacy and competition. The thinking is that with more competition, privacy and well-being will be restored. But that is true only when firms compete to protect privacy. In crucial digital markets, where the prevailing business model relies on behavioral advertising, privacy and competition often conflict. As a result, policymakers can fall into several traps, such as opting, when in doubt, for greater competition.

Thus, we are left with a market failure where the traditional policy responses—defining ownership interests, lowering transaction costs, and relying on competition—will not necessarily work. Wresting the data out of the data-opolies’ hands won’t work either, as other firms will simply use the data to find better ways to sustain our attention and manipulate our behavior (consider TikTok). Instead, we need new policy tools to tackle the myriad risks posed by these data-opolies and the toxic competition caused by behavioral advertising.

The good news is that we can fix these problems. But it requires more than what the DMA and other policies currently offer. It requires policymakers to properly align the privacy, consumer protection, and competition policies, so that the ensuing competition is not about us (where we are the product), but actually for us (in improving our privacy, autonomy, and well-being).

