The Western world has been rocked by news that Cambridge Analytica — the data analytics firm formerly run by Steve Bannon and bankrolled by Trump-funding billionaire Robert Mercer — unethically harvested the data of 50 million Facebook users to target them with ads meant to benefit the Trump campaign. The news prompted declarations of a “data breach,” in which users’ “stolen data” was “hijacked” by the firm, which was “misusing” personal data for nefarious ends. “We ‘broke’ Facebook,” the whistle-blower involved said, prompting a reporter to ask if that meant the platform had been “hacked.”
The Cambridge Analytica story is outrageous, for a variety of reasons. But while we’re fuming at this violation, we should extend that anger to the way our online behavior is routinely documented, collected, and used for commercial purposes, and channel it into efforts to at last properly regulate the companies we’ve simply trusted to be responsible caretakers of what has become one of the most intimate aspects of our daily lives.
To be sure, there are elements unique to the Cambridge Analytica case that raise specific legal and ethical concerns. For one, the company acted in a grossly unethical way by lying to users to access their data, with the help of an app that told users the firm was conducting academic research, rather than doing work for a political campaign. Then there are the potential legal violations: the United States bars the employment of foreigners in political campaigns.
But the distinction between the company’s actions and business as usual is really one of degree, rather than a difference in kind.
What Cambridge Analytica did was not actually a “data breach.” The data harvesting took place in 2014, when Facebook’s terms of service and API allowed third-party apps to collect data on a user’s friends — a feature exploited by many thousands of other apps, and one the company didn’t shut down until a year later. As Lorenzo Franceschi-Bicchierai wrote at VICE Motherboard, this means something much worse than a breach: it means this outrageous violation of users’ privacy was, until recently, just the way things worked.
But even if Facebook no longer allows companies to collect and use your data just because one of your “friends” happened to get the urge to play a farming simulator, that doesn’t mean your data is now safe. In reality, Facebook — and just about every tech company that exists — is continually violating our privacy in order to manipulate us, albeit for commercial rather than political ends.
As we should all (hopefully) know by now, the unimaginably large volume of data Facebook collects about our activities while we use its platform — as well as when we don’t — is handed over to advertisers, who use it to better target us with ads to sell their products. The breadth of this tracking is staggering, encompassing even the locations you visit and stores you shop in (if you have its mobile app installed), and it can detect things as minute as whether you sign up for loyalty programs, add items to shopping carts online, and much more.
But companies can also gain direct access to users’ data through Facebook Connect, a single sign-on feature that lets users employ their Facebook profiles on other websites, to post comments, for example. Companies have used this feature to create profiles of users, which document everything from their lifestyles and life stages, to what kind of household they live in and their personalities — identical to the kinds of psychological profiles Cambridge Analytica put together for its own purposes.
The same concerns around ethics and consent raised by the Cambridge Analytica case also surround Facebook’s own data collection. Sure, Facebook allows you to turn off some of these features, such as location tracking. But how many Facebook users are actually aware they can do this? How many are even aware they’re being tracked in the first place, or how thoroughly Facebook is collecting information about their lives?
And then there are the countless types of data collection that users can’t turn off because they come with the Facebook experience. Your choice is either to accept that your activities will be collected and exploited, or leave Facebook. This is hardly an adequate definition of consent.
Par for the Course
And the trouble is, it’s not just Facebook. This is the case with all tech companies, including Google, Apple, and Microsoft, which do the same kind of invasive data mining — particularly Google, which stores all of our searches and collates them with the data it gathers through its browser, its email service, and services like Google Docs to build ever more detailed portraits of us. We’re not even safe on our browsers: the websites we visit attach countless trackers that monitor our browsing behavior, data used by online advertising companies (Facebook, Google, and Twitter are leaders in this type of tracking too). Then there are the consumer data companies that hoover up, analyze, and sell the information we make public on sites like Facebook, Twitter, and LinkedIn.
Some of this is leveraged for political ends. One company created websites and used ads tied to Google search terms to spread inflammatory messages during last year’s Kenyan elections, most likely targeting them using data collected through such platforms.
The Cambridge Analytica example isn’t even the first time data has been leveraged in US politics. As numerous outlets have now reported, it was the Obama campaign that pioneered the use of such data mining — including efforts to match the viewing habits of millions of cable subscribers to its own list of voters and poll responses — in elections. Carol Davidsen, the 2012 campaign’s director of integration and media analytics, boasted that they “were able to ingest the entire social network … of the US,” exploiting the same lax privacy rules that Cambridge Analytica took advantage of in 2014. The only difference was that the users who gave the Obama campaign permission to use their data knew it was going to a political campaign, rather than being told it was for academic research.
But should we really limit our outrage to when our data is used for political ends? If we object to the idea of our intimate information being tracked and collected in order to manipulate our behavior, surely the ultimate purpose for which we’re being manipulated is a secondary issue.
If so, then the use of our data by the vast majority of tech companies for the purpose of selling us products is just as outrageous. Facebook’s “success stories” page touts examples from, among others, pharmaceuticals and financial services. Given their checkered histories, how comfortable are we with, say, drug producers, payday-lending companies, and even major banks using intimate psychological profiles of these platforms’ users to target them with products?
And the implications of such data collection don’t stop there. Right now, the responsible use of our data is entirely reliant on the ethical scruples of companies like Facebook, Google, and others, which doesn’t exactly inspire confidence. And if you truly believe that Cambridge Analytica was able, to borrow a popular phrase, to “hack” the 2016 election and install Trump in the White House, then it’s hard to overstate the danger of letting even more massive stores of data build up in the hands of companies like Google and Facebook — or anyone in the future who manages to carry out an actual data breach of their systems.
Scandals like this one — or Facebook’s notorious secret psychological experiment on its own users — will keep happening as long as the tech industry’s collection and handling of our data is left unregulated. But if the idea of privacy and freedom from corporate and state surveillance means anything, then it means being able to use the internet and platforms like Facebook without fear of having one’s activities constantly surveilled, documented, and turned into someone else’s property — and, eventually, commodified.
For decades now, tech companies and others working in the murky world of online data harvesting have been given a free ride, allowed to track our online behavior and use the resulting data as they see fit with few rules or restrictions. The anger over the Cambridge Analytica case is encouraging, but it shouldn’t stop there. It’s time to demand a radical change in how we treat companies like Facebook and Google.