In the ocean of prognoses on how the coronavirus pandemic will reshape the world, one claim stands out: we will return to a renewed welfare state. “The big state is back,” as The Economist put it; reversing Thatcher’s motto, Boris Johnson noted that “there is such a thing as society.” The pandemic could, it is argued, succeed in overcoming neoliberalism where the 2008 financial crisis failed. To some extent, the shift seems to be already underway. Governments are bailing out companies, nationalizing payrolls, and redirecting industrial production towards urgent health needs. Ideas long discussed but strongly resisted, such as minimum income guarantees, have suddenly become reality in countries like Spain. Some claim (or hope) that the collective nature of the current catastrophe will promote the resurgence of social solidarity, reviving a certain “spirit” of World War II.
But if any renewal of social welfare does emerge from this crisis, it will be a very different “welfare” from that envisaged in the post-1945 period. It will be strongly driven by private corporations, and it will use their tools and platforms — whose ultimate goal is generating profit. Crucially, it will be based on opaque and intrusive forms of datafication. By this we mean not merely the intensification of surveillance — which, as many have noted, is already happening — but two interconnected processes: “the transformation of human life into data through processes of quantification, and the generation of different kinds of value from data.” A datafied welfare system will consolidate Big Tech companies as institutions essential to the basic functioning of the state and society. Should that happen, we will see not a return to the world before neoliberalism but the emergence of a new social order centered on what Nick Couldry and Ulises Mejias have recently called data colonialism.
Making sense of this process requires some context. The entwinement of datafication and welfare is not new, and it often poses a threat to the very human rights it is expected to protect. An excellent introduction to these issues was published in October 2019 by Philip Alston, NYU law professor and UN special rapporteur.
Alston’s report demonstrates that in both rich and poor countries, social security is increasingly driven by “digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish.” These technologies encompass the development of biometric identification systems, such as India’s massive Aadhaar, and of automated systems in countries like the United Kingdom, Germany, and the United States that evaluate who is eligible for certain welfare programs and how much beneficiaries should be paid.
The evidence examined by Alston suggests that the negative consequences far outweigh the positive. Despite some gains in efficiency, these systems are prone both to producing errors and injustices and to reducing citizens’ ability to understand and contest these unfair computational decisions. They also commonly depend on data collected under unequal conditions — compromising proper consent — not to mention the constant threat of such data being accessed or hacked for commercial, criminal, and political purposes. They also risk, as Virginia Eubanks put it in her extraordinary study of the datafication of welfare in the United States, creating an “ethical distance” between classes, while reproducing older “hierarchies of human value and worth.” Even where these specific harms are absent, the right of targeted populations to live free from continuous surveillance and data-driven experimentation has been eroded.
The situation is hardly better when we turn to welfare on the global scale — what is usually called “international development” and humanitarian aid. “In the name of development,” scholars Linnet Taylor and Dennis Broeders write, unaccountable systems and unfair data collection practices have been deployed.
Indeed, vulnerable populations have been used as testing grounds for new technologies of data extraction and deployment. A case in point is the 2014 Ebola outbreak, when actors in the humanitarian community were granted access to detailed phone records of Liberian citizens. The records were hardly useful — but organizations that lobbied for access “stood to gain commercially” from it, “through competitive advantage over other humanitarian organizations or through the testing of commercial products.”
As the Ebola case demonstrates, states are only one actor in the public-private partnerships judged necessary to provide “innovative” solutions to social welfare. Programs and projects are commonly designed and executed by organizations that either work in competitive markets, such as NGOs, or are themselves for-profit companies. These arrangements emerge from the perception that governments lack the human expertise, technological capacity, and accurate data necessary to “solve” complex social issues such as poverty, inequality, and health care.
In these various ways, the datafication of welfare and development implements a neoliberal rationality, and its guiding belief that private firms ought to fill the void left by ineffectual states, in both the Global South and North.
AI for Social Good?
A similar rationale affects projects across the world that, in the jargon of digital entrepreneurs and consultants, have been termed “AI for social good.” Big Tech companies have eagerly joined in the hype. Facebook launched its “Social Good Forum” in 2017, offering “tools and initiatives to help people keep each other safe and supported on Facebook.” The company has launched multiple initiatives to provide connectivity to disconnected people, and its controversial “Free Basics” program is still expanding, even though it requires users to give up their usage data in return for the stripped-down internet they access (a reason it was banned in India).
After “Lucy,” a project rolled out in Kenya in 2014 which promised to use AI to “solve healthcare, education, water and sanitation, human mobility and agriculture” — widely criticized for its vague solutions and venal motives — IBM came up with a “Science and Social Good” initiative. Its goal is nothing less than to “solve the world’s toughest problems” through “science and technology.”
In 2019, Google.org — the search giant’s philanthropic division — redirected its multimillion-dollar “Impact Challenge” project toward proposals to “use AI to help address societal challenges.” Microsoft calls its initiative “AI for Good.” It tackles issues such as climate change, humanitarian crises, and health. Some of these projects are traditional forms of charity — for instance, the donation of money and human and computational resources to NGOs and academics to develop their own technologies. Increasingly, though, they involve practices that intensify the collection of data from vulnerable populations.
All these projects represent a massive increase in corporations’ ability to intervene in the management of social life: each new initiative only extends their ability to extract and process data. Consider Facebook’s “suicide prevention tool,” which employs machine learning to interpret various behavioral signals and identify posts suggesting that “someone might be at risk” of suicide: this project simply could not exist without Facebook’s preexisting power to access and analyze all users’ actions on its platform, however personal. Recently, Facebook patented the system.
In South America, Microsoft is a partner in Project Horus, which aims to help governments use “artificial intelligence in the prevention of teenage pregnancy and school dropout.” In Argentina, the project compiled “total, constant, shared and updated information” on vulnerable minors in Salta Province. This “unique database,” as its creators described it in 2018, is then fed into a machine-learning system that could allegedly “predict” who is more likely to drop out of school or become pregnant, so as to inform the government’s actions toward these individuals. The fact that the project was widely critiqued in Argentina for its aggressive surveillance and faulty forecasts did not stop it from being encouraged by UNICEF and piloted in Brazil, with the cooperation of Jair Bolsonaro’s far-right government. There, the initiative would focus on poor children who are already registered in the federal government’s Cadastro Único — one of the world’s largest social welfare databases, holding the data of over 73 million people.
When the coronavirus hit, Big Tech turned to a version of this “social good” discourse. While nation-states have often fumbled with their responses to the crisis, companies have reacted swiftly and, to all appearances, decisively. Mark Zuckerberg first mentioned COVID-19 on March 4. Facebook, he said, was going to not only help people “to stay connected” but also provide “credible and accurate information” and contribute to the “broader efforts to contain the outbreak.”
Since then, the news has kept coming: Facebook would fund “the acquisition of state-of-the-art FDA approved COVID-19 diagnostic machines,” start a “Solidarity Response Fund,” “invest” $100 million to assist small businesses, launch a “Coronavirus Information Center,” and allow researchers to run a “symptom survey” on its platform, not to mention making various multimillion-dollar donations to health research and fact-checking. Facebook is not alone. Google has also donated many millions of dollars, shared mobility data, and collaborated with multiple public officials. It has also created a portal “that records people’s symptoms, triages them to determine who requires drive-thru testing, and displays test results once they are available,” leading some to suspect it might be a data grab. Twitter’s CEO pledged to donate a quarter of his wealth, approximately $1 billion.
There is no evidence that these organizations are insincere about their desire to help. Nor can it be said that their work will not save lives; in all likelihood, it will. Yet the implications for wider society are much more ambiguous. To say that this crisis might help Big Tech regain its earlier moral high ground, reversing the post-2016 “techlash,” may be correct, but it is also myopic. Their responses to the pandemic (and its social divisions) are inherently entangled with these companies’ main form of value creation: data colonialism. That is the bigger story here: the continuation, and acceleration, of a new land grab of historic proportions, meriting the term “colonialism.” Where five centuries ago historic colonialism seized land, the land’s resources, and the bodies to work them, today’s land grab is targeted at human life itself, and the value that can be extracted from it in the form of data. Since such data extraction only works through the continuous tracking of myriad aspects of daily life, human beings’ fundamental right to live free from surveillance becomes the collateral damage of corporate advancement.
What sort of “welfare” will this leave us with? When it is said that platforms will emerge from this crisis as “digital utilities,” it is usually assumed that such utilities relate to providing people with information and spaces for virtual interaction — the core of Big Tech’s business models. But from the outset Big Tech has been marked by an aggressively expansionist vision which colonizes ever larger parts of human life for data extraction.
The goal is to datafy not just one kind of social practice but human life itself. Google began as a database organizer, but is now a conglomerate (Alphabet) operating in areas as diverse as health, urban infrastructure, transportation, and private equity. Facebook began as a networking tool for Ivy League students, but now claims to convene a “global community” of more than 2 billion people, with plans for a global digital currency, prompting concerns that it could “erode national control over money,” long considered a prerogative of governments. There are no clear limits on which areas can be exploited for profit, even if certain segments of the population represent particularly useful targets.
How far rich states, weakened by the COVID-19 crisis, will take up the new opportunities for data-driven welfare provision is, at this point, necessarily unclear. But what is apparent is that the disease presents one more — particularly important — opportunity for this expansion. Some months ago, these companies would have come under fire if they had tried to use the data of billions of people to track a virus outbreak, as Facebook, Google, and Apple are now doing. The COVID-19 emergency appears suddenly to have rendered this work desirable.
Many have raised critical questions about the perils of these initiatives. Yet the novelty resides less in any particular set of apps or datasets, or what will happen to them after the pandemic, than in the assumption that it is to this novel kind of corporate power, with its unprecedented global capability of producing new forms of social knowledge for social control, that we should now turn in a moment of public crisis. The result is less that Big Tech provides a new kind of public utility, and more that the data resources of Big Tech companies become essential to the state’s continuing authority and the orderliness of social life. They will be not only “social media platforms,” “search engines,” and computer makers, but — alongside governments — the very sustainers of our welfare.
During and after this crisis, we will see an upscaling of the “public-private surveillance partnership” that has been building for some years. Health and welfare are just two areas of population management affected; others may be education, labor infrastructure, and law enforcement. There will be resistance, no doubt, and this transformation is unlikely to happen homogeneously across the planet — parallel developments are under way in China, too, even if with a different balance between corporations and state.
There is very little evidence, as of today, that this emerging resistance presents a substantial hurdle to the acceleration of a land grab that was largely underway even before the pandemic.
As data colonialism unfolds, the result will be a new, much more complex sort of welfare, one that can only “give” by simultaneously eroding fundamental freedoms. Much has been said recently about how renewed appreciation of public health services means the end of that strident neoliberal pro-market rhetoric. But this may be proved right in a surprising way, when, at the end of this crisis, the dividing line between market and society that neoliberalism once had to challenge has been dismantled in the name of a new, corporate-sponsored “social good.”