[Exploring AI Perspectives] Data Colonialism and the Decolonization of Artificial Intelligence

  1. Nick Couldry and Ulises Mejias point out that in contemporary society, data is treated as if it were “the new oil.” In reality, however, unlike oil, data does not exist “naturally”; it must be “extracted from human life,” making it an artificial product. Big data goes beyond merely collecting information: it integrates and transforms human life itself into a new social order, converting everyday activities into data streams through what they call “data relations.” The problem is that this process creates a new form of colonialism, namely “data colonialism.” Much like historical colonialism, data colonialism is buttressed by an ideology that justifies the collection and use of personal data and that legitimizes transforming human beings into a resource. It is precisely these parallels that make the issue so serious.
  2. For data colonialism to operate effectively, there must first be a promoted belief that data is not individual property but rather a “public resource” that corporations or states can freely extract. For example, the World Economic Forum has argued that data is the “new oil of the 21st century,” thereby justifying it as a resource that should naturally be collected. As a result, people’s everyday activities are increasingly converted into data, and thus become objects of surveillance.
  3. Today, the key agents driving this data colonialism are Big Tech corporations. Through the collection, analysis, and trading of data, they establish the basic framework of surveillance capitalism. Within this system, people voluntarily provide their personal data while using social media or AI chatbots, and corporations continually induce users to hand over their data willingly, thereby generating economic value. However, as personal identities are dissected into multiple fragments of data for analysis, users are reduced to objects of quantified scrutiny: no longer regarded as individuals with distinct personalities, but as targets of analyses focused on consumption patterns and behavioral data.
  4. Under this “data colonialism,” individuals are no longer autonomous consumers but are reduced to the status of datafied objects. Human life is constantly tracked, and data becomes a means of social control. The possibility that algorithms could operate in ways biased for or against certain individuals or groups continues to grow. In the end, the data extracted from users, now rendered into objects, becomes an asset controlled by global corporations and states, functioning as “raw material,” much like oil, for corporate and national profit-making.
  5. Is there a way to avoid repeating the colonial errors of the past in the midst of this era of comprehensive digital transformation? Rachel Adams identifies “big data” and “artificial intelligence” as the two pillars supporting the macro-level social system of “smart information systems,” and proposes that the path forward may lie in “decolonizing” AI, so that it cannot be used as an agent to reproduce colonial power structures.
  6. To this end, Adams first points out that most major criticisms people raise regarding AI are linked to colonial logic. Specifically, she notes that (1) as the UN Human Rights Council report (2020) has warned, AI algorithms continue to produce racist outcomes, exacerbating existing racial and ethnic inequalities; (2) AI commodifies human experience and generates new forms of economic exploitation through data, thereby reducing humans to aggregated data sets; and (3) the global powers leading AI development create a structure that compels technological dependence in the Global South, using AI as a tool of hegemonic competition among powerful nations. This, she argues, reproduces the core tenets of colonial ideology.
  7. Although attempts under the banner of “AI ethics” are currently being emphasized worldwide, Adams believes that these efforts are being thoroughly distorted by the self-serving logic of AI superpowers. In other words, elements such as fairness and responsibility, which are offered as ethical standards, are being “universalized” according to “their” values. In reality, however, we see instances where these same superpowers use smaller or less influential countries as testing grounds for new AI technologies, contradicting their professed ethics and values.
  8. Therefore, Adams claims that “decolonizing AI” is not merely an issue of technology or ethics. Instead, she argues, it requires deconstructing the very Western-centric knowledge system itself. The uncritical transplant of a homogenized understanding of “intelligence” into AI—drawn from eugenics and based on mathematical-statistical systems—has inevitably led to cultural, geographical, and technological monopolies. Hence, she insists on developing AI through an open stance toward diverse conceptual frameworks, embracing and reconstructing various ideological foundations. Strengthening data sovereignty at both individual and collective levels, promoting technological pluralism, and building ethical technology governance in a robust way are the paths to decolonizing AI and overcoming the crisis posed by expanding data colonialism.

References
Couldry, N., & Mejias, U. A. (2019). Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject. Television & New Media, 20(4), 336–349.
Adams, R. (2021). Can Artificial Intelligence Be Decolonized? Interdisciplinary Science Reviews, 46(1–2), 176–197.
