Bailey Tucker

Colonialism and the Legacy of Bias in AI Research and Development

This learning module focuses on the legacy of colonialism and its impact on bias in AI research and development. It explores how colonial history has shaped the way AI is developed and used, including how that history has produced biased data sets and algorithms.

The module also highlights the ways in which decolonial approaches to AI research can lead to more equitable and just outcomes. Furthermore, the supplemental readings and descriptions in this learning module will serve as meta-representations of how AI-sourced learning materials can result in epistemicide—the destruction or devaluation of knowledge systems. Materials in the supplemental section were curated by the large language model ChatGPT and are themselves illustrations of AI bias.

The module is intended primarily for students or academics studying computer science or related fields at the collegiate and postgraduate levels. After giving an overview of the history and contemporary issues related to AI and colonialism, the module features further readings and discussion questions, and offers a model of an active engagement exercise using ChatGPT and DALL-E 2 that reveals stereotype-reinforcing biases.


Structure

After completing the module, learners will be able to explain verbally and in writing how colonialism has shaped the development and use of AI. They will be able to identify biases and omissions in algorithms and data sets and understand the ways in which these can perpetuate existing inequalities. Learners will also be able to analyze the intersectionality of AI across multiple systems of oppression, such as race, gender, and class. Finally, learners will be able to apply decolonial approaches to AI research, leading to more equitable and just outcomes.

By synthesizing a variety of historical and applied resources, the learning module aims to demonstrate the profound influence of colonialism on bias in AI research and development. It emphasizes the need for critical reflection, ethical considerations, and decolonial approaches. By understanding the historical context, biases in data sets and algorithms, and the intersectionality of AI with other systems of oppression, learners can actively engage with these issues and work towards creating a more inclusive, equitable, and just AI ecosystem.


Bailey Tucker is majoring in Economics and Computer Science at the University of Chicago.

Learning Resources

Section 1

This section explores the complex relationship between colonialism and bias in AI research and development, drawing from a range of resources that shed light on this topic.

“The Stupidity of AI” in The Guardian: This article explores the ways in which colonialism has shaped the development and use of AI, including the ways in which colonialism has led to biased data sets and algorithms.

Race After Technology by Ruha Benjamin: This book explores the ways in which race and racism have shaped the development of technology, including AI.

Algorithms of Oppression by Safiya Noble: This book explores the biases and blind spots in algorithms and data sets, and the ways in which these can perpetuate existing inequalities.

Section 2

This section exposes biases in AI technology and the potential harm they can cause. It raises awareness about the disproportionate impact of these technologies on marginalized communities, emphasizing the importance of challenging the existing power dynamics and biases within AI technologies and developing inclusive and accountable AI systems.

“Coded Bias,” documentary by Shalini Kantayya: This documentary explores the biases in facial recognition technology and the ways in which they can lead to harmful outcomes.

“How I’m Fighting Bias in Algorithms” by Joy Buolamwini: This TED Talk explores the ways in which decolonial approaches to AI can lead to more equitable and just outcomes.

Section 3

This section highlights the imperative to consider ethical implications when designing and implementing AI technologies, with a focus on protecting vulnerable populations, and calls for a critical examination of these systems to foster more inclusive and unbiased AI research.

“Artificial Intelligence: The Technology That Threatens to Overhaul Our Rights” by Amnesty International: This campaign exposes how the AI ecosystem is being used to violate human rights, including the rights of refugees and migrants.

“The Problems AI Has Today Go Back Centuries” in MIT Technology Review: This article explores the ways in which colonialism has impacted the development and use of AI, including the ways in which colonialism has led to biased data sets and algorithms.

Section 4

This section showcases the potential of AI in addressing social and environmental challenges, providing examples of how AI can be leveraged for positive impact. The first resource is paired with articles that delve into the history of bias in data sets to highlight the impact of colonialism on data collection and usage and emphasize the importance of diversity and representation in AI research and development.

“AI for Social Good” by Google AI: This resource provides examples of how AI can be used to address social and environmental challenges.

“AI Is Biased. Here’s How Scientists Are Trying to Fix It” in Wired: This article explores how to address bias in AI, including the need for diverse perspectives in AI research and development.

Discussion Questions

Use these discussion questions to spark further reflection on these materials:

  1. How can decolonial approaches to AI research and development lead to more equitable and just outcomes? Provide specific examples or strategies.

  2. What ethical considerations should be taken into account when designing and implementing AI technologies in post-colonial contexts? How can these technologies be used responsibly and ethically?

  3. How does the documentary “Coded Bias” highlight the biases in facial recognition technology and their potential harmful consequences? What lessons can we learn from it?

  4. Discuss the intersectionality of AI with multiple systems of oppression, such as race, gender, and class. How do these intersecting dimensions of power affect AI technologies and their impact on marginalized communities?

  5. How can AI be leveraged for social good, as demonstrated in the resource "AI for Social Good" by Google AI? What are the potential benefits and challenges in using AI to address social and environmental challenges?

  6. Reflect on the concept of epistemicide, defined as “the destruction or devaluation of knowledge systems.” How can the bias inherent in AI-sourced learning materials contribute to epistemicide? What steps can be taken to mitigate this bias?

Self Referentiality and Active Engagement Exercise

To illustrate the ways that coloniality shapes AI and highlight the importance of human oversight, the module includes an example and active-engagement exercise created using an AI model. The example uses ChatGPT to showcase the power of AI in examining biases and fostering learning by incorporating diverse perspectives and knowledge from various sources. It also exemplifies how AI can serve as a valuable tool in promoting critical thinking and generating educational content.

On the other hand, the self-referential aspect of this learning module also highlights the inherent limitations of relying solely on AI for understanding AI and its faults. While AI models like ChatGPT can provide insights and generate content, they are ultimately products of their training data, which may contain biases and limitations. The exercise and the learning module overall serve as a reminder that AI systems are not infallible or unbiased sources of knowledge. They prompt us to approach AI-generated information with a critical eye, recognizing that these systems can perpetuate and reinforce existing biases, including those related to colonialism.

Through this exercise, learners gain firsthand experience of the importance of diverse perspectives and human judgment in interpreting and contextualizing the information generated by AI models. It highlights the need for human oversight, critical analysis, and engagement to mitigate the limitations and biases inherent in AI systems. By critically examining the content and considering alternative viewpoints, learners can navigate the complexities and potential pitfalls of relying solely on AI-generated information, ultimately fostering a more nuanced and comprehensive understanding of the subject matter.

Check out the Active Engagement Exercise
