Sanaa Khan is a PhD student in Education Studies at the University of California, San Diego. She received her Master's in Education from Brooklyn College in 2018. She formerly organized to increase access to public higher education and taught middle school computer science. Her work examines the role of race and gender in computing education, and aims to push past reductionist calls for increased diversity in STEM education toward a more nuanced view of inclusion and equity in the field. Her research combines approaches from critical race theory, archival studies, critical disability studies, and feminist science, technology and society studies. You can find more about her work on her website.

Research overview

Predictive analytics and algorithms have been shown to exhibit racial bias in many different contexts, including education. Discrimination resulting from such systems in education only compounds existing issues of student equity. Most recently, this came to a head with the 2020 Ofqual grading algorithm for A-level exams, which assigned lower scores to students in UK state schools, which often have higher proportions of BAME students, than to students in private schools. The Ofqual algorithm was touted as a way to neutrally assign A-level scores when pandemic precautions interrupted usual testing; instead it made students in lower-income and racialized communities vulnerable, putting their futures at risk.

This project aims to understand the effects of Ofqual's sorting and grading algorithm on BAME students in the United Kingdom. It fits into a broader effort to study the phenomenon of algorithmic sorting and grading, and to create an archive of student experiences that gives voice to students' concerns and reflections on their interactions with such an algorithm. This archive will provide a way to study sorting algorithms in schooling systems and to advocate explicitly for anti-bias algorithmic policy reform in the future. The project aims to create a living historical account that highlights the precarity of algorithmic intervention in educational spaces; considers the effects of distance schooling and stop-gap measures due to COVID-19 on a historical scale; and brings forward the voices of marginalized, racialized students in the sphere of educational technology. In doing so, it aligns with the History of Artificial Intelligence seminar's aims to co-develop oral histories with communities impacted by discriminatory practices.