

Reading Group #1 on 'Introduction to Seminar Theme: Encoded Behaviour'

Wednesday 20 May 2020, 16:00–18:00 BST / 11:00–13:00 EDT

The first reading session of the HoAI seminar series took place on 20 May 2020 and focused on the theme of 'Encoded Behaviour' (more on themes can be found here). This session was made possible by the input of three participants: core speaker Dan Bouk, who presented on the genealogies of data collection and aggregation in the 19th century; and two researchers, Stefanie Felsberger (PhD student, Cambridge) and Sharad Pandian (Research Associate, Nanyang Technological University), who offered brief accounts of their research and provocations relevant to the session.

Some core points of discussion which arose in these presentations were:

  • That the late 18th- and 19th-century histories of data collection and aggregation are not confined to the past but are part of the assemblage of 'Artificial Intelligence.'
  • The existence of 'Artificial Intelligence' as a palimpsest, and the need to identify where within this palimpsest the relevant object of critique (in this case, data) may lie.
  • The indebtedness of today's data collection and aggregation processes to the colonial matrix of power and forms of governance.
  • The intersection between economic value and power/knowledge, and how disciplinary power flows from data as it is aggregated.
  • The need to question labour itself in relation to data, and how individuals perceive their own labour in constructing their digital selves (or 'data doubles').
  • The need to interrogate the 'cloak of boringness' surrounding the historical origins of data (i.e. 'how data became dry'), when interrogation shows it to be foundationally political.

Comments and provocations were wide-ranging, and over ninety people participated in the meeting. In response to the assigned readings, participants asked:

  • Dan, you mentioned that the larger question of AI is about how tech alters human nature – can you elaborate on how this focus can aid historical studies of AI?
  • One thing that's interesting is the very pointed interest in ability/disability in the metropole vs the colony — hernia maps, 'census of abilities,' etc. as crucial examples in Dan's piece but not as present in the materials Appadurai cites. Is it a methodological difference (i.e., Arjun didn't 'notice' disability?) or a difference in the discourses?
  • 'Statistical missionaries' -> is this a turn of phrase or a real figure? How do we think about enlisting, coercing, or otherwise getting people to participate in the project of large-scale data/information collection?
  • A good question Appadurai asks is 'where lies the colonial difference?' (p.333) particularly for teasing the thread in the palimpsest of AI and also relating to encoded behaviours or classification versus measurement
  • Dan, in your piece, you trace a shift in power from data aggregates to data doubles in the past two centuries in the US. Throughout, there's an underlying 'doubling' up of data doubles as individuals and/or data doubles' rising power as rising individualism, even as you end with the data aggregate as setting up the liberal subject in the first act. Appadurai, on the other hand, seeks to explain the dominance of 'community' and not individual, especially as it continues in contemporary Indian politics. Could you say a little more about the relationship between individuation and datafication (data doubles and/or data aggregates) in your piece and if/how you see it in relation to Appadurai?
  • A small and probably unimportant comment on my reading of Appadurai: surprisingly few numbers (or at least, numerals) in an essay about number. Is this a studied de-emphasis attempting to shift focus to how number is used (referential → rhetorical) rather than what it is? Or does it say something about what 'number' is for AA? It's claimed to be 'a shared language for information-transfer, disputation, and linguistic commensuration between center and periphery' (255). But different languages/cultures denote (conceptualize, even) quantities differently. Does AA's argument need (and assume) a 'universal' notion of what number is to get off the ground? It's interesting that for all that's relativized in the essay, number escapes.
  • Like the contributor just said about 'number', the idea of 'modelling' is something that we should contest >> can think of three meanings
    • I suspect modelling and morality are historically inseparable – and 'historically.' Modelling meaning:
      • reductive (normalise life to range 0-1)
      • distancing
      • performative
  • It's definitely not only capitalism, nor capitalism + orientalism I think. Many 'specifically colonial political arithmetic[s]' (333) are possible. Appadurai even emphasizes how in India's case, preexisting numerological and categorical practices and regimes shaped interpretations and outcomes

Key topics discussed in the chat

The readings and discussion prompted a number of direct questions. As participants introduced themselves in the Chat feature, several undercurrents of conversation, commentary, and lines of questioning also developed over time. We offer a summary of these as well, together with a list of the many citations participants offered for works relevant to the themes that had been raised.

  • Q: What informs the cut the historian makes when deciding which layer of the palimpsest is important? For organizers, these 'cuts' are informed by the campaign, which is temporally and locally constrained. For historians, I am guessing there is also an obligation to empiricism/etc. How does the historian choose and what are the consequences of this process?
  • How do historians define the 'artificial' within artificial intelligence, and what shapes of data would follow from that definition? Maybe the history of AI is not the history of 'AI'?
  • How can we (or should we) disentangle the genealogy of AI from the genealogy of data?
    • AI today is bastardised statistics (100% data)
    • But the history of AI is not, e.g. expert systems, toy problems, cybernetics. And Bayesian methods are still used a lot, and many types of robotics (e.g. swarms) don't rely on ML
    • Rhetorically and from a media history perspective, there is a clear connection between AI and mechanical automation/calculation
    • Expert systems are sort of a dead end.
    • Data driven AI only became more common and then dominant starting in the 1990s, and was largely driven by improvements in computer hardware and storage capability on disk and in memory. That said, a lot of data was certainly collected and computed upon in computing more generally (think of massive punched card systems).
    • I think data-hungry deep learning AI is certainly a derivative of stats, but other AI paradigms have also emerged; the older symbolic paradigm was much less reliant on social data at scale
    • Consider also pattern recognition, data mining, machine translation...

On the risk of erasure of embodied approaches to AI:

  • There's a quote that I could have sworn was in Maggie Boden's book but I have been unable to locate it there or anywhere, to the effect of 'The modern AI practitioner doesn't know or care anything about the nature of intelligence'.
  • That's my experience too from working with computer scientists and computer engineers.
  • It is probably the bias of my research as I work on the military use of AI, but embodied AI is extremely central here – and a lot of the greatest advances made in AI over the past decades that are still in use (e.g. missile defence, target recognition) are not built on ML. ML/data is the hype in the tech world obviously, but I think it's easy to overlook other, less fashionable uses of AI, especially since the boundaries of what constitutes AI are so blurry
    • Another possible connection to Vaucanson's duck is what Dan notes on p.93 of his article: the non-coincidental co-occurrence of the will to govern and control, and the desire to display.
    • Maybe the commonality is not 'continuity of idea of AI' but 'resonances of preemptive intervention'
  • Military, yes, is an exception: particularly around 'simulation'. Military is the only place where social simulation is taken as legitimate at large. And that ties intimately with 'artificial life' and 'older' AI
    • Are we dealing with a possible Wittgensteinian family resemblance in relation to the cluster just mentioned by Alan, i.e. data science, AI, ML, etc.? The Wittgensteinian and Foucauldian perspectives both abstract out data, though the former is more interested in the individual in a cluster while the latter is about clusters of individuals
  • Now deep learning seems ascendant, but symbolic approaches are still taught in many programs and I'd be cautious about completely divorcing these two strands too cleanly even if their more radical proponents seem to want to keep the two approaches separate. I guess it's really a question of: what we mean by AI in the context of this community?
    • I think AI/ML/DS usage/choice speaks to who you are trying to mobilize, politically or economically
    • In terms of family resemblance, there's who mobilizes the terms, and what happens under the term. What happens under the term: data science is applied ML, with some statistics. ML is 'mercenary', positivist statistics: statistics used without necessarily caring about ideas of underlying 'truth', only about demonstrable performance.
    • Fun fact: at Carnegie Mellon, within the flagship CS PhD program, the machine learning course was delisted as fulfilling the AI distribution requirement. The lasting influence, and continued presence, of people committed to symbolic AI.

On the impact of recent fascinations with AI:

  • Isn't the erasure of old-style AI also related to corporations rebranding their machine learning divisions as 'artificial intelligence' a few years ago? I know anecdotally that it caught many of the actual researchers working in the corps by surprise.
    • And we have to remember that companies routinely lie to both their customers and their own employees about how their technologies work and what they are doing, which makes giving a genealogy extra difficult.
    • When you say 'companies', don't forget that this also includes CS faculty members who follow the money.
      • Music & AI is of course a key point of crossover / infection between academic CS and the corporates and startups...
  • Even if the technical practitioners of ML/AI feel there is a break from the 'older' way of thinking about AI: Are the people who allocate the limited resources that fuel research, who may not have the technical skills, still influenced by this (maybe dated) image of what AI is?

Natural Language Processing (NLP):

  • A very simple history of AI in the context of NLP: starting in the 1950s, people tried to encode human knowledge via rule-based systems to carry out some task with language (understand a story, parse a sentence). What was found is that these very interesting methods didn't scale well. By the late 1980s and early 1990s, sufficient volumes of text had become available online, and so the focus shifted to trying to learn those rules from the data, which is roughly where we are now. That said, deep learning and machine learning methods emphasize classification problems, and so there is now a heavy emphasis on how to make problems fit classification (so that they can be 'solved' with large amounts of data).

How to account for human suffering?

  • We cannot forget that this process of accumulating and using data also entails impositions, violence, exclusions, and the domination of peoples.
  • In these colonization processes or even in the processes of technology today, which bodies are most violated by numbers and cataloging?
    • Esp given systemic Islamophobia, the targeting of Muslims...
  • To add a comment to recent speakers on coding of bodies, performative modelling, risk assessments and pattern recognition: there is a strain of pattern recognition in operational imaging of bodies via algorithms in cameras. Having worked in sleep disorder medicine as a technician, I was tasked with training LED sensors to measure eye blink variance and velocity as a measure of fatigue and thus capability. This has all sorts of real-world ramifications, for example in heavy machinery use in underground mining in Australia: insurance premiums, shift allocations. These are in a sense automated via sensors and cameras in workplace monitoring.
  • I will just post my question here, for the historians: how would you study the encoded behaviour of today's data editors/aggregators compared to, say, those of colonial times, given that the platform economy has dehumanized them (to use Dan's word) to an extent that perhaps had no parallel in colonial times?
  • What about risk rating that 'discriminates' against the group in power? I'm thinking in this case about how in the US car insurance is more expensive for young men, or the 1970s legal suit in which Ruth Bader Ginsburg had a role (Craig v. Boren), which determined that statutory sex classifications were subject to scrutiny under the Fourteenth Amendment's Equal Protection Clause (the law in dispute made the drinking age higher for men and lower for women).

Assorted questions and provocations for the Seminar community:

  • The colonization of the imaginary seems to be one phenomenon that we may want to discuss further
  • One risky strategy in contesting AI is 're-humanising'
    • Some problems with humanism continue the problems of AI (or vice versa), especially the boundary question (who qualifies as human)
  • Do we want to accept data science's claim that it is a 'science'?
  • Do you think symbolic methods and theoretical computer science have lost steam or support because of the emphasis on machine learning?
    • Anecdotally, yes – the researchers who do that stuff have had to follow funding and students to shift to ML. Many pursue experiments with hybrid approaches though
  • How easy is the transition between symbolic and statistical? Doing algorithms that are statistical modeling seems to me to require a whole other set of training: e.g., concentration of measure, much more real analysis and measure theory, plus tons of experience in model-building
    • Going through my own ML+stats grad training, that's something that surprised me: people trained in symbolic things do contribute things that are rigorous, statistically (although I also see lots of things that are not rigorous, statistically).
    • Yes, and this is a constant problem in ongoing curriculum reform in CS
  • Data science is post-truth science. That suggests some different histories
  • Might be good to have a diff session on genealogies of AIs as they are understood across disciplinary practices concerned with, well, AI
    • Yes, developing a shared framework for discussing AI in the seminar and its genealogies would be useful.
  • Can we have a session that focuses closely on different 'sources of the present' (he mentioned cybernetics and pattern recognition, both pre-'AI', with their distinctive histories) and resists the teleologies a bit? It would also be great to introduce ways of having a subtle account of, say, the relations between science and (corporate/government) power, no?
  • On the line of localization (sans local elites) within reading/writings/publications, just wondering are there more references that exemplify an effective exercise of power from 'people at the bottom'? A poem that was written on a collection tool was cited earlier within the chat and we heard about peer-to-peer group exercises of power but even those were said to be mostly from educated, white, men. Gaps in research, for me, appear when marginalized expressions/narratives that have effectively intercepted power, especially within AI, aren't brought to the fore

On entanglements with law:

  • Question for the community: have there been writings which explore the mis/use of these terms (AI, ML, Big Data) in legal venues? e.g. opinions, motions, orders. I think it would be fascinating to see how technical practices / technical imaginaries are mis/appropriated in legal change.
    • We just had a very interesting lawsuit against the State in NL about a fraud risk indication system (SyRI). A large part of that court session was the judges trying to figure out what it was to begin with. The verdict has been translated into English, and AlgorithmWatch also published on the case.
    • This is really interesting! That's one reason I prefer to talk about models rather than algorithms, I suspect that could have legal relevance, but I haven't seen anything yet (e.g., 'algorithms' are much more broad than ML algorithms, which can almost always be interpreted as statistical models in terms of fitting parameters of some model, that makes assumptions about the world, to data—focusing on the assumptions of the model rather than the algorithm that does the fitting/'learning')
    • So could blurring of words create strategic confusion about what is happening?
      • Yes, partly, I think. (One problem here is that judges are not necessarily well equipped to understand and handle these concepts.) But in this case I think that the opacity of this 'sociotechnical algorithmic system' is key. There was not much released about how it worked or where it fitted into work processes, so people seemed to assume it was AI. The State Attorney really seemed bent on selling it as a 'simple', 'neutral' tool, completely devoid of 'encoded behavior' (to hark back to this meeting's title).
        • Yeah, well. We have municipalities winning innovation prizes for their use of blockchain in an app that's just a regular database, etc. *shrugs* The data/algorithm literacy in public administration in the NL is not that high on average.

Closing remark:

  • I told Arjun Appadurai (my boss at the moment) we were reading some of his work for this group and he said he was 'honored'! Think this is a great start.

Works cited (in the chat)

Some works on biopower with particular reference to gender, race, and ability (extending Foucault/post-Foucauldian):

  • Butler, Judith. Gender Trouble: Feminism and the Subversion of Identity. Routledge Classics. New York: Routledge, 2006.
  • Butler, Judith. Precarious Life: The Powers of Mourning and Violence. London; New York: Verso, 2006.
  • Chun, Wendy Hui Kyong. 'Race and/as Technology, or How to Do Things to Race'. In Race after the Internet, edited by Lisa Nakamura and Peter Chow-White. New York: Routledge, 2012.
  • De Lauretis, Teresa. Technologies of Gender: Essays on Theory, Film, and Fiction. Theories of Representation and Difference. Bloomington: Indiana University Press, 1987.
  • Dolmage, Jay. Disabled upon Arrival: Eugenics, Immigration, and the Construction of Race and Disability. Columbus: The Ohio State University Press, 2018.
  • Ferguson, Roderick A. Aberrations in Black: Toward a Queer of Color Critique. Critical American Studies Series. Minneapolis: University of Minnesota Press, 2004.
  • Haritaworn, Jinthana, Adi Kuntsman, and Silvia Posocco. Queer Necropolitics, 2015.
  • Mbembe, Achille, and Steve Corcoran. Necropolitics. Theory in Forms. Durham: Duke University Press, 2019.
  • Mbembe, Achille. 'Necropolitics'. Translated by Libby Meintjes. Public Culture 15, no. 1 (Winter 2003): 11–40.
  • Nakamura, Lisa. Cybertypes: Race, Ethnicity, and Identity on the Internet. New York: Routledge, 2002.
  • Puar, Jasbir K. The Right to Maim: Debility, Capacity, Disability. Anima. Durham: Duke University Press, 2017.

History of Natural Language Processing and/or History of NLP vs. Computational Linguistics:

Digital Labour

  • Federici, Silvia Beatriz. Caliban and the Witch. 2nd rev. ed. New York: Autonomedia, 2014.
  • Fraser, Nancy. 'Contradictions of Capital and Care.' New Left Review 100, July/August (2016): 99–117.
  • Fuchs, Christian. Digital Labour and Karl Marx. New York, NY: Routledge, Taylor & Francis Group, 2014.
  • Fuchs, Christian. 'Theorising and Analysing Digital Labour: From Global Value Chains to Modes of Production'. The Political Economy of Communication 1, no. 2 (2013).
  • Jarrett, Kylie. Feminism, Labour and Digital Media: The Digital Housewife. Routledge Studies in New Media and Cyberculture 33. New York London: Routledge, 2016.
  • Mies, Maria, and Silvia Federici. Patriarchy and Accumulation on a World Scale: Women in the International Division of Labour. Critique, Influence, Change 6. London: Zed Books, 2014.


  • Sterne, Jonathan. MP3: The Meaning of a Format. Sign, Storage, Transmission. Durham: Duke University Press, 2012.
  • Sterne, Jonathan, and Elena Razlogova. 'Tuning Sound for Infrastructures: Artificial Intelligence, Automation, and the Cultural Politics of Audio Mastering'. Cultural Studies, forthcoming.



War & Policing