robbie fordyce, phd
based in naarm/melbourne
educator + researcher
dog-owner + terrible tweeter
about this site
i’ve made this front page to introduce myself and my area of research, and to provide some short summaries of my work in less-academic language.
if you’re a grad student, or if you’re interested in doing a phd with me, read on. there are some useful links to job-seeking resources for grad students that i’ve found over the years.
what’s on this page:
- about my work
- resources for research student/ECR career guidance
- research groups
- recent publications
about my work
my work is focused on the analysis of digital platforms, politics, and social life. i study how modern firms, the public, and governments affect each other through digital services. i look at both macro level effects (including political advertising) and micro level effects.
i’m particularly interested in discovering commonalities in use and implementation across both professional platforms (such as Salesforce) and social media systems (such as LinkedIn and Meta).
since mid-2020 i have been focusing on entertainment services (such as Netflix and Steam) due to their increased global uptake during the pandemic. this has meant looking at how these platforms shape what kinds of content are available to people, and what digital services lead people from engaging with one piece of content (such as a film, tv series, or video game) into engaging with another, especially across platforms.
i use both quantitative and qualitative methods. i have a personal love of philosophy and critical theory, but my day-to-day approaches are primarily driven by data science techniques, interface and software studies, archival research of databases and patents, and studies of technological infrastructure.
my institutional profile has my contact information for supervision or media inquiries. you can also contact me using the same username at protonmail.
- research groups on this site:
- automated society working group – we study the role of automation within social settings
- platform pedagogies – we study the relationship between digital platforms and education
- culture media economy – this is a bunch of aligned scholars who study how media tech influences material aspects of society, including wealth and value, and social and community practices.
summaries of some recent publications
this section covers my research outputs.
i’ve provided some summaries that give a sense of what the articles argue and achieve, which i think is more interesting than just a list of titles. i’ve listed a few recent publications below. for pre-prints and links to my other work, visit my google scholar page.
my articles are divided into three sections:
- platforms and their politics and ethics
- video games and their political-economic impacts
- disciplinary commentaries
platforms and their politics and ethics
I have a detailed digital/data ethics explainer post here for context.
the limits of applying the hippocratic oath to data science (2022)
This paper was a genuine pleasure to write with two of my dearest colleagues. I even went so far as to meme this paper. The paper itself responds to an idea that’s circulated in regulatory circles suggesting that the solution to data ethics issues is a kind of Hippocratic Oath. The core concept is to adapt the Hippocratic Oath from medicine in order to make sure that data science and ‘big data’ companies are more ethical. The concept would seem to be “if we ensure that the people doing the science don’t do anything bad, then we’re killing the problem at its source”. These sorts of discussions often focus on the mantra of “Do No Harm” as the core principle. You can see a range of news outlets covering this idea, including Wired, Forbes (albeit written by an independent consultancy), and the Guardian, among others. The idea also has support closer to the regulatory space, including from the EU’s European Data Protection Supervisor Giovanni Buttarelli (who sadly died a few months after his comments) as well as EU Commissioner Věra Jourová.
The thing is, we don’t think the Hippocratic Oath will work for data science, for quite a few reasons.
The crux of our position is that the ethics of medicine’s Hippocratic Oath are deontological rather than utilitarian in nature. Deontological ethics is generally associated with Kant and focuses on duty and virtue above all else, where one acts in the manner they believe should be applied in all cases. Utilitarian ethics is the process of working out what decision to make through a sort of ethical calculus, and is generally associated with Jeremy Bentham and John Stuart Mill. Both traditions have seen wild variations and major developments since Kant, Bentham, and Mill. Apologies to any moral philosophers in advance, but our shorthand is that we see deontological ethics as being effective in n=1 contexts, while utilitarian ethics is effective in n=many contexts.
This might seem like a somewhat arcane and academic observation, but it effectively means that the physician is beholden to the patient as a duty of care. They are working out what is best for the patient at that time and in that moment. Despite some critiques of the way that medical consent is managed, the process operates with a principle of informed consent and participation by the patient, who is given a sense of what the costs and benefits are for accepting or refusing care. In comparison, a utilitarian ethics would involve dealing with 200 patients and working out which ones to save and which to let die (or, less dramatically, which ones to help first). Triage is governed by other ethics and other economics of care.
So, consider the phrase that is often focused upon for adopting the Hippocratic Oath: “Do No Harm”. Despite the fact that this phrase does not appear in any of the modern Hippocratic Oaths we could find (either the Lasagna version of 1964, or later modifications), this is not feasible under a utilitarian ethics. In any situation other than a perfectly equitable division of resources to a perfectly identical set of people, harm will be done. Consider a tool for assigning welfare payments to a group. Some will win, some will lose. Some will receive more, others might be punished less, some will receive a bill. Any kind of data science is reliant on sorting information within a group, of assigning resources or punishments, or targeting some individuals and not others.
At its most literal, the Hippocratic Oath would simply not work even in a situation where all other principles of data ethics could be followed. Even then, the Hippocratic Oath is not the sum total of ethical management in medicine. It’s almost performative relative to the way that regulation actually manages medical practice, and it’s backed up by a major commitment from the practitioner – literally years of highly exclusive professional education and practice with multiple layers of oversight. There is also massive national and institutional regulation, as well as insurance systems and the moral conviction of the medical community.
There’s so much more to say on this, including a history of the Hippocratic Oath, and we do so in the article, linked below. We have some free e-prints that I can provide if anyone would like a free copy.
Mannell, K., Fordyce, R. and Jethani, S., 2022. Oaths and the ethics of automated data: limits to porting the Hippocratic oath from medicine to data science. Cultural Studies, [Advance publication].
data ethics and provenance (2021)
I wrote “Critical data provenance as a methodology for studying how language conceals data ethics” with my good friend Suneel Jethani. In this article, we argue that more work needs to be done to ensure that any dataset has a record of its origins built into it. Datasets can be bought and sold, obtained, released, captured, aggregated, triangulated, re-identified, stumbled upon, leaked, illicitly obtained, or whatever. We think that data can be more ethical if we keep track, within the dataset or database itself, of how the data was created. This means that people can know what appropriate uses of the data are, what was consented to, and – if the data was obtained illicitly or obliquely – some history of the data’s carriage and transactions is retained in its records.
We see this as contributing to the ethics of data provenance. Data provenance is an idea that we develop from the work of Peter Buneman, Sanjeev Khanna, and Wang-Chiew Tan. These authors present ‘where-provenance’ as a question of correspondence between a datum and the thing that it references. This is a question of accuracy, and they point to the importance of technical measures for ensuring the validity and reliability of data. We think that data ethics can add to this idea of ‘where-provenance’ by incorporating data that explains the origins of the data not in terms of validity or reliability (though these are important) but in terms of how it was justified ethically, legally, or discursively. To put it simply, we think that data gathered within a dataset should have transparent, human-readable information about the clauses or processes that led to its creation.
There are all sorts of implicit and explicit justifications and mechanisms at play that lead people to provide data to someone. Sometimes these are clearly laid out at the moment of capture, such as in the plain language statements of some university research ethics clearances. Some data is less clear in relation to how its capture is justified, such as the terms and conditions of commercial websites. Some data capture is even less clear; for instance, when someone ‘posts’ or ‘likes’ material on social media, it may not be very clear to them how their post or like will influence the creation of an advertising profile or information about their susceptibility to political influence campaigns. Finally, we have dark patterns and illicit, illegal, or incidental capture, where data is created without any kind of clear communication to the user, such as zombie cookies, Facebook’s pixel tracking methods, digital fingerprinting, or other methods.
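To make the proposal concrete, here is a minimal sketch of what a dataset carrying human-readable provenance alongside its records might look like. The field names and structure below are my own illustrative inventions, not a standard or anything from the article itself:

```python
# A hypothetical dataset that records, in human-readable form, how its
# data was obtained, what uses were consented to, and its transfer
# history. All field names here are illustrative, not a standard.
dataset = {
    "records": [
        {"user_id": "u001", "likes": 42},
    ],
    "provenance": {
        "origin": "exported from a social media 'like' log",
        "capture_mechanism": "platform terms and conditions (illustrative)",
        "consented_uses": ["aggregate research"],
        "transfer_history": [
            {"from": "platform", "to": "research team", "date": "2021-03-01"},
        ],
    },
}

def consented(dataset, proposed_use):
    """Check a proposed use of the data against the recorded consent."""
    return proposed_use in dataset["provenance"]["consented_uses"]

print(consented(dataset, "aggregate research"))   # True
print(consented(dataset, "political targeting"))  # False
```

The point of the sketch is that because the provenance travels with the data, anyone who later buys, aggregates, or stumbles upon the dataset can read what its creation was justified by, rather than having to reconstruct that history externally.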
Fordyce, R. and Jethani, S., 2021. Critical data provenance as a methodology for studying how language conceals data ethics. Continuum, 35(5), pp.775-787.
boredom and streaming video / games (2021)
I wrote about the Bandersnatch game on Netflix, where I suggested that the game itself was essentially just an extension of the browsing interface for the films and shows on Netflix. While this is partly about a game, it’s also about the Netflix platform itself. The game only allows you to choose from a range of limited options, and at some points you have to repeat stuff over and over again to get to the point where you can continue the story. Each fork in the path of the overall narrative for Bandersnatch is presented to you as a simple choice – the choice to do one thing or the other, usually, but never questioning the idea that you might not want to keep scrolling.
Sometimes the choices are boring and limited. In a way each choice you can make within Bandersnatch is a microcosm of a new story. Like browsing Netflix, nothing about the game is particularly exciting, and I think this signals something interesting about how we tolerate boredom on digital platforms.
I wrote this with my friend Tom Apperley, and it was published in Reading Black Mirror as Exhausting Choices.
Fordyce, R. and Apperley, T.H., 2021. Exhausting Choices. In Reading »Black Mirror«: Insights into Technology and the Post-Media Condition, pp.75-87. transcript Verlag.
games and political economics
games and the future (2021)
When we tell stories about what the future might hold we are often constrained by our present circumstances. If we look back to the way that different generations and cultures have tried to present what the future looks like, they change remarkably. Cyber-punk, diesel-punk, and solar-punk are all moments where we can see different values about what authors think needs to change, or what might happen if we don’t. They’re very much tied up in the era in which they were produced. The future that people dreamed about in the 1890s was very different to what people dreamed about in the 1980s.
In 2005, William Uricchio argued that we could use games to explore how history works, and that maybe we could use the simulated, chaotic, and random aspects of game environments to explore what history could be. I’ve inverted Uricchio’s idea to suggest that we could also use games to explore what the future could hold, and maybe (maybe) break out from some of the material or social limits to what we can do in the future. It’s published here: Play, History and Politics: Conceiving Futures Beyond Empire.
Fordyce, R., 2021. Play, History and Politics: Conceiving Futures Beyond Empire. Games and Culture, 16(3), pp.294-304.
esports and children’s rights (2021)
It will surprise no one that esports became extremely popular during the early phases of the COVID-19 pandemic. I contributed an overview of the esports environment with a number of legal scholars, looking at the role of children’s rights and agency during a global pandemic.
I wrote this with my friends Valerie Verdoodt, Lisa Archbold, Faith Gordon, and Damien Clifford in Esports and the Platforming of Child’s Play During covid-19 for The International Journal of Children’s Rights.
Verdoodt, V., Fordyce, R., Archbold, L., Gordon, F. and Clifford, D., 2021. Esports and the Platforming of Child’s Play During covid-19. The International Journal of Children’s Rights, 29(2), pp.496-520.
media and time (2021)
In 2021, I was fortunate enough to finally have my annotated bibliography published in the Oxford Bibliographies project. This is a list of articles, books, and resources that I think are particularly useful for academic research into this area. You can find it here: Media and Time.
Fordyce, R., 2020. Media and Time. In Oxford Bibliographies online. Oxford University Press.