AI Teaching Resources


---------------------
Robbie Fordyce

27 May 2025
---------------------

On the various resources I used in creating my unit, Working with Artificial Intelligence.

---------------------
  01. Overview
  02. Academic and para-academic readings
  03. Reports and cases


01. Overview
Below is the full reading list for the unit Working with Artificial Intelligence.

These have just been pasted in crudely from the document I shared with my students, so some of the formatting has been lost - primarily the italicisation of various works.

The unit is a part of a combined humanities and social science course, and is somewhat activist in its sensibilities. I’m less concerned with the nature of AI as a technology in itself, and more concerned with the impacts of its use and ownership, and its effects beyond the individual. In this, the unit is fairly idiosyncratic and covers my interests, while also acknowledging that there’s a lot it doesn’t cover in depth.

02. Academic and para-academic readings
The readings for each of these themes are listed below in the order of their appearance in the unit. The unit was broken into four themes of three weeks each:

  1. How we got to where we are
  2. How to think about humanity and machinery
  3. How to control AI
  4. How to think radically about AI

Theme 1: How we got to where we are


The first theme provided a context for AI in education and the role and effects of the replacement narrative. It then proceeded to a historical overview of the technical origins of AI, various mythical and real attempts at synthetic life from across human history, and models of thought about the mind/brain that underpin automation.

  • Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019
  • Blanchot, M. (1949/2004). Lautréamont and Sade (S. Kendall & M. Kendall, Trans.). Meridian: Crossing Aesthetics. Stanford University Press.
  • Corbin, T., Dawson, P., Nicola-Richmond, K., & Partridge, H. (2025). ‘Where’s the line? It’s an absurd line’: towards a framework for acceptable uses of AI in assessment. Assessment & Evaluation in Higher Education, 1–13. https://doi.org/10.1080/02602938.2025.2456207
  • Descartes, R. (1637). Discourse on the Method: Of Rightly Conducting One’s Reason and of Seeking Truth in the Sciences (J. Veitch, Trans.). Project Gutenberg. https://ia801906.us.archive.org/17/items/rmcg0001/Descartes-Discourse-a1.pdf
  • Shagrir, O. (2006). Why we view the brain as a computer. Synthese, 153(3), 393–416. https://doi.org/10.1007/s11229-006-9099-8
  • Von Neumann, J. (1986). The computer and the brain. Yale University Press.
  • Searle, J. R. (1990). Is the Brain a Digital Computer? Proceedings and Addresses of the American Philosophical Association, 64(3), 21. https://doi.org/10.2307/3130074
  • Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424.
  • McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4), 115–133. https://doi.org/10.1007/BF02478259
  • Kelty-Stephen, D. G., Cisek, P. E., Bari, B. D., Dixon, J., Favela, L. H., Hasselman, F., Keijzer, F., Raja, V., Wagman, J. B., Thomas, B. J., & Mangalam, M. (2022). In search for an alternative to the computer metaphor of the mind and brain (arXiv:2206.04603). arXiv. https://doi.org/10.48550/arXiv.2206.04603
  • Richards, B. A., & Lillicrap, T. P. (2022). The Brain-Computer Metaphor Debate Is Useless: A Matter of Semantics. Frontiers in Computer Science, 4, 810358. https://doi.org/10.3389/fcomp.2022.810358
  • Ihde, D. (1983). Technology and Human Self-Conception. In Existential technics (pp. 9-24). Albany: State University of New York Press. http://archive.org/details/existentialtechn0000ihde
  • Crawford, K. (2021). Introduction. In Atlas of AI: Power, politics, and the planetary costs of artificial intelligence (pp. 1-21). Yale University Press.
  • Norton, P. D. (2007). Street Rivals: Jaywalking and the Invention of the Motor Age Street. Technology and Culture, 48(2), 331–359. https://doi.org/10.1353/tech.2007.0085
  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539
  • Yang, F., Goldenfein, J., & Nickels, K. (2024). GenAI concepts: Technical, operational and regulatory terms and concepts for generative artificial intelligence (GenAI). ARC Centre of Excellence for Automated Decision-Making and Society. https://doi.org/10.60836/PSMC-RV23
  • Novykov, V., Bilson, C., Gepp, A., Harris, G., & Vanstone, B. J. (2023). Deep learning applications in investment portfolio management: A systematic literature review. Journal of Accounting Literature, 47(2), 245–276. https://doi.org/10.1108/JAL-07-2023-0119

Theme 2: How to think about humanity and machinery


The second theme focused more on the worker/labourer perspective on what it’s like to use AI, including Bainbridge’s ideas on the ironies of automation, and used Ge Wang’s work on the human-in-the-loop idea as a reference point for the students (who had been complaining that the readings were too hard).

  • Bainbridge, L. (1983). Ironies of Automation. Automatica, 19(6), 775–779. https://ckrybus.com/static/papers/Bainbridge_1983_Automatica.pdf
  • Wang, G. (n.d.). Humans in the Loop: The Design of Interactive AI Systems. Stanford HAI. Retrieved March 24, 2025, from https://hai.stanford.edu/news/humans-loop-design-interactive-ai-systems
  • Binns, R., & Veale, M. (2021). Is that your final decision? Multi-stage profiling, selective effects, and Article 22 of the GDPR. International Data Privacy Law, 11(4).
  • Bull, N. J., Honan, B., Spratt, N. J., & Quilty, S. (2023). A method for rapid machine learning development for data mining with doctor-in-the-loop. PLOS ONE, 18(5), 1–10. https://doi.org/10.1371/journal.pone.0284965
  • Mabrok, M. A., Mohamed, H. K., Abdel-Aty, A.-H., & Alzahrani, A. S. (2020). Human models in human-in-the-loop control systems. Journal of Intelligent & Fuzzy Systems, 38(3), 2611–2622. https://doi.org/10.3233/JIFS-179548
  • Kleinman, D. L., Baron, S., & Levison, W. H. (1970). An optimal control model of human response part I: Theory and validation. Automatica, 6(3), 357–369. https://doi.org/10.1016/0005-1098(70)90051-8
  • Davies, D. (2024). The unaccountability machine: Why big systems make terrible decisions - and how the world lost its mind. Profile Books.
  • Acemoglu, D., & Johnson, S. (2024). Learning from Ricardo and Thompson: Machinery and Labor in the Early Industrial Revolution, and in the Age of AI. Annual Review of Economics, 16, 597–621.
  • Hardt, M., & Negri, A. (2000). “Postmodernisation of production” in Empire (pp. 280-303). Harvard University Press.
  • Pasquinelli, M., Alaimo, C., & Gandini, A. (2024). AI at Work: Automation, Distributed Cognition, and Cultural Embeddedness. Tecnoscienza – Italian Journal of Science & Technology Studies, 15(1), Article 1. https://doi.org/10.6092/issn.2038-3460/20010
  • Nest, M. (2011). Coltan (Vol. 3). Polity.
  • Packard, V., & McKibben, B. (1960). The waste makers.
  • Parikka, J. (2018). Medianatures. ZMK Zeitschrift für Medien-und Kulturforschung, 9(1), 103-106.

Theme 3: How to Control AI


Theme three was intended to focus on policy matters related to AI, but ended up being more about the problems that come with the technology. This included applied ethics, environmental issues, issues of data supply, and exteriorisation. More cannibalism in this section than I’d originally planned.

  • Bucher, T. (2018). Neither Black Nor Box. In If...Then: Algorithmic Power and Politics (pp. 41-65). Oxford University Press.
  • Pasquale, F. (2016). The black box society: The secret algorithms that control money and information. Harvard University Press.
  • Keenan, B., & Sokol, K. (2024). Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann’s Functional Theory of Communication (No. arXiv:2302.03460). arXiv. https://doi.org/10.48550/arXiv.2302.03460
  • Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor (First Edition). St. Martin’s Press.
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
  • Culnane, C., Rubinstein, B. I. P., & Teague, V. (2017). Health Data in an Open World (No. arXiv:1712.05627). arXiv. https://doi.org/10.48550/arXiv.1712.05627
  • Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. Data and Discrimination: Converting Critical Concerns into Productive Inquiry, 22(2014), 4349–4357.
  • Andrejevic, M., Fordyce, R., Luzhou, L., Trott, V., Angus, D., & Ying, T. X. (2022). Ad Accountability Online: A methodological approach. In Everyday Automation (pp. 213–225). Routledge.
  • Deleuze, G. (1988). Spinoza, practical philosophy (R. Hurley, Trans.). City Lights Books.
  • Nietzsche, F. W. (2001). The gay science: With a prelude in German rhymes and an appendix of songs. Cambridge University Press. §341
  • Spinks, L. (2010). Eternal Return. In A. Parr (Ed.), The Deleuze dictionary (Revised edition, pp. 85-87). Edinburgh University Press.
  • Floridi, L. (2018). Soft Ethics and the Governance of the Digital. Philosophy & Technology, 31(1), 1–8. https://doi.org/10.1007/s13347-018-0303-9
  • Powers, T. M., & Ganascia, J.-G. (2020). The Ethics of the Ethics of AI. In Dubber, M. D., Pasquale, F., & Das, S. (Eds.), The Oxford handbook of ethics of AI. Oxford University Press.
  • De Landa, M. (1991). War in the Age of Intelligent Machines. MIT Press.
  • Keyes, O., Hutson, J., & Durbin, M. (2019). A Mulching Proposal (No. arXiv:1908.06166). arXiv. https://doi.org/10.48550/arXiv.1908.06166
  • Crogan, P. (2010). Bernard Stiegler: Philosophy, Technics, and Activism. Cultural Politics, 6(2), 133–156. https://doi.org/10.2752/175174310X12672016548162
  • Cooper, J. M., & Hutchinson, D. S. (Eds.). (1997). Phaedrus (A. Nehamas & P. Woodruff, Trans.). In Plato: Complete Works. Hackett Publishing Company.
  • Stiegler, B. (2016). Automatic society. Polity Press.
  • Brown, T. B., et al. (2020). Language Models are Few-Shot Learners (No. arXiv:2005.14165). arXiv. https://doi.org/10.48550/arXiv.2005.14165
  • Shumailov, I., et al. (2024). AI models collapse when trained on recursively generated data. Nature, 631(8022), 755–759. https://doi.org/10.1038/s41586-024-07566-y
  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Theme 4: How to think radically about AI


The final theme involved the more activisty elements: tracing the politics of AI as a technology that remains ‘up for grabs’, connecting the dots between those involved in AI investment and regulation, and mapping out our histories of technology and politics via the Futurists. The unit ended with a discussion of art and its relationship to politics, before finishing on the science-fiction impetus to convert ourselves into AI post-mortem.


03. Reports and cases
Non-academic materials are grouped by resource type rather than listed in order of use, though within each group they appear in rough order of their coverage in the unit:
  • Reports: Industry perspectives
  • Cases: Past Futures of Automation
  • Cases: Automata
  • Cases: Paintings and posters
  • Cases: Writing
  • Cases: Films
  • Cases: Software
  • Cases: Reportage, video footage, news.

Reports: Industry perspectives



Cases: Past Futures of Automation


  • Keynes, J. M. (1963). Economic Possibilities for our Grandchildren (1930). In Essays in Persuasion (1st ed., pp. 358-373). W. W. Norton & Company.
  • Butler, S. (1917). “Darwin Amongst the Machines”. In The notebooks of Samuel Butler (pp. 42-46). New York, E.P. Dutton. http://archive.org/details/cu31924013448299
  • Marx, K. (1858). Fragment on Machines. In Grundrisse (pp. 690–712). https://thenewobjectivity.com/pdf/marx.pdf

Cases: Automata


  • Jacques de Vaucanson (1739) The Digesting Duck
  • Jaquet-Droz family (1770s) The Draughtsman, The Musician & The Writer
  • John Joseph Merlin (1773) The Silver Swan
  • Pigeon of Archytas (~4th C BCE)
  • Terracotta Warriors 兵马俑 (3rd C BCE)
  • Ismail al-Jazari (الجزري) (1206) The Book of Knowledge of Ingenious Mechanical Devices
  • Leonardo Da Vinci (15th-16th C) Mobile Lion, Robot Soldier, Theatrical bird, Wire-controlled bird.
  • The Golem of Prague (‎גּוֹלֶם) (17th C fable)

Cases: Paintings and posters


  • Fritz Kahn (1926) Man as industrial palace [Poster]
  • Fritz Kahn (1927) The biology of roasting smells [Poster]
  • Géricault, T. (1818-1819) Le Radeau de la Méduse (The Raft of the Medusa)
  • Balla, G. (1913) Velocity of an Automobile
  • Pannaggi, I. (1922) Speeding Train
  • Balla, G. (1910) Street Light
  • Crali, T. (1939) Nose Dive on the City
  • Crali, T. (1939) Before the Parachute Opens
  • Severini, G. (1915) Armored Train in Action
  • Ambrosi, G. A. (1930) Mussolini the Aviator
  • Bonetti, U. (1933) Dux
  • Dottori, G. (1933) Portrait of Il Duce

Cases: Writing



Cases: Films


  • Miyazaki, H. (Director). (1997). Mononoke-hime (Princess Mononoke). Studio Ghibli.
  • Oshii, M. (Director). (1996). Ghost in the Shell. Kôdansha, Bandai Visual Company, Manga Entertainment.

Cases: Software


Cases: Reportage, video footage, news.


NB: a lot of the dates are missing from the citations above, as Zotero didn’t pick them up. Not a major issue, I think, and all are searchable.

You can check out the unit itself here: https://handbook.monash.edu/2025/units/ats3992