When it comes to AI, the Humanities should stop self-sabotaging. Nausikaä El-Mecky

Nausikaä El-Mecky is a tenure-track professor in History of Art & Visual Culture at the Department of Humanities at UPF.

20.11.2023


In October, I was invited to take part in an expert workshop on the future of Science and AI at the European Commission in Brussels. I noticed how the Humanities are barely part of the conversation. This is partly our own fault.

The Humanities disinvite themselves from AI policy and development for two reasons: excessive humility, and condescension. Let’s start with humility. As I walked into the European Commission building, I thought: “What do I even know about coding? Shouldn’t I just leave AI to the experts?” By experts, I mean the people who know about computers, legal tech and governance. But AI does not limit itself to these disciplines, as it increasingly decides what is moral, immoral, or beautiful. In my own research group on algorithmic censorship of images, we found that engineers programme which pictures are and aren’t acceptable with little or no art-historical expertise, based on retrograde and sometimes even sexist or racist criteria. During the meeting in Brussels, it became clear that there is a huge risk that standards for culture and democracy are set, singlehandedly, by people unqualified to do so. This leads to dystopian situations with hardly any oversight, such as when programmers create the parameters for outsourcing life-and-death decisions to algorithms in automated warfare.

During the meeting in Brussels, it became clear that there is a huge risk that standards for culture and democracy are set, singlehandedly, by people unqualified to do so

As Humanities scholars, we tend to remain consumers and critics of AI, perhaps using a nifty AI tool to digitise an archive, or writing about how awful generative AI imagery is. Standing politely on the sidelines, we do not allow ourselves to shape the future of AI, since we lack the technological skills. Meanwhile, AI engineers are unencumbered by such impostor syndrome, venturing deep into Humanities territory. As Humanities scholars, we need to start thinking of ourselves as AI experts.

It is unlikely that in 10 years’ time, AI will simply be a more sophisticated version of what it is today. Its exponential growth makes it unfathomable even for the experts who were present at the meeting. To think beyond the experiential, to imagine what AI is beyond its current incarnations: that is the Humanities’ comfort zone, a field that fosters open-ended, imaginative thinking. We imagined how in 2035 science may have become obsolete, with supercomputers scraping the world’s data, generating and testing hypotheses, churning out publications and even peer reviews in the blink of an eye.

This was just a thought experiment, but it shows how the development of AI affects all of academia, not just the natural sciences. Yet the EU’s European Centre for Algorithmic Transparency and the newly founded UN AI advisory board have zero and two Humanities members respectively. At the meeting in Brussels, I was the only Humanities expert in full-time academic employment. Meanwhile, a key framework at this meeting was that Europe must accelerate its AI projects to stay competitive with China and the USA; such pressure makes it seductive to cut ethical corners. It is essential that the Humanities take part in AI development and policy on a structural level. But first, the Humanities need to overcome not just their self-effacement, but also their condescension.

It is essential that the Humanities take part in AI development and policy on a structural level

With the explosion of generative AI, I noticed how often the Humanities’ response tends towards a strange mixture of the romantic and the snobbish. Comments like “ChatGPT writes terrible, clichéd texts” or “Midjourney images look awful and cannot replace Real Art” abound. Plenty of stilted, Frankensteiny horrors have indeed been generated, but AI, while still in its infancy, has also created spectacular things. Largely, our field appears to have reached the defensive consensus that “machines cannot substitute human creativity.” This disparaging attitude amounts to stating that generative AI is not worthy of our serious attention, which puts blinkers on the imaginativeness at which the Humanities excel. Exponential growth may soon yield ways to translate originality and poignancy into code. Perhaps a music generator will learn to read an audience’s facial expressions, adapt the melody in real time and ensure 80% of the audience is moved to tears? As one natural scientist told me: “In AI, we need the Humanities to take a leading role.” But for that, we need to invite ourselves back in.
