In Defence of A.I.?

In this blog, resident blogger Beth Price reflects on the challenges and possibilities that A.I. poses for the arts and humanities, and asks: are we all doomed?


It was perhaps an understatement to say “there are some strong emotions in the room”. In the conference room of V&A Dundee, a group of Arts & Humanities PhD students had just been told that the reality is we have to accept A.I. and the redundancies it will bring in fields like animation and graphic design. 

From creating “incredible” photo-realistic art in just a few seconds to expertly editing videos guaranteed to go viral, the general vibe in the room was less “wow, look at all the amazing potential of generative A.I.” and more along the lines of something I can’t post on this blog.

Fortunately, there was some reassuring A.I. talk: A.I. in healthcare, A.I. in accessible museums, A.I. regulations… Coupled with the revelation that A.I. can make the interminable process of doing a facet analysis actually straightforward, I found myself wondering: do I actually like A.I.?


In Defence of A.I.

Super Speed for Dull Jobs

Something that A.I. is very good at (and I am not) is processing a huge amount of data to produce straightforward answers. As my supervisor could attest, my secret talent is processing a limited amount of data and producing long-winded answers. I am also not very good at the steady, technical, detail-oriented sort of facet analysis and search term planning that you need to do to set off on the right path with your research.

Fortunately, A.I. is very good at this sort of thing. With pointers from the faculty librarian (who knows far more than I do about searching databases), it was a matter of mere moments before ChatGPT gave me a facet analysis and Library of Congress Subject Headings to start my search with.

Essentially, ChatGPT’s strength is scraping and regurgitating data, and it can do it much faster and in much more depth than a person can. Armed with a list of Library of Congress headings to start with and facets to build from, it took me only a couple of hours to start shaping a reading list. A.I. can be a great tool to start from, and it allows us to skip some of the drudge work.
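For the curious, the search-term-planning step that follows a facet analysis has a simple, mechanical logic: synonyms within each facet get OR-ed together, and the facets themselves get AND-ed. Here is a minimal sketch in Python, using entirely hypothetical facets and terms (not the ones ChatGPT gave me):

```python
# A facet analysis as a simple table: facet name -> list of synonyms/related terms.
# These facets and terms are made-up examples for illustration only.
facets = {
    "concept": ["artificial intelligence", "machine learning"],
    "domain": ["museums", "galleries"],
    "aspect": ["accessibility", "visitor experience"],
}

def build_search_string(facets):
    """OR the synonyms within each facet, then AND the facet groups together."""
    groups = []
    for terms in facets.values():
        groups.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(groups)

print(build_search_string(facets))
# -> ("artificial intelligence" OR "machine learning") AND ("museums" OR "galleries") AND ("accessibility" OR "visitor experience")
```

The combining step is the easy bit; the slow, fiddly part is filling in the facet table in the first place, which is exactly where the A.I. saved me time.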

Enhancing Experiences

A.I. has also been touted as the future of museums and of accessing history from anywhere in the world. From digitised galleries at the Tate in London to personalised narrative paths through a museum for each visitor, A.I. technology is being used to enhance the visitor experience. In Rio de Janeiro’s Museum of Tomorrow, IRIS+ greets visitors, helps guide them through the museum, and can rifle through tens of thousands of data points to answer their questions, from small things like “what is this object?” to more existential things like “what is the point of us?”.

One of the caveats to tech helping museums become more accessible is that an awful lot of the “accessibility” is visually based, and not suitable for people who either can’t use screens and headsets or who have visual impairments. Fortunately, A.I. is also being used by people like Playable Tech.

Founded by people with backgrounds in music therapy and special needs education, Playable Tech have developed BeatBlocks, an app that lets you use building blocks and Lego bricks to make music. To be clear, this is not a sponsored post and I have nothing to do with the team or the product; I just think it’s brilliant.

The genuine inclusivity of A.I.-supported experiences like making music when traditional instruments are a no-go, or seeing historical artifacts in incredible detail when you can’t travel or physically visit a museum is something worth celebrating. Why wouldn’t we want to make connecting with culture and history more inclusive and, dare I say it, more interesting? 

Just don’t mention the metaverse…

Against A.I.

Hyperplagiarism and the Death of Creativity 

Ooh boy, this is a big point. Generative A.I. (the ChatGPTs and Dall-Es of the world), as many have argued, undermines creativity and originality, two fundamentally human traits. Why spend years of your life perfecting the craft of writing impactful words and literature when you can plug a prompt into an A.I. tool and start publishing e-books within the hour? Who cares about the human connection and beauty of art when you can stick a few words into a prompt box and come away with photo-realistic(ish) images of cakes, crochet, and landscapes? Why do anything by hand when you can get immediate, if slightly crap, results with A.I.?

Particularly for those of us who have spent most of our lives arguing why we deserve to make a living with a creative or artistic career, this particular side of the widespread adoption of A.I. is, frankly, terrifying. Thousands of years of human craft and creativity are at stake for the sake of some flashy technology. Philosopher Nick Bostrom put it more powerfully in his 2014 book, Superintelligence (read an extract here):

We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans—scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth. 

A.I. is Coming for Our Careers

A close second to the moral and philosophical fear of our humanity being eroded by technology is the very real threat that generative A.I. will render plenty of creative jobs irrelevant. Anecdotally, the great replacement of designers, animators, illustrators, copywriters, and social media managers is already happening, and for every tech bro gushing over the amazing photo-realism of their “art”, there is an artist seeing their original work fall to hyperplagiarism.

In 2023, SAG-AFTRA (the Screen Actors Guild – American Federation of Television and Radio Artists) and the Writers Guild of America went on strike for months to protect their livelihoods. The collective action and dogged resilience required to stop film studios retaining actors’ likenesses for replication whenever they need to bulk out a film was at once inspiring and alarming; without a powerful union, are people in the creative industries doomed to lose their jobs?

Combined with massive cuts to arts and culture funding in the UK, the future for the creative industries can look bleak. The supposed economic benefits of just using A.I. rather than hiring real, skilled people might be too appealing for the folk at the top to resist. No matter how snazzy it might be to mess around with Dall-E and make your own images at the drop of a hat, the reality is that generative A.I. is eroding livelihoods.

Won’t Someone Think of the Environment?

The final argument against A.I. that I will mention in this blog is its huge environmental impact. In 2019, a study found that training one early large language model (think early-days ChatGPT) produced around 300,000kg of CO2 emissions, the equivalent of flying from New York to Beijing and back 125 times.

Kate Crawford and Vladan Joler’s visual map, “Anatomy of an AI System”, analyses the human labour, material resources, and environmental impact of the Amazon Echo, just one smart home A.I. system with millions of users. At every stage, they found, A.I. relies on exploiting our planet and its people so that we can find out the weather in Sydney in a split second.

Running large language models requires a huge amount of computing power, which means a huge amount of energy. According to OpenAI researchers, the computing power used to train the largest A.I. models has doubled every 3.4 months since 2012. Added to that is the environmental impact of mining for lithium, disposing of electronic waste, and even saving our silly little A.I. pictures on the cloud.

In short, A.I. might just be an environmental disaster waiting to happen. 


To be honest, it was hard to pick just three anti-A.I. arguments, and they feel much more substantial than the pro-A.I. ones. I haven’t even mentioned the concerning use of generative A.I. to produce harmful deepfake images and fake politicians’ soundbites, or the rabbit hole of A.I. being used in academic papers.

As overwhelming as A.I. can seem, particularly if it is threatening your job or your research, it is a phenomenal tool. A.I. in its most basic form – data and maths – has been around for decades, often in very positive ways. We have seen A.I. tools designed to quality-check pastries accidentally improve medical imaging for cancer detection, and A.I. might just help to save the bees! When A.I. is used as a tool rather than as a final product, and when it is properly regulated, it can genuinely benefit us.

The other thing to keep in mind is that the hype around generative A.I. is almost over. Soon enough, the novelty of making images in a few seconds will drop off, just as asking Google “what is the meaning of life” stopped being fun. Across the globe, legal cases are in progress that will set out regulations on plagiarism, and tools are being developed that essentially let artists poison their work against A.I. scraping.

All is not yet lost for human creativity. And perhaps, just perhaps, the real fear that we feel in the face of A.I.’s rapid cultural domination is less a Luddite-esque fear of technological progress, and more a fear of unregulated technocapitalism.

Just food for thought. 


Beth Price is a 1st year PhD researcher in Chinese Studies at the University of Edinburgh. Her research explores nudity and the female body in media, arts and popularised medical science during the Republican Period in China (1911 – 1949) in the context of feminism, semi-colonialism, and a new transcultural medical discourse. Find her other writing, outreach, and community education resources at @breakdown_education on Instagram.
