AI and universities: teaching how to fake voice and inner life

Voice and authorship are important

A few years ago I took the night train from Amsterdam to Prague, and then on to Budapest. I was reading Imre Kertész's novel, *Fatelessness*. The train ride was eerie, as I was riding south on some of the same tracks that Kertész's character had been taken north on, in terrible circumstances, some 70 years earlier.

It was eerie because I knew that Kertész himself had been deported from Budapest, and that scores of people had been transported to their deaths along the route I was travelling and reading about. It matters that Kertész, and not some AI programme, had given voice to the characters and events in the novel.

Likewise, however well AI (combined with 3D printing) can reproduce the Mona Lisa, it matters that Leonardo da Vinci painted it, sufficiently so that people flock to the Louvre to see it, and that original paintings need protection from art thieves.

Less grand, but just as important, it mattered to me when my daughters confected birthday cards, and it now matters to me that – now older – they occasionally write or phone with brief updates as they spread their wings. However formally informative and lifelike AI diaries and voice syntheses could be, they cannot replace a real chat or text.

Source: Photo by Bilal Furkan KOŞAR: https://www.pexels.com/photo/train-at-night-19657290/
AI: robbing voice

I do not pretend that I could easily distinguish a fake AI novel, painting, birthday card or text from the real thing, though of course I hope I could.

What is terrible, though, is that AI has introduced an era when one can’t discern fake from real. Were I to discover, for instance, that Fatelessness was in fact a clever probabilistic combination of words and phrases concocted by AI, it would lose its significance, and force me to question the intense feelings it evoked during my train journey.

The same goes for art collectors and museums. They go to enormous lengths to authenticate works. But if it is so valuable and important to correctly identify originals, then the fact that AI can easily fake them means that it is robbing us of something important.

Just as it would rob me of something important if I discovered that my daughters' birthday cards and phone calls were AI fakes. I would feel cheated and angry.

Is AI fake?

Some readers may balk at my characterizing AI as fake.

Of course, AI output is not fake when clearly identified as such. AI is fake when it produces text (or images, or songs) that purport to be produced by a human. This is true for art, personal communications and, I think, for academic work.

Authorship – as I have illustrated with the examples above – is important. It imbues a text, image or tune with meaning. This is not trivial: millions of dollars are paid for *original* works of art, manuscripts and notes, because there is inherent value in associating an artifact with its real author. Emotions are invested in *real* communications and interactions with people, because people have value: words or art merely articulate this value; they have little value in themselves.

But surely words have value irrespective of authorship?

Maybe the authorship of manuals and reports purporting to present factual situations is not problematic – though, even for these, the reader needs to know who is responsible for the instructions, advice or analysis. To my knowledge, AI is just a tool: it is not liable for its own output.

Even the value of an academic paper rests, in part, on the reputation and history of the author who, over a career, slowly builds up expertise, a body of work, and a readership that respects them.

Authorship is important:

"I have a dream that one day on the red hills of Georgia the sons of former slaves and the sons of former slave owners will be able to sit down together at the table of brotherhood" (Martin Luther King, 1963).

Would these words have the same resonance and meaning if they were probabilistically cobbled together by AI?

What about universities?

Maybe naively, I believe that universities have a function that goes beyond increasing the quantity and believability of student output. Universities are *educational* institutions, and should also provide an opportunity for young people to discover themselves, find their voice, develop their own ideas, and learn how this can be done.

AI is robbing students of this opportunity.

Although books such as Teaching with AI1 (Bowen & Watson, 2024) provide a decent guide to *how* to teach with AI, they shirk the key question of *whether* one should. The book takes it as a given that AI is here to stay, and describes how it can be used. The erasure of voice and of authorship is downplayed, the argument being that clever prompts are now all that is required for ideas and style to become one's own.

This may be true in a purely functional world, where words are parameters, sentences are equations, meaning is a probability, and style is marketing.

But words carry meaning beyond that of a probabilistic string: they reflect their authors' inner lives, experiences and values (whether the author is a Nobel prize-winner, a run-of-the-mill academic, or one's child).

Universities are now in danger of teaching students how to expertly fake inner life, experience and values.

***

1 This book is recommended reading for McGill profs (see: https://teachingkb.mcgill.ca/tlk/using-generative-ai-in-teaching-and-learning – this page may only be available from within the McGill system)

Published by Richard Shearmur

I am a professor at McGill's School of Urban Planning. I perform research on innovation, on how we locate work activities (in a world where people often work from many places), and on urban and regional economic geography. I used to work in real estate, and teach a course on this. I am an urban planner, a member of the Ordre des Urbanistes du Québec and of the Canadian Institute of Planners.
