General AI – headed towards irrelevance?

Focussed AI is a powerful tool

Focussed AI is a useful tool when applied to closed and verified data, information and texts: it is easy to imagine it summarising – and extracting information from – a curated body of legal texts, archives, interview transcripts, maps or programming routines.

The key is curation. This curation will presumably be performed by humans, because it is humans who can understand the purpose for which AI is being used, and assess which information is relevant – and of sufficient quality – for that purpose. AI, whilst phenomenal at identifying and reproducing linguistic patterns, has no purpose and no ethical, moral, aesthetic or other guiding principles with which to make these decisions.

General AI is like a dog chasing its tail

General purpose AI is unlimited: it attempts to gather information and produce results from everything that is available and that matches the linguistic cues provided by prompts. As such, it treats all information equally, whether verified or not, fake or real.

But let’s assume that some AI developers “teach” their system to discern better quality information. This means that AI will develop, then improve, weights that define and summarise information quality: but since AI will be assessing the relevance of its own inputs, we fall into a dead-end wherein the purpose, ethics, morality and principles governing these assessments are non-existent, replaced by algorithmic pattern recognition, itself resting on obscure machine-developed criteria.

AI synthesising and regurgitating its own creations…

Of course, humans are also biased; they also implicitly weight information sources. But humans (at least most of them) are held responsible for these weights, by law, by custom, by ethics, by methodology (in academia) and by social convention. They are also expected to make them as explicit as possible (if requested to), and lose credibility if they can’t.

Furthermore, there are billions of humans, each applying different weights: through discussion, dialogue, opposing views and argumentation, positions are established, truth claims made, and the contours of different assessments and points of view are derived. If this is becoming more difficult today, it is partly because bots and other artificial text generators have drowned out person-to-person debate.

The four or five major general AI tools currently available, each with its own inscrutable, hidden and evolving biases, produce linguistically clever results that may or may not be hallucinations – and it is impossible to untangle which.

This is potentially dangerous – but less so than climate change, homelessness, autocratic leaders, poverty, or lack of health care – and of no discernible use to society.

What is the purpose of general AI?

There is currently an AI bubble, hyped primarily by those who benefit financially from it: the owners of the technology, their political and business allies, and AI developers.

General AI is of little practical use to most people: it is foisted on them through phones, search engines, and ‘intelligent’ customer service bots, but it is not making life any better, information more reliable, or knowledge more widespread.

Students “write” AI-generated essays – learning absolutely nothing in the process. Professors will soon “assess” these essays using the same tools: AI will be judge and jury, making life easy for students (who will remain ignorant) and professors (who will no longer bother to teach).

Internet searches serve up predigested and usually unsourced platitudes – essentially summaries of the first few pages that a good search engine throws up – typically less informative than a good Wikipedia page.

Useless voice recognition bots patiently, relentlessly, and endlessly run customers in circles as they desperately seek help for real-life personal emergencies from insurers, banks or hospitals.

In a few years, if it isn’t already happening, the main input to AI will be AI’s previous output (and that of other bots lurking on the internet): general AI will march straight towards irrelevance to humans, in the meantime taking over and destroying customer service, music, art and writing.

Alternatively, once AI has become creator and consumer of knowledge and information, both input and output of the “knowledge creation function”, humans may become irrelevant, as they are factored out of the knowledge equation in the name of efficiency. Given the clunkiness of AI, I am not convinced by this argument: AI, after all, only deals in information, not in wires, floods, snow shovelling, building maintenance or electricity generation…

General AI does not address any important issue

I cannot, logically, see how general AI will lead to anything other than a knowledge system which summarises and regurgitates itself (if it isn’t already doing so).

In the meantime, climate change, poverty, authoritarianism, housing crises, ill-health and genocides continue unabated.

General AI is not designed to address any serious issue that afflicts society.

It is designed to increase the market value of companies that control AI, to line the pockets of their owners and acolytes, and to increase the control that they have over information and knowledge – until this knowledge becomes detached from any human thought or input.

Published by Richard Shearmur

I am a professor at McGill's School of Urban Planning. I perform research on innovation, on how we locate work activities (in a world where people often work from many places), and on urban and regional economic geography. I used to work in real estate, and I teach a course on this. I am an urban planner, a member of the Ordre des Urbanistes du Québec and of the Canadian Institute of Planners.
