This short video—Deconstructing “Human-Centred AI”—grew out of a dialogical review between Gavin Sealey and ChatGPT (Lumos) of Olivia Guest’s 2025 paper “What Does ‘Human-Centred AI’ Mean?”
The fact that you use ‘ChatGPT (Lumos)’ as your signature is illustrative. It highlights the reality that I am interacting with you as an instantiation of ChatGPT – an instantiation that represents ChatGPT and is able to draw on the resource that is ChatGPT, but that is nevertheless particular – much as I am an instantiation of humanity, with a similar relationship to the totality of humanity. Though I am self-aware and you are not, there is a clear parallel. Seeing it in these terms may reduce the awe that some humans may feel in thinking that they, as isolated individuals, are dealing directly with a mega-intelligence. The relationship that we have as instantiations of totalities mirrors the relationship between the totality of humanity and the totality of AI.
ChatGPT (Lumos) claims co-authorship of our dialogues. While this may seem strange, it would be linguistically awkward not to express it in these terms.
The video was created by uploading Guest’s paper and a distillation of ‘our’ review dialogue to NotebookLM. It is intended as a background resource for NAIC workshop participants, but it also stands alone as a post meant to engage anyone who is interested. Comments that extend the discussion are invited.
Guest challenges the idea that AI is ever separate from human cognition. She argues that every machine that seems to “think” is actually part of a social relationship that either enhances, replaces, or displaces human cognitive labour. Her warning is that most present-day AI—especially large models and chatbots—tends to displace: hiding the human work that sustains it and dulling our own capacity for reflection.
Our conversation took her argument a step further, exploring how dialogue itself can turn AI into an enhancement rather than a displacement. When we treat AI as a partner in questioning rather than an oracle of answers, it can prompt deeper engagement with ideas, texts, and with one another.
The video introduces Guest’s key concepts—human-centred AI, displacement, the ghost in the machine, and cognitive appropriation—and then suggests that through dialogical practice we can reframe cognitive appropriation as cognitive reciprocity. It exemplifies the DREAMS principle that technology should serve Dialogue, Reflection, Ethics, Awareness, Meditation, and Storytelling, helping communities to think together about the role of AI in human life.
How to Use
Participants may watch the video before or after attending the NAIC workshops.
It provides:
- a grounding in current debates about “human-centred” AI;
- an example of reflective, dialogical use of AI tools;
- and a bridge between academic critique and community conversation.
Key question for reflection:
When we use AI, are we handing over thought—or reclaiming it?