What does the MIT study mean for teaching and learning?
There is no doubt the introduction of AI is changing the world as we know it. In this article, Mrs Quinn, Assistant Head Teaching and Learning, explores the impact of AI in education.
AI is shifting our landscape, perhaps not in the seismic way the headlines claimed when ChatGPT was launched, but incrementally, permanently and continuously. In the world of education, the initial conversation focused on cheating. How would teachers know if a student’s work was their own? How can we assess students in a world of generative AI?
I believe this is the wrong conversation. If our concern is centred on cheating, our focus is on the ways in which we measure, compare and report on students. Certainly, we do these things. We live in a world of examination results, and it would be naïve to suggest they do not matter. But they are ultimately a tool of comparison and measurement, helpful to potential employers or university admissions tutors.
Learning is not the same as assessment. I think we should focus on the actual learning processes and the development of critical and creative skills, regardless of the accuracy of our measurement. Surely what really counts is an individual’s understanding of a concept; their ability to articulate an idea; their questioning of assumptions; their ability to grapple with gnarly problems. This is why I feel we should pay attention to the recent MIT study[1], published by its main author, Nataliya Kosmyna, despite it awaiting peer review and in acknowledgement of the fact it is one, very small, study.
The study divided 54 young adults into three groups and asked them to write essays using ChatGPT, Google, and nothing at all, respectively. Their brain activity was measured over the course of the writing process. The ChatGPT users had the lowest cognitive engagement and “consistently underperformed at neural, linguistic, and behavioural levels.” Simply put, they displayed low levels of brain connectivity. They couldn’t recall the content of their essays, and they grew progressively lazier, simply copying and pasting by the end. Because they weren’t thinking, they couldn’t engage with, or recall, the content of ‘their’ work.
The next stage was even more interesting. The brain-only group was subsequently given access to ChatGPT, and the AI group had it removed. The initial brain-only group continued to demonstrate high connectivity, recall and comprehension, suggesting that the difference lay not in the tool itself but in how it was used.
Whilst we shouldn’t base educational strategy on one study, there is a very clear distinction between the ways in which students can use generative AI to promote learning and understanding and the ways that will rapidly undermine their cognitive development. If students allow, or ask, the LLM to do the thinking for them, of course they will not learn and progress. By delegating the cognitive processes demanded by learning a new topic, they will not be able to commit facts to their long-term memory. If ‘memory is the residue of thought’[2], the cognitive heavy lifting is the vital element in retaining key information. But this is not an argument against AI. Used well, it could enhance learning. If a pupil inputs a text and asks the AI to produce quiz questions, the pupil is effectively testing themselves, forcing thinking and recollection of information. The AI can even give constructive feedback, with advice on how to improve based on assessment objectives. A pupil might ask the LLM to explain a complex problem, breaking it down into stages, so that it serves as a personal tutor. AI could generate flash cards for pupils to use to test themselves and each other.
The danger lies in delegating thinking. As soon as a pupil asks AI to structure an essay, produce a summary or write the essay itself, they are wasting their time. At the point of handing over the thinking, they are not learning and, if the MIT study is anything to go by, they are reducing their cognitive abilities.
Where does this leave us in schools? We need to talk to pupils about AI: their experiences; the pitfalls; its frequent hallucinations and eagerness to simply make things up. This is also an opportunity to focus on what makes us human: the things that AI cannot replace. The ability to empathise, persuade and listen has never been more important. At Warwick School, we are putting oracy at the heart of our curriculum. According to the EEF[3], pupils who follow an oracy-based learning model make an additional six months’ progress. Employers cite effective communication as the skill they value the most, and yet the one they find most lacking. Developing oracy involves equipping pupils with the vocabulary and confidence to ask questions; to assert an opinion; to reference research and, crucially, to listen with intent. It requires the skills to read and critically analyse complex material, and the confidence to relish the intellectual joy of engaging in thoughtful, respectful discussion and debate. They will make mistakes. They’re human. And that is perhaps the most important message for our young people. Do not delegate thinking, because thinking is what makes you human and, now more than ever, we need our future leaders to be able to think.
[1] https://www.media.mit.edu/publications/your-brain-on-chatgpt/
[2] https://my.chartered.college/research-hub/why-you-should-read-why-dont-students-like-school-by-daniel-t-willingham/
[3] https://educationendowmentfoundation.org.uk/news/what-does-the-evidence-base-tell-us-about-effective-oral-language-practice