A leading academic expert on Artificial Intelligence set out both its huge potential for education and some of its looming pitfalls.
Rose Luckin, Professor of Learner Centred Design at UCL’s Knowledge Lab, took as her starting point in the Senior School lecture assembly the need for deep human understanding: “AI is the inter-disciplinary study of intelligence – if we don’t understand intelligence, we can’t automate it.”
Together with educationalist Sir Anthony Seldon and entrepreneur Priya Lakhani, she is one of the leaders of the new Institute for Ethical AI in Education. Headmaster Neil Enright was among those who attended its launch this month at Speaker’s House in the Palace of Westminster.
During her lecture at QE, she highlighted ways in which Artificial Intelligence might mitigate an impending global shortage of teachers – an estimated 69 million more will be needed by 2030. It could, for example, be used in teaching larger groups, releasing human teachers to focus on particular aspects of the curriculum with particular children.
However, Professor Luckin’s work takes in not only how AI can be used to assist human education, but also how education itself may need to change in response to the new technology. And in her lecture to the boys and staff, she said that, since AI can learn information faster and more accurately than humans can, there is a need to move beyond a focus on subject knowledge. This, she acknowledged, was already being done at QE, with the School’s emphasis on skills such as problem-solving and on synthesising and understanding the meaning of data.
She pointed to some of the ethical issues presented by the new technology. AI is built upon “big data”, she told the assembly, and it was not only in the area of data security that there were concerns, but also in how representative that data is. There have been cases where AI has delivered skewed results, such as facial recognition only recognising certain ethnicities, or has shown a gender bias in its decisions. “We need to be appropriately sceptical,” she said – careful about what is automated and ensuring that companies and technologies are held to account. “We need detailed explanatory answers when being presented with a seemingly nice solution to something.”
There were specific issues in education which AI was particularly well-suited to tackle: speech recognition might be deployed to help people with disabilities, she said, noting that Google has predicted that developments in speech recognition will be more significant than driverless cars. Yet doing so was no easy matter, because of the ways in which voices change.
In a question-and-answer session with the boys after the lecture, Professor Luckin delved into: issues of AI and consciousness; understanding what knowledge is and where it comes from; the need for AI that can explain its decisions; and how the education sector should be engaged in the development of the technology. She also explained the importance of inter-subjectivity in teaching and learning to make the best use of AI – that is, achieving the right blend between human interaction and machine-learning.
In thanking Professor Luckin, Year 13 pupil John Tan said: “Whilst we live in a society characterised by technology and technological advance, her talk emphasised the importance of the human connection in education.”
In addition to her work in education, Professor Luckin is working with the Department of Health on a project, commissioned by current Foreign Secretary Jeremy Hunt in his previous role as Health Secretary, into how AI will affect and can help the NHS.
A copy of Professor Luckin’s book, Machine Learning and Human Intelligence, which was published in June, was donated to The Queen’s Library.