One of the most popular talks at our recent ‘Future of Assessment Technology’ event came from Sumit Paul-Choudhury, outgoing Editor-in-Chief of New Scientist, the world’s leading science and technology weekly magazine.
Sumit’s talk explored the impact of technology on our lives from a cultural, societal and economic perspective, examining and deconstructing common sentiments across the spectrum of public perceptions and sharing his thoughts on the future of technology in assessment.
“It’s a cliché that we live in times of great disruptive change; the big disruptions only really come along every now and again, but it’s clear that artificial intelligence (AI) and machine learning are among them,” said Sumit.
“These disruptions tend to polarise public opinion, raising the same questions around whether technology is good, bad or neutral. But in truth, it’s none of these things.”
Sumit deconstructed some of the most commonly used statements from these polarised viewpoints, such as ‘information wants to be free’ (“There’s always a price to pay, and if you don’t know what that price is, it will lead you to make poor decisions”), as well as ‘algorithms have perfect judgement’ (“On the contrary, they have no judgement; they do what they’re told to do”).
He also challenged the idea that ‘automation is unstoppable’, pointing to the most frequent search terms beginning ‘AI will…’, which showed apocalyptic sentiments like ‘AI will end humanity’ and ‘AI will take over jobs’ to be amongst the most common.
So what is it that causes that level of angst? Sumit compared today’s public sentiment to that of the 1950s and 60s, when mechanical calculators were used for number crunching at NASA, through to the ‘Deep Blue’ computer that played Garry Kasparov at chess and, more recently, intuitive technology like the AlphaGo computer program – and reminded us that none of these events actually marked an end to human dominance, something that was feared at the time of each.
His conclusion was that technology will not usurp the things that humans are good at; he argued that history has already shown us that we’re more likely to discover how technology can complement our abilities, rather than supplant them.
“What we should focus on is how we use this technology to augment what we already do and make us better at the tasks we perform. The information ecosystem we’re moving into is one of increased use of virtual assistants like Siri and Alexa; this is the environment young people are now growing up in.”
He also explored the concept of automated ‘assessment surveillance’ and its potential to allow all sorts of people to be educated and have their learning accredited without sitting an exam – racking up credentials over a lifetime and being recognised for a highly specific set of skills.
“Whatever technology-driven developments we see in education and assessment, and whatever the sentiment is around these developments, the crucial question to ask ourselves is ‘what is education anyway, and who is it for?’ And when we start to think about the way technology is rolled out in the education sector, or any sector, that’s what should be at the forefront.
“In the future, exams won’t take place in the way I experienced them – but people will always have to prove they’ve learnt something, and therefore assessment is here to stay.”
You can watch Sumit Paul-Choudhury’s full talk ‘How should we think of technology?’ below: