Move over, flashcards. QuizBots prove better at testing your memory
Researchers at Stanford have shown that a relatively new technology — a chatbot they dubbed “QuizBot” — can be significantly more effective than flashcards in helping students learn and retain information.
In a study of 36 students learning with either a flashcard app or QuizBot, the team found that students recalled over 25 percent more correct answers for content covered by QuizBot and spent more than 2.6 times as long studying with QuizBot as with flashcards. Both the flashcard app and QuizBot taught information ranging from science and personal safety to advanced English vocabulary, and both used identical sequencing algorithms to select which item to present to students next.
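The article does not describe the sequencing algorithm itself. As a purely hypothetical illustration of what item sequencing can look like, here is a minimal sketch of one classic scheme, a Leitner system, in which missed items return sooner and mastered items are reviewed less often; this is an assumption for illustration, not the study's actual algorithm:

```python
from collections import deque

# Hypothetical sketch of a Leitner-style sequencer, NOT the study's algorithm.
# Items start in box 0; correct answers promote an item one box, misses
# demote it back to box 0. The next item always comes from the lowest
# non-empty box, so the least-known material is reviewed first.
class LeitnerScheduler:
    def __init__(self, items, num_boxes=3):
        self.boxes = [deque(items)] + [deque() for _ in range(num_boxes - 1)]

    def next_item(self):
        for box in self.boxes:
            if box:
                return box[0]
        return None  # nothing left to study

    def record(self, correct):
        # Move the item just shown (front of the lowest non-empty box).
        for i, box in enumerate(self.boxes):
            if box:
                item = box.popleft()
                dest = min(i + 1, len(self.boxes) - 1) if correct else 0
                self.boxes[dest].append(item)
                return
```

Because both apps in the study shared one sequencer, any scheme like this would present the same items in the same order to both groups.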
The longer study time is a key part of why QuizBot works so much better, according to computer science assistant professor Emma Brunskill, who is co-author of the paper to be published in the Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. The team presented its findings May 8 at CHI 2019 in Glasgow, Scotland.
“QuizBot is more conversational and more fun. Students felt like they had a true study partner,” said graduate student Sherry Ruan, who led the study. James Landay, a professor of computer science, was the senior author of the paper.
Chatbots are an offshoot of artificial intelligence in which computer programs interact with humans via text. They are far more common in customer service and e-commerce applications than in classrooms. If you've ever chatted with a customer service rep when contemplating an online purchase, there's a chance you've interacted with a chatbot.
QuizBot is a new spin on the chatbot formula. It poses factual questions via text, much like a teacher. The student can type in answers, ask clarifying questions and request hints. QuizBot comprehends the input and responds conversationally, as if another human were on the other side.
Unlike flashcards, chatbots can recognize near-miss answers and offer additional guidance, and even encouragement, to the student.
One hurdle in designing such an educational system is that the computer must be able to recognize correct answers in various forms. Not all students answer the same way, using the same words or syntax. And, of course, there is always the potential for typos. That’s where artificial intelligence comes in.
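The article does not say how QuizBot's model scores answers. As a rough illustration of the matching problem only, a simple string-similarity check (here Python's standard-library `SequenceMatcher`, standing in for whatever model the team actually used) can already tolerate typos and small variations:

```python
from difflib import SequenceMatcher

# Illustrative sketch only: a string-similarity stand-in for answer grading.
# The threshold of 0.7 is an arbitrary assumption for this example.
def is_close_answer(student_answer: str, correct_answer: str,
                    threshold: float = 0.7) -> bool:
    """Return True if the student's answer is a near-miss for the correct one."""
    a = student_answer.strip().lower()
    b = correct_answer.strip().lower()
    return SequenceMatcher(None, a, b).ratio() >= threshold

is_close_answer("ture", "true")   # the typo still counts as a near-miss
is_close_answer("false", "true")  # an actually wrong answer does not
```

A real system would go further, using a language model to accept paraphrases and different syntax rather than just character-level similarity.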
To test QuizBot’s grading accuracy, the team randomly selected 11,000 conversational logs from their studies. They found that QuizBot was right 96.5 percent of the time — just five incorrect assessments out of 144 questions asked. Of those, one was a typo (‘ture’ for ‘true’) and three occurred because the algorithm penalized answers that were too short but still correct. Only one error came from QuizBot misinterpreting the language.
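The reported figure follows directly from those counts:

```python
# Reproduce the reported accuracy: 5 incorrect assessments out of 144 questions.
total_questions = 144
errors = 5
accuracy = (total_questions - errors) / total_questions
print(f"{accuracy:.1%}")  # 96.5%
```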
The researchers think QuizBot could be the dawn of a post-flashcard world for informal learning. “I think there’s a lot of excitement around chatbots in general, though they aren’t in widespread use in education, just yet,” said Brunskill, who also directs the Artificial Intelligence for Human Impact Lab. “But that should change.”
Stanford contributors include postdoctoral scholar Elizabeth L. Murnane; graduate student Bryce Joe-Kun Tham; and research assistant Zhengneng Qiu. Researchers from Colby College and Tsinghua University in China also contributed.
The TAL Education Group provided funding for this research.