I thought the best place to find an answer to this question was the 12th CAA International Computer Assisted Assessment Conference, held at Loughborough University on 8-9 July 2008. I was pleased to find four papers of particular interest which, I believe, have moved the technology on to address issues of automated feedback and guidance to the student. I would, therefore, like to tell you something about them. So here we go!
Trevor Barker’s paper ‘Computer Adaptive Testing in Higher Education: The Validity and Reliability of the Approach’ was based on a six-year study into the design and evaluation of a computer-adaptive test (CAT) used with Computer Science undergraduates at the University of Hertfordshire. The statistical findings reveal that the CAT was able to match the difficulty of the test to the ability of the students. More importantly, the validity and reliability of the CAT approach stood up well against other forms of computer-assisted assessment. This is an important finding for developers, especially those concerned with PISA and TIMSS testing, where there are real concerns about the validity and reliability of all test items, but more especially of questions administered via an electronic medium.
Another group of researchers, led by Pete Thomas, has been investigating the ‘Automatic Assessment of Sequence Diagrams’. They have found that the computer marks the students’ entity-relationship diagrams more reliably than the tutors! The significance of this group’s system is that it not only detects the students’ syntax errors but also provides them with constructive feedback, which is itself in the form of a diagram, so the output to the student matches the form of their input. I believe this to be a really important point: feedback must be meaningful to the student and should need little decoding if the advice is to be acted upon.
I must confess to having a vested interest in the third paper, as it was written by Stuart Watt and myself and tells the story of ‘Open Comment’, an automatic formative assessment guidance tool for History students. This work addresses one of the issues Andrew Brasher and I raised when we devised the JISC Roadmap for e-assessment: the challenge of providing students with interactive tasks that support free-text entry and give immediate feedback. Although Open Comment was designed to be used within the Moodle environment, it has an open and flexible framework, and there should be no significant difficulty in adapting or embedding it in other formative assessment systems.
The prize-winning paper that I want to draw your attention to is the one authored by Alison Fowler from the Computing Department of the University of Kent, entitled ‘Providing Effective Feedback on Whole-Phrase Input in Computer-Assisted Language Learning’. Aliy has built the LISC system, which is language independent and does not rely on parsing the input text for errors in order to provide the student with effective feedback. Instead it takes the novel approach of sequence comparison, a method that has not been applied to language learning in such a systematic and rigorous way before, although it has proved successful in biology and in chemistry with respect to gas chromatography.
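To give a flavour of what whole-phrase sequence comparison can mean in practice, here is a minimal sketch of my own, and not the LISC algorithm itself: it aligns a student's phrase against a model answer using Python's standard difflib and turns the differences into simple feedback messages. The feedback wording and the example phrases are purely illustrative assumptions.

```python
import difflib

def phrase_feedback(student: str, target: str) -> list:
    """Compare a student's phrase with a model answer, token by token,
    and turn the aligned differences into simple feedback messages."""
    s_tokens = student.split()
    t_tokens = target.split()
    matcher = difflib.SequenceMatcher(None, s_tokens, t_tokens)
    feedback = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            continue  # this stretch of the phrase matches the model answer
        elif tag == "replace":
            feedback.append(
                f"'{' '.join(s_tokens[i1:i2])}' should be '{' '.join(t_tokens[j1:j2])}'"
            )
        elif tag == "delete":
            feedback.append(f"'{' '.join(s_tokens[i1:i2])}' is not needed")
        elif tag == "insert":
            feedback.append(f"missing: '{' '.join(t_tokens[j1:j2])}'")
    return feedback

# Hypothetical example: a short French translation exercise
print(phrase_feedback("je suis alle au la plage", "je suis allé à la plage"))
# ["'alle au' should be 'allé à'"]
```

Even this toy version surfaces where the student's phrase departs from the expected one without any language-specific parser, which hints at why a sequence-comparison approach can remain language independent.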
It was my privilege and pleasure to present Aliy with a bottle of champagne for the ‘Best Paper’ Award at the CAA Conference. She has a free delegate place for next year’s Conference and I look forward to hearing about further developments in the LISC system. Well done Aliy!
Aliy Fowler receiving from Denise Whitelock the ‘Best Paper’ Award at the 12th CAA Conference, Loughborough, 2008