Posted tagged ‘e-assessment’

Challenging Assessment: What did the Fourth Biennial EARLI/Northumbria Assessment Conference mean by that?

October 8, 2008

There was a good blend of papers and delegates from 18 countries at this conference, which was set in Potsdam beside a beautiful lake. Most of the delegates stayed in the same hotel, which made time for informal discussions that were most enjoyable, and gave us shelter during a rainy August.

As at any conference there was a hot topic: assessment for learning, and in particular how to provide useful feedback, was on many people’s minds. The term feedback itself was in dispute, with some researchers favouring the term feed forward (Dai Hounsell), but perhaps what was more interesting was not the terminology but the appropriateness and timeliness of the responses given by tutors to students. I was delighted to see that researchers such as Margaret Price, Karen Handley and Berry O’Donovan in their paper on feedback, together with Sue Bloxham and Liz Campbell from the University of Cumbria, were emphasising that feedback should be seen as embedded within an ongoing dialogue. Their presentations are available in the book of abstracts. This notion of promoting a dialogue between student and tutor is one which Stuart Watt and I have been progressing with our development of OpenMentor and Open Comment. Both these systems provide automated feedback to the user and let them know what they didn’t know!

The three Plenary Lectures provided three different vignettes of current thinking about assessment. Eckhard Klieme discussed a number of pertinent issues impinging upon educational measurement, of particular interest to all those involved in PISA testing. His presentation was entitled ‘Assessment, grading and instruction: Understanding the context of educational measurement’. Of particular interest was Ruth Leitch’s topic, which informed us about children’s rights, as enshrined in European law, with respect to the assessment process. It was Dylan Wiliam’s Plenary which closed the conference, aptly named ‘When is assessment learning-oriented?’ He interestingly challenged the notion of formative assessment per se and has devised a new definition with Paul Black, which they hope will be published in a paper in 2009. They state:

 “An assessment functions formatively when evidence about student achievement elicited by the assessment is interpreted and used to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions that would have been made in the absence of that evidence.”

The emphasis here, of course, is upon instruction, but shouldn’t a definition include a subsequent action from the student?

This conference did not solely address forms of electronic feedback and assessment, but participants whose papers focused on this area were invited to submit their contributions to a special issue of the British Journal of Educational Technology, which I am editing. It should be available in March 2009, Volume 40, No. 2. So watch this space!


New tools or new ways to check it out?

June 19, 2008

This week Martin Weller has been commenting on Brian Kelly’s statement that the OU is the most popular university on Facebook. One of the contributing factors to this success, Martin believes, is the tool set which he has developed with Liam, Tony and Stuart. These tools, such as the Courses Profile, My OUStory and Study Buddy, help you check out other students on the course, find someone to work with and share stories. So there are some levers to assist with building relationships and finding other folk to “check out” with.

Creating opportunities for dialogue is an essential part of the learning process and is widely accepted as a suitable activity within constructivist theory. However, formal assessment has consistently failed to follow through with this more recent thinking, because it pits the desire for improved constructivist learning against the demand for institutional and external reliability and accountability. So the official “checking it out” procedures have not kept pace. Or have they?

I was very interested to discover this week the notion of the “peer exam”, which was developed by Vera John-Steiner. With the “peer exam” (which is more reminiscent of andragogy than pedagogy), the students choose a topic they wish to explore which is meaningful to themselves and their colleagues. Two to four students can take part in this process. They work out the questions they will use and prepare for the exam by reading around the topic and talking to one another. The students tape the examination session and write up the process as well as the results of their deliberations. Although this type of examination feels uncomfortable to begin with, it has had a profound effect on the students, some of whom have adapted the process to their own teaching. John-Steiner based the “peer exam” on Vygotsky’s theory of learning, which emphasises the social nature of knowledge acquisition. So what can we take from this? Perhaps it’s best to check it out together?



Checking it out together: preparing for a “peer exam”

Checking it out: isn’t that what we do when learning?

May 12, 2008

I have been trying to pluck up the courage to start a blog for a while now and have been chewing over a few ideas. These include some of my ruminations over the writing of my book on Digital Discourse, which I am sure will appear in later posts, but in the end I decided to go for the big challenge of crystal ball gazing! This piece was prompted after presenting a paper for the Grand Challenges for Computer Research in the Learning for Life strand.

Presenters were asked to think about how we could exploit technology to sustain lifelong learning and how we can support learners’ changing needs in the future. Quite a task, don’t you think? So I got out my crystal ball and started to consider the sorts of issues that I am dealing with in e-assessment, and then thought it better to move on to the social networking tools and e-learning communication systems of today that we can learn from to meet the challenges of tomorrow.


 I would suggest that one of the common activities is that of ‘checking out stuff’.  So what do I mean by that?

Let’s take the example of Facebook. Users can check out the following:

·         Checking whether their friends are online and what they are doing; Twitter too assists with this

·         Arranging meetings/dates

·         Seeking advice about places to visit, restaurants etc.; there are applications such as TripAdvisor’s Local Picks to assist

·         Appraising reading lists and looking for a good read using Visual Bookshelf

If folk continue to ‘check out’, they do so because there is a response, and new tools keep appearing in places like Facebook for friends and colleagues to record their opinions on a range of phenomena, opinions which are then open to others.

This checking out also takes place in learning, and a number of models have been developed which explore conversational relationships in education, e.g. Laurillard (1993) and Pask (1976). (Unfortunately I cannot find a free copy of Pask’s 1976 paper online. Can anyone help with this?)

Technology can play an essential role here in providing the feedback which facilitates the exploration of new roles in this new ‘check it out’ arena. In fact, electronic feedback systems that support student interactivity have proved successful in providing audio feedback that explains the reasoning behind correct and incorrect choices in a set of ‘check it out’ questions. I have found this strategy very useful in the Open University’s Science Foundation Course (S103), where all this information would not be provided in face-to-face tutorials. This is because tutors would only have time to explain the correct solution to a problem and go through the workings of that solution, but would not necessarily describe why each of the incorrect answers was wrong.
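The essence of this kind of per-option feedback can be sketched in a few lines. This is a hypothetical illustration only: the question, options and explanations below are invented, and no real S103 material is reproduced.

```python
# A minimal sketch of per-option feedback for a 'check it out' question.
# The question, options and explanations are invented for illustration.

QUESTION = {
    "stem": "Which planet orbits closest to the Sun?",
    "options": {
        "a": ("Venus",
              "Incorrect: Venus is the second planet out; its brightness "
              "often leads people to confuse it with Mercury."),
        "b": ("Mercury",
              "Correct: Mercury has the smallest orbit around the Sun."),
        "c": ("Mars",
              "Incorrect: Mars is the fourth planet, orbiting outside Earth."),
    },
    "answer": "b",
}

def feedback(question, choice):
    """Return (is_correct, explanation) for the option the student picked,
    so the reasoning behind every choice, right or wrong, can be shown."""
    _name, explanation = question["options"][choice]
    return choice == question["answer"], explanation
```

The point of the design is that every distractor carries its own reasoning, not just the correct answer; in a real system the explanation might of course be delivered as audio rather than text.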

Therefore I think we should exploit the technology not only to support just-in-time checking out for learners, but also to use the feedback in these systems to help shape users as independent thinkers, making their own judgements and decisions about their learning process. This information will in turn assist them to make choices about their own futures. One way to achieve this outcome is to build new feedback systems which facilitate novel roles for learners, fit within a constructivist perspective and meet the demands of the Leitch Report (2006), which proposes a 45% expansion in Higher Education by 2020. Dearing, in his speech to the Society for Research into Higher Education’s Governing Council in February 2008, believed this figure to be unrealistically high, but stressed the need for such an expansion in order to achieve the same goals he set out in his 1997 report, which were to:

·         ‘serve the needs of an adaptable, knowledge based economy at local, national and regional levels

·         play a major role in shaping a democratic, civilised, inclusive society’  (Dearing, 1997)

New feedback systems are therefore required to assist learners to make choices about their own futures and how these sit within a challenging and changing economy.

Moving Forward

The race has been on in the United States since the mid-1960s to decode free text entry of students’ work for automatic essay grading. E-rater (ETS) and the Intelligent Essay Assessor (IEA) are currently in use; the latter uses Latent Semantic Analysis (LSA). The strength of both these systems is that they can be used to give constructive feedback. IEA has been described as a psychologically sound system, as it is held by some of its proponents to be a theory of language use in people, and indeed it does seem to match several significant psycholinguistic effects (e.g. Landauer et al., 1997).
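For readers unfamiliar with LSA, the core idea can be illustrated in miniature. This is a toy sketch, not how IEA is built: three hand-made “documents” stand in for a training corpus, and a truncated singular value decomposition projects them into a low-dimensional space where similarity can be compared.

```python
import numpy as np

# Toy illustration of Latent Semantic Analysis (LSA).
# The documents are invented; real systems train on large graded corpora.
docs = [
    "the cell membrane controls what enters the cell",
    "the cell membrane regulates transport into the cell",
    "the sun is a star at the centre of the solar system",
]

# Build a term-document count matrix (rows = words, columns = documents).
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Truncated SVD projects each document into a k-dimensional "semantic" space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one row of coordinates per document

def cosine(u, v):
    """Cosine similarity between two document vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

In this reduced space the two biology sentences sit close together while the astronomy sentence sits apart, which is the property essay-scoring systems exploit: a student essay can be compared against vectors for graded reference essays rather than matched word for word.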


The problems associated with the evaluation of free text entry responses have also been concentrating the minds of a group of researchers at The Robert Gordon University and myself. We are currently working on “Open Comment”, an open source text recognition question type for Moodle, built to facilitate free text entry for students answering History and Philosophy questions in a series of formative assessments. Open Comment implements a constructivist and supportive approach to providing feedback, based on the following key principles:

  • There is no ‘right’ answer
  • Any response by a learner will generate a mixture of supportive comments and suggestions for improvement
  • The suggestions for improvement are selected on the basis of identifying markers extracted from the text
  • The mechanisms for identifying markers are flexible, and do not need to depend on keywords in the text itself


The current implementation of Open Comment includes two classifiers: one based on regular expressions (text patterns) and the other using support vector machines, a classifier based on machine learning. This approach is proving successful and offers one path of investigation for a ‘lifelong learning check it out’ system.
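The marker principle behind this design can be sketched briefly. To be clear, this is not Open Comment’s actual code: the patterns and comments below are invented, and the machine-learned classifier is stubbed out; it simply shows how regular-expression markers can trigger a mixture of supportive comments and suggestions without depending on answer keywords.

```python
import re

# A simplified sketch of marker-based feedback selection: each marker is a
# regular expression paired with a comment. Patterns and comments invented.
MARKERS = [
    (re.compile(r"\bbecause\b|\btherefore\b", re.I),
     "Good: you are giving reasons for your claims."),
    (re.compile(r"\balways\b|\bnever\b", re.I),
     "Suggestion: absolute claims usually need qualification; "
     "consider possible counter-examples."),
]

def ml_classifier(text):
    """Placeholder for a machine-learned (e.g. SVM) classifier; abstains here."""
    return []

def comment_on(answer):
    """Collect the comments whose markers fire on the student's free text."""
    comments = [c for pattern, c in MARKERS if pattern.search(answer)]
    comments += ml_classifier(answer)
    if not comments:
        # There is no 'right' answer, so even an unmatched response
        # receives a supportive prompt rather than a mark.
        comments.append("Try expanding your answer and explaining your reasoning.")
    return comments
```

Note how this honours the principles above: every response generates some feedback, and the markers react to features of the argument (reason-giving, overgeneralisation) rather than to subject-matter keywords.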


Any developments in this area will need to adopt an interdisciplinary approach, and this strategy will be required to tackle the problems associated with producing new feedback tools for learners. Learners are often checking things out because of an all-pervasive assessment regime, and changes in feedback-for-learning systems require changes at levels ranging from the minutiae of diagnosing student progress on a specific task to influencing political decision making. We will certainly need a number of partners with expertise in areas such as:

·         The design of information /feedback tools

·         Latent semantic analysis and associated areas of expertise

·         Innovative e-assessment

·         Educational evaluation and theory

·         Mobile learning

·         Curriculum design and skills based courses in Higher Education

·         User modelling

·         Open source development of tools for enhancing learning

But what do you think? Where can we go with this? Perhaps the ideal would be when the student uses the system, has confidence in it, and believes the feedback will help them progress and do better!