Checking it out: isn’t that what we do when learning?
I have been trying to pluck up the courage to start a blog for a while now and have been chewing over a few ideas. These include some of my ruminations over the writing of my book on Digital Discourse, which I am sure will appear in later posts, but in the end I decided to go for the big challenge of crystal-ball gazing! This piece was prompted by a paper I presented for the Grand Challenges for Computer Research in the Learning for Life strand.
Presenters were asked to think about how we could exploit technology to sustain lifelong learning and how we can support learners' changing needs in the future. Quite a task, don't you think? So I got out my crystal ball and started to consider the sorts of issues I am dealing with in e-assessment, and then thought it better to move on to the social networking tools and e-learning communication systems of today that we can learn from to meet the challenges of tomorrow.
I would suggest that one of the common activities is that of ‘checking out stuff’. So what do I mean by that?
Let’s take the example of Facebook. Users can check out the following:
· Whether their friends are online or what they are doing (Twitter also supports this)
· Arrangements for meetings and dates
· Advice about places to visit, restaurants and so on, with applications such as TripAdvisor's Local Picks to assist
· Reading lists, and recommendations for a good read, using Visual Bookshelf
If folk continue to 'check things out', it is because they get a response, and new tools keep appearing in places like Facebook that let friends and colleagues record their opinions about all sorts of phenomena and open them up to others.
This checking out also takes place in learning, and a number of models have been developed which explore conversational relationships in education, e.g. Laurillard (1993) and Pask (1976). (Unfortunately I cannot find a free copy of Pask's 1976 paper online. Can anyone help with this?)
Technology can play an essential role here by providing the feedback that facilitates the exploration of new roles in this new 'check it out' arena. In fact, electronic feedback systems that support student interactivity have proved successful in providing audio feedback that explains the reasoning behind both correct and incorrect choices for a set of 'check it out' questions. I have found this strategy very useful in the Open University's Science Foundation Course (S103), where all this information would not be provided in face-to-face tutorials: tutors would only have time to explain the correct solution to a problem and go through its workings, and would not necessarily describe why each incorrect answer was wrong.
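The sort of per-option feedback described above can be sketched very simply: each choice, right or wrong, carries its own explanation. This is a toy illustration, not the S103 system itself, and the question and wording are invented:

```python
# Sketch of per-option feedback for a multiple-choice question, in the
# spirit of the systems described above: every option, correct or not,
# has an explanation attached. Question and feedback text are invented.
QUESTION = "Which planet is closest to the Sun?"

OPTIONS = {
    "A": ("Venus", "Incorrect: Venus is the hottest planet, but Mercury orbits closer to the Sun."),
    "B": ("Mercury", "Correct: Mercury's orbit lies closest to the Sun."),
    "C": ("Mars", "Incorrect: Mars orbits farther out than Earth does."),
}

def feedback(choice):
    """Return the explanation for the chosen option, whether right or wrong."""
    answer_text, explanation = OPTIONS[choice]
    return explanation

# A wrong choice still yields reasoning, not just "wrong".
print(feedback("A"))
print(feedback("B"))
```

The point of the design is that the learner who picks Venus learns *why* that was tempting but wrong, which a tutor pressed for time would rarely cover.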
Therefore I think we should exploit the technology not only to support just-in-time checking out for learners, but also to use the feedback in these systems to help shape users as independent thinkers, making their own judgements and decisions about their learning. This will in turn assist them to make choices about their own futures. One way to achieve this is to build new feedback systems that facilitate novel roles for learners, fit within a constructivist perspective, and meet the demands of the Leitch Report (2006), which proposes a 45% expansion in Higher Education by 2020. Dearing, in his speech to the Society for Research into Higher Education's Governing Council in February 2008, believed this figure to be unrealistically high, but stressed the need for such an expansion in order to achieve the same goals he set out in his 1997 report, which were to:
· ‘serve the needs of an adaptable, knowledge based economy at local, national and regional levels
· play a major role in shaping a democratic, civilised, inclusive society’ (Dearing, 1997)
New feedback systems are therefore required to assist learners to make choices about their own futures and how these sit within a challenging and changing economy.
The race has been on in the United States since the mid-1960s to decode free-text entry of students' work for automatic essay grading. E-rater (from ETS) and the Intelligent Essay Assessor (IEA) are currently in use; the latter uses Latent Semantic Analysis (LSA). The strength of both systems is that they can be used to give constructive feedback. IEA has been described as a psychologically sound system, as LSA is held by some of its proponents to be a theory of language use in people, and indeed it does seem to match several significant psycholinguistic effects (e.g. Landauer et al., 1997).
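For readers curious what LSA actually does, the core idea can be sketched in a few lines: build a term-document matrix, reduce it to a small number of latent "semantic" dimensions with a truncated SVD, and compare texts by cosine similarity in that reduced space. This is only a minimal sketch of the technique, not ETS's or IEA's actual implementation, and the sample texts are invented; real systems use large corpora and hundreds of dimensions.

```python
# Minimal sketch of the LSA idea behind systems like IEA: project
# documents into a low-dimensional latent space via truncated SVD of a
# term-document (here TF-IDF) matrix, then compare a student answer to
# a reference answer by cosine similarity in that space.
# All texts below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants use sunlight to make sugars from carbon dioxide and water.",
    "The French Revolution began in 1789 with the storming of the Bastille.",
    "Mitochondria produce ATP, the energy currency of the cell.",
]
reference_answer = "Plants capture light and turn it into chemical energy."
student_answer = "Sunlight is used by plants to create energy they can store."

# Term-document matrix over the corpus plus the two answers.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus + [reference_answer, student_answer])

# Reduce to a handful of latent dimensions (real systems use hundreds).
svd = TruncatedSVD(n_components=2, random_state=0)
Z = svd.fit_transform(X)

# Similarity between the student answer (last row) and reference (second last).
score = cosine_similarity(Z[[-1]], Z[[-2]])[0, 0]
print(f"semantic similarity: {score:.2f}")
```

The appeal for feedback is that two answers can score as similar even when they share few exact words, because the SVD collapses co-occurring terms into shared dimensions.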
The problems associated with evaluating free-text responses have also been concentrating the minds of a group of researchers at The Robert Gordon University and myself. We are currently working on "Open Comment", an open source text recognition question type for Moodle, built to facilitate free-text entry for students answering History and Philosophy questions in a series of formative assessments. Open Comment implements a constructivist and supportive approach to providing feedback, based on the following key principles:
- There is no ‘right’ answer
- Any response by a learner will generate a mixture of supportive comments and suggestions for improvement
- The suggestions for improvement are selected on the basis of identifying markers extracted from the text
- The mechanisms for identifying markers are flexible, and do not need to depend on keywords in the text itself
The current implementation of Open Comment includes two classifiers: one based on regular expressions (text patterns), the other a support vector machine, a classifier based on machine-learning technology. This approach is proving successful and offers one path of investigation for a 'lifelong learning check it out' system.
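To make the two-classifier idea concrete, here is a toy illustration (not Open Comment's actual code) of how a regular-expression marker detector and a trained support vector machine might sit side by side. The patterns, feedback comments, training texts and labels are all invented for the sketch:

```python
# Toy illustration of the two kinds of classifier described above:
# (1) regular-expression markers that each trigger a feedback comment,
# (2) a support vector machine trained on example responses.
# All patterns, texts and labels here are invented.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# 1. Regex markers: each pattern maps to a supportive comment or suggestion.
MARKERS = [
    (re.compile(r"\bI think\b|\bin my opinion\b", re.I),
     "Good -- you are offering your own judgement. Can you support it with evidence?"),
    (re.compile(r"\balways\b|\bnever\b", re.I),
     "Be careful with absolute claims; historians usually qualify them."),
]

def regex_feedback(text):
    """Return the feedback comments whose marker patterns appear in the text."""
    return [comment for pattern, comment in MARKERS if pattern.search(text)]

# 2. SVM classifier: trained to spot, say, whether a response gives a reason.
train_texts = [
    "The revolution happened because the harvest failed.",
    "Prices rose, so people became angry.",
    "The revolution happened in 1789.",
    "The king was executed.",
]
train_labels = ["gives_reason", "gives_reason", "no_reason", "no_reason"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_texts, train_labels)

response = "I think the revolution always follows hunger, because bread was scarce."
print(regex_feedback(response))
print(model.predict([response])[0])
```

Note the complementary strengths: the regex markers are transparent and hand-authored, while the SVM can pick up patterns no one thought to write down, provided enough labelled responses exist to train it.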
Any developments in this area will need an interdisciplinary approach to tackle the problems of producing new feedback tools for learners. Learners are often checking things out because of an all-pervasive assessment regime, and changes to feedback-for-learning systems require changes at levels ranging from the minutiae of diagnosing student progress on a specific task to influencing political decision-making. We will certainly need a number of partners with expertise in areas such as:
· The design of information/feedback tools
· Latent semantic analysis and associated areas of expertise
· Innovative e-assessment
· Educational evaluation and theory
· Mobile learning
· Curriculum design and skills based courses in Higher Education
· User modelling
· Open source development of tools for enhancing learning
But what do you think? Where can we go with this? Perhaps the ideal is a student who uses the system, has confidence in it, and believes the feedback will help them progress and do better!