julietteculver.com

H809: Week 11 - More on Research Methods

May 2009

Two more papers to read this week, this time about observing and recording online activity. The first was Davies, J. and Graff, M. (2005) ‘Performance in e-learning: online participation and student grades’, British Journal of Educational Technology, vol. 36, no. 4, pp. 657–63. This examined the question of whether online interaction leads to better grades by comparing the grades of first-year business studies students with the quantity of their VLE activity, as recorded in the VLE logs. Their results could very roughly be summarised as finding little difference in usage between students with high and medium grades, but a more marked difference between students with high/medium grades and those with low/failing grades.
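The kind of comparison Davies and Graff describe could be sketched as grouping activity counts by grade band and comparing averages. This is a rough illustration only: the figures below are invented, not the paper's data, and their analysis was more sophisticated than taking means.

```python
# Invented illustration of comparing VLE activity across grade bands.
# The numbers are made up; they are not from Davies and Graff (2005).
from statistics import mean

# (grade_band, vle_activity_count) pairs - hypothetical data
students = [
    ("high", 120), ("high", 95),
    ("medium", 110), ("medium", 90),
    ("low", 40), ("fail", 25),
]

# Collect activity counts per grade band
by_band = {}
for band, count in students:
    by_band.setdefault(band, []).append(count)

# Average activity per band
averages = {band: mean(counts) for band, counts in by_band.items()}
```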

However, part of our next assignment is to critique this paper, so as I shall have to write at length about it for that, I am going to concentrate on the second paper here. This was Cox, R. (2007) ‘Technology-enhanced research: educational ICT systems as research instruments’, Technology, Pedagogy and Education, vol. 16, no. 3, pp. 337–56. The focus of this paper was to argue for a process-analytic approach to researching learning using fine-grained recordings of students' behaviour as the basis.

The paper illustrated this approach with three case studies:

  1. switchER - software to help teach analytic reasoning skills of the type tested by GRE questions in the US. The package provided support for constructing representations of such problems, allowing the user to switch between different representations. Screen recording software was used here to discover that students performed two distinct styles of representation switching, which the authors referred to as ‘thrashing’ and ‘judicious switching’. The information obtained was used to improve the software by giving feedback on the accuracy of representations and suggesting the student switch representations when appropriate.
  2. Hyperproof - software for teaching first-order logic. By keeping detailed interaction logs for two groups of students attending a ten-week course using the software, the researchers discovered two different learning styles, both apparently equally effective. Some students were ‘translators’, using the software to translate between its two different modalities (graphical and sentential), while others were ‘unimodal’, with a strong preference for one of the modes.
  3. PATSy - software for case-study-based disciplines, especially in the health sciences, enabling students to consult a multimedia database and conduct virtual tests on the patients. In this case the researchers studied pairs of students taking part in ‘task-directed discussion exercises’. Data included software logs, videos of the pairs of students, screen captures, and students’ responses to text-answer questions in the case studies, which were added to the logs. Again, this data was used to improve the software, so that it could monitor students’ interactions and intervene when appropriate.
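One very crude way to pull something like the switchER distinction out of an interaction log would be to count how often a student changes representation. This is my own toy illustration, not the papers' actual analysis, and the event format and threshold are entirely invented.

```python
# Toy illustration: counting representation switches in a time-ordered
# event log. The event format is invented for this sketch.

def count_switches(events):
    """Count how many times consecutive events use a different representation."""
    switches = 0
    for prev, curr in zip(events, events[1:]):
        if curr != prev:
            switches += 1
    return switches

# Each event records the representation in use at that moment
log = ["graphical", "graphical", "sentential", "graphical", "graphical"]
```

A real analysis would of course need timing information and task context to separate ‘thrashing’ from ‘judicious switching’, not just a raw count.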

I enjoyed reading about the studies, but I’m not quite sure exactly what to take away from this paper, other than a reminder that such techniques can be powerful and are worth considering when developing software. It brings to mind usability testing of websites and the work I did on processing Moodle logs to make it easier to see the paths that students have taken through the site. There were a couple of references in this paper though that I think might be worth following up - one by Chi on a method for analysing qualitative data objectively, and the work of Anderson on modelling student behaviour.
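The Moodle log processing I mention was along these lines: turning a flat click log into an ordered path per student. A minimal sketch, with an assumed (timestamp, student, page) row format - real Moodle logs have more fields and vary between versions:

```python
# Minimal sketch of reconstructing per-student paths from a flat click log.
# Row format (timestamp, student, page) is an assumption for illustration.
from collections import defaultdict

def paths_by_student(log_rows):
    """Group time-ordered log rows into a list of pages visited per student."""
    paths = defaultdict(list)
    for timestamp, student, page in sorted(log_rows):
        paths[student].append(page)
    return dict(paths)

rows = [
    (2, "alice", "forum"),
    (1, "alice", "course home"),
    (1, "bob", "quiz"),
]
```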

The notes for this week also included more discussion on the different ways that new technology can change how research is done, and went on to discuss the concept of objectivity, which the notes pointed out is not as clearly defined as you might think. I rather liked the following quote by Phillips included in the notes:

It turns out, then, that what is crucial for the objectivity of any inquiry – whether it is qualitative or quantitative – is the critical spirit in which it has been carried out. And, of course, this suggests that there can be degrees; for the pursuit of criticism and refutation obviously can be carried out more or less seriously. ‘Objectivity’ is the label – the ‘stamp of approval’ – that is used for inquiries that are at one end of the continuum.