julietteculver.com

H809: Week 2

February 2009

This week on H809, we were given two papers to read.

The first was Hiltz, S.R. and Meinke, R. (1989) ‘Teaching sociology in a virtual classroom’, Teaching Sociology, vol. 17, no. 4, pp. 431–46. This paper, written back in the depths of 1989, was about the use of a 'Virtual Classroom', a precursor to the modern-day VLE, at two US Higher Education institutions. It was used for a selection of sociology courses and with about one hundred students in total. Students were given the option of taking courses in a purely online or blended form instead of the traditional delivery. It is hard from the paper to get a complete picture of what the teaching was like in the different formats, but mention is made of its use for discussion, quizzes and 'virtual seminars' where students took turns to lead seminars on specific topics. There are also a couple of transcripts from the course in an appendix. Little was said about the organisational context, though a hint was dropped about 'active resistance among some faculty to the experiment'. At least one of the courses was taught by one of the authors of the paper.

The style of the paper comes across very much as one of an evaluation - you get the impression that the authors are doing this in order to check for problems and maybe learn a few lessons for future years rather than in an attempt to push back the boundaries of what we understand about the use of technology in learning and teaching. They have done statistical analysis (lots of it!) on grades and student survey responses to statements such as 'Taking online courses is more convenient' and 'I found the course to be a better learning experience than normal face-to-face courses'. On top of this, the authors looked at the transcripts from courses and included observations made by some of the instructors. They apparently did interview some of the students, but from the way the paper is written, this doesn't really come across as contributing much to the findings.

Of course the problem with research like this is that there is so little context-transferability. By only looking at grades and responses to such general questions, you also don't really find out much about how the technology actually affects the learning, which to me is what is really interesting and useful. I feel that the question 'Is teaching better using technology?' that the paper was trying to answer is a bad one - in the same way that it would be silly to ask if e-mail was better than the phone or cycling better than taking the train. There isn't a yes/no answer.

So what did they conclude? Basically that in terms of grades and how students felt about the Virtual Classroom as a learning experience, it didn't seem to make a marked difference on the whole whether it was used or not, but that there was variation from course to course. The paper discussed students' greater candour online, the tendency for students taking the online course to procrastinate more, the difference that easy access to computers made (surprise, surprise!), and the 'level of cognitive maturity' needed to deal with an online course being greater than for traditional courses. The students who were most enthusiastic about the online courses were those who valued the contributions of other students and discussion with them most highly. There was also an interesting comment from one of the instructors that 'Students are like themselves, only more so, when they are online'. However, the evidence for all of these feels anecdotal, based on the experience of the instructors. It would have been nice to have seen some of these looked at in greater depth.

The second paper was Wegerif, R. and Mercer, N. (1997) ‘Using computer-based text analysis to integrate qualitative and quantitative methods in research on collaborative learning’, Language and Education, vol. 11, no. 4, pp. 271–86. This was a different type of paper in that its main thrust was to propose the integration of qualitative and quantitative methods in the analysis of text corpora rather than to present actual educational research findings, although an example was used to illustrate the type of method that the authors were proposing.

The quantitative methods referred to by the authors involve getting independent people to categorise utterances in transcripts of text into pre-defined categories and checking that they agree to within a reasonable level. The number of occurrences of different categories in particular conversations etc. can then be correlated with other findings. This has its issues however, notably that your results are only as good as the categories concerned and that temporal data isn't taken into account. More qualitative methods of discourse analysis, where a small amount of text is dissected in enormous detail, are subject to accusations of examples being picked to illustrate the argument that the author wanted to make.
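To make the 'checking that they agree' part concrete, here is a minimal sketch of one standard agreement statistic (Cohen's kappa), assuming Python and entirely made-up codings. The paper doesn't say which reliability measure was used, so the categories and data below are purely illustrative.

    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        """Chance-corrected agreement between two coders' category labels."""
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        counts_a, counts_b = Counter(coder_a), Counter(coder_b)
        expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical codings of ten utterances into made-up categories
    a = ['reason', 'other', 'challenge', 'reason', 'other',
         'reason', 'challenge', 'other', 'other', 'reason']
    b = ['reason', 'other', 'challenge', 'other', 'other',
         'reason', 'challenge', 'other', 'reason', 'reason']
    print(round(cohens_kappa(a, b), 2))  # 0.69 here; values near 1 mean strong agreement

The point of correcting for chance is simply that two coders with only a few categories will agree quite often just by accident, so raw percentage agreement flatters the coding scheme.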

The example that the authors give to illustrate how one might combine both approaches involves an intervention made in a primary school classroom, with a control group for comparison. The study looks at how the children learn to collaboratively solve reasoning problems. The children were tested in groups before and after an eight-week programme of coaching on the use of 'exploratory talk' (defined as 'talk in which reasons are given for assertions and reasoned challenges made and accepted within a co-operative framework oriented towards agreement'). The transcripts of the children's discussion when solving the problems before and after this programme were then studied. The transcripts were examined for phrases that might signify, for instance, presenting a reason for an assertion; searches were then made for other instances of such phrases, and the number of occurrences of these was looked at in relation to how the children did on the tests.
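As a sketch of what that kind of search-and-count might look like, assuming Python and some invented marker words and transcript snippets (the authors' actual key-word list and data aren't reproduced here):

    import re

    # Hypothetical marker words that might signal giving or asking for a reason
    MARKERS = ['because', 'so', 'if', 'why', 'i think']

    def marker_counts(transcript):
        """Count whole-word occurrences of each marker in a transcript."""
        text = transcript.lower()
        return {m: len(re.findall(r'\b' + re.escape(m) + r'\b', text)) for m in MARKERS}

    pre = "Put that one there. No. That one."
    post = ("I think it's this one because the pattern repeats. "
            "Why? Because look, if you turn it round it matches.")
    print(marker_counts(pre))   # mostly zeros
    print(marker_counts(post))  # 'because' turns up twice, etc.

Counts like these could then be compared between the intervention and control groups, or set against the test results, which is roughly the quantitative half of what the authors are proposing.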

I must admit that I am fascinated by patterns in the way that people interact and the language that they use, so I find these types of methods interesting. One can easily be deluded by the 'scientific' appearance of these methods - how one categorises utterances is always going to be very much open to interpretation. I do think that the approach of using qualitative techniques to generate hypotheses and then quantitative techniques to try to verify whether they are valid is a reasonable one. Humans are better at spotting patterns than computers, but we also tend to like to see patterns that don't in fact exist. Indeed, 'coding' of interviews appears to be common nowadays in the social sciences, with software such as NVivo and Atlas.TI being widespread.

As well as reading these two papers, we were also charged as a tutor group with creating a timeline of when different digital technologies became available and when their use became widespread in education. My group's timeline couldn't be described as comprehensive yet (and I'm not sure 'Juliette gets an iPhone' should be on there!) but it has made me realise how hard it is to define what it means for a technology to be mainstream or widespread, or even 'available'. On top of this, things don't become widespread overnight. On the other hand, the adoption curve isn't always gradual and uniform either. I'm still not quite sure why we have been asked to do this, but it has been quite a fun nostalgia trip and I presume we'll learn something when we revisit it in a future week as promised.