
H809: Week 11 - Research Methods

May 2009

Last week we started the third and final block of H809 on research methods, and in particular ‘new’ research methods.

The notes for Week 11 categorised research methods into four groups: observation, documents, interview and experiment. These fall on a spectrum based on how interventionist they are in nature. Very few research methods are new in the sense of not belonging to one of these categories, but they may, for instance, use new communication media to interview people or new technology to observe people in ways not previously possible.

I enjoyed the two podcasts this week. The first was a conversation between Alan Woodley and Adam Joinson, which concentrated on the differences between paper and web-based surveys. Lots of interesting points were raised. People score lower on social desirability measures (and seemingly higher on anxiety measures) on web-based surveys than on the paper equivalent. However, it’s not clear whether this is due to the medium itself or to differences in the samples. Although web-based samples will obviously have biases, the samples for many paper surveys are often questionable too, being based on students handed surveys in lectures: Adam pointed out that psychology is sometimes known as the ‘science of the sophomore’.

There was also discussion about shifts in how candid people are in online surveys. Ten or so years ago, people were more candid online, but there seems to have been a shift as people have begun to realise that online survey data, though possibly confidential, is not necessarily anonymous. The other issue raised was that of response rates to surveys, which have apparently been dropping universally. This may be the result of increased privacy concerns, but could also be caused by a shift away from the attitude that completing surveys is something one does for the public good. The podcast went on to cover how researchers might give participants a greater role in the research process, and the balance that needs to be struck between humanising the research process in a bid to increase response rates and the possible biases that result, such as an increase in socially desirable responses.

We were given two readings this week. The first was Bos, N., Olson, J., Gergle, D., Olson, G. and Wright, Z. (2002) ‘Effects of four computer-mediated communications channels on trust development’ in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves, Minneapolis, Minnesota, 2002, New York, NY, ACM. This described a lab experiment comparing how groups using different communication media played a social dilemma game called Daytrader (a Prisoner’s Dilemma type game). The four media were face-to-face, video conference, phone conference and instant messaging. The biggest gap in trust, as measured by the game and by questionnaires, was between the instant messaging group and the others. However, trust in the video and phone conference groups appeared more delayed and more fragile than in the face-to-face groups.
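
As an aside, the basic structure of this kind of game is easy to sketch in code. Since I haven’t described Daytrader’s actual rules here, the Python below uses a generic Prisoner’s Dilemma payoff matrix with made-up numbers, just to show why cumulative earnings over repeated rounds can act as a behavioural measure of trust:

    # Generic Prisoner's Dilemma payoffs (illustrative numbers, not
    # Daytrader's actual rules): (my move, their move) -> (mine, theirs).
    PAYOFFS = {
        ('cooperate', 'cooperate'): (3, 3),  # mutual trust pays moderately
        ('cooperate', 'defect'):    (0, 5),  # trusting a defector costs you
        ('defect',    'cooperate'): (5, 0),  # defecting is individually tempting
        ('defect',    'defect'):    (1, 1),  # mutual defection hurts both
    }

    def my_payoff(mine, theirs):
        # Look up the payoff pair and return my share of it.
        return PAYOFFS[(mine, theirs)][0]

    # Over repeated rounds, groups that settle into mutual cooperation
    # earn more, so total earnings index how much the players trust each other.
    history = [('cooperate', 'cooperate'),
               ('defect', 'cooperate'),
               ('cooperate', 'cooperate')]
    total = sum(my_payoff(mine, theirs) for mine, theirs in history)
    print(total)  # 3 + 5 + 3 = 11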

The second paper was Joinson, A. and Reips, U-D. (2007) ‘Personalized salutation, power of sender and response rates to Web-based surveys’, Computers in Human Behavior, vol. 23, no. 3, pp. 1372–83. This looked at the impact of the salutation used in e-mail survey panel invitations on response rates, and also at the impact of the status of the person the invitation came from. Three separate studies were carried out, based on mailings to large numbers of Open University students. One interesting part was the experiment with invitations to leave the panel, which tested whether the change in salutation merely affected whether the e-mail was read or actually changed participation behaviour.
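
To make the design concrete, the sketch below shows the general shape of such an experiment: random assignment to a salutation condition, followed by a comparison of response rates. The names, response counts and the two-proportion z-test are my own illustrative assumptions, not the paper’s actual data or analysis:

    import math
    import random

    # Hypothetical panel of invitees, randomly split into two salutation
    # conditions (generic vs personalised).
    invitees = [f'student{i}' for i in range(2000)]
    random.shuffle(invitees)
    half = len(invitees) // 2
    conditions = {'Dear Student': invitees[:half],
                  'Dear <FirstName>': invitees[half:]}

    # Made-up response counts for each condition.
    responded = {'Dear Student': 320, 'Dear <FirstName>': 410}

    def two_proportion_z(x1, n1, x2, n2):
        # z-test for a difference between two response rates,
        # using the pooled proportion for the standard error.
        p1, p2 = x1 / n1, x2 / n2
        p = (x1 + x2) / (n1 + n2)
        se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    z = two_proportion_z(responded['Dear Student'], half,
                         responded['Dear <FirstName>'], half)
    print(f'z = {z:.2f}')  # |z| > 1.96 suggests a real difference at p < .05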

The next topic covered was the distinction between validity and reliability. Whereas reliability is about whether you can consistently achieve the same results using your methods, validity is about whether your methods tell you what you claim they do. There’s quite a nice article here on the difference and on the different types of validity. Essentially, ‘conclusion validity’ is about whether your results actually show that there is some relationship between two variables, ‘internal validity’ is about whether that relationship is causal rather than just a correlation, ‘external validity’ is about how well the results generalise to other contexts, and ‘construct validity’ is about whether your results actually tell you what you are claiming they do (so in the Bos et al. paper, for instance, do the results of the social dilemma game actually tell us anything about the concept of trust?).
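
The internal validity point is perhaps the easiest to show with a quick simulation: two variables can correlate strongly without any causal link between them if a hidden confounder drives both. All the numbers here are made up purely for illustration:

    import random

    random.seed(1)
    n = 1000
    z = [random.gauss(0, 1) for _ in range(n)]        # hidden confounder
    x = [zi + random.gauss(0, 0.5) for zi in z]       # X depends only on Z
    y = [zi + random.gauss(0, 0.5) for zi in z]       # Y depends only on Z

    def correlation(a, b):
        # Pearson correlation from first principles.
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        va = sum((ai - ma) ** 2 for ai in a)
        vb = sum((bi - mb) ** 2 for bi in b)
        return cov / (va * vb) ** 0.5

    # X and Y correlate at roughly 0.8 even though neither causes the other,
    # which is exactly the gap between conclusion and internal validity.
    print(round(correlation(x, y), 2))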

We were also asked to find out about the Hawthorne Effect, which relates to the idea that research participants behave differently because they know they are being studied. We were referred to quite an illuminating article by Olson (Olson, R., Hogan, L. and Santos, L. (2006) ‘Illuminating the history of psychology: tips for teaching students about the Hawthorne studies’, Psychology Learning and Teaching, vol. 5, no. 2, pp. 110–18) discussing the actual Hawthorne studies and how their results have commonly been misinterpreted.

Finally, the second podcast for this week featured Alan Woodley and John Richardson discussing the peer review process. Having reviewed papers and having been around the academic world for a good deal of my life, I felt fairly familiar with peer review. Nonetheless it was still great to have a rare chance to hear about it from somebody with far more experience of it than I have.