The Human Factor in Quantitative Data

Knight’s discussion of the lack of validity, reliability and consistency in pseudo-scales (p. 81) got me thinking about the ethical implications of research design. I was reminded of a study showing how re-wording fixed-response questions can have a profound impact on respondents’ answers. I’ll give a brief summary of the questionnaire’s findings, and if you’re interested in reading the full article, here’s the link:

The scenario given to the sample group (university students) involved a disease outbreak that would kill 600 people. They were asked to choose between two programs meant to control the spread of the disease. Under Option A, 200 people are saved; under Option B, there is a 1/3 probability that all 600 people will be saved and a 2/3 probability that no one will be saved. 72% chose A and 28% chose B.

The same scenario was presented to a second set of students, who were given the same options but worded differently. Under Option C, 400 people will die; under Option D, there is a 1/3 probability that no one will die and a 2/3 probability that all 600 people will die. 22% chose C and 78% chose D.
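The striking thing is that the two framings are arithmetically identical: "200 saved" and "400 of 600 die" describe the same outcome, and the two gambles have the same expected number of survivors. A quick sketch (my own illustration, not from the study) makes the equivalence explicit:

```python
# Each option is a list of (probability, number_saved) pairs.
# "400 of 600 die" is re-expressed as "200 saved" to show the
# loss-framed options are the same outcomes as the gain-framed ones.

def expected_survivors(outcomes):
    """Expected number of people saved under a probabilistic program."""
    return sum(p * saved for p, saved in outcomes)

TOTAL = 600

option_a = expected_survivors([(1.0, 200)])            # 200 saved for certain
option_b = expected_survivors([(1/3, 600), (2/3, 0)])  # the gamble

option_c = expected_survivors([(1.0, TOTAL - 400)])    # "400 die" = 200 saved
option_d = expected_survivors([(1/3, TOTAL - 0),       # "no one dies"
                               (2/3, TOTAL - 600)])    # "600 die"

print(option_a, option_b, option_c, option_d)  # → 200.0 200.0 200.0 200.0
```

So the dramatic swing in responses (72% vs. 22% choosing the sure option) comes entirely from the wording, not from any difference in the programs themselves.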

The ethical implication I see in that study is that if you, as a researcher, are invested in a specific response or conclusion to support your research claim, you could manipulate the research design to get the data you want (in this case, by preying on respondents’ psychological bias toward saving people rather than letting them die). Using quantitative research methods also gives your study the added bonus of seeming “objective” and “scientific”.

This study and Knight’s chapter on the weaknesses of “Research at a Distance” methods have me wondering how I can fairly represent my sample group through my research design. This is of particular concern to me because I’m doing my research on a marginalized group.



Devon said...

That study sounds really interesting.
I just read Luker's chapter 8, I think? (I returned the book and can't check.) Anyway, she was talking about how, during interviews, she thought it was okay sometimes to ask leading questions. Her argument made sense at the time I was reading it, but after looking at your summary of that study I'm less sure about it. Those researchers basically led people in the direction they expected them to go just through the phrasing of the questions.
Does that mean all questions are leading?
The last time I did a study (to get extra points for psychology 101) I was often able to guess what they wanted or where they were connecting what I said with other parts of the study. If I hadn't understood the connections, would I have answered differently?
On the other hand, if researchers can't ask what they actually need to know, how can they research?

Sara M. Grimes said...

Great find, Ramona - and a wonderful discussion of the importance (and politics) of "wording the questionnaire".

Aaron. said...

I agree, a great find - and I have to agree with Ramona's comment about Luker's endorsement of leading questions. I suppose you could mount a defence of taking a contrarian stance in order to elicit a more honest response. Then again, that could be seen as a cheap reverse-psychology tactic. Really, I disagreed strongly with her statements on that at a very basic level.
