Thanks to Ysharros, I discovered and participated in a new survey/study (follow this one) being conducted by Nick Yee of the Daedalus Project. It brought up some interesting questions, though I often wonder if the way I answer these things may lead to misinterpretation of the data. That is, do I overstate or understate my preference for, or agreement with, certain questions/issues?
For example, a very long time ago, I participated in a survey at my elementary school—maybe by The Mini Page, or The Weekly Reader?—in which one of the "drug" questions asked if I had ever sniffed glue. I happen to like the smell of (Elmer's) white glue, so I marked yes. As you may be aware, Dear Reader, sniffing glue is a way to get high. I was not aware of this at the time I took the survey. However, months later, when I read the survey results, it became clear that was the intended meaning of the question.
Years later, during college, I worked for a market research company conducting many different types of surveys over the phone (I promise I was not a telemarketer). I once got in trouble for terminating a telephone interview when it became obvious to me that the person I called was not qualified to complete the survey, even though they had not answered the demographic question directly. I quit the job shortly thereafter. Looking back, I feel like maybe I should have fought the reprimand up the chain; but honestly, it was not really worth it to me.
The danger I see with drawing certain conclusions from surveys, especially those conducted online, is that the surveyor is generally unable to clarify the intent of the question, and therefore also unable to clarify the intent of the answer.
It's a known issue. The corrective, of course, is sample size. With a big enough pool, those issues tend to filter themselves out.
Of course, sometimes they don't, which is why the ability to replicate is king in the world of academic research. :-p
Survey design is a complicated thing. I've read articles demonstrating that even with decent sample sizes, the phrasing of a question can drastically change the results, reflecting the biases of the researcher as much as those of the sample. This may or may not be intentional, depending on the goals of the specific study.
I have learned from personal experience that inept phrasing can skew the results of even seemingly innocuous questions. Not to mention the cultural biases of both the researcher and the audience, which can make seemingly innocuous phrases highly contentious.