The vast majority of psychology research, roughly two-thirds of it, has been and continues to be done with populations of convenience – college students. This lack of representative samples may not matter much when studying bottom-up processes like visual perception, where our visual systems work very similarly from person to person, but you can imagine how it poses a problem for studies on social attitudes, where the homogeneous demographics of college students limit how far the results generalize. For most research topics in social psychology, for example, you’d want a more representative sample. Unfortunately, even when researchers sample outside of universities, the sample is still typically Western, educated, and from industrialized, rich, and democratic countries. This problem is so widespread that the field created a “catchy” acronym for it – WEIRD. The WEIRD population is just that, weird, that is, odd or differing markedly from the rest of the world. In fact, WEIRD people make up only about 12% of the world’s population.
While you cannot change the fact that you may be WEIRD, you can certainly help address the problem of unrepresentative samples by participating in a psychological study on a college campus, where the majority of participants are psychology undergraduates. If you are not a psychology undergraduate yourself, you could still be adding important diversity to the study’s sample.
Another problem psychological research faces is participant compliance, which translates directly into the quality of the data. If a participant is not serious about completing tasks properly and instead presses keys at random in experiments or selects ratings at random on surveys, they add noise to the researchers’ dataset. Adding noise to a dataset lowers the chances that researchers detect an effect that actually exists, especially if there are enough of these troublesome participants. If researchers don’t find an effect in a follow-up replication or two, they may steer away from that research question altogether, and the field may miss out on important findings.
You can help improve data quality by taking the psychological study seriously and responding as accurately as possible. You can help improve overall participant compliance by saying something to peers who “brag” about “cheating” their way through an experiment or survey.
Psychology as a field has tried to address both of these issues: the lack of representative samples and poor participant compliance. To address the lack of representative samples, some researchers have turned to other sources of data, such as Amazon Mechanical Turk, a crowdsourcing platform for all sorts of tasks, and Prolific, a research-focused crowdsourcing platform. The former lets researchers include or exclude workers from different countries, and the latter has prescreening built in, giving researchers more control over the demographics sampled for their study.
For participant compliance, researchers have multiple approaches to combating participants who simply respond at random. One way to motivate participants to do the task properly is to inform them before the experiment starts that each missed or too-fast trial adds another similar trial to the battery; their session is therefore shorter if they do the task properly and longer if they respond randomly. Another way is to penalize “too quick” responses, for example by having the experiment pause for 10 seconds before resuming. During this pause, the participant is told that they responded too quickly and should formulate an answer before responding.
Either way, a culture of poor participant compliance generates more programming work for researchers.
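To give a sense of that extra programming work, here is a rough sketch in Python of the kind of logic described above: trials that are missed or answered implausibly fast get added back to the battery, and a too-fast response triggers a short pause. The cutoffs, the penalty length, and the present_trial helper are all assumptions made for this illustration; it is not the code of any actual experiment.

import time

TOO_FAST = 0.2   # seconds; an assumed cutoff for an implausibly quick response
PENALTY = 10     # seconds; an assumed length for the pause penalty

def run_block(trials, present_trial):
    # Run every trial, re-queuing any that are missed or answered too quickly.
    queue = list(trials)
    results = []
    while queue:
        trial = queue.pop(0)
        response, rt = present_trial(trial)   # assumed to return (key, reaction time), or (None, None) if missed
        if response is None or rt < TOO_FAST:
            queue.append(trial)               # add a similar trial back to the battery
            if response is not None:          # answered, but too quickly
                print("You responded too quickly. Please formulate an answer before responding.")
                time.sleep(PENALTY)           # pause before resuming
        else:
            results.append((trial, response, rt))
    return results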
Another way researchers try to maximize participant compliance is to make the experiment as short as possible to avoid participant fatigue. Usually, this means using fewer repetitions of each trial type, but there is a limit – two data points don’t produce a meaningful mean or standard deviation. And cutting repetitions comes at a cost: the fewer repetitions each participant completes, the noisier each participant’s data, so the experiment needs many more participants to reach the same statistical strength.
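To make that trade-off concrete, here is a back-of-the-envelope sketch in Python. The noise values and the precision target are made up for the illustration (they are not numbers from any real study), but the arithmetic shows why fewer repetitions per participant forces researchers to recruit more participants.

def participants_needed(trials_per_person, sd_between=1.0, sd_within=2.0, target_se=0.1):
    # Each response = true mean + participant-level noise + trial-level noise.
    # A participant's average over k trials has variance sd_between**2 + sd_within**2 / k,
    # and averaging over n participants shrinks that variance by a factor of n,
    # so n must grow until the group mean reaches the target precision.
    per_person_var = sd_between ** 2 + sd_within ** 2 / trials_per_person
    return per_person_var / target_se ** 2

for k in (2, 10, 50):
    print(f"{k} trials each -> about {round(participants_needed(k))} participants")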
A culture of poor participant compliance thus generates more work for research assistants, who need to collect data from more participants. More participants also means greater expense if participants are compensated monetarily.
While these are preemptive measures, there are also things researchers can do after data collection. To check for participant compliance, researchers may include “catch” trials in their experiment. These are essentially attention checks: trials with a very clear correct answer or explicit instructions to respond a particular way, which a participant should get right if they are paying attention and doing the task properly. If a participant performs especially poorly on these “catch” trials, researchers have a valid justification for excluding that participant’s data from analysis. However, the number of participants excluded is reported in papers and starts to look alarming if it is too high, say above 10% of the sample.
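As a rough illustration of that kind of post-hoc check, the short Python sketch below flags participants whose accuracy on the catch trials falls below a cutoff. The file name, the column names, and the 80% cutoff are assumptions made for the example, not details of any particular study.

import pandas as pd

data = pd.read_csv("responses.csv")            # assumed file with one row per trial
catch = data[data["is_catch_trial"]]           # keep only the attention-check trials (assumed boolean column)
accuracy = catch.groupby("participant_id")["correct"].mean()

excluded = accuracy[accuracy < 0.80].index     # e.g., below 80% correct on the catch trials
rate = len(excluded) / accuracy.size
print(f"Excluding {len(excluded)} of {accuracy.size} participants ({rate:.0%} of the sample)")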
That’s why it’s important to build a culture of taking experimental tasks seriously, so that “troublesome” participants don’t routinely make up 10% or more of a sample.
Another reason to take experimental tasks seriously is how many people, and how much of their careers, you affect by producing poor data. Not only do you waste the time of the research assistants supervising your session, you may also create many more hours of data analysis for the graduate students responsible for the study. On top of that, those graduate students’ chances of landing an academic position after graduation depend on whether their experiments produce publishable results. When participant compliance is an issue, that ultimately comes down to how lucky they are in the ratio of cooperative to uncooperative participants they happen to get for a study.
Someone is actually looking at your data and building a career on them. Pay them the same courtesy you would pay someone entrusted with your own career. You wouldn’t want your success to be in the hands of a careless stranger, let alone a group of them, right?
Although one hour of your time (the maximum length of a typical study) and hundreds of trials on a “mundane” task may not seem like much, your data, especially if they are reliable, could contribute to the generation of new knowledge. So, the next time you see a flyer on a college campus looking for participants, I encourage you to sign up for the study and contribute to ongoing research by completing the tasks properly and producing quality data. We need more participants like you in our samples! And your quality participation is appreciated more than you’d know.
Bibliography
Brookshire, B. (2019). Social Science is WEIRD, and That’s a Problem. [online] Slate Magazine. Available at: https://slate.com/technology/2013/05/weird-psychology-social-science-researchers-rely-too-much-on-western-college-students.html [Accessed 26 Nov. 2019].