Crowdsourcing Platforms

Introduction

Crowdsourcing is “the practice of obtaining information or input into a task or project by enlisting the services of a large number of people, either paid or unpaid, typically via the internet” (Oxford).

The number, nature, and terms of crowdsourcing platforms change quickly. This guidance refers to some specific crowdsourcing platforms, but it is meant to provide general direction on recruitment, screening, consent, compensation, and privacy and confidentiality concerns for crowdsourcing platforms.

Please keep in mind that many crowdsourcing platforms are not necessarily intended for human subject research. For example, MTurk is intended for “hiring” workers to complete tasks, and Lucid is designed for market research rather than research that requires IRB oversight. These differences can make protecting the rights and welfare of participants difficult, and the scientific integrity of data collected through these platforms has been questioned.

Given the number of crowdsourcing platforms, no one reviewer or user can be familiar with all of them. Providing a brief description of the platform and its recruitment and compensation methods on the Recruitment Methods page in PittPRO, especially if you are using a platform other than MTurk or Prolific, can greatly help during IRB review.

The following is additional information that you can provide in PittPRO that will help ensure a smooth review:

  • If the platform allows returns and rejections, include a statement on the Research Activities page explaining how you will handle them, and include consistent language in the introductory scripts or consent forms.
  • On the Risks and Benefits page, describe the privacy and confidentiality risks associated with your chosen platform and how you will manage them. Again, please include consistent language in the introductory scripts or consent forms.
  • On the Electronic Data Management page, for the item “Select all technologies being used to collect data or interact with subjects,” select “Web-based site, survey, or other tool,” then select “Other” for the next item.

Based on the nature of the research, the IRB may not approve the use of a particular crowdsourcing platform for a study.

Terms and Conditions of the Platform

The Principal Investigator is responsible for ensuring that the study complies with the Terms and Conditions of the platform.

Recruitment

Some crowdsourcing platforms include a short description of each study or task listing. Upload this description of your research, along with any other recruitment materials that will be used, to the Recruitment Methods page of PittPRO.

Some crowdsourcing platforms do not or cannot provide the exact recruitment material(s), often “teaser ads,” that potential participants will first see. In these cases, the IRB requires that individuals be directed to IRB-approved recruitment materials prior to the introductory script or consent option. The Recruitment Methods page in PittPRO needs to explain how individuals will be directed to these materials.

Compensation

Similarly, some crowdsourcing platforms do not or cannot specify the exact incentives for participation in advance. In these cases, provide the IRB with the range of incentive types and values participants may receive.

Depending on how the crowdsourcing platform handles incentives or payments, the introductory scripts or consent forms need to include the applicable information:

  • Whether the participant will receive compensation regardless of survey completion
  • A statement that researchers do not have control over the conditions under which a participant will be compensated
  • A statement that researchers do not have control over how much participants will be compensated

Include corresponding language under #4 on the Recruitment Methods page.

Returns and Rejections

Some platforms allow the participant to return a task or survey if they decide they do not want to complete it. Similarly, researchers can reject a participant’s submission if the participant does not complete the task or fails attention checks.

Rejections can harm participants’ ratings. For example, rejections on MTurk lower workers’ approval ratings, which may make it more difficult for them to obtain other work on the platform, and Prolific will ban participants who accumulate a high number of rejections.

When platforms allow for rejections, researchers should include a statement in the introductory scripts or consent forms about instances in which participants will be rejected for a task (e.g., failed attention checks).

If you intend to reject workers or participants who do not complete a task or data collection instrument they have begun, indicate in the consent language that they should withdraw from participation by returning the task or study to avoid rejection.

Some platforms, such as Prolific, automatically set, or allow the investigator to set, a time limit to complete study participation. Prolific recommends that “[w]here there is clear evidence that the participant tried to complete your study in good faith, and there is no reason to believe that the participant has withdrawn their consent, please manually approve their submission wherever possible.” This recommendation should be adopted for all platforms when a time limit is set.

International Respondents or "Workers"

Unless there is a scientific reason for including international participants, the IRB recommends you use the screening or targeting functions within the crowdsourcing platform to exclude international respondents and add them to the exclusion criteria on the Study Design page in PittPRO.

Individuals responding while in the European Economic Area (EEA) have special protections that crowdsourcing platforms may not be able to adequately fulfill for IRB approval. If the study involves collecting data from individuals in the EEA, the General Data Protection Regulation (GDPR) should be addressed in the Application. See more in our GDPR guidance.

Please also see our guidance on international research.

Non-English Speaking Participants

If you are including non-English-speaking participants, you must ensure that participants receive recruitment, consent, and data collection materials in the appropriate language and must provide the IRB with translations of all participant-facing materials. If you are not including non-English-speaking participants, the IRB recommends you use the screening or targeting functions within the crowdsourcing platform to exclude them and add non-English-speaking individuals to the exclusion criteria on the Study Design page. See more in our guidance for Non-English Speaking Participants.

Screening for Bots and Server Farms

Methods to screen for bots and server farms may require unnecessary collection of private information and increase the risk of breaches in privacy and confidentiality. If you intend to screen for bots and server farms, the IRB requires that you provide detailed information on the data collected to identify them and your plans to mitigate the risks to privacy and confidentiality. Because screening techniques are imprecise and may exclude actual individuals participating in good faith, bots and server farms should be screened out before the option to enroll is presented.

Special Considerations for MTurk

MTurk’s Survey Creation Tool:

The IRB prohibits the use of Amazon’s survey creation tool because it is not designed to maintain privacy and confidentiality. Screening and data collection must occur using software that allows privacy and confidentiality protections, such as Qualtrics or REDCap.

MTurk IDs:

MTurk IDs are considered personally identifiable information (PII) because they can be linked to the worker’s Amazon profile. If the study team is receiving MTurk IDs, the research cannot be characterized as anonymous.

If you are collecting MTurk IDs, list them under #1.c. on the Electronic Data Management page. The Application should address the collection and storage of MTurk IDs as it does all other PII.

Research on Sensitive Topics Using MTurk:

Amazon records and may disclose the Human Intelligence Tasks (HITs) workers click on. The IRB recommends that you avoid including any information in the HIT title or description indicating inclusion criteria that might be considered sensitive or private. Alternatively, the HIT description could include a disclosure such as the following: “Before you accept the HIT, please keep in mind that Amazon tracks the HITs that Workers click on, even if they ultimately decline to participate or withdraw after enrolling and don't get compensated. Therefore, do not accept this HIT if the study topic is of a sensitive nature and you wouldn’t want your interest in the study disclosed to MTurk/Amazon.” *

Screening and Compensation Using MTurk:

If your research involves a survey-type screening for eligibility, you must indicate in the recruitment material or the consent language the number of screening questions the worker will be asked to complete and whether the individual will be compensated for completing the screening. Unpaid screening is common but should be limited to approximately 5 questions.

*This language comes from the University of Florida’s IRB guidance on MTurk: https://irb.ufl.edu/wp-content/uploads/IRB-Mechanical-Turk-Guidance.pdf

4/11/2023