Data collection in consumer research has progressively moved away from traditional samples (e.g., university undergraduates) and toward Internet samples. In the last complete volume of the Journal of Consumer Research (June 2015-April 2016), 43% of behavioral studies were conducted on the crowdsourcing website Amazon Mechanical Turk (MTurk). The option to crowdsource empirical investigations offers substantial efficiency benefits for both individual researchers and the field, but it also poses new challenges and questions about how research should be designed, conducted, analyzed, and evaluated. We assess the evidence on the reliability of crowdsourced populations and the conditions under which crowdsourcing is a valid strategy for data collection. Based on this evidence, we propose specific guidelines for researchers to conduct high-quality research via crowdsourcing. We hope this tutorial will strengthen the community's scrutiny of data collection practices and move the field toward better and more valid crowdsourcing of consumer research.