Econ506 Field Experiments
Robert Metcalfe
University of Southern California
www.rmetcalfe.net
Spring 2022, Tue-Thurs 10am, KAP 145
1. Introduction
The objective of this course is to allow the student to design, analyze, and interpret field
experiments, and understand their practical significance to applied economics, business, and policy.
Randomized field experiments are deployed across the world to answer well-posed theoretical and
practical questions and to generate new information from which to build new theories of human behavior.
Experiments are attractive to businesses and policymakers because they enable them to (mostly)
ground statistical and causal inferences in features of the research design rather than in assumptions
about their business, their customers, or members of society. Well-designed field experiments allow
businesses and governments to become more efficient and effective.
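As a simple illustration of this design-based logic, the short sketch below simulates a two-arm experiment and computes a difference-in-means estimate together with a randomization (permutation) p-value, so that inference rests on the random assignment itself rather than on modeling assumptions. All numbers and variable names are purely hypothetical, not part of the course materials.

```python
# Illustrative sketch only: a simulated two-arm field experiment analyzed with a
# difference-in-means estimate and randomization (permutation) inference.
import numpy as np

rng = np.random.default_rng(seed=0)

n = 200                                                  # hypothetical sample size
treat = rng.permutation(np.repeat([0, 1], n // 2))       # random assignment: half treated, half control
outcome = 5.0 + 1.0 * treat + rng.normal(0.0, 2.0, n)    # simulated outcomes with a true effect of 1.0

def diff_in_means(y, d):
    """Difference-in-means estimate of the average treatment effect."""
    return y[d == 1].mean() - y[d == 0].mean()

ate_hat = diff_in_means(outcome, treat)

# Randomization inference: shuffle the treatment labels many times and ask how often a
# placebo assignment yields an effect at least as large (in absolute value) as the estimate.
placebo_effects = np.array(
    [diff_in_means(outcome, rng.permutation(treat)) for _ in range(5000)]
)
p_value = np.mean(np.abs(placebo_effects) >= abs(ate_hat))

print(f"Estimated effect: {ate_hat:.2f}  (randomization p-value: {p_value:.3f})")
```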
This Field Experiments module is for the Big Data Economics, Economic Consulting, and Economic
Policy and Development tracks in the MS in Applied Economics and Econometrics. It complements
Econ 500 (Microeconomic Analysis and Policy) and Econ 513 (Practice of Econometrics).
Office hours: by appointment
2. Learning Objectives
The major objectives are to understand:
1. The different types of field experiments.
2. How to design different types of field experiments.
3. How to analyze and interpret the data from different types of field experiments.
4. The practical issues in implementing field experiments.
5. Examples of field experiments to change economic policy and business practices.
6. The ways that field experiments can be scaled for economic problems.
3. Assessment
Assessment is made up of four components:
(i) Attendance (10%).
(ii) Class Participation (20%): In-class discussions are an integral part of the course, and students
are expected to contribute to the learning experience of the class by asking relevant questions,
offering insights into the topic at hand, and generally behaving in a professional manner. Quality of
contribution matters more than quantity. Class participation scores will also account for attendance,
lateness, and completion of in-class surveys. Students are expected to attend all classes; excused
absences are granted in accordance with school policy, for religious observance, military service,
court appearance, illness, and family emergencies. Final grades will be adjusted for absences
consistent with school standards.
(iii) Final exam (35%): a 90-minute final exam that is cumulative. The final exam is scheduled in
accordance with the University-wide examination schedule. Please do not miss the exam. If you
foresee any reason that you may miss the exam, please see me within the first three weeks of the
semester. Keep in mind that, apart from conflicts with University-sponsored activities, there are very
few legitimate foreseeable reasons for missing an exam. If you do not see me within the first three
weeks, the only valid excuses for missing an exam are documented medical or family emergencies.
(iv) Group Project (35%): Graduate-level economics study is designed to prepare you to be not just
a more advanced consumer of data, research, and analysis, but also a producer of it. To that end, it is
important to practice the research process: identifying a problem, gathering and analyzing data, and
communicating your results. Students will work in teams (of 4-5 people) to develop a causal question
related to a real business or policy, design a field experiment to test that question, perform a pilot
experiment, and "pitch" a proposed design and implementation strategy (directed to the key decision
maker in the company) the week before the final exam period.
Each group must:
- Submit a paper on the experiment (75% of the project grade).
- Present the paper at a full-class session just prior to the final exam period (25% of the project grade).
Grading Scale
Course final grades will be determined using the following scale:
A 95-100
A- 90-94
B+ 87-89
B 83-86
B- 80-82
C+ 77-79
C 73-76
C- 70-72
D+ 67-69
D 63-66
D- 60-62
F 59 and below
4. Course Schedule
Lecture | Date | Title | Reading
1 | 1/11/22 | Introduction to the course
2 | 1/13/22 | Introduction to field experiments | Harrison & List (2004)
3 | 1/18/22 | Measurement
4 | 1/20/22 | Measurement
5 | 1/25/22 | Case study: Measuring outcomes | Heller et al. (2017)
6 | 1/27/22 | Why Randomize? | Glennerster & Takavarasha Ch2
7 | 2/1/22 | Case study: Why Randomize? | Arceneaux et al. (2006)
8 | 2/3/22 | Experiment introduction
9 | 2/8/22 | Internal validity | Gerber & Green Ch2, List Ch3
10 | 2/10/22 | Internal validity
11 | 2/15/22 | How to Randomize? | Glennerster & Takavarasha Ch4
12 | 2/17/22 | Case study: How to randomize? | Duflo et al. (2011)
13 | 2/22/22 | Power | Glennerster & Takavarasha Ch6
14 | 2/25/22 | Sample Size | List et al. (2011)
15 | 3/1/22 | Experiment check-in
16 | 3/3/22 | Threats | Glennerster & Takavarasha Ch7, Gerber & Green Ch5-8
17 | 3/8/22 | Threats cont.
18 | 3/10/22 | Case study: Threats | Groh et al. (2012)
- | 3/15/22 | Spring break
- | 3/17/22 | Spring break
19 | 3/22/22 | Analysis | Glennerster & Takavarasha Ch8
20 | 3/24/22 | Generalizability
21 | 3/29/22 | Mechanisms & Optimal Designs | Gerber & Green Ch10
22 | 3/31/22 | Experiment check-in
23 | 4/5/22 | Transparency | Miguel et al. (2019)
24 | 4/7/22 | Measuring Beliefs
25 | 4/12/22 | Field Experiments in Labor Economics | Gosnell et al. (2020)
26 | 4/14/22 | Field Experiments in Public Economics
27 | 4/19/22 | Field Experiments in Industrial Organization
28 | 4/22/22 | Field Experiments in Digital Economics | Goldszmidt et al. (2020)
29 | 4/26/22 | Project presentations
30 | 4/28/22 | Review session
Readings (essential readings are denoted by **)
Background
Essential books to get:
**Gerber, A., & Green, D. (2012). Field Experiments: Design, Analysis, and Interpretation.
**Glennerster, R., & Takavarasha, K. (2013). Running Randomized Evaluations: A Practical Guide.
Princeton University Press.
There are also several fun, popular (general-interest) books to read about field experiments:
List, J., & Gneezy, U. (2014). The why axis: Hidden motives and the undiscovered economics of
everyday life. Random House.
Leigh, A. (2018). Randomistas: how radical researchers are changing our world. Yale University
Press.
Luca, M. & Bazerman, M. (2020). The Power of Experiments: Decision Making in a Data-Driven
World. MIT Press.
List, J.A. (2022). The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale.
Penguin Random House.
Introductory Articles
**Harrison, G.W., & List, J.A. (2004). Field experiments. Journal of Economic Literature, 42(4), 1009-
1055.
Levitt, S. D., & List, J. A. (2009). Field experiments in economics: The past, the present, and the
future. European Economic Review, 53(1), 1-18.
List, J. A. (2011). Why economists should conduct field experiments and 14 tips for pulling one off.
Journal of Economic Perspectives, 25(3), 3-16.
Potential Outcomes Framework
**Gerber & Green, Ch. 2-3.
Imbens, G. W., & Rubin, D. B. (2015). Causal Inference for Statistics, Social, and Biomedical
Sciences, Ch. 5-6.
Randomization and Power
**Gerber & Green, Ch. 4.
List, J. A., Sadoff, S., & Wagner, M. (2011). So you want to run an experiment, now what? Some
simple rules of thumb for optimal experimental design. Experimental Economics, 14(4), 439.
Bruhn, M., & McKenzie, D. (2008). In Pursuit of Balance. World Bank Policy Research
Working Paper 4752.
Learning About Mechanisms
**Gerber & Green, Ch. 10.
**Card, D., DellaVigna, S., & Malmendier, U. (2011). The role of theory in field experiments. Journal
of Economic Perspectives, 25(3), 39-62.
Imai, K., Tingley, D., & Yamamoto, T. (2013). Experimental designs for identifying causal
mechanisms. Journal of the Royal Statistical Society: Series A (Statistics in Society), 176(1), 5-51.
Field Experiments in Labor Economics
**List, J. A., & Rasul, I. (2011). Field experiments in labor economics. In Handbook of Labor
Economics (Vol. 4, pp. 103-228). Elsevier.
Bandiera, O., Barankay, I., & Rasul, I. (2011). Field experiments with firms. Journal of Economic
Perspectives, 25(3), 63-82.
Fryer Jr, R. G., Levitt, S. D., List, J., & Sadoff, S. (2012). Enhancing the efficacy of teacher
incentives through loss aversion: A field experiment (No. w18237). National Bureau of Economic
Research.
**Bloom, N., Eifert, B., Mahajan, A., McKenzie, D., & Roberts, J. (2013). Does management matter?
Evidence from India. Quarterly Journal of Economics, 128(1), 1-51.
Flory, J. A., Leibbrandt, A., & List, J. A. (2014). Do competitive workplaces deter female workers? A
large-scale natural field experiment on job entry decisions. Review of Economic Studies, 82(1), 122-
155.
**Gosnell, G. K., List, J. A., & Metcalfe, R. D. (2020). The Impact of Management Practices on
Employee Productivity: A Field Experiment with Airline Captains. Journal of Political Economy.
Ashraf, N., Bandiera, O., & Lee, S. (2018). Losing prosociality in the quest for talent? Sorting,
selection, and productivity in the delivery of public services.
Coffman, L. C., Conlon, J. J., Featherstone, C. R., & Kessler, J. B. (2019). Liquidity Affects Job
Choice: Evidence from Teach for America. The Quarterly Journal of Economics, 134(4), 2203-2236.
Rockoff, J. E., Staiger, D. O., Kane, T. J., & Taylor, E. S. (2012). Information and employee
evaluation: Evidence from a randomized intervention in public schools. American Economic Review,
102(7), 3184-3213.
Field Experiments in Industrial Organization
**Einav, L., & Levin, J. (2010). Empirical industrial organization: A progress report. Journal of
Economic Perspectives, 24(2), 145-62.
Anderson, E., & Simester, D. (2003). Effects of $9 Endings on Retail Sales: Evidence from Field
Experiments. Quantitative Marketing and Economics, 1(1), 93-110.
Hossain, T., & Morgan, J. (2006). ... Plus Shipping and Handling: Revenue (Non) Equivalence in
Experiments on eBay. Advances in Economic Analysis and Policy, 6(2), Article 3.
Caro, F., & Gallien, J. (2012). Clearance pricing optimization for a fast-fashion retailer. Operations
Research, 60(6), 1404-1422.
**Bertrand, M., Karlan, D., Mullainathan, S., Shafir, E., & Zinman, J. (2010). What's advertising
content worth? Evidence from a consumer credit marketing field experiment. Quarterly Journal of
Economics, 125(1), 263-306.
Goldfarb, A., & Tucker, C. (2011). Online display advertising: Targeting and obtrusiveness.
Marketing Science, 30(3), 389-404.
Bakshy, E., Eckles, D., Yan, R., & Rosenn, I. (2012). Social influence in social advertising:
Evidence from field experiments. In Proceedings of the 13th ACM Conference on Electronic
Commerce (pp. 146-161). ACM.
Blake, T., Nosko, C., & Tadelis, S. (2015). Consumer heterogeneity and paid search effectiveness:
A large-scale field experiment. Econometrica, 83(1), 155-174.
Lewis, R. A., & Rao, J. M. (2015). The unfavorable economics of measuring the returns to
advertising. Quarterly Journal of Economics, 130(4), 1941-1973.
Gordon, B. R., Zettelmeyer, F., Bhargava, N., & Chapsky, D. (2019). A comparison of approaches
to advertising measurement: Evidence from big field experiments at Facebook. Marketing Science,
38(2), 193-225.
Aral, S., & Walker, D. (2011). Creating social contagion through viral product design: A randomized
trial of peer influence in networks. Management Science, 57(9), 1623-1639.
**Bhargava, S., Loewenstein, G., & Sydnor, J. (2017). Choose to lose: Health plan choices from a
menu with dominated option. The Quarterly Journal of Economics, 132(3), 1319-1372.
Brandon, A., List, J. A., Metcalfe, R. D., Price, M. K., & Rundhammer, F. (2019). Testing for crowd
out in social nudges: Evidence from a natural field experiment in the market for electricity.
Proceedings of the National Academy of Sciences, 116(12), 5293-5298.
Field Experiments in Digital Economics
**Goldfarb, A., & Tucker, C. (2019). Digital economics. Journal of Economic Literature, 57(1), 3-43.
**Goldszmidt, A., List, J. A., Metcalfe, R. D., Muir, I., Smith, V. K., & Wang, J. (2020). The Value of
Time in the United States: Estimates from a Nationwide Natural Field Experiment.
Field Experiments in Public Economics
**List, J. A., & Price, M. K. (2016). The use of field experiments in environmental and resource
economics. Review of Environmental Economics and Policy, 10(2), 206-225.
**Hallsworth, M., List, J. A., Metcalfe, R. D., & Vlaev, I. (2017). The behavioralist as tax collector:
Using natural field experiments to enhance tax compliance. Journal of Public Economics, 148, 14-31.
Duflo, E., Banerjee, A., Glennerster, R., & Kremer, M. (2006). Using Randomization in Development
Economics: A Toolkit. Forthcoming in Handbook of Development Economics.
Allcott, H., & Kessler, J. B. (2019). The welfare effects of nudges: A case study of energy use social
comparisons. American Economic Journal: Applied Economics, 11(1), 236-76.
Butera, L., Metcalfe, R., Morrison, W., & Taubinsky, D. (2019). The deadweight loss of social
recognition (No. w25637). National Bureau of Economic Research.
Hahn, R., Metcalfe, R., Tam, E. (2020). Measuring Welfare in Regulated Markets.
Gelber, A., Isen, A., & Kessler, J. B. (2016). The effects of youth employment: Evidence from New
York City lotteries. Quarterly Journal of Economics, 131(1), 423-460.
Implementation Issues
**Glennerster, R. (2017). The practicalities of running randomized evaluations: partnerships,
measurement, ethics, and transparency. Handbook of Economic Field Experiments, 1, 175-243.
(pages 1-18)
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for
Evaluation, 1998(80), 5-23.
Wager, S., & Athey, S. (2017). Estimation and inference of heterogeneous treatment effects using
random forests. Journal of the American Statistical Association.
Eckles, D., Karrer, B., & Ugander, J. (2017). Design and analysis of experiments in networks:
Reducing bias from interference. Journal of Causal Inference, 5(1).
Yong, E. (2017, January 5). An Ingenious Experiment of Jungle Bats and Evolving Artificial Flowers.
The Atlantic.
External Validity and Scaling Experiments
List, J. A., & Levitt, S. (2006). What Do Laboratory Experiments Tell Us About the Real World?
Al-Ubaydli, O., List, J. A., Lore, D., & Suskind, D. (2017). Scaling for economists: Lessons from the
non-adherence problem in the medical literature. Journal of Economic Perspectives, 31(4), 125-144.
**Al-Ubaydli, O., Lee, M. S., List, J. A., Mackevicius, C., & Suskind, D. (2019). How Can
Experiments Play a Greater Role in Public Policy? 12 Proposals from an Economic Model of
Scaling. University of Chicago, Becker Friedman Institute for Economics Working Paper (2019-131).
DellaVigna, S., & Linos, E. (2020). RCTs to Scale: Comprehensive Evidence from Two Nudge
Units.
Allcott, H. (2015). Site selection bias in program evaluation. Quarterly Journal of Economics, 130(3),
1117-1165.
Ethics and Research Transparency
Christensen, G., Freese, J., & Miguel, E. (2019). Transparent and Reproducible Social Science
Research. University of California Press.
Beecher, H. K. (1966). Ethics and Clinical Research. New England Journal of Medicine.
Desposato, S. (2014). Ethical Challenges and Some Solutions for Field Experiments.
Gray, M. L. (2014, July 8). When Science, Customer Service, and Human Subjects Research
Collide. Now What? marylgray.org
Grimmelmann, J. (2015). The law and ethics of experiments on social media users. Colorado
Technology Law Journal, 13, 219.
Kramer, A. D., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale
emotional contagion through social networks. Proceedings of the National Academy of Sciences,
111(24), 8788-8790.