We study in a laboratory experiment whether humans prefer to depend on the decisions of others (Human-Driven Uncertainty) or on states generated by a computer (Computerized Uncertainty). The experimental design introduced in this paper is unique in that Human-Driven Uncertainty does not derive from a strategic context. In our experiment, Human-Driven Uncertainty derives from decisions that were taken in a morally neutral context and in ignorance of any externalities the decisions might impose on others. Our results indicate that even without strategic interaction and moral elements, humans prefer Computerized to Human-Driven Uncertainty. This holds even when the distribution of outcomes under both types of uncertainty is identical. From a methodological point of view, the findings shed a critical light on behavioral research in which it is common practice to control for strategic uncertainty by comparing interaction with an artificial agent following a known strategy to interaction with humans. Outside the laboratory, our results suggest that whenever dependence on humans is replaced by dependence on computers and other kinds of “artificial” decision makers, preferences regarding these dependencies may change too.
|Journal||Journal of Economic Psychology|
|Status||Published - 2019|
Bibliographic information
Funding Information:
We thank the Max Planck Society for financial support through the International Max Planck Research School on Adapting Behavior in a Fundamentally Uncertain World. Special thanks also to Alexia Gaudeul, Oliver Kirchkamp, Anna Merkel and the participants of the 2015 IMPRS Uncertainty Summer School in Jena for their feedback and ideas on designing this experiment.
© 2019 Elsevier B.V.