“The Wriston Fellowship is awarded each year to regular untenured members of the faculty who have achieved a record of excellence in teaching and scholarship during their first years at Brown.”
This past weekend we had the opportunity to showcase some of our work to the greater Providence and Rhode Island community. We used economic games with candy so that children could experience the kinds of questions and topics we explore! It was a day full of cooperation, defection, trust, and sometimes novel decision-making strategies. It was a ton of fun, and we can’t wait for next year!
Eric was recently awarded an Undergraduate Teaching and Research Award (UTRA), which he will use to work in our lab over the summer!
Because humans live in a dynamic and evolving social world, modeling the factors that guide social behavior has remained a challenge for psychology. In contrast, much progress has been made on understanding some of the more basic elements of human behavior, such as associative learning and memory, which have been successfully modeled in other species. Here we argue that applying an associative learning approach to social behavior can offer valuable insights into the human moral experience. We propose that the basic principles of associative learning—conserved across a range of species—can, in many situations, help to explain seemingly complex human behaviors, including altruistic, cooperative, and selfish acts. We describe examples from the social decision-making literature using Pavlovian learning phenomena (e.g., extinction, cue competition, stimulus generalization) to detail how a history of positive or negative social outcomes influences cognitive and affective mechanisms that shape moral choice. Examining how we might understand social behaviors and their likely reliance on domain-general mechanisms can help to generate testable hypotheses to further understand how social value is learned, represented, and expressed behaviorally. Download PDF | Read at PPS
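The Pavlovian phenomena this abstract points to (acquisition and extinction of learned value) can be sketched with the classic Rescorla-Wagner delta rule. This is a minimal illustration, not the paper's model: the learning rate, trial counts, and outcome coding below are assumptions chosen for clarity.

```python
def update(v, outcomes, alpha=0.3):
    """One Rescorla-Wagner pass over a sequence of trials:
    V <- V + alpha * (outcome - V)."""
    for outcome in outcomes:
        v += alpha * (outcome - v)
    return v

# Acquisition: a social cue is repeatedly paired with a positive
# outcome, so its associative value climbs toward 1.
v_acq = update(0.0, [1.0] * 10)

# Extinction: the same cue now predicts nothing, and the learned
# value decays back toward 0.
v_ext = update(v_acq, [0.0] * 10)
```

The same error-driven update, applied to social outcomes (e.g., a partner's trustworthy or untrustworthy behavior), is the domain-general mechanism the abstract argues can underpin seemingly complex moral behavior.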
Christopher Schutte, a senior staff writer for the Brown Daily Herald, recently wrote a great piece about the inner workings of our lab, including insight from multiple lab members! Check it out here!
Congratulations to Jae and Joey on their first-authored publication!
A complex web of social and moral norms governs many everyday human behaviors, acting as the glue for social harmony. The existence of moral norms helps elucidate the psychological motivations underlying a wide variety of seemingly puzzling behavior, including why humans help or trust total strangers. In this review, we examine four widespread moral norms: fairness, altruism, trust, and cooperation, and consider how a single social instrument—reciprocity—underpins compliance with these norms. Using a game theoretic framework, we examine how both context and emotions moderate moral standards, and by extension, moral behavior. We additionally discuss how a mechanism of reciprocity facilitates the adherence to, and enforcement of, these moral norms through a core network of brain regions involved in processing reward. In contrast, violating this set of moral norms elicits neural activation in regions involved in resolving decision conflict and exerting cognitive control. Finally, we review how a reinforcement mechanism likely governs learning about morally normative behavior. Together, this review aims to explain how moral norms are deployed in ways that facilitate flexible moral choices.
There is little consensus about how moral values are learned. Using a novel social learning task, we examine whether vicarious learning impacts moral values—specifically fairness preferences—during decisions to restore justice. In both laboratory and Internet-based experimental settings, we employ a dyadic justice game where participants receive unfair splits of money from another player and respond to the fairness violations by exhibiting robust nonpunitive, compensatory behavior (baseline behavior). In a subsequent learning phase, participants are tasked with responding to fairness violations on behalf of another participant (a receiver) and are given explicit trial-by-trial feedback about the receiver’s fairness preferences (e.g., whether they prefer punishment as a means of restoring justice). This allows participants to update their decisions in accordance with the receiver’s feedback (learning behavior). In a final test phase, participants again directly experience fairness violations. After learning about a receiver who prefers highly punitive measures, participants significantly enhance their own endorsement of punishment during the test phase compared with baseline. Computational learning models illustrate that the acquisition of these moral values is governed by a reinforcement mechanism, revealing that it takes as little as being exposed to the preferences of a single individual to shift one’s own desire for punishment when responding to fairness violations. Together this suggests that even in the absence of explicit social pressure, fairness preferences are highly labile. PDF, SI
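The reinforcement mechanism described in this abstract can be sketched as a simple delta-rule model that learns the value of each justice strategy from the receiver's trial-by-trial feedback. This is an illustrative toy, not the paper's fitted model: the learning rate, reward coding, and number of trials are assumptions.

```python
def learn_preferences(feedback_trials, alpha=0.2):
    """Learn the value of each justice strategy from feedback:
    V[action] <- V[action] + alpha * (reward - V[action])."""
    v = {"punish": 0.0, "compensate": 0.0}
    for action, reward in feedback_trials:
        v[action] += alpha * (reward - v[action])
    return v

# A receiver who consistently endorses punitive restoration of justice:
# choosing punishment is reinforced (reward 1), compensation is not (0).
trials = [("punish", 1.0), ("compensate", 0.0)] * 15
values = learn_preferences(trials)
```

Under this kind of model, even feedback from a single individual rapidly shifts the learned values toward punishment, consistent with the lability of fairness preferences the abstract reports.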
Undergraduate research assistant, Nancy Nkoudou, was interviewed about her research experience, career path, and the project for which she was awarded an Undergraduate Teaching and Research Award (UTRA)! Go Nancy!
Our recent feature in The Hechinger Report!
We are pleased to announce that our paper "Tolerance to ambiguous uncertainty predicts prosocial behavior" has been featured on the Editors’ Highlights webpage in "From Brain to Behaviour" in Nature Communications.
Our recent feature in Psychology Today!
We were pleasantly surprised to learn that our publication was the #1 post on Reddit's science page!
Our research was showcased by Bustle!
Uncertainty is a fundamental feature of human life that can be fractionated into two distinct psychological constructs: risk (known probabilistic outcomes) and ambiguity (unknown probabilistic outcomes). Although risk and ambiguity are known to powerfully bias nonsocial decision-making, their influence on prosocial behavior remains largely unexplored. Here we show that ambiguity attitudes, but not risk attitudes, predict prosocial behavior: the greater an individual’s ambiguity tolerance, the more they engage in costly prosocial behaviors, both during decisions to cooperate (experiments 1 and 3) and choices to trust (experiment 2). Once the ambiguity associated with another’s actions is sufficiently resolved, this relationship between ambiguity tolerance and prosocial choice is eliminated (experiment 3). Taken together, these results provide converging evidence that attitudes toward ambiguity are a robust predictor of one’s willingness to engage in costly social behavior, which suggests a mechanism for the underlying motivations of prosocial action.
The success of our political institutions, environmental stewardship, and evolutionary fitness all hinge on our ability to prioritize collective interest over self-interest. Despite considerable interest in the neuro-cognitive processes that underlie group cooperation, the evidence to date is inconsistent. Several papers support models of prosocial restraint, while more recent work supports models of prosocial intuition. We evaluate these competing models using a sample of lesion patients with damage to brain regions previously implicated in intuition and deliberation. Compared to matched control participants (brain-damaged and healthy controls), we found that patients with dorsolateral prefrontal cortex (dlPFC) damage were less likely to cooperate in a modified public goods game, whereas patients with ventromedial prefrontal cortex (vmPFC) damage were more likely to cooperate. In contrast, we observed no association between cooperation and amygdala damage relative to controls. These findings suggest that the dlPFC, rather than the vmPFC or amygdala, plays a necessary role in group-based cooperation, and that cooperation does not rely solely on intuitive processes. Implications for models of group cooperation are discussed.