Justice systems delegate punishment decisions to groups in the belief that the aggregation of individuals’ preferences facilitates judiciousness. However, group dynamics may also lead individuals to relinquish moral responsibility by conforming to the majority’s preference for punishment. Across five experiments (N = 399), we find Victims and Jurors tasked with restoring justice become increasingly punitive (by as much as 40%) as groups express a desire to punish, with every additional punisher augmenting an individual’s punishment rates. This influence is so potent that knowing about a past group’s preference continues swaying decisions even when they cannot affect present outcomes. Using computational models of decision-making, we test long-standing theories of how groups influence choice. We find groups induce conformity by making individuals less cautious and more impulsive, and by amplifying the value of punishment. However, compared to Victims, Jurors are more sensitive to moral violation severity and less readily swayed by the group. Conformity to a group’s punitive preference also extends to weightier moral violations such as assault and theft. Our results demonstrate that groups can powerfully shift an individual’s punitive preference across a variety of contexts, while additionally revealing the cognitive mechanisms by which social influence alters moral values. PDF | Scientific Reports
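The abstract above reports that groups induce conformity by making individuals less cautious and more impulsive while amplifying the value of punishment. In sequential-sampling models of choice, those effects correspond to a lower decision threshold and a stronger drift toward the punish option. The sketch below is a minimal, illustrative drift-diffusion simulation of that idea; the parameter values and the `punish_rate` helper are hypothetical, not fitted values from the paper.

```python
import random

def simulate_ddm(drift, threshold, noise=1.0, dt=0.01, max_t=10.0):
    """Simulate one drift-diffusion trial: evidence accumulates with
    Gaussian noise until it hits +threshold ('punish') or -threshold
    ('don't punish'). Returns True if the punish boundary is hit."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0, 1)
        t += dt
    return x >= threshold

def punish_rate(drift, threshold, n=2000):
    """Fraction of simulated trials ending in punishment."""
    random.seed(0)  # reproducible illustration
    return sum(simulate_ddm(drift, threshold) for _ in range(n)) / n

# Illustrative parameters only: a group preferring punishment is modeled
# as lowering caution (threshold) and amplifying punishment value (drift).
alone = punish_rate(drift=0.2, threshold=2.0)
group = punish_rate(drift=0.6, threshold=1.2)
```

Under these assumed parameters, the simulated punishment rate is higher in the group condition, mirroring the direction of the reported effect.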
Congrats to Eric for a successful poster presentation at the UTRA symposium!
“The Wriston Fellowship is awarded each year to regular untenured members of the faculty who have achieved a record of excellence in teaching and scholarship during their first years at Brown.”
This past weekend we had the opportunity to showcase some of our work to the greater Providence and Rhode Island Community. We used economic games with candy so that children could experience the kinds of questions and topics we explore! It was a day full of cooperation, defection, trust, and sometimes novel decision-making strategies. It was a ton of fun, and we can’t wait for next year!
Eric was recently awarded an undergraduate research grant (UTRA), which he will be using to work in our lab over the summer!
Because humans live in a dynamic and evolving social world, modeling the factors that guide social behavior has remained a challenge for psychology. In contrast, much progress has been made on understanding some of the more basic elements of human behavior, such as associative learning and memory, which have been successfully modeled in other species. Here we argue that applying an associative learning approach to social behavior can offer valuable insights into the human moral experience. We propose that the basic principles of associative learning—conserved across a range of species—can, in many situations, help to explain seemingly complex human behaviors, including altruistic, cooperative, and selfish acts. We describe examples from the social decision-making literature using Pavlovian learning phenomena (e.g., extinction, cue competition, stimulus generalization) to detail how a history of positive or negative social outcomes influences cognitive and affective mechanisms that shape moral choice. Examining how we might understand social behaviors and their likely reliance on domain-general mechanisms can help to generate testable hypotheses to further understand how social value is learned, represented, and expressed behaviorally. Download PDF | Read at PPS
Christopher Schutte, a senior staff writer for the Brown Daily Herald, recently wrote a great piece about the inner workings of our lab, including insight from multiple lab members! Check it out here!
Congratulations to Jae and Joey for their first authored publication!
A complex web of social and moral norms governs many everyday human behaviors, acting as the glue for social harmony. The existence of moral norms helps elucidate the psychological motivations underlying a wide variety of seemingly puzzling behavior, including why humans help or trust total strangers. In this review, we examine four widespread moral norms—fairness, altruism, trust, and cooperation—and consider how a single social instrument, reciprocity, underpins compliance with these norms. Using a game theoretic framework, we examine how both context and emotions moderate moral standards, and by extension, moral behavior. We additionally discuss how a mechanism of reciprocity facilitates the adherence to, and enforcement of, these moral norms through a core network of brain regions involved in processing reward. In contrast, violating this set of moral norms elicits neural activation in regions involved in resolving decision conflict and exerting cognitive control. Finally, we review how a reinforcement mechanism likely governs learning about morally normative behavior. Together, this review aims to explain how moral norms are deployed in ways that facilitate flexible moral choices.
There is little consensus about how moral values are learned. Using a novel social learning task, we examine whether vicarious learning impacts moral values—specifically fairness preferences—during decisions to restore justice. In both laboratory and Internet-based experimental settings, we employ a dyadic justice game where participants receive unfair splits of money from another player and respond resoundingly to the fairness violations by exhibiting robust nonpunitive, compensatory behavior (baseline behavior). In a subsequent learning phase, participants are tasked with responding to fairness violations on behalf of another participant (a receiver) and are given explicit trial-by-trial feedback about the receiver’s fairness preferences (e.g., whether they prefer punishment as a means of restoring justice). This allows participants to update their decisions in accordance with the receiver’s feedback (learning behavior). In a final test phase, participants again directly experience fairness violations. After learning about a receiver who prefers highly punitive measures, participants significantly enhance their own endorsement of punishment during the test phase compared with baseline. Computational learning models illustrate the acquisition of these moral values is governed by a reinforcement mechanism, revealing it takes as little as being exposed to the preferences of a single individual to shift one’s own desire for punishment when responding to fairness violations. Together this suggests that even in the absence of explicit social pressure, fairness preferences are highly labile. PDF, SI
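The abstract above reports that acquiring another person's punitive preferences is governed by a reinforcement mechanism driven by trial-by-trial feedback. A standard way to formalize that is a delta-rule (prediction-error) update; the sketch below is a minimal illustration under assumed quantities, where a scalar "punishment preference" on [0, 1] shifts toward the level a receiver endorses. The variable names, learning rate, and scale are hypothetical, not the paper's fitted model.

```python
def update_preference(pref, observed, alpha=0.3):
    """Delta-rule update: move one's punishment preference toward the
    level endorsed by the receiver, in proportion to the prediction
    error (observed - pref) and a learning rate alpha."""
    return pref + alpha * (observed - pref)

# Hypothetical scenario: a participant starts with a low punitive
# preference (0.2) and repeatedly observes a receiver who strongly
# endorses punishment (1.0).
pref = 0.2
for _ in range(10):
    pref = update_preference(pref, 1.0)
```

After repeated feedback the preference converges toward the receiver's level, which is the qualitative pattern the abstract describes: exposure to a single individual's preferences can shift one's own.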
Undergraduate research assistant, Nancy Nkoudou, was interviewed about her research experience, career path, and the project for which she was awarded an Undergraduate Teaching and Research Award (UTRA)! Go Nancy!
Our recent feature in The Hechinger Report!
We are pleased to announce that our paper "Tolerance to ambiguous uncertainty predicts prosocial behavior" has been featured on the Editors’ Highlights webpage in "From Brain to Behaviour" in Nature Communications.
Our recent feature in Psychology Today!