Because humans live in a dynamic and evolving social world, modeling the factors that guide social behavior has remained a challenge for psychology. In contrast, much progress has been made on understanding some of the more basic elements of human behavior, such as associative learning and memory, which have been successfully modeled in other species. Here we argue that applying an associative learning approach to social behavior can offer valuable insights into the human moral experience. We propose that the basic principles of associative learning—conserved across a range of species—can, in many situations, help to explain seemingly complex human behaviors, including altruistic, cooperative, and selfish acts. We describe examples from the social decision-making literature using Pavlovian learning phenomena (e.g., extinction, cue competition, stimulus generalization) to detail how a history of positive or negative social outcomes influences cognitive and affective mechanisms that shape moral choice. Examining how we might understand social behaviors and their likely reliance on domain-general mechanisms can help to generate testable hypotheses to further understand how social value is learned, represented, and expressed behaviorally. Download PDF | Read at PPS
Christopher Schutte, a senior staff writer for the Brown Daily Herald, recently wrote a great piece about the inner workings of our lab, including insight from multiple lab members! Check it out here!
Congratulations to Jae and Joey on their first-authored publication!
A complex web of social and moral norms governs many everyday human behaviors, acting as the glue for social harmony. The existence of moral norms helps elucidate the psychological motivations underlying a wide variety of seemingly puzzling behavior, including why humans help or trust total strangers. In this review, we examine four widespread moral norms: fairness, altruism, trust, and cooperation, and consider how a single social instrument—reciprocity—underpins compliance with these norms. Using a game theoretic framework, we examine how both context and emotions moderate moral standards, and by extension, moral behavior. We additionally discuss how a mechanism of reciprocity facilitates the adherence to, and enforcement of, these moral norms through a core network of brain regions involved in processing reward. In contrast, violating this set of moral norms elicits neural activation in regions involved in resolving decision conflict and exerting cognitive control. Finally, we review how a reinforcement mechanism likely governs learning about morally normative behavior. Together, this review aims to explain how moral norms are deployed in ways that facilitate flexible moral choices.
There is little consensus about how moral values are learned. Using a novel social learning task, we examine whether vicarious learning impacts moral values—specifically fairness preferences—during decisions to restore justice. In both laboratory and Internet-based experimental settings, we employ a dyadic justice game where participants receive unfair splits of money from another player and respond to the fairness violations by exhibiting robust nonpunitive, compensatory behavior (baseline behavior). In a subsequent learning phase, participants are tasked with responding to fairness violations on behalf of another participant (a receiver) and are given explicit trial-by-trial feedback about the receiver's fairness preferences (e.g., whether they prefer punishment as a means of restoring justice). This allows participants to update their decisions in accordance with the receiver's feedback (learning behavior). In a final test phase, participants again directly experience fairness violations. After learning about a receiver who prefers highly punitive measures, participants significantly enhance their own endorsement of punishment during the test phase compared with baseline. Computational learning models illustrate that the acquisition of these moral values is governed by a reinforcement mechanism, revealing that exposure to the preferences of a single individual is enough to shift one's own desire for punishment when responding to fairness violations. Together this suggests that even in the absence of explicit social pressure, fairness preferences are highly labile. PDF, SI
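The trial-by-trial updating described above can be illustrated with a minimal delta-rule (prediction-error) sketch. This is not the paper's actual computational model—the function names, learning rate, and feedback coding below are all illustrative assumptions—but it captures the general idea of a reinforcement mechanism incrementally tracking a receiver's punishment preference from feedback:

```python
# Minimal delta-rule sketch (illustrative, not the published model):
# an agent updates its estimate of a receiver's punishment preference
# from binary trial feedback (1 = receiver preferred punishment, 0 = not).

def update(estimate, feedback, alpha=0.3):
    """One prediction-error update: estimate += alpha * (feedback - estimate)."""
    return estimate + alpha * (feedback - estimate)

def learn_preference(feedback_trials, prior=0.0, alpha=0.3):
    """Track the evolving preference estimate across a sequence of trials."""
    estimate = prior
    history = []
    for f in feedback_trials:
        estimate = update(estimate, f, alpha)
        history.append(estimate)
    return history

# Feedback from a highly punitive receiver: the estimate drifts toward 1.
trajectory = learn_preference([1, 1, 0, 1, 1])
print(trajectory)
```

On this toy run, the estimate climbs toward the punitive receiver's preference and dips only transiently after the single nonpunitive trial, mirroring how a small amount of vicarious feedback can shift a learned value.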
Undergraduate research assistant, Nancy Nkoudou, was interviewed about her research experience, career path, and the project for which she was awarded an Undergraduate Teaching and Research Award (UTRA)! Go Nancy!
Our recent feature in The Hechinger Report!
We are pleased to announce that our paper "Tolerance to ambiguous uncertainty predicts prosocial behavior" has been featured on the Editors’ Highlights webpage in "From Brain to Behaviour" in Nature Communications.
Our recent feature in Psychology Today!
We were pleasantly surprised to learn that our publication was the #1 post on Reddit's science page!
Our research was showcased by Bustle!
Uncertainty is a fundamental feature of human life that can be fractionated into two distinct psychological constructs: risk (known probabilistic outcomes) and ambiguity (unknown probabilistic outcomes). Although risk and ambiguity are known to powerfully bias nonsocial decision-making, their influence on prosocial behavior remains largely unexplored. Here we show that ambiguity attitudes, but not risk attitudes, predict prosocial behavior: the greater an individual's ambiguity tolerance, the more they engage in costly prosocial behaviors, both during decisions to cooperate (experiments 1 and 3) and choices to trust (experiment 2). Once the ambiguity associated with another's actions is sufficiently resolved, this relationship between ambiguity tolerance and prosocial choice is eliminated (experiment 3). Taken together, these results provide converging evidence that attitudes toward ambiguity are a robust predictor of one's willingness to engage in costly social behavior, suggesting a mechanism underlying the motivations for prosocial action.
The success of our political institutions, environmental stewardship, and evolutionary fitness all hinge on our ability to prioritize collective interest over self-interest. Despite considerable interest in the neurocognitive processes that underlie group cooperation, the evidence to date is inconsistent. Several papers support models of prosocial restraint, while more recent work supports models of prosocial intuition. We evaluate these competing models using a sample of lesion patients with damage to brain regions previously implicated in intuition and deliberation. Compared to matched control participants (brain-damaged and healthy controls), we found that patients with dorsolateral prefrontal cortex (dlPFC) damage were less likely to cooperate in a modified public goods game, whereas patients with ventromedial prefrontal cortex (vmPFC) damage were more likely to cooperate. In contrast, we observed no association between cooperation and amygdala damage relative to controls. These findings suggest that the dlPFC, rather than the vmPFC or amygdala, plays a necessary role in group-based cooperation, and that cooperation does not rely solely on intuitive processes. Implications for models of group cooperation are discussed.
We've got lots of updates as we head into our lab's third year!
First, congrats and best wishes to our graduating RAs Margo, Willy, and Zach! There are unique challenges associated with being part of a lab's first cohort, and our RAs rose to the occasion admirably.
Second, a warm welcome is in order to Jeroen van Baar and Logan Bickel! They will be joining us this summer as our new postdoc and lab manager, respectively.
Third, we're excited that Willy and Jae are soon settling into their new roles as grad students in our lab! Willy will be pursuing a fifth-year Master's degree, and Jae will be starting a PhD.
Fourth, congrats to Nancy for being awarded an undergraduate research grant (UTRA)! She will be using the funding to work in our lab over the summer.
Congrats to Joey for presenting a poster at SPSP, and to Oriel for giving a talk!
How do humans learn to trust unfamiliar others? Decisions in the absence of direct knowledge rely on our ability to generalize from past experiences and are often shaped by the degree of similarity between prior experience and novel situations. Here, we leverage a stimulus generalization framework to examine how perceptual similarity between known individuals and unfamiliar strangers shapes social learning. In a behavioral study, subjects play an iterative trust game with three partners who exhibit highly trustworthy, somewhat trustworthy, or highly untrustworthy behavior. After learning who can be trusted, subjects select new partners for a second game. Unbeknownst to subjects, each potential new partner was parametrically morphed with one of the three original players. Results reveal that subjects prefer to play with strangers who implicitly resemble the original player they previously learned was trustworthy and avoid playing with strangers resembling the untrustworthy player. These decisions to trust or distrust strangers formed a generalization gradient that converged toward baseline as perceptual similarity to the original player diminished. In a second imaging experiment, we replicate these behavioral gradients and leverage multivariate pattern similarity analyses to reveal that a tuning profile of activation patterns in the amygdala selectively captures increasing perceptions of untrustworthiness. We additionally observe that within the caudate, adaptive choices to trust rely on neural activation patterns similar to those elicited when learning about unrelated, but perceptually familiar, individuals. Together, these findings suggest an associative learning mechanism efficiently deploys moral information encoded from past experiences to guide future choice.
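The generalization gradient described above can be sketched with a simple similarity-weighted interpolation. This is an illustrative toy model, not the analysis used in the paper: the Gaussian kernel, its width, and all values below are assumptions chosen only to show how trust toward a stranger can converge toward baseline as perceptual distance from a learned partner grows:

```python
import math

# Illustrative sketch (not the published model): trust toward a stranger
# is pulled from a neutral baseline toward a learned partner's value,
# weighted by a Gaussian similarity kernel over morph distance.

def generalized_trust(distance, learned_value, baseline=0.5, width=0.4):
    """Interpolate between learned_value (distance 0) and baseline (far away)."""
    similarity = math.exp(-(distance ** 2) / (2 * width ** 2))
    return baseline + similarity * (learned_value - baseline)

# Gradients for a trustworthy (0.9) vs. untrustworthy (0.1) original player:
for d in (0.0, 0.5, 1.0):
    print(d,
          round(generalized_trust(d, 0.9), 3),   # resembles trustworthy player
          round(generalized_trust(d, 0.1), 3))   # resembles untrustworthy player
```

At zero morph distance the stranger inherits the original player's learned value; as distance increases, both gradients decay smoothly toward the 0.5 baseline, matching the converging gradients reported behaviorally.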
A piece showcasing our work in The Guardian!
Our recent feature in Popular Science!
The Social & Affective Neuroscience Lab at Brown University (Lab Director: Oriel FeldmanHall) invites applications for a full-time Research Assistant/Lab Manager (start date July 2018). Our lab uses behavioral, neuroimaging, and psychophysiological techniques to explore the cognitive and neural basis of social decision-making (read more at FeldmanHallLab.com).
The research assistant/lab manager will gain experience with all aspects of the research process, which could serve as a launch pad to graduate studies. Primary responsibilities will include: (1) data acquisition using behavioral, psychophysiological, and brain imaging techniques; (2) management and analyses of datasets; (3) subject recruitment and screening; and (4) managing the lab and performing administrative duties, including IRB documentation.
The position is designed for an individual with a Bachelor's degree in psychology, neuroscience, computer science, cognitive science, or a related field. Previous experience in a lab is required. The ability to work independently with good judgment, along with strong organizational and time management skills, is necessary.
A high degree of familiarity with programs such as E-Prime, SPSS, R, Matlab, SPM (or FSL), and AcqKnowledge & BIOPAC systems is especially desired but not required, and can otherwise be learned on the job. Further duties include managing the day-to-day activities of the lab, including running experiments, managing subject payment systems, preparing experimental materials, handling IRB protocols, and training and supervising undergraduate research assistants.
To apply, please submit your application to the Brown University Recruitment webpage at https://brown.wd5.myworkdayjobs.com/staff-careers-brown/jobs (search for Research Assistant REQ142187). Please also email Oriel FeldmanHall (firstname.lastname@example.org) your CV, a list of statistical and programming expertise, and the contact information of two references.
Congrats to Joey for presenting his First Year Project to the department!