Mariya Toneva
Tenure-track faculty (W2)
Max Planck Institute for Software Systems
Email: mtoneva [at] mpi-sws [dot] org
Office: Room 438 in E1.5, Saarbrücken, Germany
Curriculum Vitae
Bio
My research is at the intersection of Machine Learning, Natural Language Processing, and Neuroscience, with a focus on building computational models of language processing in the brain that can also improve natural language processing systems. See this 5min video for a brief summary of my research highlights and future directions.
I lead the Bridging AI and Neuroscience group (BrAIN) at the Max Planck Institute for Software Systems. I'm actively looking for strong candidates for postdoctoral, PhD, and research internship positions.
Prior to MPI-SWS, I was a C.V. Starr Fellow at the Princeton Neuroscience Institute, where I studied the role of episodic memory in language comprehension together with Ken Norman and Uri Hasson. I received my Ph.D. from Carnegie Mellon University in a joint program between Machine Learning and Neural Computation, where I had the privilege of being advised by Leila Wehbe and Tom Mitchell towards my Ph.D. thesis "Bridging Language in Machines with Language in the Brain". Before beginning my graduate studies at CMU, I received a B.S. in both Computer Science and Cognitive Science from Yale University.
I also enjoy taking care of (helpful) bacteria and yeast and turning them into Bulgarian yogurt, kombucha, and sourdough bread from time to time. One of my favorite ways to relax is to bike or run along the many beautiful European paths with my husband Christoph Dann, who is always looking for sample-efficient ways to reinforce my learning.
News
- Sept 24: 🎉 Work by PhD student Gabriele Merlin accepted to EMNLP 2024!
- July 24: Wrapping up preparations for CogSci 2024 in Rotterdam! Come join us!
- June 24: 🎉 Works by research interns Subba Reddy Oota and Ruchit Rawal accepted to ACL 2024 and ACL 2024 Findings!
- June 24: Speaking at the Symposium on Neuro-AI: Bridging the gap between biological and machine intelligence about validating and improving LLMs as model organisms of human language processing.
- May 24: Speaking at ICLR 2024 Workshop on Representational Alignment about improving LLMs' alignment to the human brain.
- May 24: Speaking at Brain Science and Large Language Models: has a quantum leap occurred? about LLMs as model organisms of human language processing.
- Jan 24: Teaching a seminar course on Bridging Language in Machines with Language in the Brain.
- Sept 23: 🎉 Work by research intern Subba Reddy Oota revealing joint processing of syntactic properties in LLMs and brains accepted to NeurIPS 2023.
- Feb 23: 🎉 Together with 12 other faculty in Saarbrücken, awarded funding by DFG for a research training group on neuro-explicit models for language, vision, and action. Check out more info here: neuroexplicit.org.
Publications and Preprints
Language models and brains align due to more than next-word prediction and word-level information
G. Merlin and M. Toneva
EMNLP 2024 [abs] [pdf]
Pretrained language models have been shown to significantly predict brain recordings of people comprehending language. Recent work suggests that the prediction of the next word is a key mechanism that contributes to this alignment. What is not yet understood is whether prediction of the next word is necessary for this observed alignment or simply sufficient, and whether there are other shared mechanisms or information that are similarly important. In this work, we take a step towards understanding the reasons for brain alignment via two simple perturbations in popular pretrained language models. These perturbations help us design contrasts that can control for different types of information. By contrasting the brain alignment of these differently perturbed models, we show that improvements in alignment with brain recordings are due to more than improvements in next-word prediction and word-level information.
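For readers unfamiliar with the encoding-model setup these analyses build on, below is a minimal sketch of how brain alignment is commonly quantified: cross-validated ridge regression from model representations to fMRI responses, scored by Pearson correlation. The variable names, regularization grid, and fold count are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def brain_alignment(reps, brain, n_folds=5):
    """Mean cross-validated Pearson r between predicted and recorded activity.

    reps:  (n_samples, n_dims) model representations aligned to the stimulus.
    brain: (n_samples, n_voxels) fMRI responses to the same stimulus.
    """
    fold_scores = []
    for train, test in KFold(n_folds).split(reps):
        model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(reps[train], brain[train])
        pred = model.predict(reps[test])
        rs = [pearsonr(pred[:, v], brain[test, v])[0] for v in range(brain.shape[1])]
        fold_scores.append(np.mean(rs))
    return float(np.mean(fold_scores))

# The contrast of interest is then:
# brain_alignment(original_reps, brain) - brain_alignment(perturbed_reps, brain)
```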
Improving semantic understanding in speech language models via brain-tuning
O. Moussa, D. Klakow, and M. Toneva
arXiv 2024 [abs] [pdf]
Speech-language models align impressively with human brain responses to natural language. However, current models rely heavily on low-level speech features, indicating they lack brain-relevant semantics, limiting their utility as models of semantic processing in the brain. In this work, we address this limitation by inducing brain-relevant bias into the models via fine-tuning with fMRI recordings of people listening to natural stories, a process we call brain-tuning. After testing it on three different pretrained backbones, we show that brain-tuning improves alignment with new brain recordings in semantic language regions and reduces reliance on low-level speech features. Notably, brain-tuning leads to 1) consistent improvements in performance across various downstream tasks and 2) a representational space with increased semantic preference. Our results provide the first evidence that incorporating brain signals into the training of language models improves their semantic understanding.
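A minimal sketch of the brain-tuning idea in PyTorch: attach an fMRI-prediction head to a pretrained backbone and fine-tune both jointly on recorded responses. The backbone interface, pooling, and loss below are placeholder assumptions rather than the paper's implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class BrainTuner(nn.Module):
    """A pretrained backbone plus a linear head that predicts fMRI responses."""
    def __init__(self, backbone, feat_dim, n_voxels):
        super().__init__()
        self.backbone = backbone                   # e.g. a pretrained speech model
        self.head = nn.Linear(feat_dim, n_voxels)  # voxel-wise prediction head

    def forward(self, audio):
        feats = self.backbone(audio)               # assumed (batch, time, feat_dim)
        return self.head(feats.mean(dim=1))        # pool over time, predict voxels

# One training step, with paired (audio, fMRI) batches from the story stimuli:
# loss = F.mse_loss(model(audio_batch), fmri_batch)
# loss.backward(); optimizer.step()   # gradients also update the backbone
```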
Assessing episodic memory in LLMs with sequence order recall tasks
M. Pink, V. Vo, Q. Wu, J. Mu, J. Turek, U. Hasson, K. Norman, S. Michelmann, A. Huth, and M. Toneva
arXiv 2024 [abs] [pdf]
Current LLM benchmarks focus on evaluating models' memory of facts and semantic relations, primarily assessing semantic aspects of long-term memory. However, in humans, long-term memory also includes episodic memory, which links memories to their contexts, such as the time and place they occurred. The ability to contextualize memories is crucial for many cognitive tasks and everyday functions. This form of memory has not been evaluated in LLMs with existing benchmarks. To address the gap in evaluating memory in LLMs, we introduce Sequence Order Recall Tasks (SORT), which we adapt from tasks used to study episodic memory in cognitive psychology. SORT requires LLMs to recall the correct order of text segments, and provides a general framework that is both easily extendable and does not require any additional annotations. We present an initial evaluation dataset, Book-SORT, comprising 36k pairs of segments extracted from 9 books recently added to the public domain. Based on a human experiment with 155 participants, we show that humans can recall sequence order based on long-term memory of a book. We find that models can perform the task with high accuracy when relevant text is given in-context during the SORT evaluation. However, when presented with the book text only during training, LLMs' performance on SORT falls short. By enabling the evaluation of more aspects of memory, we believe that SORT will aid in the emerging development of memory-augmented models.
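To make the task concrete, here is a minimal sketch of constructing a SORT-style item: sample two segments from a text, present them in randomized order, and use the original text order as the label. The segment length and the model wrapper are illustrative assumptions, not the released benchmark code.

```python
import random

def make_sort_item(book_text, seg_len=120, rng=random):
    """Sample two ordered segments and present them in randomized order."""
    words = book_text.split()
    assert len(words) > 2 * seg_len, "text too short for two segments"
    i = rng.randrange(len(words) - 2 * seg_len)
    j = rng.randrange(i + seg_len, len(words) - seg_len)
    first = " ".join(words[i:i + seg_len])
    second = " ".join(words[j:j + seg_len])
    if rng.random() < 0.5:
        return {"A": first, "B": second, "answer": "A"}   # A appeared earlier
    return {"A": second, "B": first, "answer": "B"}

def sort_accuracy(model_choose, items):
    """`model_choose(item) -> 'A' or 'B'` wraps whichever LLM is evaluated."""
    return sum(model_choose(it) == it["answer"] for it in items) / len(items)
```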
Speech language models lack important brain-relevant semantics
S.R. Oota, E. Çelik, F. Deniz, and M. Toneva
ACL 2024 [abs] [pdf]
Despite known differences between reading and listening in the brain, recent work has shown that text-based language models predict both text-evoked and speech-evoked brain activity to an impressive degree. This poses the question of what types of information language models truly predict in the brain. We investigate this question via a direct approach, in which we systematically remove specific low-level stimulus features (textual, speech, and visual) from language model representations to assess their impact on alignment with fMRI brain recordings during reading and listening. Comparing these findings with speech-based language models reveals starkly different effects of low-level features on brain alignment. While text-based models show reduced alignment in early sensory regions post-removal, they retain significant predictive power in late language regions. In contrast, speech-based models maintain strong alignment in early auditory regions even after feature removal but lose all predictive power in late language regions. These results suggest that speech-based models provide insights into additional information processed by early auditory regions, but caution is needed when using them to model processing in late language regions. We make our code publicly available.
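A minimal sketch of the feature-removal step, under the common assumption that "removing" a low-level feature means keeping only the residual of the representations after regressing out a linear map from that feature; the paper's exact procedure may differ.

```python
import numpy as np

def remove_feature(reps, low_level):
    """Residualize model representations against a low-level stimulus feature.

    reps:      (n_samples, d_model) language model representations.
    low_level: (n_samples, d_feat) feature to remove, e.g. phoneme indicators.
    """
    W, *_ = np.linalg.lstsq(low_level, reps, rcond=None)  # best linear map
    return reps - low_level @ W                           # unexplained residual

# Comparing brain alignment of `reps` vs. `remove_feature(reps, feat)` then
# shows how much of the alignment that feature accounted for.
```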
Perturbed examples reveal invariances shared by language models
R. Rawal and M. Toneva
ACL 2024 Findings [abs] [pdf]
An explosion of work in language is leading to ever-increasing numbers of available natural language processing models, with little understanding of how new models compare to better-understood models. One major reason for this difficulty is saturating benchmark datasets, which may not accurately reflect differences in model performance in the wild. In this work, we propose a novel framework for comparing two natural language processing models by revealing their shared invariance to interpretable input perturbations that are designed to target a specific linguistic capability (e.g., Synonym-Invariance, Typo-Invariance). Via experiments on models from within the same and across different architecture families, this framework offers a number of insights about how changes in models (e.g., distillation, increase in size, amount of pre-training) affect multiple well-defined linguistic capabilities. Furthermore, we also demonstrate how our framework can enable evaluation of the invariances shared between models that are available as commercial black-box APIs (e.g., InstructGPT family) and models that are relatively better understood (e.g., GPT-2). Across several experiments, we observe that large language models share many of the invariances encoded by models of various sizes, whereas the invariances encoded by large language models are only shared by other large models. Possessing a wide variety of invariances may be a key reason for the recent successes of large language models, and our framework can shed light on the types of invariances that are retained by or emerge in new models.
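A minimal sketch of how one might measure invariance to an interpretable perturbation such as Typo-Invariance; the perturbation function and similarity measure here are illustrative stand-ins for the paper's framework.

```python
import numpy as np

def swap_typo(text, rng):
    """Toy Typo-Invariance perturbation: transpose one adjacent character pair."""
    chars = list(text)
    i = rng.randrange(len(chars) - 1)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def invariance(embed, originals, perturbed):
    """Mean cosine similarity between each original and its perturbed version.

    embed: a model's sentence-representation function, returning 1-D vectors.
    """
    sims = []
    for o, p in zip(originals, perturbed):
        a, b = embed(o), embed(p)
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(sims))

# Shared invariance: compare invariance(model_1.embed, ...) with
# invariance(model_2.embed, ...) on the same perturbation set.
```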
Joint processing of linguistic properties in brains and language models
S.R. Oota, M. Gupta, and M. Toneva
NeurIPS 2023 [abs] [pdf]
Language models have been shown to be very effective in predicting brain recordings of subjects experiencing complex language stimuli. For a deeper understanding of this alignment, it is important to understand the alignment between the detailed processing of linguistic information by the human brain versus language models. In NLP, linguistic probing tasks have revealed a hierarchy of information processing in neural language models that progresses from simple to complex with an increase in depth. On the other hand, in neuroscience, the strongest alignment with high-level language brain regions has consistently been observed in the middle layers. These findings leave an open question as to what linguistic information actually underlies the observed alignment between brains and language models. We investigate this question via a direct approach, in which we eliminate information related to specific linguistic properties in the language model representations and observe how this intervention affects the alignment with fMRI brain recordings obtained while participants listened to a story. We investigate a range of linguistic properties (surface, syntactic and semantic) and find that the elimination of each one results in a significant decrease in brain alignment across all layers of a language model. These findings provide direct evidence for the role of specific linguistic information in the alignment between brain and language models, and open new avenues for mapping the joint information processing in both systems.
Getting aligned on representational alignment
I. Sucholutsky, L. Muttenthaler, (many authors), M. Toneva, and T.L. Griffiths
arXiv 2023 [abs] [pdf]
Biological and artificial information processing systems form representations of the world that they can use to categorize, reason, plan, navigate, and make decisions. To what extent do the representations formed by these diverse systems agree? Can diverging representations still lead to the same behaviors? And how can systems modify their representations to better match those of another system? These questions pertaining to the study of representational alignment are at the heart of some of the most active research areas in contemporary cognitive science, neuroscience, and machine learning. Unfortunately, there is limited knowledge-transfer between research communities interested in representational alignment, and much of the progress in one field ends up being rediscovered independently in another, when greater cross-field communication would be advantageous. To improve communication between fields, we propose a unifying framework that can serve as a common language between researchers studying representational alignment. We survey the literature from the fields of cognitive science, neuroscience, and machine learning, and demonstrate how prior work fits into this framework. Finally, we lay out open problems in representational alignment where progress can benefit all three fields. We hope that our work can catalyze cross-disciplinary collaboration and accelerate progress for all communities studying and developing information processing systems. We note that this is a working paper and encourage readers to reach out with their suggestions for future revisions.
Vision-language integration in multimodal video transformers (partially) aligns with the brain
D.T. Dong and M. Toneva
arXiv 2023 [abs] [pdf]
Integrating information from multiple modalities is arguably one of the essential prerequisites for grounding artificial intelligence systems with an understanding of the real world. Recent advances in video transformers that jointly learn from vision, text, and sound over time have made some progress toward this goal, but the degree to which these models integrate information from modalities still remains unclear. In this work, we present a promising approach for probing a pre-trained multimodal video transformer model by leveraging neuroscientific evidence of multimodal information processing in the brain. Using brain recordings of participants watching a popular TV show, we analyze the effects of multi-modal connections and interactions in a pre-trained multi-modal video transformer on the alignment with uni- and multi-modal brain regions. We find evidence that vision enhances masked prediction performance during language processing, providing support that cross-modal representations in models can benefit individual modalities. However, we don't find evidence of brain-relevant information captured by the joint multi-modal transformer representations beyond that captured by all of the individual modalities. We finally show that the brain alignment of the pre-trained joint representation can be improved by fine-tuning using a task that requires vision-language inferences. Overall, our results paint an optimistic picture of the ability of multi-modal transformers to integrate vision and language in partially brain-relevant ways but also show that improving the brain alignment of these models may require new approaches.
What happens during finetuning of vision transformers: an invariance based investigation
G. Merlin, V. Nanda, R. Rawal, and M. Toneva
CoLLAs 2023 [abs] [pdf]
The pretrain-finetune paradigm usually improves downstream performance over training a model from scratch on the same task, becoming commonplace across many areas of machine learning. While pretraining is empirically observed to be beneficial for a range of tasks, there is not a clear understanding yet of the reasons for this effect. In this work, we examine the relationship between pretrained vision transformers and the corresponding finetuned versions on several benchmark datasets and tasks. We present new metrics that specifically investigate the degree to which invariances learned by a pretrained model are retained or forgotten during finetuning. Using these metrics, we present a suite of empirical findings, including that pretraining induces transferable invariances in shallow layers and that invariances from deeper pretrained layers are compressed towards shallower layers during finetuning. Together, these findings contribute to understanding some of the reasons for the successes of pretrained models and the changes that a pretrained model undergoes when finetuned on a downstream task.
Pointwise Representational Similarity
C. Kolling, T. Speicher, V. Nanda, M. Toneva, and K. Gummadi
arXiv 2023 [abs] [pdf]
With the increasing reliance on deep neural networks, it is important to develop ways to better understand their learned representations. Representation similarity measures have emerged as a popular tool for examining learned representations. However, existing measures only provide aggregate estimates of similarity at a global level, i.e. over a set of representations for N input examples. As such, these measures are not well-suited for investigating representations at a local level, i.e. representations of a single input example. Local similarity measures are needed, for instance, to understand which individual input representations are affected by training interventions to models (e.g. to be more fair and unbiased) or are at greater risk of being misclassified. In this work, we fill in this gap and propose Pointwise Normalized Kernel Alignment (PNKA), a measure that quantifies how similarly an individual input is represented in two representation spaces. Intuitively, PNKA compares the similarity of an input's neighborhoods across both spaces. Using our measure, we are able to analyze properties of learned representations at a finer granularity than what was previously possible. Concretely, we show how PNKA can be leveraged to develop a deeper understanding of (a) the input examples that are likely to be misclassified, (b) the concepts encoded by (individual) neurons in a layer, and (c) the effects of fairness interventions on learned representations.
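A minimal sketch of the neighborhood-comparison intuition behind PNKA: for each input, compare its similarities to all other inputs across the two representation spaces. The exact kernel and normalization follow the paper; this simplified reading uses cosine similarities and Pearson correlation.

```python
import numpy as np

def cosine_matrix(X):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def pointwise_similarity(A, B):
    """One score per input: how similar its neighborhood is in spaces A and B.

    A: (n_inputs, d_A) and B: (n_inputs, d_B) represent the same n inputs.
    """
    Ka, Kb = cosine_matrix(A), cosine_matrix(B)
    scores = np.empty(len(A))
    for i in range(len(A)):
        a = np.delete(Ka[i], i)   # input i's similarities to all other inputs
        b = np.delete(Kb[i], i)
        scores[i] = np.corrcoef(a, b)[0, 1]
    return scores  # low scores flag inputs represented differently by A and B
```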
Training language models to summarize narratives improves brain alignment
K.L. Aw and M. Toneva
ICLR 2023 Top 25% notable paper (Spotlight) [abs] [pdf]
Building systems that achieve a deeper understanding of language is one of the central goals of natural language processing (NLP). Towards this goal, recent works have begun to train language models on narrative datasets which require extracting the most critical information by integrating across long contexts. However, it is still an open question whether these models are learning a deeper understanding of the text, or if the models are simply learning a heuristic to complete the task. This work investigates this further by turning to the one language processing system that truly understands complex language: the human brain. We show that training language models for deeper narrative understanding results in richer representations that have improved alignment to human brain activity. We further find that the improvements in brain alignment are larger for character names than for other discourse features, which indicates that these models are learning important narrative elements. Taken together, these results suggest that this type of training can indeed lead to deeper language understanding. These findings have consequences both for cognitive neuroscience by revealing some of the significant factors behind brain-NLP alignment, and for NLP by highlighting that understanding of long-range context can be improved beyond language modeling.
Large language models can segment narrative events similarly to humans
S. Michelmann, M. Kumar, K.A. Norman, and M. Toneva
arXiv 2023 [abs] [pdf]
Humans perceive discrete events such as "restaurant visits" and "train rides" in their continuous experience. One important prerequisite for studying human event perception is the ability of researchers to quantify when one event ends and another begins. Typically, this information is derived by aggregating behavioral annotations from several observers. Here we present an alternative computational approach where event boundaries are derived using a large language model, GPT-3, instead of using human annotations. We demonstrate that GPT-3 can segment continuous narrative text into events. GPT-3-annotated events are significantly correlated with human event annotations. Furthermore, these GPT-derived annotations achieve a good approximation of the "consensus" solution (obtained by averaging across human annotations); the boundaries identified by GPT-3 are closer to the consensus, on average, than boundaries identified by individual human annotators. This finding suggests that GPT-3 provides a feasible solution for automated event annotations, and it demonstrates a further parallel between human cognition and prediction in large language models. In the future, GPT-3 may thereby help to elucidate the principles underlying human event perception.
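A minimal sketch of the prompting setup, with `complete` standing in for any text-completion function; the paper's actual prompt wording and aggregation across runs differ.

```python
def segment_events(complete, sentences):
    """Ask an LLM, sentence by sentence, whether a new event has begun.

    complete: any prompt -> completion function (an LLM API wrapper).
    """
    boundaries = []
    for i in range(1, len(sentences)):
        prompt = (" ".join(sentences[:i])
                  + "\n\nDoes a new event begin with the next sentence?"
                  + f"\nNext sentence: {sentences[i]}\nAnswer yes or no:")
        if complete(prompt).strip().lower().startswith("yes"):
            boundaries.append(i)
    return boundaries  # sentence indices where the model places event breaks
```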
A roadmap to reverse engineering real-world generalization by combining naturalistic paradigms, deep sampling, and predictive computational models
P. Herholz, E. Fortier, M. Toneva, N. Farrugia, L. Wehbe, and V. Borghesani
Neurons, Behavior, Data Science, and Theory 2023 [abs] [pdf]
Real-world generalization, e.g., deciding to approach a never-seen-before animal, relies on contextual information as well as previous experiences. Such a seemingly easy behavioral choice requires the interplay of multiple neural mechanisms, from integrative encoding to category-based inference, weighted differently according to the circumstances. Here, we argue that a comprehensive theory of the neuro-cognitive substrates of real-world generalization will greatly benefit from empirical research with three key elements. First, the ecological validity provided by multimodal, naturalistic paradigms. Second, the model stability afforded by deep sampling. Finally, the statistical rigor granted by predictive modeling and computational controls.
Combining computational controls with natural text reveals aspects of meaning composition
M. Toneva, T. Mitchell, and L. Wehbe
Nature Computational Science 2022 [abs] [pdf]
To study a core component of human intelligence---our ability to combine the meaning of words---neuroscientists look for neural correlates of meaning composition, such as brain activity proportional to the difficulty of understanding a sentence. However, little is known about the product of meaning composition---the combined meaning of words beyond their individual meaning. We term this product "supra-word meaning" and devise a computational representation for it by using recent neural network algorithms and a new technique to disentangle composed- from individual-word meaning. Using functional magnetic resonance imaging, we reveal that hubs that are thought to process lexical-level meaning also maintain supra-word meaning, suggesting a common substrate for lexical and combinatorial semantics. Surprisingly, we cannot detect supra-word meaning in magnetoencephalography, which suggests that composed meaning is maintained through a different neural mechanism than synchronized firing. This sensitivity difference has implications for past neuroimaging results and future wearable neurotechnology.
Memory for long narratives
M. Toneva, V. Vo, J. Turek, S. Jain, S. Michelmann, M. Capotă, A. Huth, U. Hasson, and K. Norman
CEMS 2022 [abs]
Language is the primary way in which we communicate, and yet it is not clear how we draw on previous experiences to understand language. In this work, we aim to investigate the role of episodic memory in language comprehension, by building models of this process and by collecting new benchmark datasets. As an initial step, we sought to characterize how well people remember information from long narratives, by asking participants to recall chapters of a recently-read novel when cued with a passage from the start of the chapter. We evaluated the precision of this recall by comparing its semantic representations--constructed using a language model--to those of the corresponding chapters. Analyses of the data are ongoing. In preliminary analyses, we find that a number of events were recalled with high precision across participants, and we do not find an effect of event position within a chapter on the precision of recall.
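A minimal sketch of the precision measure, assuming a generic sentence-embedding model (the study's choice of language model may differ): compare a participant's recall text to the source chapter in embedding space.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice

def recall_precision(recall_text, chapter_text):
    """Cosine similarity between a participant's recall and the source chapter."""
    r, c = embedder.encode([recall_text, chapter_text])
    return float(r @ c / (np.linalg.norm(r) * np.linalg.norm(c)))
```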
The Courtois Neuromod project: a deep, multi-domain fMRI dataset to build individual brain models
J. Boyle*, B. Pinsard*, V. Borghesani, M. Saint-Laurent, F. Lespinasse, F. Paugam, P. Sainath, S. Rastegarnia, A. Boré, J. Chen, A. Cyr, E. Dessureault, E. DuPre, Y. Harel, M. Toneva, S. Belleville, S. Brambati, J. Cohen-Adad, A. Fuente, M. Hebart, K. Jerbi, P. Rainville, L. Wehbe, and P. Bellec
OHBM 2022 Oral presentation [abs]
Several large individual fMRI datasets have emerged to train artificial intelligence (AI) models on specific cognitive processes, including natural images (NSD, BOLD5000) and movie viewing (Dr Who). However, a key feature of the brain is the capacity to integrate and switch between specialized processes and cognitive contexts. The Courtois Project on Neuronal Modelling (CNeuroMod) is creating a rich neuroimaging dataset to probe numerous cognitive domains simultaneously, in the same subjects, with carefully controlled and/or naturalistic stimuli, in order to build integrative AI models. CNeuroMod will eventually feature hundreds of hours of neuroimaging data per subject, and is already the largest individual fMRI dataset currently available.
CNeuroMod features fMRI recordings from 6 English-speaking participants (3 women). 4 subjects are scanned acutely (80h+ / year) and 2 are scanned intensively (40h+ / year). The 4 acutely scanned participants have reached, or are close to, 100h of MRI data. Information on previously reported datasets (hcptrt, movie10, shinobi, mario, triplet, friends and things) is available at https://docs.cneuromod.ca. Here, we highlight 8 new datasets to (1) validate our set-up, (2) map functional areas, and (3) expand the set of naturalistic stimuli covered.
First, the effectiveness of our auditory protection protocol and the reactivity of our custom-built controller will be assessed with, respectively, an audition task (measuring auditory thresholds inside and outside the MRI scanner) and a gamepad task (comparing motor responses using the custom versus a commercial controller).
Mapping of visual areas will be possible thanks to a classical retinotopy task (retino) and a functional localizer (fLoc) isolating category-selective cortical regions. The localizers dataset consists of multiple sessions of language tasks spanning sensory modalities (auditory, visual) and languages (French and English). The potter dataset consists of reading chapter 9 of a Harry Potter book to investigate language processing. Finally, multfs is a study of working memory using different tasks, stimuli, and features, and emotions is passive watching of annotated, emotionally evocative short videos. Preprocessed imaging data, behavioural responses, and physiological recordings are formatted in BIDS and available through a registered access system and the DataLad version control tool. Data requests can be made via our website: https://www.cneuromod.ca/.
The CNeuroMod project has assembled an unprecedented resource to study individual functional brain activity for a wide range of controlled and naturalistic stimuli. For each type of stimulus included in CNeuroMod, the relevant subset of data is one of the largest datasets available to the community. Taken together, CNeuroMod opens a unique avenue to create AI models of integrative processes in the brain. We anticipate that this wealth of longitudinal data will help researchers discover novel insights into the way human brains process complex, naturalistic stimuli.
Same cause; different effect in the brain
M. Toneva*, J. Williams*, A. Bollu, C. Dann, and L. Wehbe
CLeaR 2022 [abs] [pdf] [code]
To study information processing in the brain, neuroscientists manipulate experimental stimuli while recording participant brain activity. They can then use encoding models to find out which brain "zone" (e.g. which region of interest, volume pixel or electrophysiology sensor) is predicted from the stimulus properties. Given the assumptions underlying this setup, when the stimulus properties are predictive of the activity in a zone, these properties are understood to cause activity in that zone. In recent years, researchers have begun using neural networks to construct representations that capture the diverse properties of complex stimuli, such as natural language or natural images. Encoding models built using these high-dimensional representations are often able to accurately predict the activity in large swathes of cortex, suggesting that the activity in all these brain zones is caused by the stimulus properties captured in the neural network representation. It is then natural to ask: "Is the activity in these different brain zones caused by the stimulus properties in the same way?" In neuroscientific terms, this corresponds to asking if these different zones process the stimulus properties in the same way. Here, we propose a new framework that enables researchers to ask if the properties of a stimulus affect two brain zones in the same way. We use simulated data and two real fMRI datasets with complex naturalistic stimuli to show that our framework enables us to make such inferences. Our inferences are strikingly consistent between the two datasets, indicating that the proposed framework is a promising new tool for neuroscientists to understand how information is processed in the brain.
Single-trial MEG data can be denoised through cross-subject predictive modeling
S. Ravishankar, M. Toneva, and L. Wehbe
Frontiers in Computational Neuroscience 2021 [abs] [pdf]
A pervasive challenge in brain imaging is the presence of noise that hinders investigation of underlying neural processes, with Magnetoencephalography (MEG) in particular having very low Signal-to-Noise Ratio (SNR). The established strategy to increase MEG's SNR involves averaging multiple repetitions of data corresponding to the same stimulus. However, repetition of stimulus can be undesirable, because underlying neural activity has been shown to change across trials, and repeating stimuli limits the breadth of the stimulus space experienced by subjects. In particular, the rising popularity of naturalistic studies with a single viewing of a movie or story necessitates the discovery of new approaches to increase SNR. We introduce a simple framework to reduce noise in single-trial MEG data by leveraging correlations in neural responses across subjects as they experience the same stimulus. We demonstrate its use in a naturalistic reading comprehension task with 8 subjects, with MEG data collected while they read the same story a single time. We find that our procedure results in data with reduced noise and allows for better discovery of neural phenomena. As proof-of-concept, we show that the N400m's correlation with word surprisal, an established finding in literature, is far more clearly observed in the denoised data than the original data. The denoised data also shows higher decoding and encoding accuracy than the original data, indicating that the neural signals associated with reading are either preserved or enhanced after the denoising procedure.
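A minimal sketch of the cross-subject idea: predict a target subject's sensor timecourses from all other subjects experiencing the same stimulus, so that the prediction retains mostly stimulus-driven signal. The estimator here is an illustrative assumption, and in practice the model is fit and evaluated on separate portions of the data.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

def denoise_subject(target, others):
    """Estimate a subject's stimulus-driven MEG signal from other subjects.

    target: (n_timepoints, n_sensors) for the subject being denoised.
    others: list of same-shape arrays from subjects reading the same story.
    """
    X = np.concatenate(others, axis=1)  # other subjects' sensors as predictors
    model = RidgeCV(alphas=np.logspace(-1, 5, 7)).fit(X, target)
    return model.predict(X)  # fit/predict on held-out splits in practice
```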
Does injecting linguistic structure into language models lead to better alignment with brain recordings?
M. Abdou, A. V. González, M. Toneva, D. Hershcovich, and A. Søgaard
arXiv 2021 [abs] [pdf]
Neuroscientists evaluate deep neural networks for natural language processing as possible candidate models for how language is processed in the brain. These models are often trained without explicit linguistic supervision, but have been shown to learn some linguistic structure in the absence of such supervision (Manning et al., 2020), potentially questioning the relevance of symbolic linguistic theories in modeling such cognitive processes (Warstadt and Bowman, 2020). We evaluate across two fMRI datasets whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms. Using structure from dependency or minimal recursion semantic annotations, we find alignments improve significantly for one of the datasets. For the other dataset, we see more mixed results. We present an extensive analysis of these results. Our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain, expanding the range of possible scientific inferences a neuroscientist could make, and opens up new opportunities for cross-pollination between computational neuroscience and linguistics.
Modeling task effects on meaning representation in the brain via zero-shot MEG prediction
M. Toneva*, O. Stretcu*, B. Poczos, L. Wehbe, and T. Mitchell
NeurIPS 2020 [abs] [pdf] [code] [video]
How meaning is represented in the brain is still one of the big open questions in neuroscience. Does a word (e.g., bird) always have the same representation, or does the task under which the word is processed alter its representation (answering “can you eat it?” versus “can it fly?”)? The brain activity of subjects who read the same word while performing different semantic tasks has been shown to differ across tasks. However, it is still not understood how the task itself contributes to this difference. In the current work, we study Magnetoencephalography (MEG) brain recordings of participants tasked with answering questions about concrete nouns. We investigate the effect of the task (i.e. the question being asked) on the processing of the concrete noun by predicting the millisecond-resolution MEG recordings as a function of both the semantics of the noun and the task. Using this approach, we test several hypotheses about the task-stimulus interactions by comparing the zero-shot predictions made by these hypotheses for novel tasks and nouns not seen during training. We find that incorporating the task semantics significantly improves the prediction of MEG recordings, across participants. The improvement occurs 475 − 550ms after the participants first see the word, which corresponds to what is considered to be the ending time of semantic processing for a word. These results suggest that only the end of semantic processing of a word is task-dependent, and pose a challenge for future research to formulate new hypotheses for earlier task effects as a function of the task and stimuli.
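A minimal sketch of one task-stimulus hypothesis that can be tested in this setup: model MEG activity as a function of noun semantics, task (question) semantics, and their interaction, and evaluate zero-shot on held-out noun-task pairs. The feature construction below is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

def task_stimulus_features(noun_vec, task_vec):
    """Concatenate noun semantics, task semantics, and their interaction."""
    interaction = np.outer(noun_vec, task_vec).ravel()
    return np.concatenate([noun_vec, task_vec, interaction])

# X_train: rows of task_stimulus_features(...) for training noun-task pairs;
# Y_train: MEG sensor activity in one time window. Zero-shot evaluation
# predicts held-out pairs whose noun AND task were never seen in training:
# model = RidgeCV().fit(X_train, Y_train); pred = model.predict(X_test)
```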
Investigating different alignment methods between natural and artificial neural networks for language processing
A. Bollu, M. Toneva, and L. Wehbe
SNL 2020 [abs]
Aligning the internal representational spaces of state-of-the-art natural language processing (NLP) models with those of the brain has revealed a great deal of overlap in what both systems capture about their language input. Prior work investigated this alignment using linear encoding models that predict each fMRI voxel as an independent linear combination of the NLP representations[1]. However, a linear mapping may fail to align nonlinearly encoded information within the NLP and fMRI representations, and is not well equipped to benefit from information shared among groups of voxels. Here, we investigate the effect of varying encoding model complexity on alignment performance. We align fMRI recordings of 8 participants reading naturalistic text word-by-word with intermediate representations from BERT[2], a state-of-the-art NLP model, that correspond to the stimulus text[3]. We investigate three encoding models that predict the fMRI voxels as a function of the BERT representations: LinearAnalytical - linear model where weights were estimated using a closed-form solution; LinearGD - linear model trained using gradient descent (GD); MLPGD - multilayer perceptron (MLP) with one hidden layer trained using Batch GD. Two key features separate MLPGD from the linear models: (1) a nonlinear activation layer and (2) predicting all voxels jointly using a shared hidden layer. We include LinearGD to identify whether any performance differences can be attributed to the training method. We evaluate alignment performance by computing the mean Pearson correlations[4] between predicted and true voxel activities within regions of interest (ROIs) known to be consistently activated during language processing[5,6]. We additionally evaluate each encoding model against a noise ceiling[7], computed based on pairwise correlations between participants’ fMRI recordings. We use paired t-tests to test for significant differences between model performance across subjects and pycortex[8] to visualize voxel correlations on a 3D brain surface. We find no significant difference between LinearAnalytical and LinearGD (p>0.05 for all ROIs). LinearGD performs on par with the noise ceiling in 5 ROIs (p>0.2), and worse in the dorsomedial prefrontal cortex (dmPFC, p=0.009), inferior frontal gyrus pars orbitalis (IFGorb, p=0.05) and posterior cingulate (pCingulate, p=0.06), revealing room for improvement in those regions. Differences between LinearGD and MLPGD evaluated based on the whole ROIs are not significant (p>0.05), but qualitative analysis reveals smaller clusters within the dmPFC, IFGorb and pCingulate where MLPGD outperforms LinearGD. We further observe that the encoding models sometimes outperform the estimated noise ceiling, especially within the posterior temporal lobe, angular gyrus and middle frontal gyrus. Interestingly, our qualitative analysis of voxel correlations reveals clusters within the dmPFC, IFGorb and pCingulate that are better predicted by the MLP architecture. One interpretation of this finding is that these clusters may process different information from the rest of the region -- information that only a nonlinear alignment can reveal -- but further investigation is necessary. We also find that the noise ceiling computation provides suboptimal estimates. A better noise ceiling may provide stronger evidence for our observations and highlight other areas where the encoding model can be improved upon as a guide to future research. References: https://tinyurl.com/y2v23rd2
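A minimal sketch of the two encoding-model families compared above, assuming ridge regularization for the analytical solution (the abstract does not specify the exact estimator):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def linear_analytical(X, Y, lam=1.0):
    """Closed-form ridge weights: W = (X'X + lam*I)^(-1) X'Y, all voxels at once."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# MLPGD: one nonlinear hidden layer shared across all voxels, trained by GD.
mlp = MLPRegressor(hidden_layer_sizes=(256,), max_iter=500)
# preds_linear = X_test @ linear_analytical(X_train, Y_train)
# preds_mlp = mlp.fit(X_train, Y_train).predict(X_test)
```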
Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)
M. Toneva and L. Wehbe
NeurIPS 2019 [abs] [pdf] [code]
Neural network models for NLP are typically implemented without the explicit encoding of language rules and yet they are able to break one performance record after another. This has generated a lot of research interest in interpreting the representations learned by these networks. We propose here a novel interpretation approach that relies on the only processing system we have that does understand language: the human brain. We use brain imaging recordings of subjects reading complex natural text to interpret word and sequence embeddings from 4 recent NLP models - ELMo, USE, BERT and Transformer-XL. We study how their representations differ across layer depth, context length, and attention type. Our results reveal differences in the context-related representations across these models. Further, in the transformer models, we find an interaction between layer depth and context length, and between layer depth and attention type. We finally hypothesize that altering BERT to better align with brain recordings would enable it to also better understand language. Probing the altered BERT using syntactic NLP tasks reveals that the model with increased brain-alignment outperforms the original model. Cognitive neuroscientists have already begun using NLP networks to study the brain, and this work closes the loop to allow the interaction between NLP and cognitive neuroscience to be a true cross-pollination.
Inducing brain-relevant bias in natural language processing models
D. Schwartz, M. Toneva, and L. Wehbe
NeurIPS 2019 [abs] [pdf] [code]
Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain. However, these models have not been specifically designed to capture the way the brain represents language meaning. We hypothesize that fine-tuning these models to predict recordings of brain activity of people reading text will lead to representations that encode more brain-activity-relevant language information. We demonstrate that a version of BERT, a recently introduced and powerful language model, can improve the prediction of brain activity after fine-tuning. We show that the relationship between language and brain activity learned by BERT during this fine-tuning transfers across multiple participants. We also show that fine-tuned representations learned from both magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are better for predicting fMRI than the representations learned from fMRI alone, indicating that the learned representations capture brain-activity-relevant information that is not simply an artifact of the modality. While changes to language representations help the model predict brain activity, they also do not harm the model's ability to perform downstream NLP tasks. Our findings are notable for research on language understanding in the brain.
Investigating task effects on brain activity during stimulus presentation in MEG
M. Toneva*, O. Stretcu*, B. Poczos, and T. Mitchell
OHBM 2019 [abs]
Recorded brain activity of subjects who perceive the same stimulus (e.g. a word) while performing different semantic tasks (e.g. identifying whether the word belongs to a particular category) has been shown to differ across tasks. However, it is not well understood how precisely the task contributes to this brain activity. In the current work, we propose multiple hypotheses of how possible interactions between the task and stimulus semantics can be related to the observed brain activity. We test these hypotheses by designing machine learning models to represent each hypothesis, training them to predict the recorded brain activity, and comparing their performance. We investigate a magnetoencephalography (MEG) dataset, where subjects were tasked to answer 20 yes/no questions (e.g. 'Is it manmade?') about concrete nouns and their line drawings. Each question-stimulus pair is presented only once. Here we consider each question as a different task. We show that incorporating task semantics improves the prediction of single-trial MEG data by an average of 10% across subjects.
An empirical study of example forgetting during deep neural network learning
M. Toneva*, A. Sordoni*, R. Tachet des Combes*, A. Trischler, Y. Bengio, and G. Gordon
ICLR 2019 [abs] [pdf] [code] [open review]
Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks. Our goal is to understand whether a related phenomenon occurs when data does not undergo a clear distributional shift. We define a "forgetting event" to have occurred when an individual training example transitions from being classified correctly to incorrectly over the course of learning. Across several benchmark data sets, we find that: (i) certain examples are forgotten with high frequency, and some not at all; (ii) a data set's (un)forgettable examples generalize across neural architectures; and (iii) based on forgetting dynamics, a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance.
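The bookkeeping for forgetting events is simple enough to sketch: track, per training example, whether it was classified correctly at its previous presentation, and count transitions from correct to incorrect. Variable names below are illustrative; see the paper's released code for the actual implementation.

```python
import numpy as np

# prev_correct  = np.zeros(n_examples, dtype=bool)  # initialized before training
# forget_counts = np.zeros(n_examples, dtype=int)

def update_forgetting(prev_correct, forget_counts, idx, preds, labels):
    """Call once per minibatch; `idx` holds the dataset indices of the batch."""
    correct = preds == labels
    forget_counts[idx] += prev_correct[idx] & ~correct  # correct -> incorrect
    prev_correct[idx] = correct

# After training, examples with forget_counts == 0 that were learned at least
# once are "unforgettable"; many can be pruned without hurting generalization.
```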
Word length processing in left lateral occipital cortex through region-to-region connectivity: an MEG study
M. Toneva and T. Mitchell
OHBM 2018 [abs]
A previous MEG study found that many features of stimuli can be decoded around the same time but at different places in the brain, posing the question of how information processing is coordinated between brain regions. Previous approaches to this question are of two types. The first uses a classifier or regression to uncover the relative timings of feature decodability in different brain regions. While addressing when and where information is processed, this approach does not specify how information is coordinated. The second type estimates when the connectivity between regions changes. While this approach assumes that information is coordinated by communication, it does not directly relate to the information content. We aim to more directly relate processing of information content to connectivity. We examine whether, during presentation of a word stimulus, the length of the word relates to how strongly the region that best encodes word length - the left lateral occipital cortex (LOC) - connects to other regions, at different times. For this purpose, we analyze MEG data from an experiment in which 9 subjects were presented 60 concrete nouns along with their line drawings. Our results suggest that the region that is best at encoding a perceptual stimulus feature - word length - has a connectivity network in which the connection strengths vary with the value of the feature. Furthermore, we observe this relationship prior to the peak information decodability in any region. One hypothesis is that information necessary for the processing of the feature is communicated to the seed region by varying connection strengths. Further analysis for a stimulus feature with a later decodability peak, such as a semantic feature, would add to the current results.
MEG representational similarity analysis implicates hierarchical integration in sentence processing
N. Rafidi*, D. Schwartz*, M. Toneva*, S. Jat, and T. Mitchell
OHBM 2018 [abs]
Multiple hypotheses exist for how the brain constructs sentence meaning. Most fall into two groups based on their assumptions about the processing order of the words within the sentence. The first considers a sequential processing order, while the second uses hierarchical syntactic rules. We test which hypothesis best explains MEG data recorded during reading of sentences with active and passive voice. Under the sequential hypothesis, the voice of a sentence should change its neural signature because word order changes. Under the hierarchical hypothesis, active and passive sentences corresponding to the same proposition should exhibit similar neural signatures. We test how well three language models explain MEG data collected during noun-verb-noun sentence reading. The models we test are bag of words (BoW), sequential word order, and hierarchical. All three models correlate with the MEG data for some timepoints, after verb presentation and briefly post sentence. However, the hierarchical model correlates significantly for more timepoints and is often the best correlated model even if that correlation is not significant. Our analysis shows that a hierarchical model of meaning correlates with neural activity for a longer duration than models which use a bag of words meaning representation or sequential meaning construction. Additionally, just after verb presentation the hierarchical model is the model best correlated with the MEG data. Our method enables the study of language processing hypotheses in the brain at a fine time scale and can be applied to a wide variety of language models.
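A minimal sketch of time-resolved representational similarity analysis, the general technique this comparison relies on: build a dissimilarity matrix over sentences for each model and for the MEG data at each timepoint, then correlate them. The distance and correlation choices below are common defaults, not necessarily the paper's.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_timecourse(meg, model_reps):
    """Correlate a model's sentence geometry with MEG geometry over time.

    meg:        (n_sentences, n_sensors, n_timepoints) recordings.
    model_reps: (n_sentences, d) sentence representations from one model.
    """
    model_rdm = pdist(model_reps, metric="correlation")
    scores = []
    for t in range(meg.shape[2]):
        meg_rdm = pdist(meg[:, :, t], metric="correlation")
        scores.append(spearmanr(model_rdm, meg_rdm).correlation)
    return np.array(scores)  # one model-brain correlation per timepoint
```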
Applying artificial vision models to human scene understanding
E. M. Aminoff, M. Toneva, A. Shrivastava, X. Chen, I. Misra, A. Gupta, and M. J. Tarr
Frontiers in Computational Neuroscience 2015 [abs] [pdf]
How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective—the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS)—have typically focused on single visual dimensions (e.g., size), rather than the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behavioral measured judgments of visually-perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN—the models that best accounted for the patterns obtained from PPA and TOS—were different from the GIST model that best accounted for the pattern obtained from RSC; (3) The best performing models outperformed behaviorally-measured judgments of scene similarity in accounting for neural data. One computer vision method—NEIL (“Never-Ending-Image-Learner”), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes—showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network.
Scene-space encoding within the functional scene-selective network
E. M. Aminoff, M. Toneva, A. Gupta, and M. J. Tarr
VSS 2015 [abs]
High-level visual neuroscience has often focused on how different visual categories are encoded in the brain. For example, we know how the brain responds when viewing scenes as compared to faces or other objects - three regions are consistently engaged: the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area/transverse occipital sulcus (TOS). Here we explore the fine-grained responses of these three regions when viewing 100 different scenes. We asked: 1) Can neural signals differentiate the 100 exemplars? 2) Are the PPA, RSC, and TOS strongly activated by the same exemplars and, more generally, are the "scene-spaces" representing how scenes are encoded in these regions similar? In an fMRI study of 100 scenes we found that the scenes eliciting the greatest BOLD signal were largely the same across the PPA, RSC, and TOS. Remarkably, the orderings, from strongest to weakest, of scenes were highly correlated across all three regions (r = .82), but were only moderately correlated with non-scene selective brain regions (r = .30). The high similarity across scene-selective regions suggests that a reliable and distinguishable feature space encodes visual scenes. To better understand the potential feature space, we compared the neural scene-space to scene-spaces defined by either several different computer vision models or behavioral measures of scene similarity. Computer vision models that rely on more complex, mid- to high-level visual features best accounted for the pattern of BOLD signal in scene-selective regions and, interestingly, the better-performing models exceeded the performance of our behavioral measures. These results suggest a division of labor where the representations within the PPA and TOS focus on visual statistical regularities within scenes, whereas the representations within the RSC focus on a more high-level representation of scene category. Moreover, the data suggest the PPA mediates between the processing of the TOS and RSC.
Towards a model for mid-level feature representation of scenes
M. Toneva, E. M. Aminoff, A. Gupta, and M. Tarr
WIML workshop at NeurIPS 2014 Oral presentation [abs]
Never Ending Image Learner (NEIL) is a semi-supervised learning algorithm that continuously pulls images from the web and learns relationships among them. NEIL has classified over 400,000 images into 917 scene categories using 84 dimensions - termed “attributes”. These attributes roughly correspond to mid-level visual features whose differential combinations define a large scene space. As such, NEIL’s small set of attributes offers a candidate model for the psychological and neural representation of scenes. To investigate this, we tested for significant similarities between the structure of scene space defined by NEIL and the structure of scene space defined by patterns of human BOLD responses as measured by fMRI. The specific scenes in our study were selected by reducing the number of attributes to the 39 that best accounted for variance in NEIL’s scene-attribute co-classification scores. Fifty scene categories were then selected such that each category scored highly on a different set of at most 3 of the 39 attributes. We then selected the two most representative images of the corresponding high-scoring attributes from each scene category, resulting in a total of 100 stimuli used. Canonical correlation analysis (CCA) was used to test the relationship between measured BOLD patterns within the functionally-defined parahippocampal region and NEIL’s representation of each stimulus as a vector containing stimulus-attribute co-classification scores on the 39 attributes. CCA revealed significant similarity between the local structures of the fMRI data and the NEIL representations for all participants. In contrast, neither the entire set of 84 attributes nor 39 randomly-chosen attributes produced significant results using this CCA method. Overall, our results indicate that subsets of the attributes learned by NEIL are effective in accounting for variation in the neural encoding of scenes – as such they represent a first pass compositional model of mid-level features for scene representation.
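A minimal sketch of the CCA test using scikit-learn's implementation; the study's exact CCA variant, dimensionality choices, and significance procedure may differ.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_scores(bold, attributes, n_components=2):
    """Canonical correlations between BOLD patterns and NEIL attribute vectors.

    bold:       (n_stimuli, n_voxels) responses in a region of interest.
    attributes: (n_stimuli, n_attributes) NEIL co-classification scores.
    """
    cca = CCA(n_components=n_components).fit(bold, attributes)
    U, V = cca.transform(bold, attributes)
    return [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(n_components)]

# Significance can be assessed by comparing against scores obtained with
# randomly chosen attribute subsets, as in the abstract's control analyses.
```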
An exploration of social grouping: effects of behavioral mimicry, appearance, and eye gaze
A. Nawroj, M. Toneva, H. Admoni, and B. Scassellati
CogSci 2014 Oral presentation [abs] [pdf]
People naturally and easily establish social groupings based on appearance, behavior, and other nonverbal signals. However, psychologists have yet to understand how these varied signals interact. For example, which factor has the strongest effect on establishing social groups? What happens when two of the factors conflict? Part of the difficulty of answering these questions is that people are unique and stochastic stimuli. To address this problem, we use robots as a visually simple and precisely controllable platform for examining the relative influence of social grouping features. We examine how behavioral mimicry, similarity of appearance, and direction of gaze influence people's perception of which group a robot belongs to. Experimental data shows that behavioral mimicry has the most dominant influence on social grouping, though this influence is modulated by appearance. Non-mutual gaze was found to be a weak modulator of the perception of grouping. These results provide insight into the phenomenon of social grouping, and suggest areas for future exploration.
The physical presence of a robot tutor increases cognitive learning gains
D. Leyzberg, S. Spaulding, M. Toneva, and B. Scassellati
CogSci 2012 [abs] [pdf]
We present the results of a 100 participant study on the role of a robot's physical presence in a robot tutoring task. Participants were asked to solve a set of puzzles while being provided occasional gameplay advice by a robot tutor. Each participant was assigned one of five conditions: (1) no advice, (2) robot providing randomized advice, (3) voice of the robot providing personalized advice, (4) video representation of the robot providing personalized advice, or (5) physically-present robot providing personalized advice. We assess the tutor's effectiveness by the time it takes participants to complete the puzzles. Participants in the robot providing personalized advice group solved most puzzles faster on average and improved their same-puzzle solving time significantly more than participants in any other group. Our study is the first to assess the effect of the physical presence of a robot in an automated tutoring interaction. We conclude that physical embodiment can produce measurable learning gains.
Robot gaze does not reflexively cue human attention
H. Admoni, C. Bank, J. Tan, M. Toneva, and B. Scassellati
CogSci 2011 [abs] [pdf]
Joint visual attention is a critical aspect of typical human interactions. Psychophysics experiments indicate that people exhibit strong reflexive attention shifts in the direction of another person's gaze, but not in the direction of non-social cues such as arrows. In this experiment, we ask whether robot gaze elicits the same reflexive cueing effect as human gaze. We consider two robots, Zeno and Keepon, to establish whether differences in cueing depend on level of robot anthropomorphism. Using psychophysics methods for measuring attention by analyzing time to identification of a visual probe, we compare attention shifts elicited by five directional stimuli: a photograph of a human face, a line drawing of a human face, Zeno's gaze, Keepon's gaze and an arrow. Results indicate that all stimuli convey directional information, but that robots fail to elicit attentional cueing effects that are evoked by non-robot stimuli, regardless of robot anthropomorphism.
Teaching
2023-2024: Taught a seminar course on Bridging Language in Machines with Language in the Brain.
2016: Instructed a summer course on Machine Learning for Neuroscience as part of the 2016 Multimodal Neuroimaging Training Program
- Lecture 1 [slides] [audio did not work, so no video for this one]
  - classification:
    - naive Bayes
    - SVM
    - kNN
  - linear regression
- Lecture 2 [slides]
  - model selection:
    - overfitting
    - cross validation
    - feature selection
    - regularization
  - significance testing:
    - permutation test
    - multiple comparisons corrections
- Lecture 3 [slides]
  - dimensionality reduction:
    - PCA and ICA
    - CCA
    - Laplacian eigenmaps
  - clustering:
    - k-means
    - spectral clustering
    - divisive and agglomerative clustering
- Lecture 4 [slides]
  - latent variable models (Hidden Markov Models)
  - reinforcement learning
  - deep learning:
    - common architectures and their uses: RNN, LSTM, DBN, CNN
  - AlphaGo algorithm details