Postle Lab
University of Wisconsin-Madison

SfN 2018 - Nanosymposium Session 275

WORKING MEMORY I

Monday, November 5, 8:00 - 11:30 AM, 2

Chair: Bradley Postle, University of Wisconsin - Madison

Co-Chair: Clayton E. Curtis, New York University

Speakers: M. Rahmati, Y. Cai, K. K. Sreenivasan, Q. Yu, L. T. Likova, V. Salmela, E. F. Ester, T. C. Sprague, M. Widhalm, G.-Y. Bae, J. A. Brissenden, A. L. Noyce, F. Bouchacourt, J. M. Castelhano

 

Spatial priority in the service of non-spatial working memory

*M. RAHMATI1, M. PAYTON2, T. C. SPRAGUE1, C. E. CURTIS1, K. K. SREENIVASAN3;
1New York Univ., New York, NY; 2New York Univ., Madison, WI; 3New York Univ. Abu Dhabi, Abu Dhabi, United Arab Emirates

Previous studies (e.g., Serences et al., 2009; Harrison & Tong, 2009; Rahmati et al., 2018) support a sensory recruitment model of working memory (WM), which posits that the same neural mechanisms that encode sensory information also encode WM content (Postle & D’Esposito, 2015). Persistent activity during WM maintenance in frontal and parietal cortex may provide top-down feedback signals that sculpt neural population activity in visual cortex during WM, keeping information in an accessible state (Curtis & D’Esposito, 2003; Sreenivasan et al., 2014). Many of the areas that show robust delay-period activity are in topographically organized portions of frontal and parietal cortex, where the topography may coordinate the prioritization of items in WM with retinotopic visual cortex (Jerde et al., 2012). Several studies provide strong evidence for this viewpoint in the context of spatial WM. Here, we extend this idea and test the hypothesis that spatial feedback signals might, paradoxically, even support WM for non-spatial features. To do so, we scanned subjects while they performed a WM task that required them to maintain the orientation of a Gabor patch presented peripherally in one quadrant of the visual field. After a long 10.5 s delay, subjects compared the memorized orientation with the orientation of a second Gabor in the quadrant diagonal to the sample. This allowed us to dissociate the spatial position of the encoded stimulus from the spatial position of the test stimulus. In a pilot psychophysical study, we demonstrated that WM performance was better when the sample and test stimuli were in the same quadrant than when they were diagonal to one another. With fMRI, using an inverted encoding model (IEM; Brouwer & Heeger, 2011; Sprague & Serences, 2013) of visual space applied to patterns of delay-period voxel activity in visual cortex, we could reconstruct the location of the sample stimulus even though its position was task irrelevant. Additionally, using an IEM of Gabor orientation, we could reconstruct the orientation of the stimulus at both the sample and test locations. Together, these results support the hypothesis that spatial feedback signals may prioritize the representation of even non-spatial features encoded at the prioritized location. Perhaps the shared spatial topographic organization in frontoparietal cortex and visual cortex provides a matched interface for perception and higher-order cognitive functions like WM, even when the relevant content is non-spatial in nature.
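
As a rough illustration of the analysis logic, the Python sketch below implements a generic linear IEM of visual space: channel-to-voxel weights are estimated from training data by least squares and then inverted to reconstruct channel responses from held-out (e.g., delay-period) activity patterns. The 1-D Gaussian channel basis, array shapes, and noise model are simplifying assumptions for illustration, not the authors' pipeline.

    # Minimal sketch of a linear inverted encoding model (IEM) of visual space.
    # Hypothetical shapes and a 1-D Gaussian channel basis; not the authors' pipeline.
    import numpy as np

    def channel_responses(positions, centers, sigma=1.5):
        # Idealized Gaussian spatial channels (trials x channels).
        return np.exp(-(positions[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

    rng = np.random.default_rng(0)
    n_train, n_test, n_vox = 200, 50, 100
    centers = np.linspace(-8, 8, 9)                  # channel centers, deg of visual angle

    train_pos = rng.uniform(-8, 8, n_train)          # sample positions on training trials
    test_pos = rng.uniform(-8, 8, n_test)
    C_train = channel_responses(train_pos, centers)
    C_test_true = channel_responses(test_pos, centers)
    W_true = rng.normal(size=(len(centers), n_vox))  # simulated channel-to-voxel weights
    B_train = C_train @ W_true + rng.normal(scale=0.5, size=(n_train, n_vox))
    B_test = C_test_true @ W_true + rng.normal(scale=0.5, size=(n_test, n_vox))

    # Step 1: estimate channel-to-voxel weights from training data (least squares).
    W_hat = np.linalg.pinv(C_train) @ B_train        # channels x voxels
    # Step 2: invert the model to reconstruct channel responses from test (delay-period) data.
    C_hat = B_test @ np.linalg.pinv(W_hat)           # trials x channels
    print("reconstructed peak positions:", centers[C_hat.argmax(axis=1)][:5])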

Reconstructing stimulus identity and context binding from the CDA

*Y. CAI1,3, J. SAMAHA4, B. R. POSTLE2;
1Psychiatry, 2Univ. of Wisconsin-Madison, Madison, WI; 3Sch. of Psychology, Beijing Normal Univ., Beijing, China; 4Psychology, Univ. of Wisconsin-Madison, Madison, WI

A recent fMRI study comparing working-memory activity for one motion patch (1M) vs. 3 motion patches (3M) vs. 1 motion and 2 color patches (1M2C) showed a pattern of parietal delay-period activity of 1M = 1M2C < 3M, suggesting that this activity was sensitive to demands on context binding rather than on stimulus representation per se (Gosseries, Yu, et al., 2018). Might the same be true for the contralateral delay activity (CDA) ERP component? To address this question we applied multivariate inverted encoding modeling (IEM) to EEG data collected while subjects performed a delayed recognition (DR; a.k.a. “change detection”) task. First, to train IEMs, subjects performed a perceptual task that entailed viewing a series of variously oriented black bars. The DR task began with an arrow cuing that trial’s critical hemifield, followed by two balanced arrays of 1 or 3 items, one in each visual field, with trial conditions of 1 orientation (1O), 3 orientations (3O), and 1O + 1 color patch (1C) + 1 luminance patch (1L; i.e., "1O1C1L" trials); 1O vs. 1O1C1L operationalized load, and 1O1C1L vs. 3O operationalized context binding. DR performance followed the pattern 1O > 1O1C1L > 3O, with Cowan’s ks of 2.13 (SD=0.32) for 1O1C1L and 1.69 (SD=0.42) for 3O. Before computing the CDA, we compared voltages from electrodes contralateral vs. ipsilateral to the cued hemifield, and noted a pattern of increasing negativity (1O < 1O1C1L < 3O) during the final 500 msec of the 900 msec delay period, in both sets of electrodes. Subtracting ipsilateral from contralateral signals to compute the CDA removed the 1O1C1L vs. 3O difference, suggesting that this subtraction may remove some signal related to context binding. To assess the informational content of the CDA, we sought to reconstruct representations of stimulus orientation by feeding the subtracted values from contralateral electrodes into the IEM of orientation constructed from the perceptual task (the perceptual IEM was trained with unsubtracted voltages from the same electrodes that served as contralateral electrodes in the DR task). Results revealed successful reconstruction of remembered orientations from the 1O and from 1O1C1L conditions, suggesting that the CDA can contain nonspatial information that is specific to remembered stimuli. Furthermore, the width of the 1O1C1L reconstruction was broader than that from 1O trials, indicating a load-related decline in the precision of the neural representation. Finally, the superior IEM reconstruction of remembered orientation from 1O1C1L than from 3O trials suggests that the CDA contains information about the fidelity of stimulus representation that is not reflected in the first-order index of its magnitude.
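
To make the contralateral-minus-ipsilateral logic concrete, here is a minimal Python sketch of computing a trial-wise CDA amplitude over the final 500 msec of a 900 msec delay. Sampling rate, epoch timing, and the random placeholder data are assumptions; in the actual analysis the same subtracted signals are what feed the orientation IEM.

    # Sketch of computing a trial-wise CDA amplitude: average the contralateral-minus-
    # ipsilateral difference wave over the final 500 ms of a 900 ms delay.
    # Sampling rate, epoch timing, and the placeholder data are assumptions.
    import numpy as np

    fs = 500                                        # Hz (assumed)
    delay_onset, delay_dur = 1.0, 0.9               # s, relative to epoch start (assumed)
    rng = np.random.default_rng(1)
    n_trials, n_time = 300, int(2.5 * fs)

    contra = rng.normal(size=(n_trials, n_time))    # mean over electrodes contralateral to the cue
    ipsi = rng.normal(size=(n_trials, n_time))      # mean over ipsilateral electrodes
    cda = contra - ipsi                             # trial-wise difference wave

    t0 = int((delay_onset + delay_dur - 0.5) * fs)  # start of the final 500 ms of the delay
    t1 = int((delay_onset + delay_dur) * fs)
    cda_amplitude = cda[:, t0:t1].mean(axis=1)      # one amplitude per trial
    print("mean CDA amplitude (a.u.):", cda_amplitude.mean())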

Oscillations associated with binding errors in visual working memory

*K. K. SREENIVASAN, A. TEMUDO, V. BABUSHKIN;
New York Univ. Abu Dhabi, Abu Dhabi, United Arab Emirates

Memory errors are a window into the capacity limits that famously constrain visual working memory (VWM). When subjects maintain multiple items in VWM and are asked to report a feature of one item, they sometimes mistakenly report the feature of another item. This is referred to as a binding error. Understanding the neurophysiology underlying binding errors can provide key insights into how coherent representations are maintained in VWM.
One biophysical model (Barbosa & Compte, 2015) suggests that object features are stored as bumps in individual attractor networks, and that features are bound together through synchronization between individual bumps. Crucially, this synchronization is mediated by intrinsic low-frequency oscillations in the network. A key prediction of this model is that binding errors should result from disruptions in the low-frequency oscillatory pattern of the network. Our aim was to validate this model using magnetoencephalography (MEG) to measure network oscillations in a VWM task designed to induce binding errors.
On each trial, subjects briefly saw 3 circles and had to remember their colors and locations over a memory delay. After the delay, they were sequentially cued to report the location of each circle via a central color cue. Subjects’ behavioral reports were analyzed using a maximum likelihood approach that assigned each response a likelihood of being a binding error. Trials with likelihoods greater than 0.7 were considered binding-error trials. To examine low-frequency network activity associated with binding errors, we computed a phase preservation index (PPI) for each MEG sensor separately for trials with and without binding errors. The PPI measures the consistency of oscillatory phase across trials. Binding errors were associated with significantly reduced PPI in the upper beta range (25-30 Hz) during the memory delay in frontal sensors. This pattern of reduced phase consistency was specific to binding errors, as opposed to other VWM errors. This finding provides initial support for the idea that object features are bound via low-frequency network oscillations.
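
Exact formulations of the PPI differ across studies; the Python sketch below computes one common phase-consistency measure (inter-trial phase clustering in the 25-30 Hz band) on hypothetical single-sensor MEG epochs, purely to illustrate the kind of quantity being compared between binding-error and non-error trials.

    # One way to quantify phase consistency across trials in the 25-30 Hz band:
    # inter-trial phase clustering on hypothetical single-sensor MEG epochs.
    # The authors' exact PPI formulation may differ; this is an illustration only.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000                                             # Hz (assumed)
    rng = np.random.default_rng(2)
    data = rng.normal(size=(120, 2 * fs))                 # trials x time, one sensor

    b, a = butter(4, [25, 30], btype="bandpass", fs=fs)   # upper-beta band
    phase = np.angle(hilbert(filtfilt(b, a, data, axis=1), axis=1))

    # Consistency of phase across trials at each time point (0 = random, 1 = identical).
    ppi = np.abs(np.mean(np.exp(1j * phase), axis=0))
    print("mean phase consistency during the (assumed) delay window:", ppi[fs:].mean())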

Continuous theta-burst stimulation of parietal cortex alters representational structure of occipital stimulus representations in visual working memory

*Q. YU1, O. GOSSERIES2, B. POSTLE1;
1Univ. of Wisconsin-Madison, Madison, WI; 2Univ. and Univ. Hosp. of Liege, Liege, Belgium

Persistent elevated activity in parietal cortex and decodable mnemonic representation in occipital cortex have been consistently observed during working memory maintenance and have thus been a major focus of working memory research. Recent work has suggested that persistent elevated activity in parietal cortex reflects demands of context binding, and that BOLD activity in parietal cortex, multivariate decoding accuracy of stimulus identity in occipital cortex, and behavioral memory precision are all inter-related (Gosseries, Yu, et al., 2018). In the current study, we sought to causally examine the relationship between parietal function and mnemonic representations in occipital cortex using continuous theta-burst stimulation (cTBS). Participants performed a delayed-recall (a.k.a. “delayed-estimation”) task on motion directions at memory loads of 1, 2, and 3. Each participant underwent a baseline session (no cTBS), two IPS-stimulation sessions (cTBS on IPS), and two MT-stimulation sessions (cTBS on MT). We used multivariate pattern analysis (MVPA) to examine the stimulus representations in occipital voxels with the strongest sample-driven activity. Replicating previous findings, the remembered motion direction could be decoded in all three load conditions during the delay period, and decoding accuracy decreased with increasing memory load. This pattern was observed in all conditions, with or without cTBS, except that decoding accuracy for load 3 in the IPS-stimulation condition dramatically dropped to baseline in the middle of the delay period, a result suggesting that perturbation of IPS function impacted stimulus representation in occipital cortex at high loads. Moreover, when the classifier was trained on the baseline condition and tested on the cTBS conditions, or vice versa, most of the decoding performance returned to baseline. This failure in cross-condition decoding suggested a change in the representational structure of stimulus representations in occipital cortex when cTBS was applied. These results together suggest a causal role of IPS in controlling stimulus representations in occipital cortex in visual working memory, particularly in conditions that put a heavy demand on context binding, an operation that may be governed by parietal salience maps.
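
The cross-condition generalization logic can be illustrated with a short Python/scikit-learn sketch: train a classifier on baseline-session delay-period patterns and test it on cTBS-session patterns (and vice versa); a drop toward chance indicates a change in representational structure. The classifier choice, array shapes, and placeholder data are assumptions, not the authors' exact MVPA pipeline.

    # Cross-condition decoding sketch: train on baseline-session delay-period patterns,
    # test on cTBS-session patterns, and vice versa. A drop toward chance suggests a
    # change in representational structure. Placeholder data; classifier choice assumed.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    n_trials, n_vox = 180, 500
    X_baseline = rng.normal(size=(n_trials, n_vox))       # occipital delay-period patterns
    y_baseline = rng.integers(0, 6, n_trials)             # binned motion-direction labels
    X_ctbs = rng.normal(size=(n_trials, n_vox))
    y_ctbs = rng.integers(0, 6, n_trials)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_baseline, y_baseline)
    print("baseline -> cTBS accuracy:", clf.score(X_ctbs, y_ctbs))
    clf.fit(X_ctbs, y_ctbs)
    print("cTBS -> baseline accuracy:", clf.score(X_baseline, y_baseline))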

Memory visualization and spatial learning

*L. T. LIKOVA1, C. W. TYLER2;
1Smith-Kettlewell Eye Res. Inst., 2Smith-Kettlewell Brain Imaging Ctr., San Francisco, CA

Introduction. To analyze mechanisms of learning and visual working memory, we asked which brain networks are involved in studying previously unfamiliar material through direct viewing, and in visualizing it from immediate memory.
Methods. Functional MRI was acquired while complex spatial structures in the form of line-drawings were alternately i) presented for viewing and learning, and ii) mentally visualized on a blank screen, in a novel procedure designed to enhance their memory representations. On every trial, the viewing and visualization blocks were 30 s each, separated by 20 s rest periods, and repeated 3 times. The brain imaging session was followed by testing of comprehension and by reconstruction of the learned material through memory-guided drawing.
Results & Conclusions. The first site of particular interest was the primary visual cortex (V1), as our previous studies in the blind have implied that V1 neurally implements, in an amodal form, the ‘spatial sketchpad’ for working memory (Likova, 2012, 2013). The primary visual cortex was subdivided into foveal, parafoveal, mid- and far-peripheral regions. Remarkably, direct viewing and visualization equally activated the far- and mid-periphery regions, whereas the visualization signal in the parafoveal representation dropped to about half of that for direct viewing and, surprisingly, even inverted into strong suppression throughout the foveal confluence. Stemming from peripheral V1, a distributed visualization network included parietal and frontal regions. In contrast, the classical visual hierarchy beyond V1 was not involved. Granger causality analysis was used to disentangle the interregional interactions within the activated networks and to provide deeper insights into cortical mechanisms of visualization from memory and its involvement in learning.
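
For readers unfamiliar with the last step, the Python sketch below runs a pairwise Granger-causality test between two ROI-averaged fMRI time series using statsmodels; the ROI names, lag range, and random placeholder data are assumptions used only to show the shape of the analysis.

    # Pairwise Granger-causality sketch between two ROI-averaged fMRI time series
    # (hypothetical "V1" and "parietal" signals); lag range and preprocessing assumed.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(10)
    n_timepoints = 240
    v1 = rng.normal(size=n_timepoints)                            # detrended ROI time series
    parietal = 0.5 * np.roll(v1, 2) + rng.normal(scale=0.5, size=n_timepoints)

    # Column order is [effect, candidate cause]: does V1 help predict parietal activity
    # beyond parietal activity's own past?
    data = np.column_stack([parietal, v1])
    results = grangercausalitytests(data, maxlag=3, verbose=False)
    for lag, res in results.items():
        f_stat, p_value = res[0]["ssr_ftest"][:2]
        print(f"lag {lag}: F = {f_stat:.2f}, p = {p_value:.4f}")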

A population coding model for simple and complex visual objects in working memory

*V. SALMELA1,2, K. ÖLANDER1, I. MUUKKONEN1, P. M. BAYS2;
1Dept. of Psychology and Logopedics, Univ. of Helsinki, Helsinki, Finland; 2Dept. of Psychology, Univ. of Cambridge, Cambridge, United Kingdom 

Many studies of visual working memory test humans’ ability to reproduce primary visual features of simple objects, such as the orientation of a grating or the hue of a color patch, after a delay. A quintessential finding of such studies is that the precision of responses declines continuously with increases in the number of features or objects in memory. This phenomenon, and the specific distributions of error observed at each set size, can be parsimoniously explained in terms of neural population codes. Here we examined visual working memory for high-level objects, images of human faces. We presented participants with memory arrays consisting of oriented gratings, facial expressions (angry, sad, fearful, disgusted or happy), or a mixture of both. Memory precision was measured using a reproduction task in which participants adjusted, after a two second retention interval, the expression (or orientation) of a probe item to match a cued item from the memory array. Precision of reproduction for all five facial expressions declined continuously as the memory load was increased from one to five faces. When both gratings and faces had to be remembered simultaneously, an asymmetry was observed. We found that increasing the number of faces decreased precision of orientation recall, but increasing the number of gratings did not affect recall of facial features. These results suggest that memorizing faces involves the automatic encoding of low-level features, including orientation, in addition to high-level expression information. We adapted the population coding model for circular variables to make it applicable to the non-circular and bounded parameter space used for expression estimation. The model had two free parameters, tuning width and gain constant, that determined the Gaussian tuning and overall response level of neurons encoding expressions of different intensity. Total population activity was held constant as a function of memory load according to the principle of normalization. The intensity of expression was decoded from the population response by drawing samples from the Bayesian posterior distribution. The decreasing activity associated with each item explained the decrease in memory precision with set size, and the differences between expressions were explained primarily by differences in the gain constant. Replacing the uniform prior with a Gaussian further improved the fit to data, by accounting for the bias in participants’ responses towards neutral or moderate expressions. Our results show that principles of population coding can be applied to model memory representations at multiple levels of the visual hierarchy.
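
The following Python sketch is a toy version of the kind of population-coding account described: Gaussian tuning over a bounded intensity axis, total gain normalized by set size, Poisson variability, and decoding by sampling from a Bayesian posterior with a Gaussian prior toward neutral expressions. All parameter values are illustrative rather than fitted, and the model here is deliberately simplified relative to the authors' formulation.

    # Toy population-coding model: Gaussian tuning over a bounded intensity axis [0, 1],
    # gain normalized by set size, Poisson spiking, and decoding by sampling from the
    # Bayesian posterior with a Gaussian prior toward neutral. Parameters illustrative.
    import numpy as np

    rng = np.random.default_rng(4)
    prefs = np.linspace(0, 1, 32)             # preferred expression intensities
    tuning_width, gain_constant = 0.15, 40.0

    def expected_rates(intensity, set_size):
        gain = gain_constant / set_size       # normalization: activity shared across items
        return gain * np.exp(-(intensity - prefs) ** 2 / (2 * tuning_width ** 2))

    def decode(spikes, set_size, prior_mu=0.5, prior_sd=0.3):
        grid = np.linspace(0, 1, 201)
        log_post = np.zeros(grid.size)
        for i, s in enumerate(grid):
            lam = expected_rates(s, set_size)
            log_post[i] = np.sum(spikes * np.log(lam + 1e-12) - lam)      # Poisson log-likelihood
            log_post[i] += -((s - prior_mu) ** 2) / (2 * prior_sd ** 2)   # prior toward neutral
        post = np.exp(log_post - log_post.max())
        return rng.choice(grid, p=post / post.sum())                      # sample from the posterior

    true_intensity = 0.8
    for set_size in (1, 3, 5):
        errors = [decode(rng.poisson(expected_rates(true_intensity, set_size)), set_size)
                  - true_intensity for _ in range(200)]
        print(f"set size {set_size}: response SD = {np.std(errors):.3f}")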

Retrospectively cued attention shifts mitigate information loss in human cortex during working memory storage

*E. F. ESTER, L. RODRIGUEZ, A. NOURI; 
Psychology, Florida Atlantic Univ., Boca Raton, FL

Working memory (WM) performance can be improved by a retrospective cue presented after encoding is complete. Several (non-exclusive) mechanisms may be responsible for this improvement, and the effects of retrospective cues on neural representations of memoranda are poorly understood. To address these issues, we combined EEG and image reconstruction techniques to track cue-driven changes in spatial WM representations over time. Participants encoded the spatial locations of two colored discs (blue and red). During neutral trials, an uninformative color cue presented after the encoding display informed participants to remember the locations of both discs across a 2500 ms blank interval. During valid trials a 100% reliable color cue indicated which disc would be probed at the end of the trial. Valid cues were presented either immediately after offset of the encoding display (valid-early, VE), or at the midpoint (1250 ms) of the subsequent blank interval (valid-late, VL). To examine the effects of retro-cues on spatial WM representations, we computed an estimate of location-specific information for cued and uncued locations by applying an inverted encoding model to spatiotemporal patterns of induced alpha-band activity over occipitoparietal electrode sites during the blank delay period (e.g., Foster et al., 2016, J Neurophysiol). During neutral trials we observed a monotonic decrease in location-specific information over the course of the delay period. During valid trials this decrease was eliminated (VE trials) or partially reversed (VL trials) for cued locations and exacerbated for uncued locations (VE and VL trials). Our findings suggest that retrospectively cued shifts of attention enhance memory performance by preventing (VE) or partially reversing (VL) information loss during WM storage.
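
The induced alpha-band input to such an analysis can be sketched in a few lines of Python: remove the trial-averaged evoked response, band-pass filter at 8-12 Hz, and take the squared Hilbert envelope; the trial-by-electrode power at each time point then serves as the activity matrix for a spatial IEM like the one sketched earlier. Sampling rate, filter order, and array shapes are assumptions.

    # Extracting induced alpha-band power from posterior EEG: remove the evoked (trial-
    # averaged) response, band-pass 8-12 Hz, square the Hilbert envelope. Shapes,
    # sampling rate, and filter settings are assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 250                                              # Hz (assumed)
    rng = np.random.default_rng(5)
    eeg = rng.normal(size=(400, 20, 4 * fs))              # trials x posterior electrodes x time

    induced = eeg - eeg.mean(axis=0, keepdims=True)       # strip the phase-locked (evoked) part
    b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
    alpha_power = np.abs(hilbert(filtfilt(b, a, induced, axis=-1), axis=-1)) ** 2

    # alpha_power[:, :, t] (trials x electrodes) at each time point can then serve as the
    # activity matrix for a spatial IEM like the one sketched for the first abstract.
    print(alpha_power.shape)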

Tracking the dynamics and uncertainty of visual spatial working memory representations across human cortex

*T. C. SPRAGUE1, A. YOO1, M. RAHMATI1, W. MA1,2, C. E. CURTIS1,2;
1Dept. of Psychology, 2Ctr. for Neural Sci., New York Univ., New York, NY

Visual working memory (WM) enables the maintenance and manipulation of information over brief delays. Nearly a decade of neuroimaging studies applying a variety of machine learning techniques have identified neural correlates of WM representations of features like orientation and color in visual (Serences et al, 2009; Harrison & Tong, 2009), parietal (Bettencourt & Xu, 2016), and frontal cortex (Ester et al, 2015; Yu & Shim, 2017). Additionally, when participants must precisely maintain spatial positions in WM, robust decoding of those positions is possible across many retinotopically-organized visual maps (Jerde et al, 2012; Sprague et al, 2014; Rahmati et al, 2018). How do neural representations in each of these visual maps unfold over time to support WM behavior? We applied several multivariate analyses to (1) assay the temporal evolution of WM representations across human cortex and evaluate their stability, (2) decode the uncertainty with which each region represents the remembered feature value and (3) relate these metrics to aspects of behavioral performance. Participants performed a single-item memory-guided saccade task while we measured neural responses using whole-brain fMRI at sub-second temporal resolution (1.33 Hz). We applied a linear inverted encoding model (IEM) to reconstruct the contents of WM as they evolved through each 12-s trial from visual maps in occipital, parietal, and frontal cortex. Despite the sluggishness of the BOLD signal, we observed stark differences in the temporal profile of information content across several visual maps. For example, V3AB representations were observed at earlier timepoints within the trial than those in earlier (V1-V3) or later (IPS0-2) visual maps. Moreover, representations in this map (among several others) were remarkably stable: models estimated at the beginning or end of the delay period enabled reconstruction of nearly identical representations. Next, we extended the linear IEM to a full generative model, which enabled us to recover not just a point estimate of the neural representation, but a full likelihood function over feature space, from which we could estimate uncertainty (van Bergen et al, 2015). Within several posterior parietal and occipital visual maps (including V3AB), we found that increases in decoded uncertainty predict wider memory error distributions, suggesting a critical link between our measure of neural response patterns and the quality of WM representations. Ongoing work aims to extend these methods to larger WM loads and additional task demands, including explicit and implicit judgments about the quality of WM representations (Rademaker et al, 2012; Suchow et al, 2017).
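
As a schematic of the uncertainty-decoding step, the Python sketch below treats voxel responses as Gaussian noise around a linear encoding prediction, evaluates the likelihood over all candidate remembered locations, and summarizes its circular spread as decoded uncertainty (in the spirit of van Bergen et al., 2015). The half-cosine channel basis, isotropic noise model, and all numbers are simplifying assumptions, not the authors' generative model.

    # Schematic generative decoder: voxel responses are modeled as Gaussian noise around
    # W @ c(s); the likelihood is evaluated over all candidate locations s and its
    # circular spread is taken as decoded uncertainty. Basis, noise model, and numbers
    # are simplifying assumptions.
    import numpy as np

    rng = np.random.default_rng(6)
    n_vox, n_chan = 200, 8
    angles = np.linspace(0, 2 * np.pi, 180, endpoint=False)       # candidate locations
    chan_centers = np.linspace(0, 2 * np.pi, n_chan, endpoint=False)

    def channel_resp(s):
        return np.cos((s - chan_centers) / 2) ** 6                # half-cosine basis (assumed)

    W = rng.normal(size=(n_vox, n_chan))    # in practice, estimated from independent training data
    sigma, true_s = 1.0, np.pi / 3
    b = W @ channel_resp(true_s) + rng.normal(scale=sigma, size=n_vox)

    log_like = np.array([-np.sum((b - W @ channel_resp(s)) ** 2) / (2 * sigma ** 2) for s in angles])
    post = np.exp(log_like - log_like.max())
    post /= post.sum()

    estimate = angles[post.argmax()]
    circ_sd = np.sqrt(-2 * np.log(np.abs(np.sum(post * np.exp(1j * angles)))))   # circular spread
    print(f"estimate = {np.degrees(estimate):.1f} deg, uncertainty = {np.degrees(circ_sd):.1f} deg")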

Can TMS to visual cortex reactivate unattended representations held in visual working memory?

*M. WIDHALM1, N. S. ROSE2;
1Univ. of Notre Dame, 2Dept. of Psychology, Notre Dame, IN

Recent research on working memory (WM) has shown evidence for activity-silent retention mechanisms and for the reactivation of latent WM representations by transcranial magnetic stimulation (TMS), as measured with simultaneously recorded EEG (Rose et al., 2016, Science). What is unclear is whether TMS to sensory cortex can reactivate stimulus-specific features of these representations. Here we used a concurrent TMS-EEG protocol in seven healthy young adults (aged 18-35) to investigate whether TMS to primary visual cortex could reactivate stimulus-specific features of latent representations in visual WM. We first applied single-pulse TMS (spTMS) to left V1/V2 to localize phosphenes in the lower right visual field for each participant. Then two oriented gratings were presented -- one at the phosphene location for each subject and the other in the opposite (left) hemifield at the same angle and distance from fixation. These gratings were to be retained in a WM task with two retro-cues and two recognition probes, such that one grating would be attended and the other unattended following each retro-cue. During the delay period following both cue 1 and cue 2, spTMS was applied to the retinotopic location of the target sensory representation in primary visual cortex at 110% of phosphene threshold. We used inverted encoding models to reveal whether the specific orientation of the latent memory item could be reconstructed from the TMS-evoked response in the simultaneously recorded EEG. Orientation of both the attended and unattended items could be reconstructed from time windows 80-600 ms after TMS (ps < .01). TMS also reduced recognition memory precision for items presented ipsilateral to TMS that were initially held in an unattended state (p = .03). In sum, TMS to primary visual cortex caused the reactivation of stimulus-specific features of latent representations held in visual WM and affected visual WM precision. These results provide causal evidence for a role of sensory recruitment in visual WM.

Motion perception in 360°: Decoding direction of motion using alpha-band EEG oscillations and sustained ERPs

*G.-Y. BAE, S. J. LUCK;
Psychology, UC Davis, Davis, CA

Recent advances in multivariate classification have made it possible to decode neural representations using the topography of human scalp EEG signals. However, it is unclear whether the EEG decoding reflects bona fide stimulus representations or attention-related support mechanisms that underlie task performance. The present study tested the hypothesis that alpha-band (8-12 Hz) oscillations primarily reflect attentional mechanisms whereas sustained ERPs reflect both stimulus representations and attentional mechanisms. To test this hypothesis, we recorded the EEG while observers performed a motion direction estimation task. They viewed random dot kinematograms (RDKs; 25.6% or 51.2% coherence) in which the coherent motion could be in any direction from 0°-360°, and they reported their perception of the exact motion direction at the end of the stimulus. In the decoding analyses, the stimulus direction was discretized into 16 direction bins, and a multiclass support vector machine (SVM) was trained to classify the data from a given direction into one of the 16 direction bins. We decoded the direction of motion at each time point during both the stimulus period (during which motion information was being accumulated) and the report period (during which a shift of attention was necessary to make a fine-tuned direction report). For trials with high motion coherence (51.2%), we found that ERP-based decoding was above chance during both the stimulus and the report periods, whereas alpha-based decoding was near chance during the stimulus period but was above chance during the report period. However, both ERP-based and alpha-based decoding were at chance during both the stimulus and the report periods for trials with low motion coherence (25.6%). Because the lack of decodability for the low-coherence trials could be due to large variability in the perceived motion direction, we attempted to decode the reported direction instead of the stimulus direction. ERP-based decoding of the reported direction for the low-coherence trials was above chance during both the stimulus and report periods. Alpha-based decoding of the reported direction was only briefly above chance during the stimulus period but was well above chance during the report period. Together, these results show that sustained ERP activity reflects both the actual stimulus direction and the reported direction, whereas alpha-band oscillations primarily reflect the process of converting the perceived direction into a report.
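
The binning-plus-classification step can be sketched in a few lines of Python with scikit-learn: directions are discretized into 16 bins and a multiclass SVM is cross-validated on single-time-point scalp topographies. Electrode count, trial count, and the random placeholder features are assumptions standing in for the ERP or alpha-band data.

    # Multiclass decoding sketch: discretize direction into 16 bins and cross-validate a
    # linear SVM on single-time-point scalp topographies. Placeholder features stand in
    # for the ERP or alpha-band data; trial and electrode counts are assumptions.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(7)
    n_trials, n_electrodes = 640, 27
    X = rng.normal(size=(n_trials, n_electrodes))          # topography at one time point
    direction_deg = rng.uniform(0, 360, n_trials)
    y = (direction_deg // 22.5).astype(int)                # 16 direction bins

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    accuracy = cross_val_score(clf, X, y, cv=3).mean()
    print(f"decoding accuracy = {accuracy:.3f} (chance = {1 / 16:.3f})")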

Stimulus-specific visual working memory representations in human cerebellum

*J. A. BRISSENDEN1, S. M. TOBYNE2, M. A. HALKO3, D. C. SOMERS1;
1Psychological and Brain Sci., 2Grad. Program for Neurosci., Boston Univ., Boston, MA; 3Neurol., Harvard Med. Sch. / Beth Israel Deaconess Med. Ctr., Boston, MA

The question of where working memory (WM) contents are stored in the brain is the subject of ongoing debate. Based on electrophysiological recordings in non-human primates and neuroimaging in humans, it has long been asserted that prefrontal cortex (PFC) supports WM maintenance (Funahashi et al., 1989; Courtney et al., 1998; Mendoza-Halliday et al., 2014). On the other hand, the sensory recruitment hypothesis posits that WM storage is mediated by the same areas involved in the initial sensory processing of stimuli and that the PFC instead serves as a source of top-down biasing signals (Pasternak & Greenlee, 2005; D’Esposito & Postle, 2015). Recently, it has been suggested that working memory contents are distributed across a number of cortical areas including both sensory and PFC regions (Serences, 2016; Christophel et al., 2017). Despite findings of robust connectivity between cerebellar sub-regions and cortical areas implicated in working memory storage, no one has examined whether any portion of the cerebellum encodes stimulus-specific representations during visual working memory. To investigate WM stimulus specificity in the cerebellum, participants were presented with two circular patches of coherent dot motion followed by a post-cue indicating which motion direction to maintain over a long delay period (10 s). Participants then adjusted a probe stimulus to match the remembered motion direction. Using a forward encoding model of motion direction, we were able to accurately reconstruct the remembered motion direction from the delay-period multi-voxel activity patterns of cerebellar lobules VIIb/VIIIa. In contrast, non-remembered motion directions could not be reconstructed from cerebellar delay-period activity patterns. These results bolster the notion that a distributed network of brain areas supports WM storage and further show that this network is not limited to cortical structures. Moreover, our findings provide new insight into the function of the cerebellum and its contributions to cognitive processing.

Sensory-selective and sensory-independent auditory and visual working memory in human cerebral cortex

*A. L. NOYCE1, S. M. TOBYNE2, B. SHINN-CUNNINGHAM3, D. C. SOMERS1;
1Psychological and Brain Sci., 2Grad. Program for Neurosci., 3Biomed. Engin., Boston Univ., Boston, MA

Working memory (WM) depends on stimulus-specific sensory representations as well as on more general cognitive processes (e.g. Pasternak & Greenlee 2005; Duncan 2010; D’Esposito & Postle 2015; Ester et al. 2015, Sarma et al. 2016). However, there is still little consensus on the relative contributions of these two systems, or even on which brain structures participate in which. Most prior work has been exclusively within the visual sensory modality, measuring WM specialization for particular visual feature dimensions. Here, we take a broader focus by measuring recruitment in individual subjects during visual WM and auditory WM in order to characterize cortical regions in terms of the degree of sensory selectivity or sensory independence exhibited. Subjects (n=15) performed visual and auditory 2-back while fMRI was collected (TR = 2s, TE = 30ms, 2mm voxels). The magnitude of visual and auditory WM recruitment at each cortical vertex in each subject was used to compute the Multiple Demand Index (Noyce et al. 2017), a continuous measure of shared activation across tasks. We observed sensory-selective activation bilaterally in lateral frontal regions along the precentral sulcus (as previously reported; Michalka et al. 2015; Noyce et al. 2017), as well as in the expected posterior cortical regions. Our previous group-level functional connectivity analysis of Human Connectome Project data suggested the existence of additional sensory-selective regions in more anterior portions of LFC (Tobyne et al., 2017). Here we confirm their existence in individual subjects by revealing sensory-selective WM task recruitment of additional frontal regions (visual: mid inferior frontal sulcus (midIFS); auditory: frontal operculum (FO)). Sensory-independent regions lie immediately adjacent to many of these structures. These include superior & inferior frontal junction and anterior & mid inferior frontal sulcus, as well as anterior insula, medial superior frontal gyrus, lateral/anterior portions of the intraparietal sulcus (IPS), and posterior superior temporal sulcus (STS). The precise organization of sensory-selective and multisensory-selective structures is fine-grained, with regions in close proximity exhibiting different modality preferences; conventional group-average analyses are insufficient to detect organization that can be observed in within-subject analyses. These results demonstrate a new approach to understanding the complex organization of sensory-specific and sensory-independent structures that support human cognition.

A flexible model of working memory

*F. BOUCHACOURT1, T. BUSCHMAN2;
1Princeton Neurosci. Inst., Princeton, NJ; 2Princeton Neurosci. Inst. & Dept. of Psychology, Princeton Univ., Princeton, NJ

Working memory is fundamental to complex cognition, providing the workspace on which thoughts are held and manipulated. A defining characteristic of working memory is its flexibility: we can hold anything in mind. However, typical models of working memory rely on tightly tuned attractors to maintain a persistent state of activity and therefore do not allow for the flexibility observed in behavior.
Here we present a novel network model that captures the flexibility of working memory. To achieve this, the network uses a two-layer structure. First, a “sensory” layer encodes inputs into several independent pools of selectively tuned neurons. This layer is then randomly, reciprocally connected with a second “random” layer of neurons. The bi-directional recurrent connectivity between the sensory and random layers maintains inputs. Importantly, due to the parameter-free nature of these interactions, the model can maintain any inputs, without tuning, capturing the flexibility of working memory. However, this flexibility comes at a cost: the randomness of connections between the sensory layer and random layer leads to interference between memory representations, resulting in a capacity limitation on the number of items that can be maintained in the network.
Our model provides a mechanistic account for several behavioral and neural hallmarks of working memory. First, the network has a limited capacity, able to maintain only a few items at a time. Second, consistent with electrophysiological and imaging evidence, adding multiple memories leads to divisive-normalization-like interference due to balanced excitation and inhibition in the network. Such interference reproduces experimental observations on the effect of time, load, and their interaction, on memory degradation in analog tasks. Third, neural representations are distributed across the network, as seen in humans and animals. Fourth, neurons in the untuned layer show the high-dimensional, "mixed" selectivity observed in prefrontal cortex. Finally, although neural activity in the model is dynamic, mnemonic representations are separable within a stable subspace, consistent with recent monkey electrophysiology findings for single item working memory tasks. The model makes several predictions, including that increasing memory load should not change the memory subspace but should reduce the discriminability between memories in this space, making them harder to decode. In summary, we present a simple, parameter-free, network model that uniquely allows for flexible representations while still capturing key behavioral and neural characteristics of working memory.
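
A toy Python caricature of the architecture described above is sketched below: tuned sensory pools are reciprocally coupled to an untuned random layer through the same sparse random weights in both directions, with shared inhibition, and the overlap between the end-of-delay sensory pattern and the encoded pattern is measured at different loads. The rate-based dynamics, saturating nonlinearity, and all parameter values are illustrative choices; this sketch shows the structure of the model, not the published spiking-network results.

    # Toy rate-model caricature of the two-layer architecture: tuned sensory pools
    # reciprocally coupled to an untuned random layer through sparse random weights used
    # in both directions, with shared inhibition. Parameters are illustrative only.
    import numpy as np

    rng = np.random.default_rng(8)
    n_pools, n_per_pool, n_rand = 8, 32, 1024
    n_sens = n_pools * n_per_pool
    W = (rng.random((n_rand, n_sens)) < 0.05).astype(float)   # reciprocal sensory<->random coupling

    def f(x):                                                 # saturating, non-negative rate function
        return np.tanh(np.maximum(x, 0.0))

    def simulate(load, t_stim=50, t_total=500, dt=0.1, tau=1.0, g=0.3, inh=0.01):
        r_s, r_r = np.zeros(n_sens), np.zeros(n_rand)
        ext = np.zeros(n_sens)
        for p in range(load):                                 # one remembered item per sensory pool
            ext[p * n_per_pool : p * n_per_pool + 8] = 2.0
        encoded = ext.copy()
        for t in range(t_total):
            drive = ext if t < t_stim else 0.0                # stimulus removed; delay begins
            r_s += dt / tau * (-r_s + f(drive + g * W.T @ r_r - inh * r_r.sum()))
            r_r += dt / tau * (-r_r + f(g * W @ r_s - inh * r_r.sum()))
        # Overlap between the end-of-delay sensory pattern and the encoded pattern.
        return np.corrcoef(r_s, encoded)[0, 1]

    for load in (1, 2, 4, 8):
        print(f"load {load}: delay-end overlap with encoded pattern = {simulate(load):.2f}")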

Intracranial recordings within human hippocampus reveal task dependent spectral signatures

*J. M. CASTELHANO1, I. C. DUARTE1, F. PELLE2, S. FRANCIONE2, F. SALES3, M. CASTELO-BRANCO4;
1IBILI/ICNAS, Univ. of Coimbra, Coimbra, Portugal; 2Claudio Munari Epilepsy Surgery Center, Niguarda Hosp., Milan, Italy; 3Epilepsy unit, CHUC, Coimbra, Portugal; 4IBILI - Fac. of Medicine, Univ. of Coimbra, Coimbra, Portugal

It is well known that the hippocampal formation plays a crucial role in memory encoding. However, the functional specialization of distinct parts of the hippocampus remains unclear. A separation of function between anterior and posterior hippocampus is well recognized: the posterior hippocampus has been implicated in spatial memory and navigation, while the anterior hippocampus mediates other complex memory functions. We aimed to clarify the relative roles of the anterior and posterior human hippocampus in a wide array of memory and non-memory tasks by analyzing task-related oscillatory patterns. Patients had undergone stereo-electroencephalography (sEEG), with a 3D array of electrodes implanted in different areas of the brain to localize seizure foci. Many of these patients had implanted depth electrodes with contacts reaching the hippocampus. We studied hippocampal function while subjects performed distinct neuropsychological tasks relevant to their clinical assessment. To our knowledge, this is the first study including sEEG analysis of subjects performing distinct neuropsychological tasks for long periods. Invasive data were acquired from 7 subjects who had stereotactically implanted intracranial depth electrodes. Tasks were chosen to assess different cognitive functions related to memory. Briefly, we divided the data into blocks containing different tasks (Rey Figure, Benton tasks, visuo-spatial memory, face recognition and selective attention) and performed time-frequency analysis between 5 and 500 Hz [Uhlhaas et al., 2006]. Significant induced oscillations were detected in the hippocampal contacts for each frequency band of interest (theta 4-8 Hz, alpha 8-12 Hz, beta 15-25 Hz, gamma above 30 Hz). Statistical analysis (Friedman test) comparing power across frequency bands, tasks, and hippocampal regions (anterior/posterior) confirmed a main effect of frequency band (p = 0.002). In the lower frequency bands (theta and alpha), we found differences mainly between high-memory-load and no-memory-load tasks (p = 0.002) in the anterior hippocampus, while power at gamma frequencies was higher for visual attention tasks, in particular in the anterior hippocampus. Furthermore, alpha- and beta-band activation in the posterior hippocampus showed a gradient related to the memory load of the task at hand (lower power for simpler visuoconstructive tasks). The present findings support a critical role of low-frequency (human theta and alpha) oscillations in the hippocampus during memory tasks and suggest the presence of task-related spectral signatures.
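
One way the reported band-power comparison might be set up is sketched below in Python: average Welch spectral power within each band for every subject and task block, then compare across tasks with a Friedman test. The sampling rate, block length, gamma upper edge (taken as 500 Hz, the top of the analyzed range), and random placeholder data are assumptions.

    # Band-power comparison sketch: average Welch spectral power within each band per
    # subject and task block, then compare tasks with a Friedman test. Sampling rate,
    # block length, gamma upper edge (500 Hz), and placeholder data are assumptions.
    import numpy as np
    from scipy.signal import welch
    from scipy.stats import friedmanchisquare

    fs = 2000                                              # Hz (assumed sEEG sampling rate)
    bands = {"theta": (4, 8), "alpha": (8, 12), "beta": (15, 25), "gamma": (30, 500)}
    rng = np.random.default_rng(9)

    n_subjects, n_tasks = 7, 5
    power = np.zeros((n_subjects, n_tasks, len(bands)))
    for s in range(n_subjects):
        for t in range(n_tasks):
            x = rng.normal(size=60 * fs)                   # one hippocampal contact, one task block
            freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
            for k, (lo, hi) in enumerate(bands.values()):
                power[s, t, k] = psd[(freqs >= lo) & (freqs < hi)].mean()

    # Friedman test across tasks within each band (repeated measures over subjects).
    for k, name in enumerate(bands):
        stat, p = friedmanchisquare(*[power[:, t, k] for t in range(n_tasks)])
        print(f"{name}: chi2 = {stat:.2f}, p = {p:.3f}")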
