
Future and past events

Abstract

Imaging technologies are increasingly used to generate high-resolution reference maps of brain structure and function. Modern scientific discovery relies on making comparisons between new maps (e.g. task activations, group structural differences) and these reference maps. Although recent data sharing initiatives have increased the accessibility of such brain maps, data are often shared in disparate coordinate systems (or "spaces"), precluding systematic and accurate comparisons among them. Here we introduce the neuromaps toolbox, an open-access software package for accessing, transforming, and analyzing structural and functional brain annotations. We implement two registration frameworks to generate high-quality transformations between four standard coordinate systems commonly used in neuroimaging research. The initial release of the toolbox features >40 curated reference maps and biological ontologies of the human brain, including maps of gene expression, neurotransmitter receptors, metabolism, neurophysiological oscillations, developmental and evolutionary expansion, functional hierarchy, individual functional variability, and cognitive specialization. Robust quantitative assessment of map-to-map similarity is enabled via a suite of spatial autocorrelation-preserving null models. Finally, we demonstrate two examples of how neuromaps can be used to contextualize brain maps with respect to canonical annotations. By discovering novel associations with previously established features of brain structure and function, neuromaps generates biological insight about new brain maps. Altogether, neuromaps combines open-access data with transparent functionality for standardizing and comparing brain maps, providing a systematic workflow for comprehensive structural and functional annotation enrichment analysis of the human brain.
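
To make the workflow above concrete, here is a minimal Python sketch assuming the publicly documented neuromaps API; the annotation tags and keyword names are taken from the toolbox documentation and may differ across releases, so treat it as an illustration rather than a verbatim recipe.

```python
# Hedged sketch of a typical neuromaps analysis: fetch two reference maps, bring them
# into a common space, and compare them against spatial autocorrelation-preserving
# nulls. Check datasets.available_annotations() for the tags valid in your version.
from neuromaps import datasets, transforms, stats, nulls

# 1. Fetch two curated reference annotations (gene-expression PC1 and a myelin map)
genepc1 = datasets.fetch_annotation(source='abagen', desc='genepc1',
                                    space='fsaverage', den='10k')
myelin = datasets.fetch_annotation(source='hcps1200', desc='myelinmap',
                                   space='fsLR', den='32k')

# 2. Transform one map into the other's coordinate system (fsLR, 32k vertices)
genepc1_fslr = transforms.fsaverage_to_fslr(genepc1, '32k')

# 3. Generate spatial autocorrelation-preserving null maps ("spins")
rotated = nulls.alexander_bloch(genepc1_fslr, atlas='fsLR', density='32k',
                                n_perm=1000, seed=1234)

# 4. Correlate the two maps and assess significance against the spatial nulls
r, p = stats.compare_images(genepc1_fslr, myelin, nulls=rotated)
print(f'r = {r:.3f}, p_spin = {p:.3f}')
```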

Bio

Bratislav Misic leads the Network Neuroscience Lab and investigates how cognitive operations and complex behaviour emerge from the connections and interactions among brain areas. The goal of this research is to quantify the effects of disease on brain structure and function. His research program emphasizes representations and models that not only embody the topological organization of the brain, but also capture the complex multi-scale relationships that link brain network topology to dynamic biological processes, such as neural signalling and disease spread. Misic's research lies at the intersection of network science, dynamical systems and multivariate statistics, with a focus on complex data sets involving multiple neuroimaging modalities, including fMRI, DWI, MEG/EEG and PET.

Abstract

In my upcoming presentation, I will introduce an extension of the generalized Ising model that integrates metabolic activity as a key input, offering fresh insights into the metabolic basis of neurodegenerative diseases. This innovative approach acknowledges the critical influence of metabolic alterations on the progression of such disorders. By embedding the notion of local temperature to represent metabolic activity within the Ising model, we aim to enhance our understanding of how metabolic disturbances impact the brain's structural and functional dynamics in neurodegeneration.

This refined model allows us to directly feed metabolic activity data into the simulation and could facilitate a more accurate prediction of the complex interplay between brain structure and function in the context of neurological diseases. Through this methodology, we seek to unravel the intricate relationship between metabolic changes and their effects on neural connectivity and brain states, thereby providing a robust framework for exploring the pathophysiology of neurodegenerative conditions. Our adaptation of the Ising model to include metabolic inputs could represent a significant advancement in modeling the structure-function relationship in the brain, particularly in the study of neurodegenerative diseases. This approach could not only broaden our comprehension of the underlying mechanisms of these disorders but also open new avenues for research and potential therapeutic interventions.
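
Since the talk hinges on replacing the single global temperature of the Ising model with node-specific values, the sketch below (a purely illustrative toy in Python, not the presented model or its parameters) shows Metropolis dynamics on a coupling matrix in which each node carries its own local temperature, standing in for regional metabolic activity.

```python
# Toy generalized Ising model with node-wise ("local") temperatures -- an illustrative
# stand-in for metabolic activity, not the speaker's implementation.
import numpy as np

def simulate_ising(J, T, n_steps=200_000, seed=0):
    """Metropolis dynamics; J: (n, n) symmetric coupling, T: (n,) local temperatures."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    samples = []
    for step in range(n_steps):
        i = rng.integers(n)
        dE = 2.0 * s[i] * (J[i] @ s)                 # energy cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE / T[i]):
            s[i] = -s[i]
        if step % n == 0:                            # record one sample per sweep
            samples.append(s.copy())
    return np.array(samples)

# Toy example: random symmetric coupling (scaled so the mean local field is ~1) and
# heterogeneous local temperatures, e.g. proportional to regional glucose metabolism.
rng = np.random.default_rng(1)
n_nodes = 20
J = rng.random((n_nodes, n_nodes))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)
J /= J.sum(axis=1).mean()
T_local = 1.0 + rng.random(n_nodes)
spins = simulate_ising(J, T_local)
sim_fc = np.corrcoef(spins.T)                        # simulated "functional connectivity"
```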

Bio

Andrea Soddu is an Associate Professor in the Department of Physics and Astronomy at the University of Western Ontario and a Principal Investigator at the Western Institute for Neuroscience. His expertise is in resting-state fMRI and FDG-PET, with applications to severe brain injury and disorders of consciousness. Since his appointment at Western in 2013, his lab has focused on modelling functional connectivity. One of his lab's recent findings suggests that the brain is effectively flat, and argues that the concept of dimensionality could bring important insights into how information is transferred across the brain, with possible implications for disorders of consciousness.

Abstract

Despite current explosive developments, most workers at the forefront of AI believe that consciousness is still far away for machines. I will argue the contrary. My work over the last 20 years shows that a coherent theory can be made of even the most basic aspect of consciousness, namely “feel”. This ultimate “what it’s like” to have sensations like the redness of red or the suffering of pain is usually thought to be the primary obstacle to machine consciousness. But I will show how this supposedly “hard problem of consciousness” dissipates when you take what I call a “sensorimotor” approach. The sensorimotor approach is not just a philosophical trick: it makes empirical predictions about colour perception, sensory substitution, and change blindness, and is a scientific theory. It predicts that within the next 5-10 years machines will be as conscious as humans or even more so. Humans will realize they’re not so special and will be faced with severe ethical problems.

Short Bio

Kevin O'Regan was director of the Laboratoire Psychologie de la Perception, CNRS, Université Paris Descartes, and is now emeritus director of research at the Integrative Neuroscience and Cognition Center, CNRS & Université Paris Cité. After doing his PhD on eye movements in reading he became interested in visual stability and discovered the phenomenon of change blindness. His current work concerns the sensorimotor approach to phenomenal consciousness and its applications to robotics. See http://kevin-oregan.net and http://whatfeelingislike.net .

Abstract

Rapid eye movement (REM) sleep behavior disorder (RBD) is a parasomnia characterized by abnormal violent behaviors during REM sleep and the loss of normal REM sleep muscular atonia, potentially arising from the locus subcoeruleus (LsC), part of the tiny pontine structure called the locus coeruleus/subcoeruleus complex (LC/LsC). RBD is considered a prodromal stage of α-synucleinopathies. The main hallmark of isolated/idiopathic RBD (iRBD) is REM sleep without atonia. Furthermore, iRBD patients are at high risk of developing overt parkinsonism such as Parkinson’s disease (PD), dementia with Lewy bodies, and multiple system atrophy. PD is considered the second most frequent neurodegenerative disorder in the world after Alzheimer's disease. Some patients with idiopathic PD manifest RBD after the onset of motor symptoms. The main neuropathological hallmarks of PD are α-synuclein aggregation, the progressive depletion of neuromelanin-containing dopaminergic neurons, and iron accumulation in the substantia nigra (SN). Neuromelanin-sensitive MRI can quantify neuromelanin changes, while quantitative susceptibility mapping (QSM) and R2* can detect brain iron distribution. Nonetheless, the regional progression of nigral iron changes in patients with iRBD and PD (with or without RBD) is debated. In this talk, I will give an overview of progression markers derived from quantitative MRI and the role of deep learning in understanding brain changes in iRBD and PD.

Short Bio

Dr. Rahul Gaurav is a Scientist with more than a decade of research experience in Paris, France. He works with the Movement Investigations and Therapeutics (MOV'IT) team and the Center for NeuroImaging Research (CENIR) at the Paris Brain Institute (ICM), France. He holds a Doctorate in Neuroscience using Neuroimaging and AI from the Doctoral School of Brain Behavior and Cognition, Sorbonne Université. He has authored/co-authored over 30 scientific papers in conferences and journals and has given over 20 international guest lectures and talks.

This is a two-week scientific meeting organized by the bHealthyAge Lab and the CRM (Centre de Recherches Mathématiques). All the presentations will take place at CRM (Université de Montréal).

Overview

The brain contains about 100 billion neurons, and while early studies focused on local coupling and localized function, a consensus has been reached that connectivity, graphs, and networks are critical to understanding brain function. Recent progress in network science for modeling and analyzing brain dynamics has been crucial for simulating brain functional connectivity and predicting network states.

While network models have long been studied as independent systems, recent efforts in neuroscience have been directed towards inferring brain networks from imaging data. Using probabilistic methods and inverse models, recurrent networks across populations, and their changes with disease, have been characterized from electrophysiological and functional imaging data.

This sub-theme will cover the notions of network dynamics, discuss conceptual frameworks for modeling brain network topology, and present the latest progress in inferring neural networks from imaging data, together with the associated methodological approaches for modeling networks of the brain.

Speakers

  • Hugues Berry (Inria)
  • Joana Cabral (University of Minho)
  • Maxime Descoteaux (Université de Sherbrooke)
  • Christophe Grova (Concordia University)
  • Yasser Iturria-Medina (Montreal Neurological Institute)
  • Viktor Jirsa (Aix-Marseille University)
  • Randy McIntosh (Simon Fraser University)
  • Elkaïoum Moutuou (Concordia University)
  • Andrea Soddu (Western University)
  • Christine Tardif (Montreal Neurological Institute)

Link

https://www.crmath.ca/en/activities/#/type/activity/id/3888

Abstract

During my talk, I will give an overview of my research into the behavioral and neural correlates of procedural memories in humans. I will start by discussing the brain networks involved in different aspects of motor sequence learning. Next, I will present evidence for experience-driven plasticity and reorganization within the motor network. I will then focus on the role of sleep in memory consolidation and report our new results linking temporal sleep spindle clustering to spontaneous reactivation of the task-related network.

Short bio

Ella Gabitov, Ph.D., is a Research Associate at the McConnell Brain Imaging Centre of the Montreal Neurological Institute affiliated with McGill University. In her research, she uses a multimodal approach combining psychophysics and cutting-edge technologies for non-invasive neuroimaging (fMRI, EEG) to study neural mechanisms underlying acquisition of procedural (“how-to”) knowledge in humans. This “how-to” knowledge is at the basis of cognitive and executive skills that allow adaptive behaviors necessary for effective everyday functioning. Areas of her special interest include motor skill consolidation, reactivation-based plasticity, and the role of sleep in these memory-related processes. Dr. Gabitov received her undergraduate degree in mathematics and philosophy. She holds an M.A. and Ph.D. in Neuroscience from the University of Haifa, Israel. Over the course of her graduate training, she worked under the supervision of Prof. Avi Karni studying neural substrates underlying encoding, consolidation and generalization of motor skills. In 2014, she joined the laboratory of Prof. Julien Doyon at the University of Montreal to study the role of sleep in procedural and declarative memories. In 2018, she was invited to continue her research at the Montreal Neurological Institute (The Neuro). During her research career, Dr. Gabitov contributed to major brain science initiatives such as the Human Brain Project (HBP) and Healthy Brains, Healthy Lives (HBHL). Funding agencies supporting her research include the ERA-NET NEURON, Canadian Institutes of Health Research (CIHR), and Fonds de recherche du Québec – Santé (FRQS).

Abstract

Pathological tau propagates in a stereotypical manner in Alzheimer’s disease (AD), spreading from the somatodendritic compartment of neurons in the entorhinal cortex (EC) during the early stages. To date, much of the work examining tau propagation has been performed using mutated or pathological tau, transgenic animal models, or a combination of these. This is the first study comparing the propagation pattern of 2N4R human tau with that of truncated K18 P301L tau in a wild-type C57BL/6J mouse model, which has a well-defined connectome. 2N4R is of particular interest, as this isoform has the same 4R structure as the commonly used P301L mutated tau. At P56, C57BL/6J mice will be injected in the entorhinal cortex with 2 μg of 2N4R tau, K18 P301L tau, or a vehicle solution. Mice will be sacrificed 24 h after a single injection, 24 h after 5 days of injections, or 1, 2, 4, 10, or 13 weeks after 5 days of injections. To date, injections of non-mutated tau have been confirmed using the AT8 antibody. However, fluorescent staining produces a high level of background signal, so we have shifted to DAB staining and brightfield imaging for the purposes of antibody validation. The Val256 and Ser356 antibodies show high background and no signal with both staining techniques. AT8 staining shows some signal following injections of the vehicle solution; however, staining levels are higher in the 2N4R injection groups. New antibodies targeting the amino acid regions 162-175 and 268-312 have been received and are being tested. We will also begin perfusing tissue during collection, a technique that removes blood from the brain and may reduce background signal.

Short bio

My name is Daniel Lamontagne-Kam and I’m a pharmacology PhD student in the laboratory of Dr. Jonathan Brouillette at l’Université de Montréal in the Département de Pharmacologie et Physiologie. I’m currently working with Arsalan Rahimabadi and Dr. Habib Benali on a project investigating the propagation of tau in a C57BL/6J mouse model. As a team, we presented preliminary results this past May at the Canadian Association of Neuroscience conference in Montreal. The presentation was titled “Anterograde and retrograde propagation of tau between the hippocampus and entorhinal cortex in mice”.

Abstract

Simultaneous recording of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) is a very promising non-invasive neuroimaging technique. However, EEG data obtained during simultaneous EEG-fMRI recording are strongly contaminated by MRI-related artefacts, namely the gradient artefact (GA) and the ballistocardiogram (BCG). Compared with the GA, the BCG artefact is more challenging to remove due to its inherent variability and dynamic changes over time, especially for longer acquisitions such as sleep and resting-state EEG-fMRI. Conventionally, BCG artefacts are corrected based on R-peaks detected from the electrocardiogram (ECG), but the ECG is itself distorted in the MRI scanner, which can become problematic. I will discuss two potential solutions: a software solution (a beamforming spatial-filtering technique) and a hardware solution (carbon-wire loops). Both appear promising for BCG correction without relying on ECG recordings. Furthermore, I will discuss how to implement these methods in sleep EEG-fMRI studies at both 3T and 7T MRI to better understand brain activity during sleep. These methodological developments and applications allow us to advance our understanding of how the sleeping brain works, the functions of sleep, and the implications of insufficient sleep. Sleep EEG-fMRI could enable us to push the frontiers of sleep science and unravel the mysteries of sleep.
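
To give a concrete sense of the hardware route, here is a hypothetical, heavily simplified numpy sketch (not the speaker's pipeline, and omitting the time-lagged regressors used in practice): carbon-wire loops record motion-induced voltages but no brain signal, so regressing the loop channels out of each EEG window attenuates the BCG without any reference to the ECG.

```python
# Simplified carbon-wire-loop (CWL) regression: per window, fit the EEG channels on the
# loop channels by least squares and subtract the explained part. Illustrative only.
import numpy as np

def cwl_regression(eeg, cwl, win=2000):
    """eeg: (n_eeg, n_samples); cwl: (n_cwl, n_samples). Returns BCG-attenuated EEG."""
    cleaned = eeg.copy()
    n_samples = eeg.shape[1]
    for start in range(0, n_samples, win):
        sl = slice(start, min(start + win, n_samples))
        X = cwl[:, sl].T                              # reference (loop) signals
        Y = eeg[:, sl].T                              # contaminated EEG
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        cleaned[:, sl] = (Y - X @ beta).T             # remove the CWL-explained part
    return cleaned

# Toy usage with simulated data: 32 EEG channels, 4 loop channels, 10 s at 1 kHz
rng = np.random.default_rng(0)
bcg = rng.standard_normal((4, 10_000))
eeg = rng.standard_normal((32, 10_000)) + rng.standard_normal((32, 4)) @ bcg
print(eeg.std(), cwl_regression(eeg, bcg).std())      # variance drops after cleaning
```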

Short Bio

Dr Makoto Uji is a research scientist in Dr Masako Tamaki’s Cognitive Somnology Lab at the RIKEN Centre for Brain Science (CBS), Japan. He received his PhD in sport science from Liverpool John Moores University, UK. After his PhD, he learned the simultaneous electroencephalography (EEG) - functional magnetic resonance imaging (fMRI) method from Dr Stephen Mayhew and Dr Karen Mullinger while a postdoc at the University of Birmingham, UK. For the last five years, he has been applying this simultaneous EEG-fMRI method to sleep and sleep-disorder studies in Dr Christophe Grova’s and Dr Thanh Dang-Vu’s labs at Concordia University, Canada, and in Dr Masako Tamaki’s lab at RIKEN CBS. His research interest is to better understand how the human sleeping brain works and the functional role of sleep in maintaining healthy cognition and the brain, and to unveil the link between healthy brain function and the aging brain during sleep.

Abstract

Imaging technologies are allowing neuroscientists to gain critical insights into the neural networks mediating a variety of cognitive processes. In this talk, I will review studies looking at the intricate relationship between the structure and function of brain networks. First, I will describe how the anatomical organization of the brain influences the patterns of static and dynamic functional interactions using MRI data. Then, I will describe the structural underpinnings of intrinsic coupling modes in the cerebral cortex using electrophysiological data. This research focuses on understanding how the structural connectivity within the cortex leads to the emergence of specific phase and envelope coupling patterns. These observations shed light on the complex interplay between the physical structure of the brain and its dynamic processes, demonstrating that brain function, mediated by neural networks, is intimately related to the anatomical scaffold.

Short Bio

Arnaud studied computer science at Polytech’Orléans, France, where he received an engineering diploma in 2006. He then completed a master of science in medical imaging at Paris 12 University in 2007, where he became interested in neuroscience. From 2007 to 2010, Arnaud was a PhD student jointly at the LIF in Paris (under the supervision of Habib Benali) and at INRIA Sophia-Antipolis (under the supervision of Rachid Deriche). This is where he started to look at the relationship between brain structure and function. Between 2011 and 2013, Arnaud was a post-doctoral fellow at the LIB and a research engineer at the Brain and Spine Institute, where he provided expertise in MRI techniques and analysis. Since 2014, he has been a post-doctoral fellow at the University Medical Center Hamburg-Eppendorf in Hamburg, Germany, where he continues to develop ideas on computational analysis and modeling of brain network connectivity and dynamics.

References

https://doi.org/10.1371/journal.pcbi.1003530
https://doi.org/10.1016/j.neuroimage.2023.120212

Abstract

Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques. In this work we study first-order methods when the inner optimization problem is convex but non-smooth. We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yield sequences of Jacobians converging toward the exact Jacobian. Using implicit differentiation, we show it is possible to leverage the non-smoothness of the inner problem to speed up the computation. Finally, we provide a bound on the error made on the hypergradient when the inner optimization problem is solved approximately. Results on regression and classification problems reveal computational benefits for hyperparameter optimization, especially when multiple hyperparameters are required.
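
To make the forward-mode idea concrete, here is a small self-contained numpy sketch (my own illustration, not the sparse-ho API linked below): proximal gradient descent (ISTA) for the Lasso is run while the Jacobian of the iterates with respect to the regularization parameter is propagated alongside them, and the hypergradient of a validation loss is obtained by the chain rule.

```python
# Forward-mode differentiation of ISTA for the Lasso w.r.t. the regularization
# parameter lambda; illustrative sketch under the standard 1/(2n) least-squares loss.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_with_jacobian(X, y, lam, n_iter=500):
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n                  # Lipschitz constant of the smooth part
    beta, jac = np.zeros(p), np.zeros(p)               # jac = d beta / d lambda
    for _ in range(n_iter):
        z = beta - X.T @ (X @ beta - y) / (n * L)
        dz = jac - X.T @ (X @ jac) / (n * L)           # forward-mode derivative of z
        active = np.abs(z) > lam / L                   # support of the soft-thresholding
        beta = soft_threshold(z, lam / L)
        jac = active * dz - active * np.sign(z) / L    # chain rule through the prox
    return beta, jac

# Hypergradient of a validation loss C(beta) = ||X_val beta - y_val||^2 / (2 n_val)
rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 30)), rng.standard_normal(50)
X_val, y_val = rng.standard_normal((20, 30)), rng.standard_normal(20)
beta, jac = ista_with_jacobian(X, y, lam=0.1)
hypergrad = (X_val.T @ (X_val @ beta - y_val) / len(y_val)) @ jac
print(hypergrad)
```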

Paper: https://arxiv.org/pdf/2105.01637.pdf

Code: https://github.com/qb3/sparse-ho

Abstract

Traditional meta-analysis procedures were developed to integrate and evaluate multiple studies involving a small number of variables, but current scientific practice routinely involves many variables, often analyzed by multivariate factorial methods such as principal component analysis and its numerous variations. These methods typically generate maps that represent observations and variables as points or vectors in low-dimensional spaces: a format that standard meta-analytic procedures cannot easily handle. Here, to “meta-analyze” a set of multivariate studies using the same variables (and/or the same observations), we propose to use a multi-table, three-way extension of metric multidimensional scaling (called diSTATIS or covSTATIS) that can integrate the factorial information produced by a set of studies. DiSTATIS/covSTATIS produces two sets of maps: the first reveals the similarity between the studies, and the second optimally analyzes the common and specific information of the variables in the studies (a schematic sketch of the core computation follows the two examples below). This approach is illustrated with

1. A study on the sensory evaluation of red, rosé, and white wines by wine experts and novices; and

2. A meta-analysis of a set of studies using the Survey of Autobiographical Memory (SAM), a questionnaire that assesses self-reported remote mnemonic capacities and comprises twenty-six variables organized into four subscales.
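
The numpy sketch below is a schematic rendering of the covSTATIS compromise step (my own simplification, not the authors' implementation): each study is summarized by a normalized cross-product matrix over the shared observations, the studies are weighted according to their mutual RV coefficients, and the weighted average is eigendecomposed to give the common map of observations.

```python
# Schematic covSTATIS: cross-product matrices per study, RV-based study weights,
# compromise matrix, and factor scores of the shared observations. Illustrative only.
import numpy as np

def cross_product(X):
    Xc = X - X.mean(axis=0)                  # column-center each table
    S = Xc @ Xc.T
    return S / np.linalg.norm(S)             # unit Frobenius norm, so tables are comparable

def covstatis(tables, n_components=2):
    S = np.array([cross_product(X) for X in tables])               # (K, n, n)
    K = len(S)
    rv = np.array([[np.sum(S[i] * S[j]) for j in range(K)]
                   for i in range(K)])                              # RV coefficients
    evals, evecs = np.linalg.eigh(rv)
    alpha = np.abs(evecs[:, -1])
    alpha /= alpha.sum()                                            # study weights
    compromise = np.tensordot(alpha, S, axes=1)                     # weighted average
    w, V = np.linalg.eigh(compromise)
    idx = np.argsort(w)[::-1][:n_components]
    scores = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))           # observation map
    return scores, alpha

# Toy example: three "studies" on the same 10 observations with different variables
rng = np.random.default_rng(0)
tables = [rng.standard_normal((10, p)) for p in (4, 6, 5)]
scores, weights = covstatis(tables)
```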

Abstract

In this talk, I will trace threads through our recent work, starting with how neurophysiological measures of sounds, such as musical pitch and vowels, vary among individuals and according to musical and linguistic expertise. I will discuss results linking the quality of basic sound encoding to important functions like understanding speech in noisy conditions, discuss how brain states (like sleep) change brain responses, and describe our explorations of how the auditory system can be harnessed to influence brain state and memory consolidation. I will also describe a few open-science tools our lab has developed, in the hope that they will be useful to the community.

Short Bio

I am an Assistant Professor in the Department of Psychology at Concordia University in Montreal, Canada. I received a Ph.D. in Neuroscience in 2016 from McGill University (Prof. Robert Zatorre), an M.Sc. (Research) in Brain and Cognitive Science in 2009 from the University of Amsterdam, and a B.Sc. (Honours) in Psychology from the University of Ottawa in 2006. Between 2006 and 2009 I worked as a Human Behaviour and Performance specialist and trainer at the European Space Agency (European Astronaut Centre, Cologne, Germany), and between 2002 and 2005 I worked as a flight and theory instructor on light aircraft at several airports in Ottawa, Canada. Most recently, I was a Postdoctoral Fellow at the Eberhard Karls University of Tübingen, under the supervision of Prof. Jan Born.

My lab focuses on neuroplasticity associated with complex tasks, using musicianship (and its interaction with language) as a model. We use a variety of neuroimaging tools (e.g. MEG, EEG, fMRI, DWI, VBM) to study the neural bases of auditory processing, hearing-in-noise, and musician advantages, and their relation to training. We are also combining these areas with new techniques that can causally influence sleep-dependent memory consolidation, such as closed-loop auditory stimulation. Ultimately, our goals are to understand how training and sleep interventions can maintain auditory and language function, and improve learning and quality of life throughout the lifespan. As we gain experience with some of the new portable (neuro)physiology tools, we are also exploring ways to take cognitive neuroscience outside the lab, into people's homes and everyday lives, as well as unusual and extreme environments.

Abstract

Deep learning is having a profound impact on industry and scientific research. Yet, while this paradigm continues to show impressive performance in a wide variety of applications, its mathematical foundations are far from being well established. In this talk, I will present recent developments in this area by illustrating two case studies.

First, motivated by applications in cognitive science, I will present “rating impossibility” theorems. They identify frameworks where deep learning is provably unable to generalize outside the training set for the seemingly simple task of learning identity effects, i.e. classifying whether pairs of objects are identical or not.
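
As a toy illustration of what the identity-effects task looks like (a hypothetical example with a plain logistic regression standing in for a deep network, not the speaker's construction), the sketch below trains on pairs of one-hot encoded symbols and then cannot tell identical from non-identical pairs built from symbols it never saw during training.

```python
# Identity effects with one-hot encodings: the columns for unseen symbols never vary
# with the label during training, so the trained model assigns the same score to
# "YY" and "YZ". Illustrative toy, not a reproduction of the theorems.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_symbols = 26
def one_hot(i):
    v = np.zeros(n_symbols)
    v[i] = 1.0
    return v
def encode(i, j):
    return np.concatenate([one_hot(i), one_hot(j)])

rng = np.random.default_rng(0)
train_symbols = np.arange(24)              # 'A'..'X'
X_train, y_train = [], []
for i in train_symbols:
    X_train.append(encode(i, i)); y_train.append(1)              # identical pair
    j = rng.choice(train_symbols[train_symbols != i])
    X_train.append(encode(i, j)); y_train.append(0)              # mismatched pair

clf = LogisticRegression(max_iter=1000).fit(np.array(X_train), np.array(y_train))

# 'Y' (24) and 'Z' (25) were never seen: the model gives YY and YZ identical scores.
print(clf.predict_proba(np.array([encode(24, 24), encode(24, 25)]))[:, 1])
```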

Second, motivated by applications in scientific computing, I will illustrate “practical existence” theorems. They combine universal approximation results for deep neural networks with compressed sensing and high-dimensional polynomial approximation theory. As a result, they yield sufficient conditions on the network architecture, the training strategy, and the number of samples needed to guarantee accurate approximation of smooth functions of many variables.

Time permitting, I will also discuss work in progress and open questions.

Short bio

Simone Brugiapaglia is Assistant Professor of Mathematics and Statistics at Concordia University. He studied mathematics at the University of Pisa, where he received his BSc in 2010 and his MSc in 2012 and where he held two scholarships from the Istituto Italiano di Alta Matematica. He received his PhD in Mathematical Models and Methods in Engineering from Politecnico di Milano in 2016, where he held a scholarship from the Italian Ministry of Education, Universities and Research. He was a post-doctoral fellow at Ecole polytechnique fédérale de Lausanne in Spring 2016 and at Simon Fraser University from 2016 to 2019. He held a Postdoctoral Training Centre in Stochastics Fellowship from the Pacific Institute for the Mathematical Sciences from 2016 to 2018. In 2019, he was awarded a Leslie Fox Prize for Numerical Analysis by the Institute of Mathematics and its Applications. He has co-authored one book and more than twenty scientific publications, including book chapters, peer-reviewed journal articles, and conference proceedings. His work has been published in venues such as Foundations of Computational Mathematics, Numerische Mathematik, Mathematics of Computation, Neural Computation, and IEEE Transactions on Information Theory. His research interests include mathematics of data science, computational mathematics, and numerical analysis.

Abstract

Neural synchronization is a basic functioning principle of the nervous system, resulting from the coupling between the activities of different neurons. Recent advances in optical imaging have provided evidence that calcium activities measured in vivo, as in motoneurons of zebrafish embryos, exhibit characteristic synchronized behaviors in the spinal cord. The data reveal very strong synchronization of the oscillations in intracellular calcium levels between motoneurons located on the left and right sides of the spinal cord, respectively, while the activity profiles of these two clusters feature "antiphasic" properties: no calcium peaks occur simultaneously in both clusters. A natural interpretation of these data points to the existence of an excitatory intra-cluster coupling and an inhibitory inter-cluster coupling.

To study theoretically the synchronization properties of oscillatory patterns of intracellular calcium concentration (ICC) between peripheral neurons, we first built a compact slow-fast model of ICC dynamics in one cell, taking particular advantage of mixed-mode oscillations (MMOs) to reproduce the quiescence/active phase duration ratio and peak amplitude. While representing the activity of clusters of strongly synchronized neurons by two coupled oscillators is an oversimplification of the real network phenomenon, its simplicity allows us to perform singular perturbation analysis and to study, at an aggregated level, the fundamental properties of the system in a formal way, for both inhibitory and excitatory coupling. I will present the analysis of the underlying dynamical mechanisms of synchronization in this coupled model, emphasize the role of MMOs and of heterogeneity in parameter values among cells in the variability of behaviours, and discuss the transition mechanisms between behaviours. Finally, I will describe a parameter estimation protocol that reproduces all the qualitative properties observed in the experimental data with a clustered network model with mixed (inhibitory and excitatory) coupling.
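
As a caricature of the two-cluster reduction described above (a generic slow-fast toy with arbitrary parameters, not the presented ICC model), the sketch below couples two relaxation oscillators through a mutually inhibitory term and integrates them with scipy; with sufficient inhibition the two activity variables tend toward the antiphasic regime discussed in the abstract.

```python
# Two slow-fast relaxation oscillators, each standing for one synchronized cluster,
# coupled by mutual inhibition. Parameters are arbitrary and for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05          # time-scale separation between fast (v) and slow (w) variables
g_inh = 0.4         # inter-cluster inhibitory coupling strength

def rhs(t, u):
    v1, w1, v2, w2 = u
    dv1 = (v1 - v1**3 / 3 - w1 - g_inh * max(v2, 0.0)) / eps
    dw1 = v1 + 0.5
    dv2 = (v2 - v2**3 / 3 - w2 - g_inh * max(v1, 0.0)) / eps
    dw2 = v2 + 0.5
    return [dv1, dw1, dv2, dw2]

sol = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.0, -1.0, 0.5], max_step=0.01)
v1, v2 = sol.y[0], sol.y[2]
# Antiphase check: the fraction of time both clusters are simultaneously active
print(np.mean((v1 > 0) & (v2 > 0)))
```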