I obtained an M.Math from the computational mathematics program at the University of Waterloo, where I researched continuous optimization and was advised by Thomas Coleman. I obtained my B.Sc. in mathematics and physics from McMaster University.
Spatial cognition relies on an internal map-like representation of space provided by hippocampal place cells, which in turn are thought to rely on grid cells as a basis. Spatial Semantic Pointers (SSPs) have been introduced as a way to represent continuous spaces and positions via the activity of a spiking neural network. In this work, we further develop the SSP representation to replicate the firing patterns of grid cells. This adds biological realism to the SSP representation and links biological findings with a larger theoretical framework for representing concepts. Furthermore, replicating grid cell activity with SSPs results in greater accuracy when constructing place cells. This improved accuracy is a result of grid cells forming the optimal basis for decoding positions and place cell output. Our results have implications for modelling spatial cognition and, more generally, cognitive representations over continuous variables.
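The SSP construction rests on fractional binding, where a continuous coordinate appears as a real-valued exponent on a unitary base vector. The NumPy sketch below illustrates just that encoding step and the resulting place-field-like similarity bump; it is not the paper's spiking model, the helper names (make_unitary, power, bind, encode_point) are ours, and the base vectors here use random phases rather than the structured phase arrangements that produce grid-cell firing.

```python
import numpy as np

def make_unitary(d, rng):
    """Random unitary vector: all Fourier coefficients have magnitude 1."""
    phases = rng.uniform(-np.pi, np.pi, d // 2 + 1)
    phases[0] = 0.0                      # keep the DC coefficient real
    if d % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist coefficient real
    return np.fft.irfft(np.exp(1j * phases), n=d)

def power(v, exponent):
    """Fractional binding: raise the Fourier coefficients to a real power."""
    return np.fft.irfft(np.fft.rfft(v) ** exponent, n=len(v))

def bind(a, b):
    """Circular convolution, the binding operation for semantic pointers."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def encode_point(x, y, X, Y):
    """SSP for a 2D position: S(x, y) = X^x (*) Y^y."""
    return bind(power(X, x), power(Y, y))

rng = np.random.default_rng(0)
d = 512
X, Y = make_unitary(d, rng), make_unitary(d, rng)

# Similarity of one encoded point against a grid of sample points: a bump at the encoded location.
target = encode_point(1.0, -0.5, X, Y)
xs = np.linspace(-3, 3, 61)
sims = np.array([[np.dot(target, encode_point(x, y, X, Y)) for x in xs] for y in xs])
yi, xi = np.unravel_index(np.argmax(sims), sims.shape)
print("decoded position ≈", (xs[xi], xs[yi]))   # expect near (1.0, -0.5)
```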
@inproceedings{dumont2020,
  title     = {Accurate representation for spatial cognition using grid cells},
  author    = {Dumont, Nicole Sandra-Yaffa and Eliasmith, Chris},
  booktitle = {42nd Annual Meeting of the Cognitive Science Society},
  year      = {2020},
  pages     = {2367--2373},
  publisher = {Cognitive Science Society},
  address   = {Toronto, Canada},
}
Frontiers in Neuroscience
Exploiting semantic information in a spiking neural SLAM system
Nicole Sandra-Yaffa Dumont, P Michael Furlong, Jeff Orchard, and Chris Eliasmith
To navigate in new environments, an animal must be able to keep track of its position while simultaneously creating and updating an internal map of features in the environment, a problem formulated as simultaneous localization and mapping (SLAM) in the field of robotics. This requires integrating information from different domains, including self-motion cues and sensory and semantic information. Several specialized neuron classes have been identified in the mammalian brain as being involved in solving SLAM. While biology has inspired a whole class of SLAM algorithms, the use of semantic information has not been explored in such work. We present a novel, biologically plausible SLAM model called SSP-SLAM, a spiking neural network designed using tools for large-scale cognitive modeling. Our model uses a vector representation of continuous spatial maps, which can be encoded via spiking neural activity and bound with other features (continuous and discrete) to create compressed structures containing semantic information from multiple domains (e.g., spatial, temporal, visual, conceptual). We demonstrate that the dynamics of these representations can be implemented with a hybrid oscillatory-interference and continuous attractor network of head direction cells. The estimated self-position from this network is used to learn an associative memory between semantically encoded landmarks and their positions, i.e., an environment map, which is used for loop closure. Our experiments demonstrate that environment maps can be learned accurately and that their use greatly improves self-position estimation. Furthermore, grid cells, place cells, and object vector cells are all observed in this model. We also run our path integrator network on the NengoLoihi neuromorphic emulator to demonstrate the feasibility of a full neuromorphic implementation for energy-efficient SLAM.
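One ingredient of SSP-SLAM that lends itself to a small, non-spiking illustration is the environment map: semantically encoded landmarks are bound to their position SSPs, superimposed into a single vector, and later queried by unbinding. The sketch below uses plain NumPy circular convolution rather than the paper's learned spiking associative memory; the landmark symbols TREE and ROCK, their positions, and all helper names are hypothetical.

```python
import numpy as np

D = 512
rng = np.random.default_rng(0)

def unitary(d, rng):
    ph = rng.uniform(-np.pi, np.pi, d // 2 + 1)
    ph[0] = 0.0
    if d % 2 == 0:
        ph[-1] = 0.0
    return np.fft.irfft(np.exp(1j * ph), n=d)

def bind(a, b):          # circular convolution
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def unbind(m, a):        # bind with the (approximate) inverse of a
    return np.fft.irfft(np.fft.rfft(m) * np.conj(np.fft.rfft(a)), n=len(m))

def ssp(x, y, X, Y):     # fractional binding encodes a 2D position
    return np.fft.irfft(np.fft.rfft(X) ** x * np.fft.rfft(Y) ** y, n=len(X))

X, Y = unitary(D, rng), unitary(D, rng)
TREE, ROCK = unitary(D, rng), unitary(D, rng)   # hypothetical landmark symbols

# Environment map: superposition of landmark (x) position bindings.
env_map = bind(TREE, ssp(1.0, 2.0, X, Y)) + bind(ROCK, ssp(-3.0, 0.5, X, Y))

# Query "where is TREE?" by unbinding, then compare against candidate positions.
query = unbind(env_map, TREE)
xs = np.linspace(-4, 4, 81)
sims = np.array([[np.dot(query, ssp(x, y, X, Y)) for x in xs] for y in xs])
yi, xi = np.unravel_index(np.argmax(sims), sims.shape)
print("decoded TREE position ≈", (xs[xi], xs[yi]))   # expect near (1.0, 2.0)
```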
@article{dumont2023exploiting,
  title      = {Exploiting semantic information in a spiking neural SLAM system},
  author     = {Dumont, Nicole Sandra-Yaffa and Furlong, P Michael and Orchard, Jeff and Eliasmith, Chris},
  journal    = {Frontiers in Neuroscience},
  volume     = {17},
  year       = {2023},
  publisher  = {Frontiers Media SA},
  doi        = {10.3389/fnins.2023.1190515},
  dimensions = {false},
}
CogSci
A model of path integration that connects neural and symbolic representation
Nicole Sandra-Yaffa Dumont, Jeff Orchard, and Chris Eliasmith
In Proceedings of the Annual Meeting of the Cognitive Science Society, 2022
Path integration, the ability to maintain an estimate of one’s location by continuously integrating self-motion cues, is a vital component of the brain’s navigation system. We present a spiking neural network model of path integration derived from the starting assumption that the brain represents continuous variables, such as spatial coordinates, using Spatial Semantic Pointers (SSPs). SSPs encode continuous variables as high-dimensional vectors and can also be used to create structured, hierarchical representations for neural cognitive modelling. Path integration can be performed by a recurrently connected neural network using SSP representations. Unlike past work, we show that our model can be used to continuously update variables of any dimensionality. We demonstrate that symbol-like object representations can be bound to continuous SSP representations. Specifically, we incorporate a simple model of working memory to remember environment maps with such symbol-like representations situated in 2D space.
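The idea that a recurrent network can path-integrate an SSP has a convenient Fourier-domain reading: an SSP for position p has coefficients exp(i A p), so a velocity v advances each phase at rate A v. The sketch below is that idealized, non-spiking phase-rotation update in NumPy, not the recurrent spiking network or the working-memory component from the paper; the phase matrix A, the velocity profile, and the decoding grid are all made up for the example.

```python
import numpy as np

D, dt = 256, 0.01
rng = np.random.default_rng(1)

def unitary_phases(d, rng):
    ph = rng.uniform(-np.pi, np.pi, d // 2 + 1)
    ph[0] = 0.0
    if d % 2 == 0:
        ph[-1] = 0.0
    return ph

# Phase matrix A: column j holds the Fourier phases of axis vector j,
# so the SSP for a position p has Fourier coefficients exp(i * A @ p).
A = np.stack([unitary_phases(D, rng), unitary_phases(D, rng)], axis=1)

def ssp(p):
    return np.fft.irfft(np.exp(1j * (A @ p)), n=D)

# Discrete-time path integration: advance each Fourier phase by (A @ v) * dt.
p_true = np.zeros(2)
S = np.fft.rfft(ssp(p_true))
for step in range(1000):
    v = np.array([np.cos(0.01 * step), np.sin(0.01 * step)])  # hypothetical velocity
    S = S * np.exp(1j * (A @ v) * dt)      # phase rotation = position update
    p_true = p_true + v * dt

# Decode by comparing against SSPs of candidate positions near the true endpoint.
est = np.fft.irfft(S, n=D)
xs = np.linspace(p_true[0] - 1, p_true[0] + 1, 41)
ys = np.linspace(p_true[1] - 1, p_true[1] + 1, 41)
sims = np.array([[np.dot(est, ssp(np.array([x, y]))) for x in xs] for y in ys])
yi, xi = np.unravel_index(np.argmax(sims), sims.shape)
print("true:", p_true, "decoded:", (xs[xi], ys[yi]))
```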
@inproceedings{dumont2022,
  title     = {A model of path integration that connects neural and symbolic representation},
  author    = {Dumont, Nicole Sandra-Yaffa and Orchard, Jeff and Eliasmith, Chris},
  booktitle = {Proceedings of the Annual Meeting of the Cognitive Science Society},
  volume    = {44},
  number    = {44},
  year      = {2022},
  publisher = {Cognitive Science Society},
  address   = {Toronto, ON},
}
Brain Sciences
Biologically-Based Computation: How Neural Details and Dynamics Are Suited for Implementing a Variety of Algorithms
Nicole Sandra-Yaffa Dumont, Andreas Stöckel, P Michael Furlong, Madeleine Bartlett, Chris Eliasmith, and Terrence C Stewart
The Neural Engineering Framework (NEF; Eliasmith & Anderson, 2003) is a long-standing method for implementing high-level algorithms constrained by low-level neurobiological details. In recent years, this method has been expanded to incorporate more biological details and applied to new tasks. This paper brings together these ongoing research strands, presenting them in a common framework. We expand on the NEF’s core principles of (a) specifying the desired tuning curves of neurons in different parts of the model, (b) defining the computational relationships between the values represented by the neurons in different parts of the model, and (c) finding the synaptic connection weights that will cause those computations and tuning curves. In particular, we show how to extend this to include complex spatiotemporal tuning curves, and then apply this approach to produce functional computational models of grid cells, time cells, path integration, sparse representations, probabilistic representations, and symbolic representations in the brain.
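A minimal, rate-based illustration of the three principles listed above: choose tuning curves, pick the transformation to compute, and solve a regularized least-squares problem for the decoders. The sketch uses made-up tuning-curve parameters, rectified-linear rates instead of spiking LIF neurons, and a simple squaring transformation as the target; it does not cover the spatiotemporal extensions the paper develops.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_eval = 200, 500

# Principle (a): choose tuning curves -- rectified-linear rates with
# random encoders, gains, and biases over x in [-1, 1].
x = np.linspace(-1, 1, n_eval)
encoders = rng.choice([-1.0, 1.0], n_neurons)
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
A = np.maximum(0.0, gains * np.outer(x, encoders) + biases)   # (n_eval, n_neurons)

# Principle (b): the computational relationship between represented values.
target = x ** 2

# Principle (c): solve a regularized least-squares problem for the decoders
# (the connection weights follow by combining decoders with the next
# population's encoders and gains).
sigma = 0.1 * A.max()
G = A.T @ A + sigma ** 2 * n_eval * np.eye(n_neurons)
d = np.linalg.solve(G, A.T @ target)

print("RMS decoding error:", np.sqrt(np.mean((A @ d - target) ** 2)))
```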
@article{dumont2023biologically,
  title      = {Biologically-Based Computation: How Neural Details and Dynamics Are Suited for Implementing a Variety of Algorithms},
  author     = {Dumont, Nicole Sandra-Yaffa and St{\"o}ckel, Andreas and Furlong, P Michael and Bartlett, Madeleine and Eliasmith, Chris and Stewart, Terrence C},
  journal    = {Brain Sciences},
  volume     = {13},
  number     = {2},
  pages      = {245},
  year       = {2023},
  publisher  = {MDPI},
  doi        = {10.3390/brainsci13020245},
  dimensions = {false},
}