All Publications


  • Integration of new information in memory: new insights from a complementary learning systems perspective. PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY OF LONDON SERIES B: BIOLOGICAL SCIENCES McClelland, J. L., McNaughton, B. L., Lampinen, A. K. 2020; 375 (1799): 20190637

    Abstract

    According to complementary learning systems theory, integrating new memories into the neocortex of the brain without interfering with what is already known depends on a gradual learning process, interleaving new items with previously learned items. However, empirical studies show that information consistent with prior knowledge can sometimes be integrated very quickly. We use artificial neural networks with properties like those we attribute to the neocortex to develop an understanding of the role of consistency with prior knowledge in putatively neocortex-like learning systems, providing new insights into when integration will be fast or slow and how integration might be made more efficient when the items to be learned are hierarchically structured. The work relies on deep linear networks that capture the qualitative aspects of the learning dynamics of the more complex nonlinear networks used in previous work. The time course of learning in these networks can be linked to the hierarchical structure in the training data, captured mathematically as a set of dimensions that correspond to the branches in the hierarchy. In this context, a new item to be learned can be characterized as having aspects that project onto previously known dimensions, and others that require adding a new branch/dimension. The projection onto the known dimensions can be learned rapidly without interleaving, but learning the new dimension requires gradual interleaved learning. When a new item only overlaps with items within one branch of a hierarchy, interleaving can focus on the previously known items within this branch, resulting in faster integration with less interleaving overall. The discussion considers how the brain might exploit these facts to make learning more efficient and highlights predictions about what aspects of new information might be hard or easy to learn. This article is part of the Theo Murphy meeting issue 'Memory reactivation: replaying events past, present and future'.

    DOI: 10.1098/rstb.2019.0637

    PubMedID: 32248773
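
    The staged dynamics this abstract describes can be seen in a small simulation. The sketch below is a hypothetical illustration, not code from the paper: it trains a two-layer linear network on four items with hierarchically structured properties and tracks how strongly each singular dimension of the input-output correlation matrix has been learned. The strongly shared (root) dimension is acquired quickly, while the weak item-specific dimensions need many more interleaved presentations, in line with the account above. The dataset, network sizes, learning rate, and epoch count are all invented for illustration.

    # A minimal, hypothetical sketch (not the authors' released code) of the
    # setup the abstract describes: a deep linear network trained on
    # hierarchically structured items.
    import numpy as np

    rng = np.random.default_rng(0)

    # Four items x seven properties mirroring a small binary hierarchy:
    # one root property, two branch properties, four leaf properties.
    # Each item has the property of every node on its root-to-leaf path.
    Y = np.array([
        [1, 1, 0, 1, 0, 0, 0],
        [1, 1, 0, 0, 1, 0, 0],
        [1, 0, 1, 0, 0, 1, 0],
        [1, 0, 1, 0, 0, 0, 1],
    ], dtype=float)
    X = np.eye(4)  # one-hot item inputs

    # Two-layer ("deep") linear network: y_hat = W2 @ W1 @ x.
    hidden, lr, epochs = 16, 0.1, 401
    W1 = rng.normal(scale=0.01, size=(hidden, 4))
    W2 = rng.normal(scale=0.01, size=(7, hidden))

    # SVD of the input-output correlation matrix: its singular dimensions
    # correspond to branches of the hierarchy (root, branches, leaves).
    U, S, Vt = np.linalg.svd(Y.T @ X, full_matrices=False)

    for t in range(epochs):
        err = Y.T - W2 @ W1 @ X.T      # prediction error, properties x items
        gW2 = err @ (W1 @ X.T).T / 4   # gradient descent on squared error,
        gW1 = W2.T @ err @ X / 4       # averaged over the four items
        W2 += lr * gW2
        W1 += lr * gW1
        if t % 50 == 0:
            # Strength of each dimension in the current mapping: the strong
            # shared (root) dimension rises first, the weak item-specific
            # (leaf) dimensions only after further interleaved training.
            learned = np.diag(U.T @ (W2 @ W1) @ Vt.T)
            print(f"epoch {t:3d}  learned={np.round(learned, 2)}  target={np.round(S, 2)}")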

  • Different Presentations of a Mathematical Concept Can Support Learning in Complementary Ways JOURNAL OF EDUCATIONAL PSYCHOLOGY Lampinen, A. K., McClelland, J. L. 2018; 110 (5): 664–682

    DOI: 10.1037/edu0000235

    Web of Science ID: 000437721500004

  • Building on prior knowledge without building it in BEHAVIORAL AND BRAIN SCIENCES Hansen, S. S., Lampinen, A. K., Suri, G., McClelland, J. L. 2017; 40: e268

    Abstract

    Lake et al. propose that people rely on "start-up software," "causal models," and "intuitive theories" built using compositional representations to learn new tasks more efficiently than some deep neural network models. We highlight the many drawbacks of a commitment to compositional representations and describe our continuing effort to explore how the ability to build on prior knowledge and to learn new tasks efficiently could arise through learning in deep neural networks.

    PubMedID: 29342701