Proceedings of the First International Workshop on Self-Supervised Learning, PMLR
This paper proposes work applying insights from meaning representation systems for in-depth natural language understanding to representations for self-supervised learning systems, which show promise in developing complex, deeply nested symbolic structures through self-motivated exploration of their environments. The core of the representation system transforms language inputs into language-free structures that are complex combinations of conceptual primitives, forming a substrate for human-like understanding and common-sense reasoning. We focus on decomposing representations of expectation, intention, planning, and decision-making, which are essential to a self-motivated learner. These meaning representations may enhance learning by enabling a rich array of mappings between new experiences and structures stored in short-term and long-term memory. We also argue that learning can be further enhanced when language interaction itself is an integral part of the environment in which the self-supervised learning agent is embedded.
Copyright © The authors and PMLR 2020. MLResearchPress
Macbeth, Jamie C., "Enhancing Learning with Primitive-Decomposed Cognitive Representations" (2020). Computer Science: Faculty Publications, Smith College, Northampton, MA.