This discussion of superposition in neural networks brings to mind Brandom’s Between Saying and Doing. The two share a common structure: a finite basis, infinite expression. Practice often precedes explicit rules, and we learn to say what we are doing only after the fact. A less powerful vocabulary (like a grammar of the metalanguage) can describe, interpret, and regulate a system that appears more powerful.
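As a minimal numeric sketch of that "finite basis, infinite expression" structure (my own toy illustration, not from the article; all names and sizes are arbitrary): in d dimensions one can pack far more than d nearly orthogonal "feature" directions, at the cost of small interference between their readouts.

```python
import numpy as np

# Toy superposition: a finite basis of d dimensions hosting many more
# feature directions than it has axes. Illustrative only.
rng = np.random.default_rng(0)
d, n = 128, 2000                      # 2000 "features" packed into 128 dims

# Random unit vectors in R^d are nearly orthogonal with high probability,
# so each can serve as a distinct feature direction.
features = rng.normal(size=(n, d))
features /= np.linalg.norm(features, axis=1, keepdims=True)

# Activate one feature, then read every feature back with dot products.
x = features[0]
readout = features @ x

print(f"target readout:     {readout[0]:.3f}")          # exactly 1.0
print(f"max interference:   {np.abs(readout[1:]).max():.3f}")
print(f"mean interference:  {np.abs(readout[1:]).mean():.3f}")
```

The finite basis never runs out of new directions to add, but each addition slightly degrades the readout of all the others.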
The source of meaning, its contextualization, its mechanisms, and its relation to practice are all interesting questions in themselves.
In Brandom’s work, normativity plays a central role: participants must take on commitments and entitlements in discourse. In this article, meaning is not treated in much depth; it is attributed externally, as a functional property of the system that emerges from how inputs relate to outputs.
I’d be careful about saying LLMs generate infinite meanings. Their meanings come from combining a finite set of elements into context-specific interpretations. While the space of combinations can seem vast, it is still bounded by the system’s design and training. Human observers play a key role in assigning meaning to LLM outputs, interpreting them in light of context and external understanding (the same holds, of course, for outputs produced by other humans). LLMs don’t have inherent meanings; their outputs mean whatever humans interpret and use them to mean.
Sometimes "encoding infinite meanings" is lossy way of compressing knowledge...