I think Robert Brandom, in his "Between Saying and Doing: Towards an Analytic Pragmatism", has shown that "dynamics" can be achieved by interspersing "meaning" and "use". Algorithmic elaboration (which is what we also see in the article for implication) is thus just one part of it. Whether the operations were discovered or invented does not matter, because there are only 16 possible binary logical functions (I am not even sure what the point of proving that in the article was). And our brains mostly use NAND at the lowest level, in a sense emulating everything else with it.
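To make the counting and the NAND point concrete, here is a minimal Python sketch (the names nand, impl, etc. are mine, just for illustration, not anything from the article):

    from itertools import product

    inputs = list(product([0, 1], repeat=2))    # (0,0) (0,1) (1,0) (1,1)
    # A binary function is one output column over those 4 rows: 2^4 = 16 of them.
    print(len(set(product([0, 1], repeat=4))))  # 16

    def nand(a, b): return 1 - (a & b)

    # Everything else composed out of NAND alone:
    def not_(a):    return nand(a, a)
    def and_(a, b): return nand(nand(a, b), nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def impl(a, b): return nand(a, not_(b))     # a -> b == not(a and not b)

    assert all(and_(a, b) == (a & b) and or_(a, b) == (a | b) for a, b in inputs)
    assert all(impl(a, b) == int((not a) or b) for a, b in inputs)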
What is invented time after time, though, are usages of those operations. One should know how to apply cold logic to the situation at hand, and this is where it becomes practically interesting.
That is, the discovery lies in the links to reality, and that is where it becomes stochastic.
As for the learning environment, sure. The laws of Boolean logic may be unobservable in a concrete data set (simply because the relevant patterns never appear in the data), so an AI system will never learn them from it. The question is whether it makes sense to somehow add that knowledge to the model.
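A toy illustration of that underdetermination, assuming the "data set" is rows of a truth table (the three-row example is made up for this purpose): if one input pattern never occurs, more than one of the 16 functions fits the observed data equally well.

    from itertools import product

    observed = {(0, 0): 0, (0, 1): 0, (1, 0): 0}  # (1, 1) never appears

    candidates = []
    for outputs in product([0, 1], repeat=4):
        table = dict(zip(product([0, 1], repeat=2), outputs))
        if all(table[x] == y for x, y in observed.items()):
            candidates.append(table)

    # Both AND and the constant-0 function survive; the data alone cannot
    # decide between them, so the law has to come from somewhere else.
    print(len(candidates))  # 2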
There is probably no single answer.