Matching Patterns: The Nature of Intelligence

From the nature of the brain, through the nature of the mind, we now move on to the last of this particular triumvirate: the nature of intelligence.

A good definition of intelligence follows relatively cleanly from my previous two posts. Since the brain is a modelling subsystem of reality, it is natural to define intelligence as the information-theoretic power of that model: some brains simply model more than others. However, I believe that this is not the whole story. Certainly a strictly bigger brain will be able to store more complex abstractions (just as a computer with more memory can run bigger computations), but the actual physical size of human brains is not strongly correlated with our individual intelligence (however you measure it).

Instead I posit the following: intelligence, roughly speaking, is the brain's ability to match new patterns and derive new abstractions. In a sense this is information-theoretic compression: the more abstract and compact the ideas one is able to reason with, the more powerful the models one is able to use.

The actual root of this ability almost certainly lies somewhere in the structure of the brain, but the exact mechanics are irrelevant here. It is more important to note that the resulting stronger abstractions are not the cause of raw intelligence so much as an effect: the cause is the ability to take disparate data and factor out all of its patterns, reducing it down to something as close to raw Shannon entropy as possible.
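To make that slightly more concrete: a general-purpose compressor is a crude mechanical stand-in for this pattern-factoring ability. Data full of regularities compresses well; data that is already close to raw Shannon entropy barely compresses at all. Here is a minimal Python sketch of that idea (just an illustration using zlib, not a claim about how brains actually do it):

    import os
    import zlib

    def compression_ratio(data: bytes) -> float:
        """Compressed size divided by original size; lower means more extractable pattern."""
        return len(zlib.compress(data)) / len(data)

    patterned = b"abab" * 4096          # highly repetitive: lots of structure to factor out
    random_ish = os.urandom(4 * 4096)   # already close to raw Shannon entropy

    print("patterned :", compression_ratio(patterned))   # far below 1.0
    print("random-ish:", compression_ratio(random_ish))  # about 1.0 or slightly above: nothing to factor out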

The Emergence of Patterns

We now have a rough grasp of patterns and abstractions; the last piece of this particular puzzle is how such things emerge. Patterns and abstractions are not guaranteed to arise in any particular system (in particular, any apparent emergence in a purely stochastic system is likely to be nothing more than Poisson clumping), but as we have seen with gliders in Conway’s Game of Life, emergence does happen.

Emergence has been described in a few different ways, though for my purposes I will take my own stab at it. I shall say that:

Emergence occurs when the operation of the rules of a system produces a set of patterns which together form an abstraction whose inaccuracies (e.g. the colliding gliders from Monday’s example) are sufficiently contained that the abstraction can still be used as a reasonable model to predict the future state of the underlying system.

That’s rather long-winded, I know. To elaborate slightly on what I mean by “sufficiently contained inaccuracies”, consider the glider case. As long as the gliders don’t collide (and no other cells are active), our abstract system of gliders perfectly models the underlying system of Life: starting in the same state and following the appropriate rules will produce the same subsequent state (if Life had probabilistic rules, we would need the additional caveat that the same random choices are made as well). However, in the corner cases of colliding gliders (or when the initial state has non-glider cells active), the glider system diverges slightly from the underlying Life system. This is still an emergent model though, both because the divergence between the abstraction and the underlying system is relatively small in most cases, and because it is easy to catch: even if we don’t have rules for handling a collision, we can easily notice when two gliders collide and consequently know that the abstraction is no longer necessarily correct.
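If you like seeing this concretely, here is a small Python sketch (my own toy implementation, nothing canonical) that simulates a lone glider cell by cell and checks it against the abstraction “a glider moves one cell diagonally every four generations”:

    from collections import Counter
    from itertools import product

    def life_step(live):
        """One generation of Conway's Life on an unbounded grid of live (row, col) cells."""
        neighbour_counts = Counter(
            (r + dr, c + dc)
            for (r, c) in live
            for dr, dc in product((-1, 0, 1), repeat=2)
            if (dr, dc) != (0, 0)
        )
        return {cell for cell, n in neighbour_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # The underlying system: a single glider, simulated rule by rule.
    glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
    state = glider
    for _ in range(4):
        state = life_step(state)

    # The abstraction: a glider moves one cell diagonally every four generations.
    predicted = {(r + 1, c + 1) for (r, c) in glider}
    print(state == predicted)  # True -- with no collisions, the abstraction is exact

Put two gliders on a collision course instead, and the simple translation rule eventually stops matching the cell-by-cell simulation, which is exactly the corner-case divergence described above.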

Patterns and Entropy

Our next foray into systems theory involves the definition of patterns and the study of entropy (in the information-theoretic sense). Don’t worry too much about the math; I’m going to be working with a simple intuitive version for the most part, although if you have a background in computing or mathematics there are plenty of neat nooks and crannies to explore.

For a starting point, I will selectively quote Wikipedia’s opening paragraph on patterns (at time of writing):

A pattern, …is a discernible regularity… As such, the elements of a pattern repeat in a predictable manner.

I’ve snipped out the irrelevant bits, so the above definition is relatively meaty and covers the important points. First, a pattern is a discernible regularity. What does that mean? Well, unfortunately not a whole lot really, unless you’re hot on the concept of automata theory and recognizability. But it really doesn’t matter, since your intuitive concept of a pattern neatly covers all of the relevant facts for our purposes.

But what does this have to do with systems theory? Well, consider our reliable example, Conway’s Game of Life. A pattern in Life is a fairly obvious thing: a long line of living cells, for example, is a pattern. This brings us to the second part of the above quote: the elements of a pattern repeat, which should be obvious from the example. Of course there are other patterns in Life; a checkerboard grid is another obvious one, and the relatively famous glider is also a pattern.

It seems, on review, that I am doing a poor job of explaining patterns; however, I will leave the above for lack of any better ideas at the moment. Just rest assured that your intuitive sense of what a pattern is should be sufficient.

For the more mathematically inclined, a pattern can be more usefully defined in terms of its information-theoretic entropy (also known as Shannon entropy after Claude Shannon, who introduced it). Technically anything that is at all non-random (i.e. at all predictable) is a pattern, though usually we are interested in patterns of particularly low entropy.
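Concretely, the Shannon entropy of a source with symbol probabilities p_i is H = -Σ p_i log2(p_i). To get a rough feel for what “low entropy” means here, this toy Python sketch of mine estimates entropy from block frequencies (a naive estimate, not a rigorous one) and compares a repetitive string with a random one:

    import math
    import random
    from collections import Counter

    def block_entropy(seq, k):
        """Shannon entropy in bits per length-k block, estimated from block frequencies."""
        blocks = [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]
        counts = Counter(blocks)
        total = len(blocks)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    checkerboard = "01" * 500                                  # a highly regular pattern
    noise = "".join(random.choice("01") for _ in range(1000))  # no pattern at all

    print(block_entropy(checkerboard, 2))  # 0.0 -- every block is "01", perfectly predictable
    print(block_entropy(noise, 2))         # close to 2.0 bits per block, near the maximum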

Apologies, this has ended up rather incoherent. Hopefully next post will be better. Reading the links may help, if you’re into that sort of thing.