Reality as a System

(Note: my roadmap originally planned a post on Gödel’s Incompleteness Theorems, but that’s not going to happen. It’s a fascinating topic with some interesting applications, but it’s even more mathematically dense than a lot of my other stuff, and isn’t strictly necessary, so I’m skipping it for now. Maybe I’ll come back to it later. Read the wiki page if you’re interested.)

This post is the final cherry on top of this whole series on systems theory, and the part where we finally get to make practical philosophical use of the whole abstract structure we’ve been building up. I’ve telegraphed the whole thing in the roadmap, and the thesis is in the title, so let’s just dive right in: reality is a system. It’s almost already laid out right there in axioms #3 and #5.

We can also tie this in with our definitions of truth and knowledge. If the absolute underlying reality of what is (forming absolute truth) is a system, then the relative truth that we regularly refer to as “truth” is just a set of abstractions layered on top of the underlying reality.

Dogs and cats and chairs and tables are just abstractions on top of molecules. Molecules are just an abstraction on top of atoms. Atoms, on top of protons, electrons, and neutrons. Protons and neutrons, on top of quarks and other fundamental particles I don’t understand. The absolute underlying system is, in this view, not possible to know. In fact, since we as persons are inside the system (we can be seen as subsystems of it), we literally cannot model the entire thing with complete fidelity. It is fundamentally impossible. The best we can do is to model an abstraction within the bounds of the entropy of the system. This is, in some distant sense, a restatement of the circular trap.

Patterns and Entropy

Our next foray into systems theory involves the definition of patterns and the study of entropy (in the information-theoretical sense). Don’t worry too much about the math; I’m going to be working with a simple intuitive version for the most part, although if you have a background in computers or mathematics there are plenty of neat nooks and crannies to explore.

For a starting point, I will selectively quote Wikipedia’s opening paragraph on patterns (at time of writing):

A pattern, …is a discernible regularity… As such, the elements of a pattern repeat in a predictable manner.

I’ve snipped out the irrelevant bits, so the above definition is relatively meaty and covers the important points. First, a pattern is a discernible regularity. What does that mean? Well, unfortunately not a whole lot really, unless you’re hot on the concept of automata theory and recognizability. But it really doesn’t matter, since your intuitive concept of a pattern neatly covers all of the relevant facts for our purposes.

But what does this have to do with systems theory? Well, consider our reliable example, Conway’s Game of Life. A pattern in Life is a fairly obvious thing: a long line of living cells, for example, is a pattern. This brings us to the second part of the above quote: the elements of a pattern repeat. This should be obvious from the example. Of course you can have other patterns in Life; a checkerboard grid is another obvious one, and the relatively famous glider is also a pattern.
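If a concrete demonstration helps, here is a minimal sketch of Life in Python (the coordinate scheme and function names are my own illustration, not anything standard) showing the glider as a pattern in the “repeats in a predictable manner” sense: after four generations it reproduces its own shape, shifted one cell diagonally.

```python
from collections import Counter

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or 2 live neighbors and is already alive.
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# The classic glider: five live cells whose shape recurs every four
# generations, shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

cells = glider
for _ in range(4):
    cells = step(cells)

print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True: same shape, moved
```

The point of the last line is just that the glider’s regularity is checkable: the pattern recurs, predictably, which is exactly what the quoted definition asks for.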

It seems, on review, that I am doing a poor job of explaining patterns; however, I will leave the above for lack of any better ideas at the moment. Just rest assured that your intuitive knowledge of what a pattern is should be sufficient.

For the more mathematically inclined, a pattern can be more usefully defined in terms of its information-theoretical entropy (also known as Shannon entropy, after its inventor Claude Shannon). Technically anything that is at all non-random (i.e. predictable) is a pattern, though usually we are interested in patterns of particularly low entropy.
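To make that slightly more concrete, the standard formula is H = −Σ p(x) log2 p(x), and here’s a rough sketch of applying it to character frequencies (the example strings are just my own illustration): a repetitive sequence scores low, while a more varied, less predictable one scores higher.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum(
        (count / total) * math.log2(count / total)
        for count in counts.values()
    )

# A highly regular pattern has low entropy per symbol...
print(shannon_entropy("ababababababab"))  # 1.0 bit per symbol
# ...while a string of 14 distinct symbols maxes it out at log2(14).
print(shannon_entropy("q7f!kzp3mwd0ax"))  # ≈ 3.81 bits per symbol
```

Note that this symbol-frequency version ignores ordering (“aabb” and “abab” score the same), so it’s only a crude proxy for how patterned something is, but it captures the basic idea that predictability means low entropy.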

Apologies, this has ended up rather incoherent. Hopefully the next post will be better. Reading the links may help, if you’re into that sort of thing.