Constructing the Mind, Part 2

Whoops, it’s been over a month since I finished my last post (life got in the way) and so now I’m going to have to dig a bit to figure out where I wanted to go with that. Let’s see…

We ended up with the concept of a mechanical brain mapping complex inputs to physical reactions. The next obviously useful layer of complexity is for our brain to store some internal state, permitting the same inputs to produce different outputs based on the current situation. Of course this state information is going to be effectively analogue in a biological system, implemented via chemical balances. If this sounds familiar, it really should: it’s effectively a simple emotional system.
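
To make this concrete, here is a minimal sketch in Python (a toy illustration with invented stimuli and thresholds, not a claim about real neurochemistry) of how a single analogue state variable lets the same input produce different reactions:

```python
# Toy sketch: the same stimulus maps to different reactions depending on
# a crude internal "emotional" state variable (a stand-in for chemical balance).

class StatefulBrain:
    def __init__(self):
        self.stress = 0.0  # analogue internal state

    def react(self, stimulus):
        if stimulus == "loud_noise":
            self.stress += 0.5  # the input also shifts the internal state
            return "freeze" if self.stress < 1.0 else "flee"
        if stimulus == "food":
            self.stress = max(0.0, self.stress - 0.3)
            return "eat"
        return "do_nothing"

brain = StatefulBrain()
print(brain.react("loud_noise"))  # "freeze" while still calm
print(brain.react("loud_noise"))  # "flee" once stress has built up
```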

The next step is strictly Pavlovian. With one form of internal state memory already in place, the growth of another, complementary layer is not far-fetched. Learning that one input precedes a second input with high probability, and creating a new reaction to the first, is predictably mechanical, though still mostly beyond what modern AI has accomplished, even setting aside the problem of tokenizing input. But here we must also tie back to that idea (which I discussed in the previous post). As the complexity of tokenized input grows, so does the abstracting power of the mind able to recognize the multitude of shapes, colours, sounds, etc. and turn them into the ideas of “animal” or “tree” or what have you. When this abstracting power is combined with simple memory and turned back on the tokens it is already producing, we end up with something that is otherwise very hard to construct: mimicry.
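
A rough sketch of that Pavlovian step, again with invented names and an arbitrary threshold: if one stimulus reliably precedes another, the learner starts producing the second stimulus’s reaction on the first alone.

```python
# Toy sketch of classical conditioning: once stimulus A has preceded
# stimulus B often enough, B's reaction is also triggered by A alone.
from collections import defaultdict

class Conditioner:
    def __init__(self, threshold=3):
        self.reactions = {"food": "salivate", "pain": "withdraw"}  # innate reactions
        self.follows = defaultdict(int)  # counts of (previous, current) stimulus pairs
        self.previous = None
        self.threshold = threshold

    def observe(self, stimulus):
        if self.previous is not None:
            self.follows[(self.previous, stimulus)] += 1
            # if the earlier stimulus has predicted this one often enough,
            # copy this stimulus's reaction onto the earlier one
            if (stimulus in self.reactions
                    and self.follows[(self.previous, stimulus)] >= self.threshold):
                self.reactions.setdefault(self.previous, self.reactions[stimulus])
        self.previous = stimulus
        return self.reactions.get(stimulus, "no_reaction")

dog = Conditioner()
for _ in range(4):
    dog.observe("bell")
    dog.observe("food")
print(dog.observe("bell"))  # "salivate", once the association has been learned
```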

In order for an animal to mimic the behaviour of another, it must be able to tokenize its sense input in a relevant way, draw the abstract parallel between the animal it sees and itself, store that abstract process in at least a temporary way, and apply it to new situations. This is an immensely complex task, and yet it falls naturally out of the abilities I have so far laid out. (If you couldn’t tell, this is where I leave baseless speculation behind and engage in outrageous hand-waving.)

And now I’m out of time, just as I’m getting back in the swing of things. Hopefully the next update comes sooner!

A Brief Detour: Constructing the Mind, Part 1

I now take a not-so-brief detour to lay out a theory of brain/mind, from a roughly evolutionary point of view, that will lay the foundation for my interrupted discussion of self-hood and identity. I tied in several related problems when working this out, in no particular order:

  • The brain is massively complex; given an understanding of evolution, what is at least one potential path for this complexity to grow while still being adaptively useful at every step?
  • “Strong” Artificial Intelligence as a field has failed again and again with various approaches; why?
  • All the questions of philosophy of identity I mentioned in my previous post.
  • Given a roughly physicalist answer to the mind-body problem (which I guess I’ve implied a few times but never really spelled out), how do you explain the experiential nature of consciousness?

Observant readers may note that I briefly touched on this subject once before. What follows here is a much longer, more complex exposition of the same basic ideas; I’ve tweaked a few things and filled in a lot more blanks, but the broad approach is roughly the same.


Let’s start with the so-called “lizard hindbrain”, capable only of immediate, instinctual reactions to sensory input. This includes stuff like automatically pulling your hand away when you touch something hot. AI research has long been able to trivially replicate this; it’s a pretty simple mapping of inputs to reactions. Not a whole lot to see here, a very basic and (importantly) mechanical process. Even the most ardent dualists would have trouble arguing that creatures with only this kind of brain have something special going on inside. This lizard hindbrain is a good candidate for our “initial evolutionary step”; all it takes is a simple cluster of nerve fibres and voila.
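
To underline just how mechanical this layer is: the whole lizard hindbrain amounts to little more than a lookup table (a toy sketch, obviously not a model of real neurons).

```python
# The "lizard hindbrain" as a bare stimulus -> reaction lookup.
REFLEXES = {
    "heat_on_skin": "pull_hand_away",
    "bright_flash": "blink",
    "sudden_drop": "tense_muscles",
}

def hindbrain(stimulus):
    # no memory, no internal state: the same input always yields the same reaction
    return REFLEXES.get(stimulus, "no_reaction")

print(hindbrain("heat_on_skin"))  # "pull_hand_away"
```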

The next step isn’t so much a discrete step as an increase, specifically in the complexity of inputs recognized. While it’s easy to understand and emulate a rule matching “pain”, it’s much harder to understand and emulate a rule matching “the sight of another animal”. In fact it is at this (apparently) simple step that a lot of hard AI falls down, because the pattern matching required to turn raw sense data into “tokens” (objects etc.) is incredibly difficult, and without these tokens the rest of the process of consciousness doesn’t really have a foundation. Trying to build a decision-making model without tokenized sense input seems to me a bit like trying to build an airplane out of cheese: you just don’t have the right parts to work with.
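
To illustrate the gap, here is a sketch in which the decision rule is trivial precisely because all of the difficulty has been pushed into a tokenize() placeholder; the names are stand-ins, and tokenize is deliberately left unimplemented because that is exactly the part nobody has solved well.

```python
# The rule over tokens is easy; turning raw sense data into tokens is not.
def tokenize(raw_sense_data):
    # Stand-in for the genuinely hard step: turning a pile of pixels, sounds,
    # smells, etc. into objects like "snake" or "tree".
    raise NotImplementedError("this is where a lot of hard AI falls down")

def react(raw_sense_data):
    tokens = tokenize(raw_sense_data)
    if "snake" in tokens:
        return "recoil"
    return "carry_on"
```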

So now we have a nerve cluster that recognizes non-trivial patterns in sense input and triggers physical reactions. While this is something that AI has trouble with, it’s still trivially a mechanical process, just a very complex one. The next step is perhaps less obviously mechanical, but this post is long enough, so you’ll just have to wait for it 🙂

The Ghost in the Machine: The Nature of the Mind

Having just covered in summary the nature of the brain, we now turn to the much knottier issue of what constitutes the mind. Specifically, I want to look at the nature of self-awareness and true intelligence. Advances in modern computing have left most people with little doubt that we can simulate behavioural intelligence to within certain limits. But there still seems to be that missing spark that separates even the best computer from an actual human being.

That spark, I believe, boils down to recursive predictive self-modelling. The brain, as seen on Monday, can be viewed as a modelling subsystem of reality. But why should it be limited to modelling other parts of reality? Since from an information-theoretic perspective it must already be dealing in abstractions in order to model as much of reality as it can, there is nothing at all to prevent it from building an abstraction of itself and modelling that as well. Recursively, ad nauseam, until the resolution (in number of bits) of the abstraction no longer permits.

This self-modelling provides, in a very literal way, a sense of self. It also lets us make sense of certain idioms of speech, such as “I surprised myself”. On most theories of the mind, that notion of surprising oneself can only be a figure of speech, but self-modelling can actually make sense of it: your brain’s model of itself made a false prediction; the abstraction broke down.
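
Here is a toy sketch of that idea (an invented example, not a model of actual cognition): the brain runs a coarser model of its own decision procedure, and “surprising yourself” is just that model mispredicting.

```python
# Toy sketch: a brain that also keeps a lossy model of itself.
# "Surprising yourself" = the self-model's prediction turns out to be wrong.

def actual_decision(offer):
    # the full, messy decision procedure
    return "accept" if offer["pay"] > 50000 or offer["dream_job"] else "decline"

def self_model(offer):
    # the brain's coarser abstraction of its own procedure
    # (it has "forgotten" that dream jobs matter to it)
    return "accept" if offer["pay"] > 50000 else "decline"

offer = {"pay": 30000, "dream_job": True}
predicted = self_model(offer)
actual = actual_decision(offer)
if predicted != actual:
    print("I surprised myself!")  # the abstraction broke down
```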

The Nature of the Brain

Our little subsection on biology and genetics has covered the core points I wanted to mention, so now we take a sharp left turn and head back to an application of systems theory. Specifically, the next couple of posts will deal with philosophy’s classic mind-body problem. If you haven’t already, I suggest you skim through my systems-theory posts, in particular “Reality as a System”. They really set the stage for what’s coming here.

As suggested in my last systems-theory post, if we view reality as a system then we can draw some interesting information-theoretic conclusions about our brains. Specifically, our brains must be seen as open (i.e. not closed), recursively modelling subsystems of reality.

Simply by being part of reality, the brain must be a subsystem therein. Because it interacts with other parts of reality, it is open, not closed. The claim that it provides a recursive model of (part of) reality is perhaps less obvious, but should still be intuitive on reflection. When we imagine what it would be like to make some decision, what else is our brain doing but simulating that part of reality? Obviously it is not simulating the actual underlying reality (atoms or molecules or whatever), but it is simulating some relevant abstraction of it.
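
As a sketch of what simulating “some relevant abstraction” might look like (an invented example with made-up numbers): deciding whether to cross a stream by running a model with a couple of coarse variables, rather than any underlying physics.

```python
# Imagining a decision by simulating an abstraction of reality,
# not reality itself: no molecules, just depth and current.

def simulate_crossing(depth_m, current_m_per_s):
    # abstract "physics": deeper, faster water means a worse imagined outcome
    risk = depth_m * 0.5 + current_m_per_s * 0.3
    return "fall_in" if risk > 1.0 else "reach_other_side"

# run the abstraction forward before acting in the real world
imagined = simulate_crossing(depth_m=0.4, current_m_per_s=2.0)
decision = "cross" if imagined == "reach_other_side" else "find_a_bridge"
print(decision)  # "cross"
```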

In fact, I will argue later that this is effectively all our brains do: they recursively model an abstraction of reality. But this is obviously a more contentious claim, so I will leave it for another day.