Everything Looks Like a Nail

I just finished (about ten minutes ago) Jonathan Haidt’s book The Righteous Mind. Everything looks like a nail now, because I have a bunch of new mental “hammers” to play with. I cannot recommend this book enough. Go, read it, I’ll wait.

I think the concepts here will likely influence several future proper essay posts, but I want to just dump an unsorted list of points on which this book has fundamentally changed my mind, added a whole new tool to my mental toolkit, or just articulated something that I’d never really been able to explain before.

  1. Moral relativism. Although I don’t think I’ve ever formally articulated it on this blog before, I used to philosophically identify as a moral relativist. At a certain abstract level this is still true; Hume’s Guillotine remains as sharp as ever. That said, relativism as commonly elaborated includes a particular claim which this book has changed my mind on. Specifically, I now believe that there are universal moral values, shared of necessity not just by all human beings, but potentially by all living beings with sufficient intelligence that have survived a few rounds of evolution.
  2. Group selection in evolution. Group selection is fairly widely rejected by evolutionary biologists, and so the popular view now (and one I used to hold) is that it just doesn’t happen. Haidt cites a number of more recent studies to argue that it is still conceptually useful, though in a much more restricted sense than the original version that books like The Selfish Gene worked to demolish. (Interestingly, Dawkins et al. reject even this restricted version, but I don’t understand why; the rebuttal language gets very technical.)
  3. To any reader of some of my past writing, it should be clear that I am sometimes torn between a fairly liberal mindset and some conservative intuitions. Haidt neatly unpacks where those come from in terms of axiological values, and why. While I profess to value truth and beauty, the reality of my psyche is more complicated. (Interestingly in hindsight, I hit the nail on the head in an off-hand addendum to this Other Opinions link. I wish I’d recognized the power of that dichotomy sooner.)
  4. Speaking of unpacking moral intuitions, Haidt’s Moral Foundations Theory is a fantastically useful mental model for me to systematize a bunch more of human behaviour. I’m still exploring the implications, and making adjustments to my understanding.
  5. I have a very old, dear friend with whom I have had an on-again-off-again philosophical/political debate for several years now. While we’ve been respectful and have managed to resolve some of our differences, there has also always been a fairly substantial nugget of remaining disagreement. I believe I now have a far better and more charitable understanding of that friend’s positions and moral intuitions.
  6. I also believe I have a far better understanding of recent changes in political polarization. While I’ve always understood the basic nature of polarization (people are tribal, and reasoned debate gives way before team-membership-signalling), I’d never had a great explanation for why polarization has increased so much in the last couple of decades. The best I could do was make vague gestures at “the internet”. Haidt gives a much better explanation.

In summary: fantastic book, and I still have a bunch of it left to process.

Worrying

On Sunday evening, I sat down and wrote a thousand words on this blog baring my soul, confessing my deepest secrets and revealing at least two deeply personal things that I’d never told anyone before. As you may deduce from the fact that you haven’t read it: I never hit “publish”. In hindsight, at least some of it was a tad melodramatic, a sin of which I am more than occasionally guilty. But the essence was right.

Now, of course, I’m sitting here two days later writing a very confusing meta-post about something that none of you have read, or likely ever will. You’re welcome. Really, as the title would suggest, I want to talk about worry, since I think it was the thread underlying my unpublished post.

I worry a lot (this is a stunning revelation to anyone who knows me in real life, I’m sure).

There are of course a lot of posts on the internet already about dealing with worry. I don’t want to talk about that, even though I could probably stand to read a few more of them myself. Instead, I want to ramble for a while about the way that worries change our behaviour to create or prevent the things we worry about. This is the weird predictive causal loop of the human brain, so it should be fun.

First off, some evolutionary psychology, because that always goes well. From a strictly adaptive perspective, we would expect that worry would help us avoid the things we worry about, and indeed the mechanism here is pretty obvious. When we worry, it makes us turn something over in our head, looking for solutions, exploring alternatives. Perhaps we stumble upon an option we hadn’t considered, or we realize some underlying cause that lets us avoid the worry-inducing problem altogether. The people who worry like this have some advantage over the ones who don’t.

But of course, nothing is ever perfectly adaptive. The obvious cost is the immediate mental effort of worrying; worrying about tigers is less than helpful if in doing so you distractedly walk off a cliff. The slightly more subtle concern is that we don’t always worry about the right things. Every time we choose to worry about some future event we are inherently making a prediction: that the event is probable enough and harmful enough to be worth worrying over. But humans make crappy predictions all the time. It’s a safe bet that some of the things people worry about just aren’t worth the extra mental effort.

These mis-worries still affect our behaviour though. We turn scenarios over in our mind, however unlikely or harmless, and we come up with solutions. We make changes to our behaviour, to our worldview. We make choices that, if not for the imagined threat, would be plainly suboptimal. Sometimes, in doing so, we create more problems for us to worry about. These things are sometimes bad, but even they are not the worst of what worrying can do to us.

The most terrible worries are the meta-worries: worries about our own emotional state. If you start to worry that maybe you’re emotionally fragile, then you’ve suddenly just proved yourself right! The constant worry over your emotional fragility has made you fragile, and reinforced itself at the same time. These worries aren’t just maladaptive, they’re also positive feedback loops which can rapidly spiral out of control.

With all of these terrible things that can come from mis-worry, we can still hand-wave an assumption that, historically at least, worry has been more adaptive than not, else we wouldn’t have it. But certainly in the modern age, there is a plausible argument that worry is doing us far more harm than good. Instead of worrying about tigers, and cliffs, and what we’re going to eat tomorrow, we worry about sports teams, taxes, and nuclear war with North Korea. (If you’re me, you worry about all of the above, tigers included, and you also worry about that girl you think is cute and you meta-worry about all your worries and then you worry over how to stop meta-worrying and then your head explodes).

For about three years now I’ve been actively fighting my mis-worries (aka my anxieties) kind of one at a time, as I realized they were hurting me. This has involved regular visits to a therapist during some periods, and has been a generally successful endeavour. Despite this, I am not where I want to be, and in some respects my meta-anxieties have actually grown. So in the grand tradition of doing bad science to yourself in order to avoid ethics boards, I am going to do an experiment. The details are secret. Let’s see how it goes.

Constructing the Mind, Part 2

Whoops, it’s been over a month since I finished my last post (life got in the way) and so now I’m going to have to dig a bit to figure out where I wanted to go with that. Let’s see…

We ended up with the concept of a mechanical brain mapping complex inputs to physical reactions. The next obviously useful layer of complexity is for our brain to store some internal state, permitting the same inputs to produce different outputs based on the current situation. Of course this state information is going to be effectively analogue in a biological system, implemented via chemical balances. If this sounds familiar, it really should: it’s effectively a simple emotional system.
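As a concrete (if wildly simplified) sketch of that idea in code: the stimuli and the single “arousal” number below are invented for illustration, not a claim about real neurology. The point is only that adding one piece of internal state lets identical inputs produce different outputs.

```python
# Toy sketch: the same input maps to different outputs depending on
# internal state. A single float stands in for a chemical balance.
class StatefulBrain:
    def __init__(self):
        self.arousal = 0.0  # crude stand-in for a chemical balance

    def react(self, stimulus):
        if stimulus == "loud_noise":
            self.arousal += 0.5                      # input updates state...
        else:
            self.arousal = max(0.0, self.arousal - 0.1)  # ...which slowly decays
        if stimulus == "shadow_moves":
            # ...and the same input yields different outputs by state
            return "flee" if self.arousal > 0.3 else "ignore"
        return "none"

brain = StatefulBrain()
print(brain.react("shadow_moves"))  # calm: "ignore"
brain.react("loud_noise")
print(brain.react("shadow_moves"))  # aroused: "flee"
```

A stateless input-to-reaction table simply can’t express this: the “fear” here is nothing more than a number that biases which reaction fires.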

The next step is strictly Pavlovian. With one form of internal state memory already present, the growth of another, complementary layer is not far-fetched. Learning that one input precedes a second input with high probability, and creating a new reaction for the first input, is predictably mechanical, though still mostly beyond what modern AI has accomplished, even setting aside the problem of tokenized input. But here we must also tie back to that idea of tokenization (which I discussed in the previous post). As the complexity of tokenized input grows, so does the abstracting power of the mind, able to recognize the multitude of shapes, colours, sounds, etc. and turn them into the ideas of “animal” or “tree” or what have you. When this abstracting power is combined with simple memory and turned back on the tokens it is already producing, we end up with something that is otherwise very hard to construct: mimicry.
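The conditioning step, at least, is mechanical enough to sketch directly. Here is a toy version (the tokens, threshold, and learning rule are all my own invention, chosen for brevity): if input A reliably precedes input B, the system starts triggering B’s reaction on A alone.

```python
# Toy Pavlovian layer: count which token precedes which; once a pairing
# is reliable enough, the earlier token inherits the later one's reaction.
from collections import defaultdict

class Conditioner:
    def __init__(self, threshold=3):
        self.reactions = {"food": "salivate"}   # innate mapping
        self.precedes = defaultdict(int)        # (earlier, later) -> count
        self.threshold = threshold
        self.last = None

    def observe(self, token):
        if self.last is not None:
            self.precedes[(self.last, token)] += 1
            if (self.precedes[(self.last, token)] >= self.threshold
                    and token in self.reactions):
                # learned: the earlier token now triggers the same reaction
                self.reactions.setdefault(self.last, self.reactions[token])
        self.last = token
        return self.reactions.get(token)

c = Conditioner()
for _ in range(3):
    c.observe("bell")
    c.observe("food")
c.last = None
print(c.observe("bell"))  # after conditioning: "salivate"
```

Note that everything here operates on tokens; without the tokenizing layer from the previous post, there would be nothing for the counts to count.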

In order for an animal to mimic the behaviour of another, it must be able to tokenize its sense input in a relevant way, draw the abstract parallel between the animal it sees and itself, store that abstract process in at least a temporary way, and apply it to new situations. This is an immensely complex task, and yet it falls naturally out of the abilities I have so far laid out. (If you couldn’t tell, this is where I leave baseless speculation behind and engage in outrageous hand-waving).

And now I’m out of time, just as I’m getting back in the swing of things. Hopefully the next update comes sooner!

A Brief Detour: Constructing the Mind, Part 1

I now take a not-so-brief detour to set out a theory of brain/mind, from a roughly evolutionary point of view, that will lay the foundation for my interrupted discussion of self-hood and identity. I tied in several related problems when working this out, in no particular order:

  • The brain is massively complex; given an understanding of evolution, what is at least one potential path for this complexity to grow while still being adaptively useful at every step?
  • “Strong” Artificial Intelligence as a field has failed again and again with various approaches; why?
  • All the questions of philosophy of identity I mentioned in my previous post.
  • Given a roughly physicalist answer to the mind-body problem (which I guess I’ve implied a few times but never really spelled out), how do you explain the experiential nature of consciousness?

Observant readers may note that I briefly touched this subject once before. What follows here is a much longer, more complex exposition but follows the same basic ideas; I’ve tweaked a few things and filled in a lot more blanks, but the broad approach is roughly the same.


Let’s start with the so-called “lizard hindbrain”, capable only of immediate, instinctual reactions to sensory input. This includes stuff like automatically pulling away your hand when you touch something hot. AI research has long been able to trivially replicate this; it’s a pretty simple mapping of inputs to reactions. Not a whole lot to see here, a very basic and (importantly) mechanical process. Even the most ardent dualists would have trouble arguing that creatures with only this kind of brain have something special going on inside. This lizard hindbrain is a good candidate for our “initial evolutionary step”; all it takes is a simple cluster of nerve fibres and voilà.
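To drive home just how mechanical this layer is, the whole thing fits in a lookup table (the stimulus names are invented examples, of course):

```python
# Toy "lizard hindbrain": a fixed, stateless mapping from sensory
# input straight to reaction. Nothing is learned, nothing is stored.
REFLEXES = {
    "heat_on_hand": "pull_away",
    "bright_light": "close_eyes",
    "touch_on_eye": "blink",
}

def react(stimulus):
    return REFLEXES.get(stimulus, "no_reaction")

print(react("heat_on_hand"))  # pull_away
print(react("soft_music"))    # no_reaction
```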

The next step isn’t so much a discrete step as an increase, specifically in the complexity of inputs recognized. While it’s easy to understand and emulate a rule matching “pain”, it’s much harder to understand and emulate a rule matching “the sight of another animal”. In fact it is this (apparently) simple step where a lot of hard AI falls down, because the pattern matching required to turn raw sense data into “tokens” (objects etc.) is incredibly difficult, and without these tokens the rest of the process of consciousness doesn’t really have a foundation. Trying to build a decision-making model without tokenized sense input seems to me a bit like trying to build an airplane out of cheese: you just don’t have the right parts to work with.

So now we have a nerve cluster that recognizes non-trivial patterns in sense input and triggers physical reactions. While this is something that AI has trouble with, it’s still trivially a mechanical process, just a very complex one. The next step is perhaps less obviously mechanical, but this post is long enough, so you’ll just have to wait for it 🙂

Potential Breakthrough Links Game Theory and Evolution

(Forgive my departure from the expected schedule, this was good enough to jump the queue).

It’s always nice to be validated by science. Only a week or so after finally wrapping up my series of posts on game theory and evolution, a serious scientific paper has been published titled “Algorithms, games, and evolution”. For those of you not so keen on reading the original paper, Quanta Magazine has an excellent summary. The money quote is this one from the first paragraph of the article:

an algorithm discovered more than 50 years ago in game theory and now widely used in machine learning is mathematically identical to the equations used to describe the distribution of genes within a population of organisms

Now the paper is still being picked apart by various other scientists and more details could turn up (for all I know it could be retracted tomorrow) but I doubt it. Even if the wilder claims floating around the net are false, the fundamental truth stands that evolution drives behaviour, and evolution is a probabilistic, game-theory-driven process. While it’s easy to see that link on an intuitive level, it looks like we’ve finally started discovering the formal mathematical connections as well.
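To make the flavour of that identity concrete, here’s a toy sketch (with made-up payoffs, and only gesturing at the paper’s actual result): a multiplicative-weights update from learning theory and a discrete replicator update from population genetics trace out identical frequency trajectories when each variant’s fitness is defined as one plus a small multiple of its payoff.

```python
# Toy illustration: multiplicative weights vs. discrete replicator
# dynamics. With fitness f_i = 1 + eps * payoff_i, the two updates are
# algebraically the same normalization, so the trajectories coincide.
def normalize(w):
    s = sum(w)
    return [x / s for x in w]

def mwu_step(weights, payoffs, eps=0.1):
    # multiplicative-weights update (linear variant)
    return normalize([w * (1 + eps * g) for w, g in zip(weights, payoffs)])

def replicator_step(freqs, fitness):
    # discrete replicator dynamics: x_i' = x_i * f_i / mean fitness
    mean = sum(x * f for x, f in zip(freqs, fitness))
    return [x * f / mean for x, f in zip(freqs, fitness)]

payoffs = [1.0, 0.5, 0.0]              # arbitrary example payoffs
fitness = [1 + 0.1 * g for g in payoffs]

x_mwu = x_rep = [1 / 3] * 3
for _ in range(50):
    x_mwu = mwu_step(x_mwu, payoffs)
    x_rep = replicator_step(x_rep, fitness)

print(all(abs(a - b) < 1e-9 for a, b in zip(x_mwu, x_rep)))  # True
```

The real paper is about coordination games and sexual recombination, so this is at best the kindergarten version of the correspondence, but it shows why the two formalisms can be “mathematically identical” at all.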

Memes: Speedy Variation and Double Jeopardy

Random variation and natural selection are simple ideas in reference to genes, but memes don’t quite follow the same rules. Variation occurs, and new memes are born, but calling it random seems disingenuous. Selection also occurs, but calling it natural doesn’t quite fit. Memes can be consciously controlled, which makes them interesting things; unlike genes, they are capable of spreading and mutating amazingly quickly. The internet has made that spread and mutation even faster, to the point where an idea can make it all the way around the world faster than a human being.

Selective pressure, while different, is also much harsher on memes. Not only are boring ideas forgotten, but we can explicitly choose not to pass on ideas that we consider dangerous or wrong. This gives one meaning of the “double jeopardy” in the title. The other, fascinating meaning I was referring to is the interaction of selective pressure from genetics and memetics. The popularity of a particular meme can make a particular gene more or less useful, and vice versa.

This means that for a complete understanding, we cannot study genes and memes separately. Every genetic behavioural trait influences the memes we create and are willing to accept, and every meme we use affects the survival probabilities of our genes. They are tightly interwoven, and the selective pressures between them are therefore in a state of constant feedback.

Diversity, Competition, and Stable Strategies

Having covered a couple of conceptual building-blocks, we can start putting them together and seeing what effects they have.

Through the combination of random variation and inheritance, we know that sometimes children will have new or different genes from those of their parents, but that most of the time they will have very similar genes. Since genes are connected to actual properties of living things, this means that sometimes children will be born with new, different or unusual properties not shared by their parents. Over grand time scales, this leads to diversity, even if the starting population is relatively homogeneous. Some people will end up with blue eyes, some with brown; some people will end up with red hair, some with black hair.

Now note that in general, living beings are in competition with each other for resources (human beings count here too, though the competition is much more subtle in modern society; I will deal with this point more in later posts). Survival of the fittest comes into play here, and we know that genetics has an impact on physical properties. Together, this means (for example) that a giraffe with a gene for extra tallness may be able to eat off taller trees that the other giraffes can’t reach, thus surviving and passing on that gene.

Putting those two points together, this leads to an interesting situation. Random variation provides natural diversity, and survival of the fittest trims that diversity so that only the best genetic variants survive. The result tends statistically toward what are called “stable strategies”. After some period of time, a combination of genes naturally occurs which produces properties that make the animals particularly well-suited to their environment. They don’t just survive, they begin to thrive. Their offspring may have random variations on this set of genes, but effectively all major variations end up being worse than the original. As such, the same set of near-optimal genes gets passed down stably, generation after generation.
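The giraffe story above can be run as a toy simulation. Everything here is made up for illustration (a single “tallness” trait, an arbitrary optimum, arbitrary mutation size), but the dynamics are the point: variation explores, selection trims, and the population settles near a stable value and stays there.

```python
# Toy sketch of variation + selection settling into a stable strategy.
import random

random.seed(0)
OPTIMUM = 5.0  # made-up environmental optimum for the "tallness" trait

def fitness(tallness):
    # survival is likelier the closer the trait sits to the optimum
    return 1.0 / (1.0 + (tallness - OPTIMUM) ** 2)

pop = [random.uniform(0, 10) for _ in range(200)]
for generation in range(100):
    # selection: fitter individuals are likelier to reproduce
    parents = random.choices(pop, weights=[fitness(t) for t in pop], k=200)
    # inheritance with small random variation
    pop = [t + random.gauss(0, 0.1) for t in parents]

mean = sum(pop) / len(pop)
print(round(mean, 2))  # close to the optimum of 5.0
```

Keep running it and the mean just jitters around the optimum: mutation keeps proposing major variations, and selection keeps discarding them, which is exactly the “stable strategy” picture.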