A Cautionary Note on Unlocking the Emotional Brain

[Follows from Mental Mountaineering]

In children’s stories, the good guys always win, the hero vanquishes the villain, and everyone lives happily ever after. Real life tends to be somewhat messier than this.

The world of therapy presented by Unlocking the Emotional Brain reads somewhat like a children’s story. Loosely, it presents a model of the brain where your problems are mostly caused by incorrect emotional beliefs (bad guys). The solution to your problems is to develop or discover a correct emotional belief (good guy) that contradicts your incorrect beliefs, then force your brain to recognize the contradiction at an emotional level. This causes your brain to automatically resolve the conflict and destroy the incorrect belief, so you can live happily ever after.

Real life tends to be somewhat messier than this.

After about a month of miscellaneous experimentation on myself based on this book, my experiences match the basic model presented, where many psychological problems are caused by incorrect emotional beliefs (I don’t think this part is particularly controversial in psychological circles). It also seems to be true that if I force my brain to recognize a contradiction between two emotionally relevant beliefs, it will resolve the conflict and destroy one of them. Of course, as in real life where the good guy doesn’t always win, it seems that when I do this my brain doesn’t always destroy the right belief.

I have had several experiences now where I have identified an emotional belief which analytically I believe to be false or harmful. Per UtEB I have identified or created a different experience or belief that contradicts it, and smashed them together in my mind. A reasonable percentage of the time, the false belief emerges stronger than before, and I find myself twisting the previous “good” belief into some horrific experience to conform with the existing false belief.

In hindsight this shouldn’t be particularly surprising. Whatever part of your brain is used to resolve conflicting emotional beliefs and experiences, it doesn’t have special access to reality. All it has to work with are the two conflicting pieces and any other related beliefs you might have. It’s going to pick the wrong one with some regularity. As such, my recommendation for people trying this process themselves (either as individuals or as therapists) is to try and ensure that the “good” belief is noticeably stronger and more immediate than the false one before you focus on the contradiction. If this doesn’t work and you end up in a bad way, I’ve had a bit of luck “quarantining” the newly corrupted belief to prevent it from spreading to even further beliefs, at least until I can come up with an even stronger correct belief to fight it with.

Milk as a Metaphor for Existential Risk

[I don’t believe this nearly as strongly as I argue for it, but I started to pull on the thread and wanted to see how far I could take it]

The majority of milk sold in North America is advertised as both “homogenized” and “filtered”. This is actually a metaphor created by the dairy industry to spread awareness of existential risk.

There has been a lot of chatter over the last few years on the topic of political polarization, and how the western political system is becoming more fragile as opinions drift farther apart and people become more content to simply demonize their enemies. A lot of causes have been thrown around to explain the situation, including millennials, boomers, free trade, protectionism, liberals, conservatives, economic inequality, and the internet… There’s a smorgasbord to choose from. I’ve come to believe that the primary root cause is, in fact, the internet, but the corollary to this is far more frightening than simple cultural collapse. Like milk, humanity’s current trend toward homogenization will eventually result in our filtration.

The Law of Cultural Proximity

Currently, different human cultures have different behavioural norms around all sorts of things. These norms cover all kinds of personal and interpersonal conduct, and extend into different legal systems in countries around the globe. In politics, this is often talked about in the form of the Overton window, which is the set of political positions that are sufficiently “mainstream” in a given culture to be considered electable. Unsurprisingly, different cultures have different Overton windows. For example, Norway and the United States have Overton windows that tend to overlap on some policies (the punishment of theft) but not on others (social welfare).

Shared norms and a stable, well-defined Overton window are important for the stable functioning of society, since they provide the implicit contract and social fabric on which everything else operates. But what exactly is the scope of a “society” for which that is true? We just talked about the differences between Norway and the U.S., but in a fairly real sense, Norway and the U.S. share “western culture” when placed in comparison with Iran, China, or North Korea. In the other direction, there are distinct cultures with different norms around things like gun control, entirely within the U.S. Like all categorizations, the lines are blurry at times.

The key factor in drawing cultural lines is interactional proximity. This is easiest to see in a historical setting because it becomes effectively identical to geographic proximity. Two neolithic tribes on opposite ends of a continent are clearly and unambiguously distinct, whereas two tribes that inhabit opposite banks of a local river are much more closely linked in every aspect: geographically, economically, and of course culturally. Because the two local tribes interact so much on a regular basis, it is functionally necessary that they share the same cultural norms in broad strokes. There is still room for minor differences, but if one tribe believes in ritual murder and the other does not, that’s a short path to disagreement and conflict.

Of course, neolithic tribes sometimes migrated, and so you could very well end up with an actual case of two tribes coming into close contact while holding very different cultural norms. This would invariably result in conflict until one of the tribes either migrated far enough away that contact became infrequent, became absorbed into the culture of the other tribe, or was wiped out entirely. You can invent additional scenarios with different tribes and different cultures in different geographies and economic situations, but the general rule that pops out of this is as follows: in the long run, the similarity between two cultures is proportional to the frequency with which they interact.
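The law as stated – long-run similarity proportional to interaction frequency – is easy to see in a toy simulation. This sketch is entirely my own illustration (the model, parameters, and function names are all invented for it): two cultures drift randomly along a one-dimensional “norms” axis, and each interaction pulls them slightly toward each other.

```python
import random

def simulate(interaction_prob, steps=10_000, pull=0.05, drift=0.02, seed=0):
    """Toy model: two cultures as points on a 1-D 'norms' axis.

    Each step both cultures drift randomly; with probability
    `interaction_prob` they interact, each moving a fraction
    `pull` toward the other. Returns the average distance
    between them over the second half of the run.
    """
    rng = random.Random(seed)
    a, b = 0.0, 0.0
    distances = []
    for t in range(steps):
        a += rng.gauss(0, drift)
        b += rng.gauss(0, drift)
        if rng.random() < interaction_prob:
            # Tuple assignment: both updates use the pre-interaction values.
            a, b = a + pull * (b - a), b + pull * (a - b)
        if t >= steps // 2:
            distances.append(abs(a - b))
    return sum(distances) / len(distances)

# Tribes on opposite ends of a continent interact rarely...
far = simulate(interaction_prob=0.001)
# ...while tribes on opposite banks of a river interact constantly.
near = simulate(interaction_prob=0.5)
```

Run it and `near` comes out far smaller than `far`; crank the interaction frequency toward 1 (the internet case) and the long-run cultural distance gets squeezed toward zero.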

The Great Connecting

Hopefully the law of cultural proximity is fairly self-evident in the simplified world of neolithic tribes. But now consider how it applies to the rise of trade and technology over the last several millennia. The neolithic world was simple because interactions between cultures were heavily mediated by simple geographic proximity, but the advent of long-distance trade started to wear away at that principle. Traders would travel to distant lands, and wouldn’t just carry goods back and forth; they would carry snippets of culture too. Suddenly cultures separated by great distances could interact more directly, even if only infrequently. Innovations in transportation (roads, ship design, etc) made travel easier and further increased the level of interaction.

This gradual connecting of the world led to a substantial number of conflicts between distant cultures that wouldn’t even have known about each other in a previous age. The victors of these conflicts formed empires, developed new technologies, and expanded their reach even farther afield.

Now fast-forward to modern day and take note of the technical innovations of the last two centuries: the telegraph, the airplane, the radio, the television, the internet. While the prior millennia had seen a gradual connecting of the world’s cultures, the last two hundred years have seen a massive step change: the great connecting. On my computer today, I could easily interact with people from thirty different countries around the globe. Past technologies metaphorically shrank the physical distance between cultures; the internet eliminates that distance entirely.

But now remember the law of cultural proximity: the similarity between two cultures is proportional to the frequency with which they interact. This law still holds, over the long run. However, the internet is new, and the long run is long. We are currently living in a world where wildly different cultures are interacting on an incredibly regular basis via the internet. Unsurprisingly, this has led to a lot of cultural conflict. One might even call it cultural war.

Existential Risk

In modern times, the “culture war” has come to refer to the conflict between the left/liberal/urban and right/conservative/rural in North American politics. But this is just the most locally obvious example of different cultures with different norms being forced into regular interaction through the combination of technology and the economic realities that technology creates. The current tensions between the U.S. and China around trade and intellectual property are another aspect of the same beast. So are the tensions within Europe around immigration, and within Britain around Brexit. So was the Arab Spring. The world is being squished together into a cultural dimension that really only has room for one set of norms. All wars are culture wars.

So far, this doesn’t seem obviously bad. It’s weird, maybe, to think of a world with a single unified culture (unless you’re used to sci-fi stories where the unit of “culture” is in fact the planet or even the solar system – the law of cultural proximity strikes again!) but it doesn’t seem actively harmful as long as we can reach that unified state without undue armed conflict. But if we reframe the problem in biological and evolutionary terms then it becomes much more alarming. Species with no genetic diversity can’t adapt to changing conditions, and tend to go extinct. Species with no cultural diversity…

Granted, the simplest story of “conditions change, our one global culture is not a fit, game over humanity” does seem overly pessimistic. Unlike genetics, culture can change incredibly rapidly, and the internet does have an advantage in that it can propagate new memes quite quickly. However, there are other issues. A single global culture only works as long as that culture is suitable for all the geographic and economic realities in which people are living. If the internet forces us into a unified global culture, but the resulting culture is only adaptive for people living far from the equator… at best that creates a permanent underclass. At worst it results in humanity abandoning large swaths of the planet, which again looks a lot like putting all our eggs in one basket.

Now that I’ve gotten this far, I do conclude that the existential risk angle was maybe a bit overblown, but I am still suspicious that our eventual cultural homogeneity is going to cause us a lot more problems than we suspect. I don’t know how to stop it, but if there were a way to maintain cultural diversity within a realm of instant worldwide communication, that seems like a goal worth pursuing.


Bonus: I struggled to come up with a way to work yogurt (it’s just milk with extra “culture”!) into the metaphor joke, but couldn’t. Five internet points to anybody who figures out how to make that one work.

Mental Mountaineering

Back in November, Scott Alexander wrote a post called Mental Mountains, referring to the book Unlocking the Emotional Brain and this discussion of it over at Less Wrong. I’m halfway through the book itself, and I’ve read both discussions of it including some of the follow-up conversations that happened in the comments. It’s a fascinating model and definitely worth reading if you’re into that kind of thing. I’ve been reading a lot of therapy/psychology books recently and this one does seem to tie a lot of things together very nicely.

One partial comment that stood out to me from the Less Wrong discussion was the following by PJ Eby:

…I didn’t realize yet that hard part 1 (needing to identify the things to change) and hard part 2 (needing to get past meta issues), meant that it is impossible to mass-produce change techniques.

That is, you can’t write a single document, record a single video, etc. that will convey to all its consumers what they need in order to actually implement effective change.

I don’t mean that you can’t successfully communicate the ideas or the steps. I just mean that implementing those steps is not a simple matter of following procedure, because of the aforementioned Hard Parts. It’s like expecting someone to learn to bike, drive, or debug programs from a manual.

Let it never be said that I didn’t like a challenge.


I’ve been working on my own brain fairly intentionally for several years now. This process has included traditional therapy with a licensed psychologist, a bunch of reading, and of course just a lot of my own time spent thinking and introspecting and running various thought experiments to see how different hypothetical worlds would make me feel. In this time I have made substantial progress on some problems, and very little progress on others. I’m always looking for more tools to add to my toolbox, and when I first read Scott’s article I added Unlocking the Emotional Brain to my short-list of books to get out of the library.

I’ve read the first three chapters of the book now, and I’ve already paused my reading several times to try and put various pieces of it into practice inside my mind. It’s far too early to draw any reliable conclusions from that, but preliminary results appear promising. I should, however, note that I’m likely to be an outlier in this respect. I’m an introspective and generally self-aware person to begin with, this is an area of general interest for me anyway, and of course I’ve already spent a substantial amount of time articulating and discussing my problems with the help of a real psychologist (though not one who is aware of UtEB). In other words, I have a fairly substantial set of advantages over the average person who might read UtEB and try and self-inflict its particular form of therapy.

At this point it’s too early to know if the internal process I’m going to follow is even going to generate substantive long-term results. If it does, however, then I may very well take a crack at generalizing that into a series of posts for do-it-yourself therapy. PJ’s reservations are well-founded but I firmly believe I can explain just about anything to a general audience, and this sure seems like it would be valuable enough to try.

Link #81 – The Story of Us

https://waitbutwhy.com/2019/08/story-of-us.html

Warning: very, very, very long. As of writing it’s not even done yet (10 of a putative 12 posts have been published). That said, it’s a fascinating read so far and highly recommended. If you’re a long-time reader of a certain part of the internet (this blog, Slate Star Codex, 538, etc) then it retreads a lot of familiar ground at first. However chapter 10 (and from the sounds of it the as-yet-unpublished chapter 11) contains more interesting and new thoughts. I’m not sure if it’s possible to just start there, since it builds on a lot of metaphors introduced in earlier chapters, but it would be interesting to try.

One point really stood out to me since I’ve been assuming the opposite. Previously I would have drawn on Haidt and argued that the competing factions of the current American culture war have fundamentally different values, but the linked articles make an interesting claim that they actually share a pretty mixed bag of values – the real conflict arises because they hold fundamentally different empirical beliefs about reality thanks to increasing media polarization, The Big Sort, etc.

Disclaimer: I don’t necessarily agree with or endorse everything that I link to. I link to things that are interesting and/or thought-provoking. Caveat lector.

Two More Weird Moral Rules

In my previous post I unpacked a number of moral rules I’d developed as a child trying to be clever and hack adult morality. What I didn’t quite realize when I published it was that the list was incomplete – now that I’m actively paying attention to my moral intuitions I keep running across additional things which belong on the list. Here are more things that are still part of my psyche in some way.

Weigh the long-term more than the short-term. I’d originally just edited this into the previous post after the fact, but now that I’ve found more rules it deserves a proper write-up too. This one is really interesting because in practice I’m sure I still hyperbolically discount my choices a lot of the time. However it has led to some weirder personal choices which I’m still not sure are entirely wrong. For example, I don’t drink coffee for largely the same reason I don’t do heroin: the long-term costs of an addiction seem to outweigh the temporary benefits. Clearly most people don’t think this way (or at least don’t bother to think this way), and the cost-benefit analysis for coffee is not as clearly one-sided as it is for heroin, but… it still makes sense in my head. It’s also worth noting that I do drink coffee occasionally, as a tool to stay awake when e.g. driving long distances late at night. But this is reasonable because caffeine is much less addictive than heroin, so it can be more safely used as a tool in certain situations without developing a habit.
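For reference, “hyperbolic discounting” has a standard formula in the behavioural economics literature: a reward of size A delayed by D days is felt as worth V = A / (1 + kD). A quick sketch (the k value and the dollar amounts are illustrative only) shows the preference reversals that make discounting feel irrational in hindsight:

```python
def hyperbolic_value(amount, delay, k=0.1):
    """Felt present value of a reward under hyperbolic discounting:
    V = A / (1 + k * D). Larger k means more impatience."""
    return amount / (1 + k * delay)

# Preference reversal: an immediate $10 beats $15 in 10 days...
assert hyperbolic_value(10, 0) > hyperbolic_value(15, 10)
# ...but push both options 30 days further out and the $15 wins.
assert hyperbolic_value(10, 30) < hyperbolic_value(15, 40)
```

A rule like “weigh the long-term more than the short-term” is, in effect, an attempt to override that curve with something flatter.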

Another weird one this short-term-long-term rule has affected is how I listen to music. I’ve noticed that I tend to listen to my music at a much lower volume than other people, I never use earbuds (in-ear headphones) if I can avoid it, and if I’m in an environment that is noisy such as an airplane, I tend to prefer turning my music off rather than turning it up to compensate. My brain tells me I do this because I strongly value my future hearing much more than whatever marginal enjoyment I’d get from slightly louder music. I imagine this is mediated in part because, as a fairly musical person, half the music I “listen to” is entirely in my head anyway.

Never seek status or be seen to be seeking status. My brain argues that it’s a waste of resources since it actually lowers your status among the people who do the real work. I need to get my hair cut right now (it is getting sufficiently shaggy to start being a problem) and I was avoiding it because it felt wrong. Digging into this made me realize that the barber I’ve been going to was “too fancy”, and that I was actively making myself feel guilty for spending money on “status” services that weren’t “practical” enough. There’s a clear kernel of truth behind this one; “shallow”, “vain”, etc. are all pejorative for a reason. And I’m sure a lot of it can be traced back to this Paul Graham essay which I have probably referenced way too much in the history of this blog now. But still, I’m clearly taking this rule too far. A haircut is a haircut.

Beyond those two additions, I want to leave one more thought on a group that showed up in my previous post: don’t cheat, don’t lie, don’t double down, don’t learn things the hard way. These four rules are all underpinned by a pretty fundamental intuition which is: you are not as smart as the system. Other people know what’s what, and if you try and cheat them (or even just ignore their advice) it will go badly for you. What’s weird about this one is how false it seems to be in practice now. It was certainly true when I developed it (I was a kid, my parents are both very smart, and my mother at least is also very perceptive (hey dad!)) but now I’m fairly certain that I could lie and cheat circles around most people without getting caught. I don’t. And anyway the people I actively spend time with tend to be just as clever as me and unlikely to be fooled. But it’s weird to think of an alternative evil version of myself that has a very different social circle and is a creepy manipulative bastard, and gets away with it. I don’t want that life, but it seems achievable, which is scary enough.

A Meta-Morality Tale

As a child, you hear a lot of fables and morality tales. Most stories aimed at children have a moral of some sort, and even stories that aren’t explicitly aimed at kids typically have some sort of morality baked in. It’s hard to avoid when writing.

As a child, I noticed this and thought I was being very clever by trying to pattern-match my way from the collection of these morality tales to “general rules for life”. I didn’t frame it in quite this way at the time, but it seemed obvious that adults were trying to teach kids certain things about the world using repetition and variation on a theme, and I didn’t understand why they couldn’t just formulate the rules into English and tell me them already. But I liked puzzles and so if they wouldn’t tell me I’d just figure it out myself. As I formulated my rules, I promised myself that I would follow them unconditionally. After all, I was being clever and unlocking the secrets to life “early” somehow, so if I just always did the right thing that should clearly be an advantage. Spoiler: it wasn’t.

Considerably rephrased for clarity, this is what I remember coming up with:

  • Always put the tribe first (I was later delighted when I found out that Star Trek did in fact state this explicitly as “the needs of the many outweigh the needs of the few”).
  • Always default to trust. Many more problems are caused by good people not trusting each other than are caused by bad actors.
  • Never try to cheat any system, you will be found out and punished.
  • Never lie, you will be found out and punished.
  • Never double down on a sin. Fess up and accept the smaller punishment instead of having to deal with the bigger punishment that inevitably comes when your house of cards collapses.
  • Never learn things the hard way (In other words always trust other peoples’ tales of their own experiences and lessons learned. If they say it was a bad idea, it really was a bad idea).
  • Weigh the long-term more than the short-term. [edited to add, then just moved to a whole new post]

Seeing them written out like this I’m still kinda impressed with young me. Some of these are actually pretty solid and most of them I still follow to some degree (I was and still am more of a deontologist than a utilitarian). But I’ve run into enough problems with them that of course I was not nearly as clever as I thought I was. In particular the issues I’ve run into most are:

  • “Put the tribe first” has led me down a fairly guilt-ridden self-sacrificing route a few too many times. If I had to pick a better alternative I’d hazard a guess that “Always cooperate” would address the same kinds of morality tales and prisoner’s dilemmas without casting as wide a net.
  • “Never lie” hasn’t caused me so many direct problems, but mostly because I did figure out pretty early that in fact there are higher ethical concerns. I’d still wager that I lie a lot less than the average person, but I am capable.
  • “Never learn things the hard way” has been a big problem in practice, though fairly subtly. The problems are that a) Not everyone has the same set of values, so what may be a bad idea for you might be a good idea for me, and b) Second-hand knowledge may substitute well for first-hand knowledge in abstract decision making, but it really doesn’t substitute at all in terms of life skills or self-actualization.

In summary: ethics is hard. If my parents had known this was going through my head at the time they probably could have saved a lot of trouble by just giving me Kant and Hume to read.

P.S. Now that I’ve given this a title I wish I had the energy to go back and rewrite it in the actual structure of a morality tale. Alas it is late and I am lazy.