Narrative Direction and Rebellion

This is the fourth post in what has been a kind of accidental series on life narratives. Previously: Narrative Dissonance, Where the Narrative Stops, and Narrative Distress and Reinvention.

In Where the Narrative Stops I briefly mentioned the hippie revolution as a rebellion against the standard narrative of the time. This idea combined in my brain a while ago with a few other ideas that had been floating around, and now I’m finally getting around to writing about it. So let’s talk about narrative rebellions.

I’ve previously defined narratives as roughly “the stories we tell about ourselves and others that help us make sense of the world”. As explored previously in the series, these stories provide us with two things critical for our lives and happiness: a sense of purposeful direction, and a set of default templates for making decisions. So what happens when an individual or a demographic group chooses to rebel against the narrative of the day? It depends.

Rebellions are naturally framed in the negative: you rebel against something. With a little work you can manage to frame them positively, as in “fighting for a cause”, but the negative framing comes more naturally because it’s more reflective of reality. While some rebellions are kicked off by a positive vision, the vast majority are reactionary: the current system doesn’t work, so let’s destroy it. Even when there is a nominally positive vision (as in the Russian Revolution, which could be framed as a “positive” rebellion towards communism) there is usually also a negative aspect intermingled (the Russian army was already primed to mutiny against Russia’s participation in the First World War), and it can be difficult to disentangle the different causes.

In this way, narrative and socio-cultural rebellions are not that different from militaristic and geo-political ones. You can sometimes attach a positive framing, but the negative framing is both the default and usually the dominant one.

We’ll come back to that. For the moment let’s take a quick side-trip to Stephen Covey’s Principle-centered Leadership. One of the metaphors he uses in that book (which I didn’t actually include in my post about it, unfortunately) is the idea of a compass and a map. Maps can be a great tool to help you navigate, but Covey really hammers on the fact that it’s better to have a compass. Maps can be badly misleading if the mapmaker left off a particular piece of information you’re interested in; they can also simply go stale as the landscape shifts over time. A compass, on the other hand (meaning your principles, in Covey’s metaphor), always points due North, and is a far more reliable navigational tool.

This navigational metaphor is really useful when extended to talk about narratives and rebellions. One of the most important things a narrative gives us is that “sense of purposeful direction” which carries us through life. Without it, as explored in Where the Narrative Stops, narratives tend to peter out after a while or even stop abruptly at a final event (the way a “student” narrative may stop at graduation if you don’t know what you actually want to do with the degree).

The problem is that rebelling against a narrative doesn’t automatically generate a fully-defined counter-narrative (roughly analogous to how reversed stupidity isn’t intelligence). If you don’t like the direction things are going, you can turn around and walk the other way. But there’s no guarantee the other way actually goes anywhere, and in fact it usually doesn’t; a random walk through idea-space is very unlikely to generate a coherent story. Even when you have a specific counter-narrative in mind, odds are good it still doesn’t actually work. See again the Russian Revolution for an example; it produced a strong positive vision for communism, but that vision ultimately collapsed under the weight of economic and political realities.

This lack of destination seems to me the likely reason the hippie rebellion petered out. The hippies had a strong disagreement with the status quo, and chose to walk in the direction of “free love” and similar principles instead. But this new direction mostly failed to translate into a coherent positive vision, and even when it did, that vision didn’t work. Most stories I’ve been able to find of concrete hippie-narrative experiments end up sounding a lot like the Russian Revolution: they ultimately collapse under the weight of reality.

Given the high cost of a rebellion, be it individual or societal, militaristic or narrative, it seems prudent to set yourself up for success as much as possible beforehand. In practice, this means having a concrete positive vision backed by strong evidence that it will actually work in reality. Otherwise tearing down the system will just leave you with rubble.

What We Owe to Ourselves

“You can never make the same mistake twice because the second time you make it, it’s not a mistake, it’s a choice.”

Steven Denn

Something that has been kicking around my mind for the last little while is the relationship between responsibility and self-compassion. A couple of people recently made some very pointed observations about my lack of self-compassion, and it provoked a strange sadness in me. Sadness because while I know that their point is true – I am often very hard on myself, to the detriment of my happiness – those thought patterns seem so philosophically necessary that I have been unable to change them. This post is my attempt to unpack and understand that philosophical necessity.

Our ability to feel compassion is intimately tied to our judgement of responsibility in a situation. If you get unexpectedly laid off from your job, that’s terrible luck and most people will express compassion. However, if you were an awful employee who showed up late and did your job poorly, then most people aren’t going to be as sympathetic when you finally get fired. As the saying goes: you made your bed, now lie in it. More abstractly, we tend to feel less compassion for someone if we think that they’re responsible for their own misfortune. This all tracks with my lack of self-compassion, as I have also been told that I have an overdeveloped sense of personal responsibility. If I feel responsible for something, I’m not going to be very compassionate towards myself when it goes wrong; I’m going to feel guilt or shame instead.

Of course, this raises the question of what we’re fundamentally responsible for; the question of compassion is less relevant if I actually am responsible for the things that are causing me grief. People largely assume that we’re responsible for our own actions, and this seems like a reasonable place to start. It makes sense, because our own actions are where we have the clearest sense of control. While we can control parts of the outside world, that process is less direct and less exact. Our control over ourselves is typically much clearer, though still not always perfect.

If we assume that we have control over ourselves and our actions, this means that we also have responsibility for ourselves and our actions. If we avoid or ignore that responsibility and we’re not happy with the consequences, we don’t deserve much, if any, compassion: it’s our own damn fault. This all seems… normal and fairly standard, I think, but it’s an important foundation to build on.

Freedom, and Responsibility over Time

Now let’s explore what it means to be responsible for our actions, because that can be quite subtle. Sometimes our choices are limited, or we take an action under duress. Even ignoring those obvious edge cases, our narrative dictates the majority of our day-to-day decisions. What responsibility do we bear in all of these cases? Ultimately I believe in a fairly Sartrean version of freedom, where we have a near-limitless range of possible actions every day, and are responsible for which actions we take. Obviously some things are physically impossible (I can’t walk to Vancouver in a day), but there are a lot of things that make no sense in the current framework of my life that are still theoretically options for me. If nothing else, I could start walking to Vancouver today.

Assuming that we’re responsible for all of our actions in this fairly direct way, we also end up responsible for the consequences of actions not taken on a given day. There is a sense in which I am responsible for not walking to Vancouver today, because I chose to write this essay instead. I am responsible for my decision to write instead of walk, and thus for the consequence of not being on my way to Vancouver. This feels kind of weird and a bit irrelevant, so let’s recast it into a more useful example.

A few hours from now when I finish this essay, I’ll be hungry to the point of lightheadedness because I won’t have eaten since breakfast. Am I responsible for my future hunger? There’s a certain existential perspective in which I’m not, since it’s a biological process that happens whether I will it or not. But it’s equally true that I could have stopped writing several hours earlier, put the essay on hold, and had lunch at a reasonable time. I am definitely responsible for my decision to keep writing instead of eating lunch, and so there is a pretty concrete way in which I am at least partly responsible for my hunger.

This isn’t to say that I’m necessarily going to be unhappy with that decision; even in hindsight I may believe that finishing the essay in one fell swoop was worth a little discomfort. But it does mean that I can’t avoid taking some responsibility for that discomfort. And, since I’m responsible for it, it’s not something I can feel much self-compassion over; if I decide in hindsight that it was a terrible decision, then it was still a decision that I freely made. If I experience a predictable consequence of that decision, it’s my own damn fault.

This conclusion still feels pretty reasonable to me, so let’s take a weirder concrete example, and imagine that in three months I get attacked on the street.

Predictability in Hindsight

I don’t have a lot of experience with physical violence, so if I were to get attacked in my current state then I would likely lose, and be badly hurt, even assuming my attacker does not have a weapon. To what degree am I responsible for this pain? Intuitively, not at all, but again, the attack happens three months from now. I could very well decide to spend the next three months focused on an aggressive fitness and self-defence regimen (let’s assume that this training would be effective, and that I would not get hurt in this case). Today, I made the decision to write this essay instead of embarking on such a regimen; in the moment today, this decision to write is clearly one that I am responsible for. Does this mean I’m responsible for the future pain that I experience? I’d much rather avoid that pain than finish this essay, so maybe I should stop writing and start training!

The flaw in this argument, of course, is that I don’t know that I’m going to get attacked in three months. In fact, it seems like something that’s very unlikely. In choosing not to train today, I can’t accept responsibility for the full cost of that future pain. I should only take responsibility for the very small amount of pain that is left when that future is weighted by the small probability it will actually occur. If I did somehow know with perfect accuracy that I was going to be attacked, then the situation seems somewhat different: I would feel responsible for not preparing appropriately in that case, in the same way I would feel responsible for not preparing for a boxing match that I’d registered for.
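
To make that weighting concrete, here’s a minimal sketch of the implicit calculation, with every number invented purely for illustration:

    # Weight a future harm by the probability that it actually occurs.
    # All of these numbers are made up for the sake of the example.
    p_attack = 0.001        # assumed chance of being attacked
    cost_of_attack = 100    # pain of losing the fight, in arbitrary "badness" units
    cost_of_training = 5    # three months of training, in the same units

    # The probability-weighted cost of choosing not to train today.
    weighted_harm = p_attack * cost_of_attack   # = 0.1

    # Training is only worth it if the expected harm avoided
    # outweighs the certain cost of the training itself.
    print(weighted_harm, weighted_harm > cost_of_training)   # 0.1 False

The probability-weighted harm is tiny compared to the certain cost of training, which is why skipping the regimen is a reasonable decision, and why I can only take responsibility for that tiny weighted slice.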

All of this seems to work pretty well when looking forward in time. We make predictions about the future, weight them by their probability, and take action based on the result. If an action leads to bad results with high probability and we do it anyway then we are responsible for that, and don’t deserve much sympathy. We rarely go through the explicit predictions and calculations, but this seems to be the general way that our subconscious works.

But what about looking backward in time? Let’s say I decide not to train because I think it is unlikely I will be attacked, and then I get attacked anyway. Was this purely bad luck, or was my prediction wrong? How can we tell the difference?

Depending on the perspective you take, you can get pretty different answers. Random street violence is quite rare in my city, so from that perspective my prediction was correct. Hindsight is a funny thing though, because in hindsight I know that I was attacked; in hindsight, probabilities tend to collapse to either 0 or 1. And knowing that I was attacked, I can start to look for clues that it was predictable. Maybe I realize that, while street violence is rare in the city overall, I live in a particularly bad neighbourhood. Or maybe I learn that while it’s rare most of the time, it spikes on Tuesdays, the day when I normally go for a stroll. If I’d known these things initially, then I would have predicted a much higher probability of being attacked. Perhaps in that case I would have decided to train, or even take other mitigating steps like moving to a different neighbourhood. Who knows.
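
To see how much those hindsight clues can move the estimate, here’s a quick sketch. The base rate and the multipliers are all invented, and treating them as independent factors that simply multiply together is itself a simplifying assumption:

    # How hindsight information shifts a probability estimate.
    # Every number here is invented, and multiplying risk factors
    # assumes they are independent, which is a simplification.
    base_rate = 0.001          # city-wide chance of random street violence
    neighbourhood_factor = 20  # hypothetical: my neighbourhood is 20x worse
    tuesday_factor = 5         # hypothetical: attacks spike on Tuesdays

    revised = base_rate * neighbourhood_factor * tuesday_factor
    print(f"naive estimate:   {base_rate:.1%}")   # 0.1%
    print(f"revised estimate: {revised:.1%}")     # 10.0%

A two-orders-of-magnitude jump like that is the difference between “not worth training for” and “probably worth mitigating somehow”.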

What this ultimately means is that I can only possibly be responsible for being hurt in the attack if I’m somehow responsible for failing to predict that attack. I’m only responsible if for some reason I “should have known better”.

Taking Responsibility for our Predictions

[I should take this opportunity to remind people that this is all hypothetical. I was not attacked. I’m just too lazy to keep filling the language with extra conditional clauses.]

At this point I’ve already diverged slightly from the orthodox position. A large number of people would argue that my attacker is solely responsible for attacking me, and that I should accept no blame in this scenario. This certainly seems true from the perspective of law and justice. But in this essay I’m focused on outcomes, not on justice; ultimately I experienced significant pain, pain that I could have avoided had I made better predictions and thus taken better actions.

Let’s return to the issue of whether or not I can be held responsible for failing to predict the attack. There is a continuum here. In some scenarios, the danger I was in should have been obvious, for example if my real-estate agent warned me explicitly about the neighbourhood when I moved in. In these scenarios, it seems reasonable to assign some blame to me for making a bad prediction. In other scenarios, there was really no signal, no warning; the attack was truly a stroke of random bad luck and even in hindsight I don’t see a way that I could have done better. In these scenarios, I take no responsibility; my prediction was reasonable, and I would make the same prediction again.

As with most things, practical experience tells me that real-world situations tend to fall somewhere in the middle. Nobody’s hitting you over the head with a warning, but neither is the danger utterly unpredictable; if you look closely enough, there are always signs. It is in these scenarios that my intuition fundamentally deviates from the norm. When something bad happens to me, I default to taking responsibility for it, and I think that’s the correct thing to do.

Of course, there are times when whatever happened is my fault in an uncontroversial way, but I’m not talking about those. I’m talking about things like getting attacked on the street, or being unable to finish a work project on time because somebody unexpectedly quit. These are the kind of things that I expect most people would say are “not my fault”, and I do understand this position. However, I think that denying our responsibility for these failures is bad, because it causes us to stop learning. Every time we wave away a problem as “not our fault” we stop looking for the thing we could have done better. We stop growing our knowledge, our skills, our model of the world. We stagnate.

This sounds really negative, but we can frame it in a much more positive way: that there’s always something to learn, and that we should always try to be better than we were. What I think gets lost for a lot of people is that this is not a casual use of “always”. Even in failures that are not directly our fault, there is still something to learn, and we should still use them as opportunities to grow. Unless we perfectly predicted the failure and did everything we could to avoid or mitigate it, there is still something we could have done better. Denying our responsibility for our bad predictions is abdicating our ability to grow, change, or progress in life. Where does this leave us?

Any good Stoic will tell you that when something goes wrong, the only thing we have control over is our reaction, and this applies as much to how we assign responsibility as to anything else. We are responsible for the fate of our future self, and the only way to discharge that responsibility is to learn, grow, and constantly get better. The world is full of challenges. If we do not strive to meet them, we have no-one to blame but ourselves.

Winning vs Truth – Infohazard Trade-Offs

This post on the credibility of the CDC has sparked a great deal of discussion on the ethics of posts like it. Some people claim that the post itself is harmful, arguing that anything which reduces trust in the CDC will likely kill people as they ignore or reject important advice for dealing with SARS-CoV-2 and (in the long-run) other issues like vaccination. This argument has been met with two very different responses.

One response has been to argue that the CDC’s advice is so bad that reducing trust in it will actually have a net positive effect in the long run. This is an ultimately empirical question which somebody should probably address, but I do not have the skills or interest to attempt that.

The other response is much more interesting, arguing that appeals to consequences are generally bad, and that meta-level considerations mean we should generally speak the truth even if the immediate consequences are bad. I find this really interesting because it is ultimately about infohazards: those rare cases where there is a conflict between epistemic and instrumental rationality. Typically, we believe that having more truth (via epistemic rationality) is a positive trait that allows you to “win” more (thus aligning with instrumental rationality). But when more truth becomes harmful, which do we prefer: truth, or winning?

Some people will just decide to value truth more than winning as an axiom of their value system. But for the rest of us, I think it ultimately boils down to an empirical question of just how bad “not winning” will end up being. It’s easy to see that for sufficiently severe cases, natural selection takes over: any meme/person/thing that prefers truth over winning in those cases will die out, to be replaced by memes/people/things that choose to win. I personally will prefer winning in those cases. It’s also true that most of the time, truth actually helps you win in the long run. We should probably reject untrue claims even if they provide a small amount of extra short-term winning, since in the long run having an untrue belief is likely to prevent us from winning in ways we can’t predict.

Figuring out where the cut-over point lies between truth and winning seems non-trivial. Based on my examples above we can derive two simple heuristics to start off:

  • Prefer truth over winning by default.
  • Prefer winning over truth if the cost of not winning is destruction of yourself or your community. (It’s interesting to note that this heuristic arguably already applies to SARS-CoV-2, at least for some people in at-risk demographics.)

What other heuristics do other people use for this question? How do they come out on the CDC post and SARS-CoV-2?

An Open Critique of Common Thought

[I was going through a bunch of old files and found this gem of an essay. If the timestamp on the file is accurate it’s from February 2010, which means it’s almost exactly ten years old and predates this blog by about three years. Past me was very weird, so enjoy!]

I am writing this essay as a critique of a fundamental and unsolvable problem in philosophy today. Our greatest minds refuse to acknowledge this problem, so I have humbly taken it upon myself to explore more fully this hidden paradox.

Amongst all of the different philosophies, religions, and world-views, there is one common theme, so utterly pervasive that it has never before been questioned, yet so utterly false upon deeper inspection that it boggles the mind. It is my hope that this short essay will act as a call to arms for the oppressed masses in the field of higher thought, and prompt them to action demanding an end to this conspiracy.

The problem, ladies and gentlemen, in long and in short, is that of existence. Every thought, every idea, every concept that humankind has ever had rests on the central pillar, the core belief, that we exist. Not content, of course, with this simpler sophistry, humankind has embarked on an even more heinous error of logic – we assume not only that we exist, but that other things exist as well.

It is at this point, of course, that your conditioning takes over – “Of course we exist”, you say, “how could it be otherwise”? This is the knee-jerk reaction typical of an oppressed thinker today, and the prevalence of this mindless assertion – calling it a failure of an argument would be too kind – worries me more than I can say about the future of our society. Beyond the obvious lack of critical thinking evidenced by such lemming-like idiocies, this simple error is the cause of deeper, more dangerous problems as well.

But I digress. I will leave the deeper analysis of this crisis to the historians who survive it, and turn my own meagre talents to the task of alerting the public of this travesty. It is with heart-felt distress that I type my final plea to you, the thinking public – “Do you believe”?

Milk as a Metaphor for Existential Risk

[I don’t believe this nearly as strongly as I argue for it, but I started to pull on the thread and wanted to see how far I could take it]

The majority of milk sold in North America is advertised as both “homogenized” and “filtered”. This is actually a metaphor created by the dairy industry to spread awareness of existential risk.

There has been a lot of chatter over the last few years on the topic of political polarization, and how the western political system is becoming more fragile as opinions drift farther apart and people become more content to simply demonize their enemies. A lot of causes have been thrown around to explain the situation, including millennials, boomers, free trade, protectionism, liberals, conservatives, economic inequality, and the internet… There’s a smorgasbord to choose from. I’ve come to believe that the primary root cause is, in fact, the internet, but the corollary to this is far more frightening than simple cultural collapse. Like milk, humanity’s current trend toward homogenization will eventually result in our filtration.

The Law of Cultural Proximity

Currently, different human cultures have different behavioural norms around all sorts of things. These norms cover all kinds of personal and interpersonal conduct, and extend into different legal systems in countries around the globe. In politics, this is often talked about in the form of the Overton window, which is the set of political positions that are sufficiently “mainstream” in a given culture to be considered electable. Unsurprisingly, different cultures have different Overton windows. For example, Norway and the United States have Overton windows that tend to overlap on some policies (the punishment of theft) but not on others (social welfare).

Shared norms and a stable, well-defined Overton window are important for the stable functioning of society, since they provide the implicit contract and social fabric on which everything else operates. But what exactly is the scope of a “society” for which that is true? We just talked about the differences between Norway and the U.S., but in a fairly real sense, Norway and the U.S. share “western culture” when placed in comparison with Iran, China, or North Korea. In the other direction, there are distinct cultures with different norms around things like gun control, entirely within the U.S. Like all categorizations, the lines are blurry at times.

The key factor in drawing cultural lines is interactional proximity. This is easiest to see in a historical setting because it becomes effectively identical to geographic proximity. Two neolithic tribes on opposite ends of a continent are clearly and unambiguously distinct, whereas two tribes that inhabit opposite banks of a local river are much more closely linked in every aspect: geographically, economically, and of course culturally. Because the two local tribes interact so much on a regular basis, it is functionally necessary that they share the same cultural norms in broad strokes. There is still room for minor differences, but if one tribe believes in ritual murder and the other does not, that’s a short path to disagreement and conflict.

Of course, neolithic tribes sometimes migrated, and so you could very well end up with an actual case of two tribes coming into close contact while holding very different cultural norms. This would invariably result in conflict until one of the tribes either migrated far enough away that contact became infrequent, became absorbed into the culture of the other tribe, or was wiped out entirely. You can invent additional scenarios with different tribes and different cultures in different geographies and economic situations, but the general rule that pops out of this is as follows: in the long run, the similarity between two cultures is proportional to the frequency with which they interact.

The Great Connecting

Hopefully the law of cultural proximity is fairly self-evident in the simplified world of neolithic tribes. But now consider how it applies to the rise of trade and technology over the last several millennia. The neolithic world was simple because interactions between cultures were heavily mediated by simple geographic proximity, but the advent of long-distance trade started to wear away at that principle. Traders would travel to distant lands, and wouldn’t just carry goods back and forth; they would carry snippets of culture too. Suddenly cultures separated by great distances could interact more directly, even if only infrequently. Innovations in transportation (roads, ship design, etc.) made travel easier and further increased the level of interaction.

This gradual connecting of the world led to a substantial number of conflicts between distant cultures that wouldn’t even have known about each other in a previous age. The victors of these conflicts formed empires, developed new technologies, and expanded their reach even farther afield.

Now fast-forward to the modern day and take note of the technical innovations of the last two centuries: the telegraph, the airplane, the radio, the television, the internet. While the prior millennia had seen a gradual connecting of the world’s cultures, the last two hundred years have seen a massive step change: the great connecting. On my computer today, I could easily interact with people from thirty different countries around the globe. Past technologies metaphorically shrank the physical distance between cultures; the internet eliminates that distance entirely.

But now remember the law of cultural proximity: the similarity between two cultures is proportional to the frequency with which they interact. This law still holds, over the long run. However, the internet is new, and the long run is long. We are currently living in a world where wildly different cultures are interacting on an incredibly regular basis via the internet. Unsurprisingly, this has led to a lot of cultural conflict. One might even call it cultural war.

Existential Risk

In modern times, the “culture war” has come to refer to the conflict between the left/liberal/urban and right/conservative/rural in North American politics. But this is just the most locally obvious example of different cultures with different norms being forced into regular interaction through the combination of technology and the economic realities that technology creates. The current tensions between the U.S. and China around trade and intellectual property are another aspect of the same beast. So are the tensions within Europe around immigration, and within Britain around Brexit. So was the Arab Spring. The world is being squished together into a cultural dimension that really only has room for one set of norms. All wars are culture wars.

So far, this doesn’t seem obviously bad. It’s weird, maybe, to think of a world with a single unified culture (unless you’re used to sci-fi stories where the unit of “culture” is in fact the planet or even the solar system – the law of cultural proximity strikes again!) but it doesn’t seem actively harmful as long as we can reach that unified state without undue armed conflict. But if we reframe the problem in biological and evolutionary terms then it becomes much more alarming. Species with no genetic diversity can’t adapt to changing conditions, and tend to go extinct. Species with no cultural diversity…

Granted, the simplest story of “conditions change, our one global culture is not a fit, game over humanity” does seem overly pessimistic. Unlike genetics, culture can change incredibly rapidly, and the internet does have an advantage in that it can propagate new memes quite quickly. However, there are other issues. A single global culture only works as long as that culture is suitable for all the geographic and economic realities in which people are living. If the internet forces us into a unified global culture, but the resulting culture is only adaptive for people living far from the equator… at best that creates a permanent underclass. At worst it results in humanity abandoning large swaths of the planet, which again looks a lot like putting all our eggs in one basket.

Now that I’ve gotten this far, I do conclude that the existential risk angle was maybe a bit overblown, but I am still suspicious that our eventual cultural homogeneity is going to cause us a lot more problems than we suspect. I don’t know how to stop it, but if there were a way to maintain cultural diversity within a realm of instant worldwide communication, that seems like a goal worth pursuing.


Bonus: I struggled to come up with a way to work yogurt (it’s just milk with extra “culture”!) into the metaphor joke, but couldn’t. Five internet points to anybody who figures out how to make that one work.

Success over Victory: Some Thoughts on Conflict Resolution

One afternoon several years ago, I was busy coding away at my software job when I noticed a disagreement spiralling out of control on our internal chat system. Conflict is stressful, and this one had nothing to do with me, so it would have been easy to ignore. But I’m a nosy do-gooder at heart, so instead of ignoring it I did what I always do: I made it my business to resolve it, much to the surprise of the initial participants (note: I had already earned enough trust that I could insert myself into the conversation without ruffling too many feathers; inserting yourself like this isn’t always recommended).

After the dust had settled, a junior developer on my team approached me to ask how I had accomplished the minor miracle of getting everyone back together pulling in the same direction. This turned into an extended conversation about conflict resolution, during which I was forced to organize my many thoughts on that topic into words (and more than one whiteboard diagram) for the first time. I am forever grateful to that person for pushing me to work through my thoughts and express myself.

By the end of the conversation, I had a substantial amount of material in my head, and I promised to write a blog post explaining it all. Several years later (oops), this is that post.

Introduction

We encounter conflict every day. Perhaps you’re having an unavoidable Thanksgiving-dinner conversation about politics, or maybe you’re chatting with your neighbour when you realize that you have very different views on a recent change by the local sports team. For me, as for many people, a lot of these conflicts tend to arise at work: dealing with unreasonable customers, unreasonable coworkers, or unreasonable managers is just part of the job. Whether your work is blue-collar, white-collar, retail, or even raising chickens, conflict happens whenever two people want or believe different things, and that isn’t exactly rare.

With so many conflicts in our day-to-day lives, resolving them becomes an important life skill. Typically, we do this using communication; it’s thankfully rare that minor conflicts devolve into violence. And yet, communicating well is one of the hardest parts of modern life. Technology has created a number of new ways to communicate our ideas, those ideas grow more complex every day, we spend less and less time face to face, and partisan political bias seems to be driving us further and further apart. Even so, communication is still our main approach for resolving most of the conflicts we encounter.

Given its importance, it shouldn’t be surprising that conflict resolution is a topic already rich in conventional wisdom, academic studies, and self-help books; practically speaking I don’t have much that is new to contribute. However, at the heart of many great innovations is the combination of multiple ideas in different fields, and that’s what I’m going to try and do here. I’ll be mixing together insights collected from a number of places, including the epistemological debates that I went through as part of my religious journey, the online “rationalist” community (a group with a particular focus on tools and processes for more effectively seeking the truth), a brief but interesting career as a manager of people, and of course several self-help books which touch on conflict resolution in some fashion. Anchoring all of these is my unusually intense dislike of interpersonal conflict. It’s just a part of who I am, so I’ve spent a lot more time resolving conflicts and thinking about this in my own life than I think is normal, or probably healthy.

I debated leaving out the parts that are truly unoriginal to focus on “the good stuff”, but I think that would be doing a disservice to the topic. It’s all important, and just because some of it has been covered elsewhere doesn’t mean it’s not required to be successful. I’ve broken the material into four sections which I call Attitude, Communication, Comprehension, and Resolution, and if you’re starting fresh I recommend reading them all, in that order. That said, most of the material that is in any way “new” is in the last section on Resolution.

Finally, I want to note three things up front. First, that all of this is focused on conflict resolution via communication. If a conflict has reached the point of physical violence then the rules are very different; some of what’s in here might still be applicable, but I make no warranty to that effect. Second, that this is partly written from the perspective of a neutral third-party moderator. Everything here is just as applicable if you’re involved in the conflict, but it becomes harder to use effectively. And third, that this essay is entirely focused on resolving conflicts, not making decisions. Effective decision-making and consensus-building in the context of an unresolved (or unresolvable) disagreement is a whole other problem deserving of its own essay.

OK, here we go…

Attitude

War is merely the continuation of politics by other means.

Carl von Clausewitz

The tools of conflict resolution bear a striking resemblance to the tools of conflict. In practice, this means that one of the most important parts of successful conflict resolution is your attitude. Otherwise you’re liable to misuse the tools you have available. A good attitude will naturally guide you to the right decisions, smooth out minor miscommunications, and build trust. It’s the foundation on which all the other parts of this essay are built. But what does “a good attitude” actually mean? While people have many different attitudes toward different parts of their life, there are four specific ones which I think are important for conflict resolution.

The first attitude has to do with what you’re aiming for. Human beings have an unfortunate tendency to try and “win” arguments in a purely social sense (e.g. via ad hominem attacks), but victory of that kind is typically not the success you’re interested in if you’re reading this essay. Sometimes, the success we seek is truly as simple as “resolving the conflict”, though usually it’s not. More often, “success” really means uncovering the truth, or finding a solution where everyone gets what they want, or clearing up some underlying miscommunication. Know what it is you’re really aiming for, and set your attitude accordingly. Aim for success, not for victory.

The second attitude is far simpler since it already has a name: humility. Accept that sometimes you just don’t have all the information. Accept that sometimes you make mistakes. Accept that sometimes you honestly just change your mind (or have it changed by someone else). Our ego doesn’t like admitting to these things, but they do happen, and digging in your heels to protect your ego is the fastest way to unnecessarily prolong a conflict.

The third attitude is an attitude toward others. Just like we have an unfortunate tendency to try and “win” arguments, we also have an unfortunate tendency to view other people in a conflict as “enemies”. Instead, it is far better to respect and trust your conversational partners, and always assume they are operating in good faith. That one is so important I’m just going to repeat it: assume good faith.

People often object that this is a naive or dangerous assumption, and in some settings it certainly can be. Past experience with a particular person certainly trumps any possible generalized advice. But I would argue that true bad faith is far, far rarer than most people realize. I see numerous conflicts every year which could have been trivially resolved if everyone involved had assumed good faith instead of jumping to “you’re a terrible person”.

Finally, on a somewhat different tack from the others, I want to talk a bit about emotions. There is often an attitude (especially among programmers and other more analytically-inclined folks) that emotions are somehow irrelevant to a debate and should be ignored. I’m certainly guilty of this belief myself sometimes. But most of the time in practice I find this to be both false, and quite unhelpful in dealing with conflict. This is probably worth an entire post to itself, but I’ll keep it brief: your emotions carry real, valuable information about what you believe and what you value. You shouldn’t let them rule you, and quite often they’re incorrect or haven’t caught up to the moment yet, but they are still both important and useful. Pay them their due.

In this section I’ve covered four key attitudes which I find helpful and which are foundational in how I deal with conflict resolution. I hope they’re as useful to you as they are to me:

  1. Aim for success, not victory.
  2. Be humble.
  3. Assume good faith.
  4. Pay attention to your emotions (but don’t let them rule you).

Communication

The medium is the message.

Marshall McLuhan

Attitudes are general things, and if you’re anything like me you crave more specific advice. Say this. Say it in this way. Don’t say that. Don’t use words that are longer than 10 letters when translated into Brazilian Portuguese. That sort of thing. But instead of focusing on what or what not to say, the most useful specific advice I can give is actually to focus on where you communicate.

Marshall McLuhan coined the famous phrase “the medium is the message”, and oh boy was he ever right. Every medium has different characteristics which impact how we communicate, and how conflict will spread or resolve. Here are just a few of the characteristics that matter:

  • Speed of communication, aka bandwidth. Most people can speak much faster than they can type.
  • Speed of response, aka latency. This can be anywhere from snail mail, which takes days per message, to instant messaging which is usually real-time.
  • Ability to absorb cross-talk. Laggy video-conferencing is particularly bad at this.
  • Audience size. Compare an in-person conversation to an email list with hundreds of subscribers.
  • Participant size. Are those hundred subscribers just reading, or can they add their own opinions to the mix?
  • Available side-channels. In-person communication gives you a whole bunch of important extra communication channels like tone of voice, facial expression and posture.
  • Norms. Most media are bound to specific codes of behaviour, either explicitly or implicitly.

With all of these variables, it’s no surprise that picking the right venue for your conflict is hugely valuable. Of course, you often don’t have a choice of where a conflict starts; it just does. But you always have the opportunity to move it, and it’s usually pretty easy when everyone involved is operating in good faith. Just go “hey, this would be easier to talk about in-person, do you mind if I swing by your desk?” and you’d be amazed at how easy it gets. Changing to a better venue is often both the easiest and the most effective thing you can do to resolve a conflict.

That said, with so many possible characteristics to consider it can be pretty daunting to figure out which one to suggest. Fortunately there’s an easy rule of thumb: in-person trumps everything, always. If in-person isn’t possible because people are physically too far apart, video-conferencing can be a decent substitute as long as it isn’t too laggy. If neither of those are realistic, I’ve had pretty good luck aiming for whatever venue has the highest bandwidth, lowest latency, and smallest audience.

There is one major caveat to the in-person rule however, which is on the number of participants. If you have more than six people involved then the value of in-person conversation falls off pretty sharply, and you might be better off with a venue that can handle that better. Of course, it’s pretty rare that more than six people really need to be there; usually you can pick representatives from each group or otherwise cut the participants down to a reasonable size.

This section is short enough it doesn’t necessarily need a summary, but I wrote one anyway:

  1. Use the best available venue or communication medium.
  2. In-person trumps everything.
  3. Keep the number of participants small.

Comprehension

Seek first to understand, then to be understood.

Stephen Covey

A good attitude and a good venue will carry you a surprisingly long way, but of course they’re not always sufficient on their own. The next thing I try is to temporarily ignore whatever I believe and work to understand both sides of the argument equally. I called this section “comprehension”, but it could equally just be called “listening”, or maybe more precisely “active listening”. Honestly, Stephen Covey has already said most of this far better than I can in his book “The 7 Habits of Highly Effective People”. That book is of course the source of the quote that opens this section; “seek first to understand” is habit number five.

The value of truly understanding both sides of a conflict cannot be overstated. Even when I’ve nominally resolved a conflict, I get antsy if I still don’t really grok one side or the other. More than once, trying to scratch that itch “after the fact” has turned up a hidden requirement or pain point which would have just caused more grief down the road. Remember, success is rarely as simple as just making the conflict go away; you can’t know if you’ve truly found success (not just victory) unless you properly understand both sides.

But understanding both sides isn’t just an after-the-fact thing. It also has concrete value in guiding the resolution of a conflict when you’re caught in the middle of it, because it allows you to properly apply the principle of charity. The principle of charity says that you should try and find the best possible interpretation for people’s arguments, even when they aren’t always clear or coherent. It goes back to assuming good faith; maybe an argument sounds crazy, but it makes sense to the person saying it. The only way to apply the principle of charity in many cases is to start by understanding the argument, and the person making it.

Understanding both sides is also a key part of something called “steelmanning”, which is the process of actively finding better versions of another person’s arguments. This may seem like an odd thing to do in a conflict, but only if you’ve accidentally slipped back into the habit of aiming for victory instead of success. Assume good faith, and work with both sides to fully develop the points they’re trying to make. Doing this brings clarity to the discussion which can often illuminate the crux of the conflict.

Of course sometimes being charitable is hard. People may make arguments which just seem… wrong. Crazy. Even harmful. (The topic of whether an argument can be harmful in and of itself is a fascinating one I don’t have space for here. Whatever you believe, it isn’t relevant to the point I’m trying to make). A lot of people would suggest that trying to understand or improve an argument like that is a waste of time, or even ethically wrong. I disagree. I believe that truly understanding both sides of a conflict is fundamentally valuable, no matter what that conflict is. It clarifies. It builds empathy. It expands your knowledge of the world. And even if by the end you still deeply disagree, understanding the argument will let you articulate a better response.

The principle is all well and good, but getting to that level of understanding in practice can also be really hard. It’s a skill that gets easier with repetition, so I would encourage you to practice it as much as possible, even for small conflicts where it might not seem necessary. Build that habit when it’s easy, and you’ll find that it becomes automatic even when it’s hard. Still, if you’re trying and you’re really stuck, I’ve got a trick which helps me when I just can’t seem to connect with what somebody is saying.

To better understand a different perspective, try splitting an argument up into the separate pieces of a problem and a solution. A lot of arguments fit into this pattern quite naturally, and I often find that while I couldn’t quite grasp the argument as a whole, I both understand and even agree with the problem; it’s the solution that’s causing me issues. Even then, having the problem separated out and well-defined can lead me to understanding the solution too, because it frequently highlights some unstated premise which I wasn’t aware of. This is also a great way to practice steelmanning, since making implied premises explicit is a great way to improve an argument; people are pretty bad at this by default.

I should also note that if this trick kind of works for a situation, but doesn’t quite, you should try making the problem even more general. For example, if the argument is “Mexicans are taking our jobs, so we should stop immigration from Mexico”, it’s tempting to define the problem as just “Mexicans are taking our jobs”, but it’s probably more productive to define it as something like “something is taking our jobs” or even “our economic prospects suck”. This pulls out an implied premise (that the cause is Mexican immigrants) which may be the real point of disagreement, but even apart from that, finding a problem which you can be sympathetic to is worth its weight in gold. With this kind of problem in hand, you can reframe the conflict as a cooperative mission, working together to find the best solution to the problem. You can start to look for success, not victory.

It’s often said that the real acid test for truly understanding somebody’s argument is the ability to explain it back to them in a way they will agree with. This is good, and you should definitely aim for this (trying to explain it back is also a useful trick for conflict resolution in general), but sometimes I find it useful to use a slightly higher standard. I consider myself to really properly understand an argument when I can not only explain it to the person who made it, but can also explain (to myself, not to them) how they came to believe it. Both sides of the conflict are part of the universe, so to understand the universe you have to know how both sides came to be.

This may seem like an esoteric or excessively demanding standard, and it isn’t necessary all the time. But there are interesting and practical sources of conflict where this is a really useful approach that provides a lot of insight. Religion is my favourite example of this; most theistic worldviews can pretty naturally explain the existence of atheists, but a lot of atheists have a hard time explaining the existence of theists. “People are dumb” may be emotionally satisfying, but doing the work of constructing a real explanation builds a lot of empathy and ends up sharpening the resulting argument.

I’ve covered a lot of different ground in this section, but I think I can boil it down to four key points to take away:

  1. Seek always to understand.
  2. Actively look for the best version of everyone’s arguments.
  3. Separate the problem and the solution.
  4. To truly understand, you must explain how both sides came to be.

Resolution

Now, finally, we get to the meat of this post. You’ve got the right attitude, you’re in a good venue or communication medium, and you think you’ve got a pretty good grasp of what both sides are saying. How do you actually get to a successful resolution? For me, it all boils down to understanding the building blocks of how we argue, and how we disagree.

Philosophers and linguists have spent millennia studying the nature of logic, rhetoric, and argument, all the way from Aristotle through to predicate logic and beyond (the Wikipedia articles are unfortunately technical, but this Stanford site seems to have a more accessible introduction). This body of work is another great tool that can be helpful in the previous section on understanding both sides of an argument.

While rhetoric and disagreement are obviously related, the nature of disagreement is much less studied. The rationalist community has recently started to dig into it, coming up with some interesting ideas like double-cruxing, but I don’t know of any comprehensive theory from that group.

In a very brief post in 2017 (my first failed attempt at what would become this post) I sketched out a basic categorization of disagreements with almost no explanation. Two years later, my core model remains almost the same. While there can be many forms of valid argument and many kinds of propositions to slot into those arguments, there are in fact only four kinds of atomic disagreement: fact, value, meaning, and “empty”. As far as I can tell every disagreement must belong to one of these categories, or be a complex combination of smaller disagreements. I’ll tackle them one at a time, including tips for resolving each type, and then talk about how to understand and break down more complex combinations.

Disagreements of Fact

Disagreements of fact are disagreements over how the world was, is, or will be. They are fundamentally empirical in nature: if I believe that there are only ten chickens on the planet and you believe that there are more, that’s something we can physically check; we just have to go out and count enough chickens. Disagreements about historical facts are often harder to resolve (we can’t just count the chickens alive in the year 1500 to see how many there were) but the factual nature of the disagreement remains; there is a single right answer, and we just have to find it.

Resolving disagreements of fact is the specialty of science and the scientific method. When a disagreement of fact is not directly resolvable through empirical observation, hunt for places where the core disagreement results in differing predictions about something that is directly observable. Maybe if there were as many chickens as you believe, the nutrient content of human skeletons from that era will back you up (I really don’t know, historical chicken population is not my specialty and this example is getting out of hand).

Of course, some disagreements of fact may not be perfectly resolvable with the technology we have available to us. The nutrient content of skeletons may give some indication of chicken population, but it’s not going to give us a precise count. In these cases, it’s best to fall back on reasoning based on Bayesian statistics. What are your prior confidence levels, and how do the various pieces of evidence affect them? What else can you easily empirically check which will impact those confidence levels?
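
For anyone who wants to see the mechanics, here’s a minimal sketch of such an update using odds and a likelihood ratio; the prior and the ratio are of course invented for the chicken example:

    # A Bayesian update expressed in odds form. All numbers invented.
    prior_odds = 1 / 4   # I start out thinking "many chickens" is 4x less likely

    # Suppose the skeleton nutrient data would be 3x more likely
    # in a world with many chickens than in one without.
    likelihood_ratio = 3

    posterior_odds = prior_odds * likelihood_ratio            # = 0.75
    posterior_prob = posterior_odds / (1 + posterior_odds)    # ~ 0.43
    print(f"updated confidence in 'many chickens': {posterior_prob:.2f}")

Each new piece of (independent) evidence multiplies the odds by its own likelihood ratio, so even when no single observation is decisive, the disagreement can still be steadily narrowed.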

Even then, there are some cases where there just don’t seem to be any checkable predictions that come out of a conflict of fact (the various debates around string theory were like this for a while). The nice thing is that when you hit a disagreement like this, it somehow stops mattering. If there are no differences in the predictions that can be tested with current technology, then until that technology exists, the two possible worlds are by definition indistinguishable.

Finally, for cases about the future, it’s important to distinguish between disagreements about how the world will be (for example whether there will be more or fewer chickens tomorrow), and disagreements about how the world should be (for example whether we ought to breed more chickens). Disagreements about how the world will be can sometimes be resolved like historical facts, by looking for more immediately checkable predictions. They can also be resolved just by waiting until the future comes to pass. On the other hand, disagreements about how the world should be take us into our next type of disagreement: disagreements of value.

Disagreements of Value

Disagreements of value are disagreements over what we ought to value. This tends to play out more concretely in disagreements over how the world ought to be, and what we ought to do to get there. For example, if I believe that we should value chickens’ lives as much as human lives and you believe we should value them less, that is obviously a disagreement over value. There’s no checkable fact or testable prediction, now or in the future; the disagreement is fundamentally about what is important. Of course in practice you’re unlikely to see a direct disagreement over the value of chicken lives; you’re more likely to see a disagreement over whether humans should eat chickens or not, but it’s often the same thing.

Disagreements of value are difficult to deal with. This is often because there is actually a complex multi-part disagreement masquerading as a simple value disagreement (for example a disagreement over whether we “ought” to be vegetarian may be about environmental factors as much as it is about the value of a chicken’s life). The key thing to pay attention to is whether the values under debate are instrumental or terminal.

If the values under debate are instrumental (for example vegetarianism as a means to value chicken life), then things are by definition complex, as there are at least two possible underlying disagreements. The root cause could be a disagreement over the terminal value (whether a chicken’s life should be valued) or a disagreement over the best way to achieve that terminal value (our consumption of chicken has caused a great increase in the total number of chickens, which might be a more effective way to value their lives). When you see a debate over an instrumental value, apply Hume’s guillotine to slice apart the pieces and find the more fundamental disagreement. Keep in mind that there’s nothing to stop both pieces from being sources of disagreement at once, in which case you should at least try and take them one at a time.

Recognizing instrumental value debates can be tricky, as can breaking them down into their constituent parts. In practice, one of the best ways to do both of these things is to simply ask the question “Why does that matter?”, and not accept “it just does” as an answer. When pressed, most people will be able to articulate that, for example, they actually value vegetarianism because they value the lives of animals.

The other way to recognize many instrumental value debates is to look for two apparently-unrelated values being traded off against one another. Imagine we’re building a coop for all of these chickens; if one person thinks we should prioritize security against foxes, while the other thinks we should prioritize the number of chickens it can hold, it might seem like they’re at an impasse. But this is actually an instrumental value debate that can easily be resolved; all we have to do is “normalize” the units under debate. Fox-security and number-of-chickens are not directly comparable values, but in practice they’re probably both backed by the same terminal value: maximizing the number of eggs we can collect per day. By normalizing the two sides into a single terminal value unit, we’re left with a simple disagreement of fact which can be resolved via experimentation: which approach results in more eggs?
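As a toy model of that normalization (every number here is invented), we can denominate both coop designs in the shared terminal unit and compare:

```python
# Normalizing the coop debate into one terminal unit: expected eggs per day.
# Capacities, survival rates, and laying rates are all invented numbers.

def expected_eggs_per_day(capacity: int, fox_survival_rate: float,
                          eggs_per_chicken_per_day: float = 0.8) -> float:
    """Chickens that survive the foxes each lay some eggs per day."""
    return capacity * fox_survival_rate * eggs_per_chicken_per_day

# Design A: fortress coop -- fewer chickens, excellent fox protection.
# Design B: roomy coop -- more chickens, weaker fox protection.
design_a = expected_eggs_per_day(capacity=20, fox_survival_rate=0.99)
design_b = expected_eggs_per_day(capacity=35, fox_survival_rate=0.60)

print(design_a, design_b)  # 15.84 vs. 16.8
```

Once both positions are denominated in eggs per day, the argument is no longer about which value matters more; it’s about whose numbers for foxes and laying rates are right, and that can be measured.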

Unfortunately, if the values under debate are truly terminal (back to whether chickens’ lives should be valued as human lives) then there isn’t a good way to resolve this conflict. The conflict will exist until somebody changes their core values, and that’s incredibly hard to do. The best “hack” I’ve found is to come up with an unrelated value or problem which both participants agree is more important, and thus makes the current conflict either irrelevant or at least not worth arguing over. Whether a chicken’s life is worth a human life tends to take a backseat when the human’s house is on fire.

(note: I am not advocating arson as a means of avoiding debates about vegetarianism)

Disagreements of Meaning

The third kind of disagreement is a disagreement over meaning. This is best understood by examining the classic question: if a chicken tree falls in the forest and nobody hears it, does it make a sound? While on the surface a disagreement on this point may seem to be a disagreement of fact, it’s almost always instead a disagreement of meaning.

Most reasonable people will agree on the same core facts of what happens when a tree falls in the forest. First, they’ll agree that it produces vibrations in the air, also known as sound waves. Second, they’ll agree that those sound waves dissipate before reaching anybody’s ears, as stipulated in the question. These two points cover all of the questions of fact relevant to the disagreement; the conflict is really over the meaning of the word “sound”. Does it refer to the simple production of sound waves (in which case the tree makes a sound), or to the sensation created when sound waves reach a person’s ear (in which case it does not)?

The nice thing about disagreements of meaning is that they almost never matter. Language is socially negotiated, and at the end of the day word meanings are entirely arbitrary. The only thing you need to do to resolve a conflict like this is to be very clear about your definitions, and the conflict magically evaporates. Replacing problem words with new nonsense words that have clear definitions is a great trick for this (borrowed from this Less Wrong post on the same topic).

The one case where the meaning of words does legitimately matter is in law. As a friend of mine so nicely put it, “laws are stored in words”, and interpreting the meaning of those words can impact how the law is applied, who goes to jail, etc. Ultimately though, word definitions are still arbitrary and will even shift over time, meaning that these disagreements are not resolvable without getting really deep into the philosophy of law (the question of literal meaning vs author’s intent, just to start). Fortunately we have a standard method for making these decisions anyway: judges and juries. The result is that the law evolves over time, just like the people that interpret it, and the language that stores it.

The other case where people like to argue that word meanings matter is when certain words are offensive, disrespectful, or even harmful (if that’s a thing you believe words can be). Fortunately this one is a bit more clear-cut: the use of these words is a thing people can disagree about, but it’s not a disagreement of meaning. It is actually a complex disagreement, bundling a value (we should not offend or harm people; possibly instrumental, possibly terminal) with a factual claim (some proportion or group of people are offended or harmed by a given word). The meaning of the word no longer matters at all.

Empty Disagreements

Empty disagreements are a late addition to this essay, and are quite different from the other three types. In a certain sense they are not real disagreements at all, and are merely what happens when disagreement becomes disconnected from any tangible point. But in practice they are fairly common, and my goal with this essay is ultimately a practical one.

Empty disagreement happens when there is no fundamental disagreement of fact, value, or meaning between two parties, but something in the situation causes them to start or continue a conflict regardless. This is usually related either to social status (when someone knows they’re wrong but won’t back down for fear of losing face), or to internal emotional state (when someone is caught up in the heat of the moment). In both cases, the ideas from the earlier sections of this essay are the key to a successful resolution.

Status-based conflicts are frequently best solved by changing venues, usually to one with a smaller audience. In most cases people are happy to resolve the conflict on their own once doing so no longer costs them status. Things become trickier if this isn’t possible, or if the status issue is actually between the two people involved in the conflict. You can try to build enough trust to overcome the status issue, or compensate for it by making an unrelated concession, but ultimately you’ll have to resolve the status issue to resolve the conflict.

Similarly, heat-of-the-moment conflicts are usually best solved by committing more strongly to the four attitudes I described in the first section of this essay. Breathe deep, and aim for success instead of victory. Use humility to build the trust necessary to reach that point, and never lose sight of the fact that both sides are operating in good faith (mistakes in the heat of the moment are still fundamentally different from malice). If necessary, suggest taking a five-minute break to go to the washroom or get a drink of water; time away is often all that is really needed for people to cool down.

Complex Disagreements

As we’ve gone through the four atomic types, we’ve seen a couple of examples of complex disagreements masquerading as simpler forms of disagreement. This is typically how they show up in practice, since if the complexity is obvious the participants will break it apart themselves without thinking about it. The fact that instrumental values show up frequently in this way is also not a coincidence; the combination of a value with a fact to produce an instrumental value is one of the easiest signs of a complex disagreement that needs to be split up.

The other major sign of a complex disagreement is an argument built using the forms of propositional and predicate logic (another great reason to study those topics). Argument forms like modus ponens are how complex arguments get built up, and thus naturally how complex disagreements can be broken down. Of course, people rarely phrase their arguments in pure logical form, so you’ll probably have to do some steelmanning along the way, but if you’re lucky somebody will make their arguments in roughly the right shape.

As mentioned in the section on comprehension, regular practice is the best way to build these skills. Even when an argument is really trivial (for example “A five ounce bird could not carry a one pound coconut!”, while talking about the carrying capacity of swallows), it can be worth breaking down. In its pure logical form, that example becomes something like:

  • P1: If a bird weighs five ounces, it cannot carry a coconut.
  • P2: Swallows weigh five ounces.
  • C: Swallows cannot carry coconuts.

Just like with instrumental values, we now have two different pieces (P1 and P2) where either could be the source of disagreement. By narrowing in on the root cause, or at least taking them one at a time, you’ve made the conflict smaller and more focused. Once you’ve gone down a few layers you’ll usually end up either at a testable disagreement of fact or a shared terminal value, and will be able to resolve it appropriately. The goal with a complex disagreement is always to break it down and deal with the pieces, not to swallow it whole.
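For those who enjoy seeing the machinery, the same argument can be written as a formal modus ponens; here is a sketch in Lean 4, with all of the names invented for illustration:

```lean
-- The swallow argument as modus ponens.
-- Bird, FiveOunce, and CarriesCoconut are invented names for illustration.
example (Bird : Type) (FiveOunce CarriesCoconut : Bird → Prop)
    (P1 : ∀ b : Bird, FiveOunce b → ¬CarriesCoconut b) -- premise 1
    (swallow : Bird) (P2 : FiveOunce swallow)          -- premise 2
    : ¬CarriesCoconut swallow :=                       -- conclusion
  P1 swallow P2
```

Disputing P1 (maybe carrying capacity isn’t purely about weight) and disputing P2 (a real swallow weighs closer to one ounce than five) are entirely different conversations, which is exactly why the breakdown helps.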

Conclusion

Wow. What started as a quick blog post has turned into a six-thousand-word essay, and I still feel like there’s more I could say. Since I like bullet points, I’ll try to summarize all of my recommendations into a nice little list to leave you with.

  • Aim for success, not victory.
    • Be humble.
    • Assume good faith.
    • Pay attention to your emotions (but don’t let them rule you).
  • Use the best available venue or communication medium.
    • In-person trumps everything.
    • Keep the number of participants small.
  • Seek always to understand.
    • Actively look for the best version of everyone’s arguments.
    • Separate the problem and the solution.
    • To truly understand, you must explain how both sides came to be.
  • Use the right tool for the right conflict.
    • Use science and Bayesian statistics to resolve disagreements of fact.
    • Use overriding values to avoid disagreements of terminal value (but watch out for values that are actually instrumental).
    • Use clear definitions to resolve disagreements of meaning.
    • Use trust and communication to resolve empty disagreements.
    • Use logic to break down complex disagreements into simpler parts.

I hope reading this essay proves as helpful to you as writing it was for me. I want to once again thank the person who prompted me to write it, as well as all the other people who read early drafts and provided invaluable feedback. You make me better.

The Efficient Meeting Hypothesis

This is a minor departure from my typical topics, but was something I wrote for work and wanted to share more widely.

Meeting efficiency drops off sharply as the number of people in attendance climbs. A meeting with two or three people is almost always a good use of everyone’s time. If it’s not, the people involved simply stop meeting. Meetings with 4-6 people are worse, but are still generally OK. Meetings with more than 6 people in attendance (counting the organizer) are almost universally awful.

Why are meetings inefficient?

People do not exchange opinions the way machines exchange information. As the number of people grows, so does the number of different opinions, the number of social status games being played (consciously or not), the number of potential side conversations, etc. Achieving consensus gets harder.
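A rough way to quantify that drop-off: the number of pairwise communication channels in a meeting grows quadratically with attendance, so each additional person makes consensus disproportionately harder. A quick illustration:

```python
# Pairwise communication channels in a meeting of n people: n choose 2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 3, 6, 10):
    print(f"{n} people -> {channels(n)} channels")
# 2 -> 1, 3 -> 3, 6 -> 15, 10 -> 45
```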

In my experience, six people is the limit for anything resembling a useful caucus-style meeting. Above six people, it’s less likely that a given topic (at a given level of abstraction) is of sufficient interest to everyone present. Tangential topics drift so far that by the time everyone has had their say it’s hard to get back on track. Side-conversations start to occur regularly. People who naturally think and speak slowly simply won’t get to speak at all since there will always be somebody else who speaks first.

Why don’t people exit useless meetings?

People mainly stay in useless meetings for two reasons:

  • a variation of the bystander effect, where everyone assumes that somebody else must be getting value from the meeting, and nobody wants to be the first to break ranks
  • a fear of missing out: the topics discussed at useless meetings are often so variable (due to tangents and side conversations) that it’s hard to know whether this will be the moment when something relevant finally comes up

How to run an efficient meeting

Keep it as small as possible, and never more than six people.

How to run an efficient meeting with more than 6 people

You can’t. But if you really think you *have* to…

Give your meeting a rigid structure. Note that this does not just mean “have an agenda document that people can add to ahead of time”. At a minimum you need:

  • A moderator whose only job in the meeting is to moderate (either the meeting organizer or somebody explicitly appointed by them).
  • A talking stick or some digital equivalent. Basically: an explicit process for deciding who gets to speak, and when. A good moderator can manage this entirely verbally for medium-sized groups, but it’s hard; something explicit is much better (see the sketch after this list).
  • A formal meeting structure and topic, set in advance.
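As a sketch of what a “digital equivalent” might look like (purely illustrative; nothing here prescribes a particular tool), a talking stick is essentially a first-come, first-served queue that only the moderator advances:

```python
# A minimal "digital talking stick": a FIFO speaker queue that only the
# moderator advances. Names and structure are illustrative, not prescriptive.
from collections import deque


class TalkingStick:
    def __init__(self) -> None:
        self._queue: deque[str] = deque()
        self.holder: str | None = None

    def raise_hand(self, person: str) -> None:
        """A participant asks to speak; duplicate requests are ignored."""
        if person != self.holder and person not in self._queue:
            self._queue.append(person)

    def pass_stick(self) -> str | None:
        """Hand the stick to the next person in line (None if nobody waits)."""
        self.holder = self._queue.popleft() if self._queue else None
        return self.holder
```

The useful property is that asking for the stick and receiving it are separate operations, so nobody can claim the floor just by speaking first.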

Again, a structure does not just mean “an agenda” or “a slide deck”, but a set of shared conversational rules. Here is a (definitely not exhaustive) list of common or useful meeting structures:

  • Stand-Up: each person in turn gets a fixed amount of time (enforced by the moderator) to present to the group.
  • Presentation: one person presents for the majority of the meeting, and then (optionally) holds a question/answer session afterwards.
  • Ask-Me-Anything: the moderator works through a list asking pre-curated questions to specific people.
  • Parliamentary Procedure: this would typically be Robert’s Rules of Order.

Some common pitfalls:

  • Never try to make consensus-based decisions in a meeting with more than 6 people. If a decision has to be made then you must either:
    • Have a smaller meeting. OR
    • Appoint one person the decision-maker in advance, in which case the meeting is actually about presenting and arguing to that person, not about making the decision itself. OR
    • Use a majority-rules process (typically a vote), in combination with a more parliamentary structure (Robert’s Rules of Order or others).
  • The moderator absolutely cannot talk about anything other than the meta-level (moderating) unless they also hold the talking stick. Ideally the moderator has no stake in the actual topic of the meeting to begin with.
  • The moderator cannot be “nice”. Shut down tangents and off-topic discussion aggressively.
  • Avoid automatically-recurring large meetings like the plague. They shouldn’t be needed frequently enough to be worth auto-booking in the first place, and booking them manually makes it much easier to stop holding them once they’re no longer useful.