The FRACTAL Model

I was thinking about relationships and playing around with silly acronyms and came up with the following. It is by no means true or useful, but I thought I’d share. One could say that a good relationship is fractal, meaning that it is built on:

Fun
Respect
Alignment
Care
Trust
Arousal
Limerence

Don’t read anything into the order; “fractal” was just a much better word than… cratfal. Or catrafl. A cat-raffle, now there’s an idea.

Pop quiz! What would you say the FRACTAL model misses?

It’s Not About The Nail

[This is hardly original; I’m documenting for my own sake since it took so long for me to understand.]

There’s an old saw that when a woman complains she wants sympathy, but when a man hears a complaint he tries to solve the problem. This viral YouTube video captures it perfectly.

Of course it’s not strictly limited by gender; that’s just the stereotype. And the underlying psychological details are fairly meaty; this article captures a lot of it pretty well for me.

I’ve known about all this for a long time now, and it’s always made sense at a sort of descriptive level of how people behave and what people need. But despite reading that article (and a good reddit thread) I’ve never really understood the “why”. What is the actual value of listening and “emotional support” in these kinds of scenarios? Why do people need that? Well, I finally had it happen to me recently when I was aware enough to notice the meta, and thus write this post.

I now find it easiest to think about in terms of the second-order psychological effects of bad things happening. When a bad thing happens to you, that has direct, obvious bad effects on you. But it also has secondary effects on your model of the world. Your mind (consciously or subconsciously) now has new information that the world is slightly less safe or slightly less predictable than it thought before. And of course the direct, obvious bad effects make you vulnerable (not just “feel” vulnerable, although normally that too – they make you actually vulnerable because you’ve just taken damage, so further damage becomes increasingly dangerous).

Obviously sometimes, and depending on the scenario, the first-order effect dominates and you really should just solve that problem directly. This is what makes the video so absurd – having a nail in your head is hard to beat in terms of the first-order effects dominating. But in real versions of these cases, sometimes the second-order effects are more significant, or more urgent, or at the least more easily addressable. In these cases it’s natural to want to address the second-order effects first. And the best way to do that is talking about it.

Talking about a problem to somebody you have a close relationship with addresses these second-order effects in a pretty concrete way: it reaffirms the reliability of your relationship in a way that makes the world feel more safe and predictable, and it informs an ally of your damage so that they can protect you while you’re vulnerable and healing. But of course you don’t accomplish this by talking directly about the second-order problem. The conversation is still, at the object level, about the first-order problem, which is why it’s so easy to misinterpret. To make it worse, the second-order problems are largely internal, and thus invisible, so it’s easy for whoever you’re talking to to assume they’re “not that bad” and that the first-order problem dominates, even when it doesn’t.

Working through this has given me some ideas to try the next time this happens to me. At a guess, the best way to handle it is to open the conversation with something like “I need you to make me feel safe” before you get into the actual first-order problem, but I guess we’ll see.

Fast Takeoff in Biological Intelligence

[Speculative and not my area of expertise; probably wrong.]

One of the possible risks of artificial intelligence is the idea of “fast” (exponential) takeoff – that once an AI becomes even just a tiny bit smarter than humans, it will be able to recursively self-improve along an exponential curve and we’ll never be able to catch up with it, making it effectively a god in comparison to us poor humans. While human intelligence is improving over time (via natural selection and perhaps whatever causes the Flynn effect) it does so much, much more slowly and in a way that doesn’t seem to be accelerating exponentially.

But maybe gene editing changes that.

Gene editing seems about as close as a biological organism can get to recursively editing its own source code, and with recent advances (CRISPR, etc) we are plausibly much closer to functional genetic manipulation than we are to human-level AI. If this is true, humans could reach fast takeoff in our own biological intelligence well before we build an AI capable of the same thing. In this world we’re probably safe from existential AI risk; if we’re both on the same curve, it only matters who gets started first.

There are a bunch of obvious objections and weaknesses in this analogy which are worth talking through at a high level:

  • The difference between hardware and software seems relevant here. Gene editing seems more like a hardware-level capability, whereas most arguments about fast takeoff in AI talk about recursive improvement of software. It seems easy for a strong AI to recompile itself with a better algorithm, whereas it seems plausibly more difficult for it to design and then manufacture better hardware.

    This seems like a reasonable objection, though I do have two counterpoints. The first is that, in humans at least, intelligence seems pretty closely linked to hardware. Software also seems important, but hardware puts strong upper bounds on what is possible. The second counterpoint is that our inability to effectively edit our software source code is, in some sense, a hardware problem; if we could genetically build a better human, capable of more direct meta-cognitive editing… I don’t even know what that would look like.
  • Another consideration is generation length. Even talking about hardware replacement, a recursively improving AI should be able to build a new generation on the order of weeks or months. Humans take a minimum of twelve years, and in practice quite a bit more than that most of the time. Even if we end up on the curve first, the difference in constant factors may dominate (see the toy sketch after this list).
  • We don’t really understand how our own brains work. Even if we’re quite close to functional genetic editing, maybe we’re still quite far from being able to use it effectively for intelligence optimization. The AI could still effectively get there first.
  • Moloch. In a world where we do successfully reach an exponential take-off curve in our own intelligence long before AI does, Moloch could devour us all. There’s no guarantee that the editing required to make us super-intelligent wouldn’t also change or destroy our values in some fashion. We could end up with exactly the same paperclip-maximizing disaster, just executed by a biological agent with human lineage instead of by a silicon-based computer.
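
To make that constant-factor point concrete, here is a minimal toy model – my own illustration, with numbers invented purely for the sake of the example, not claims about real timelines. Two agents improve exponentially; one starts far ahead but doubles much more slowly.

    # Toy model of the constant-factor point: even a large head start on an
    # exponential curve is quickly swamped by a shorter doubling time.
    # All numbers here are made up for illustration only.

    def capability(head_start_doublings, doubling_time_years, years):
        # Capability after `years`, starting `head_start_doublings` doublings ahead.
        return 2 ** (head_start_doublings + years / doubling_time_years)

    for year in (0, 10, 25, 50):
        bio = capability(head_start_doublings=10, doubling_time_years=25, years=year)
        ai = capability(head_start_doublings=0, doubling_time_years=1, years=year)
        print(f"year {year:3d}: biological={bio:.3g}  artificial={ai:.3g}")

In this toy setup the faster curve overtakes the ten-doubling head start in little more than a decade, which is the worry in miniature.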

Given all these objections I think it’s fairly unlikely that we reach a useful biological intelligence take-off anytime soon. However, if we actually are close, then the most effective spending on AI safety may not be on AI research at all – it could be on genetics and neuroscience.

Every system seems random from the inside

I’ve been working on a post on predictions which has rather gotten away from me in scope. This is the first of a couple of building-block posts which I expect to spin out so I have things to reference when I finally make it to the main point. This post fits neatly into my old (2014!) sequence on systems theory and should be considered a belated addition to that.

Systems can be deterministic or random. A system that is random is, of course… random. I’m glad the difficult half of this essay is out of the way! Kidding aside, the interesting part is that from the inside, a system that is deterministic also appears random. This claim is technically a bit stronger than I can really argue, but it guides the intuition better than the more formal version.

Because no proper subsystem can perfectly simulate its parent, every inside-the-system simulation must ultimately exclude information, either via the use of lossy abstractions or by choosing to simulate only a proper, open subsystem of the parent. In either case, the excluded information effectively appears in the simulation as randomness: fundamentally unpredictable additional input.
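
As a minimal illustration of that claim – a toy example of my own, with an arbitrary update rule, not anything from the systems-theory sequence – consider a system whose state evolves by a completely deterministic rule but is observed through a lossy abstraction that discards most of the state. From the observer’s point of view, the discarded information shows up as noise.

    # Toy example: a fully deterministic system viewed through a lossy
    # abstraction. The update rule is a simple linear congruential map;
    # the observer only ever sees the top three bits of the state.

    def step(state):
        # Deterministic update: the next state is a fixed function of the current one.
        return (6364136223846793005 * state + 1442695040888963407) % 2**64

    def observe(state):
        # Lossy abstraction: discard everything but the top 3 bits.
        return state >> 61

    state = 42  # a fully determined initial condition; nothing random anywhere
    for _ in range(16):
        state = step(state)
        print(observe(state), end=" ")
    # The printed sequence looks like noise to the observer, because the
    # information needed to predict it was excluded by the abstraction.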

This has some interesting implications if reality is a system and we’re inside it, as I believe to be the case. First it means that we cannot ever conclusively prove whether the universe is deterministic (a la Laplace’s Demon) or random. We can still make some strong probabilistic arguments, but a full proof becomes impossible.

Second, it means that we can safely assume the existence of “atomic randomness” in all of our models. If the system is random, then atomic randomness is in some sense “real” and we’re done. But if the system is deterministic, then we can pretend atomic randomness is real, because the information necessary to dispel that apparent randomness is provably unavailable to us. In some sense the distinction doesn’t even matter anymore; whether the information is provably unavailable or just doesn’t exist, our models look the same.

Narrative Direction and Rebellion

This is the fourth post in what has been a kind of accidental series on life narratives. Previously: Narrative Dissonance, Where the Narrative Stops, and Narrative Distress and Reinvention.

In Where the Narrative Stops I briefly mentioned the hippie revolution as a rebellion against the standard narrative of the time. This idea combined in my brain a while ago with a few other ideas that had been floating around, and now I’m finally getting around to writing about it. So let’s talk about narrative rebellions.

I’ve previously defined narratives as roughly “the stories we tell about ourselves and others that help us make sense of the world”. As explored previously in the series, these stories provide us with two things critical for our lives and happiness: a sense of purposeful direction, and a set of default templates for making decisions. So what happens when an individual or a demographic group chooses to rebel against the narrative of the day? It depends.

Rebellions are naturally framed in the negative: you rebel against something. With a little work you can manage to frame them positively, as in “fighting for a cause”, but the negative framing comes more naturally because it’s more reflective of reality. While some rebellions are kicked off by a positive vision, the vast majority are reactionary; the current system doesn’t work, so let’s destroy it. Even when there is a nominally positive vision (as in the Russian Revolution, which could be framed as a “positive” rebellion towards communism) there is usually also a negative aspect intermingled (the existing Russian army was already primed to mutiny against Russia’s participation in the First World War) and it can be difficult to disentangle the different causes.

In this way, narrative and socio-cultural rebellions are not that different from militaristic and geo-political ones. You can sometimes attach a positive framing, but the negative framing is both the default and usually the dominant one.

We’ll come back to that. For the moment let’s take a quick side-trip to Stephen Covey’s Principle-Centered Leadership. One of the metaphors he uses in that book (which I didn’t actually include in my post about it, unfortunately) is the idea of a compass and a map. Maps can be a great tool to help you navigate, but Covey really hammers on the fact that it’s better to have a compass. Maps can be badly misleading if the mapmaker left off a particular piece of information you’re interested in; they can also simply go stale as the landscape shifts over time. A compass, on the other hand (meaning your principles, in Covey’s metaphor), always points due North, and is a far more reliable navigational tool.

This navigational metaphor is really useful when extended to talking about narratives and rebellions. One of the most important things a narrative gives us is that “sense of purposeful direction” which carries us through life. Without it, as in Where the Narrative Stops, narratives tend to peter out after a while or even stop abruptly on a final event (the way a “student” narrative may stop on graduation if you don’t know what you actually want to do with the degree).

The problem is that rebelling against a narrative doesn’t automatically generate a fully-defined counter-narrative (roughly analogous to how reversed stupidity isn’t intelligence). If you don’t like the direction things are going, you can turn around and walk the other way. But there’s no guarantee the other way actually goes anywhere, and in fact it usually doesn’t; a random walk through idea-space is very unlikely to generate a coherent story. Even when you have a specific counter-narrative in mind, there are good odds it still doesn’t actually work. See again the Russian Revolution for an example; they ended up with a strong positive vision for communism, but that vision ultimately collapsed under the weight of economic and political realities.

This lack of destination seems to me the likely candidate for why the hippie rebellion petered out. They had a strong disagreement with the status quo, and chose to walk in the direction of “free love” and similar principles instead. But this new direction mostly failed to translate into a coherent positive vision, and even when it did, that vision didn’t work. Most stories I’ve been able to find of concrete hippie-narrative experiments end up sounding a lot like the Russian Revolution; they ultimately collapse under the weight of reality.

Given the high cost of a rebellion, be it individual or societal, militaristic or narrative, it seems prudent to set yourself up for success as much as possible beforehand. In practice, this seems to mean having a concrete positive vision with strong evidence that it will actually work in reality. Otherwise tearing down the system will just leave you with rubble.