Abstractions on Inconsistent Data

[I’m not sure this makes any sense – it is mostly babble, as an attempt to express something that doesn’t want to be expressed. The ideas here may themselves be an abstraction on inconsistent data. Posting anyway because that’s what this blog is for.]

i. Abterpretations

Abstractions are (or at least are very closely related to) patterns, compression, and Shannon entropy. We take something that isn’t entirely random, and we use that predictability (lack of randomness) to find a smaller representation which we can reason about and use to predict. Abstractions frequently lose information – the map does not capture every detail of the territory – but are still generally useful. There is a sense in which some things cannot be abstracted without loss – purely random data cannot be compressed by definition. There is another sense in which everything can be abstracted without loss, since even purely random data can be represented as the bit-string of itself. Pure randomness is in this sense somehow analogous to primeness – there is only one satisfactory function, and it is the identity.
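To make the compression point concrete, here is a minimal Python sketch (my illustration, not part of the original argument): zlib happily shrinks a highly patterned byte string, but typically cannot shrink random bytes at all.

    import os
    import zlib

    structured = b"abcdefgh" * 1000   # 8000 bytes of pure repetition
    random_data = os.urandom(8000)    # 8000 bytes with essentially no pattern

    # The repetition is exactly the kind of predictability an abstraction exploits:
    # zlib compresses it down to a tiny fraction of its size. The random bytes,
    # by contrast, typically come out slightly *larger* than they went in.
    print(len(zlib.compress(structured)))    # roughly a few dozen bytes
    print(len(zlib.compress(random_data)))   # roughly 8000 bytes, or a bit more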

A separate idea, heading in the same direction: Data cannot, in itself, be inconsistent – it can only be inconsistent with (or within) a given interpretation. Data alone is a string of bits with no interpretation whatsoever. The bitstring 01000001 is commonly interpreted both as the number 65 and as the character ‘A’, but that interpretation is not inherent to the bits; I could just as easily interpret it as the number 190, or as anything else. Sense data that I interpret as “my total life so far, and then an apple falling upwards” is inconsistent with the laws of gravity. But the apple falling up is not inconsistent with my total life so far – it’s only inconsistent with gravity, which is my interpretation of that data.
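As a toy illustration (mine, not the post’s) of how interpretation lives outside the data, here is the same byte run through three different mappings – including one arbitrary mapping that yields the 190 mentioned above, by flipping every bit first:

    b = 0b01000001      # the raw data: eight bits, no inherent meaning

    print(b)            # 65, if read as an unsigned integer
    print(chr(b))       # 'A', if read as an ASCII code point
    print(b ^ 0xFF)     # 190, under a "flip every bit first" interpretation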

There is a sense in which some data cannot be consistently interpreted – purely random data cannot be consistently mapped onto anything useful. There is another sense in which everything can be consistently interpreted, since even purely random data can be consistently mapped onto itself: the territory is the territory. Primeness as an analogue, again.

Abstraction and interpretation are both functions, mapping data onto other data. There is a sense in which they are the same function. There is another sense in which they are inverses. Both senses are true.

ii. Errplanations

Assuming no errors, a single piece of inconsistent data is enough to invalidate an entire interpretation. In practice, errors abound. We don’t throw out all of physics every time a grad student does too much LSD.

Sometimes locating the error is easy. The apple falling up is a hallucination, because you did LSD.

Sometimes locating the error is harder. I feel repulsion at the naive utilitarian idea of killing one healthy patient to save five. Is that an error in my feelings, and I should bite the bullet? Is that a true inconsistency, and I should throw out utilitarianism? Or is that an error in the framing of the question, and No True Utilitarian endorses that action?

Locating the error is meaningless without explaining the error. You hallucinated the apple because LSD does things to your brain. Your model of the world now includes the error. The error is predictable.

Locating the error without explaining it is attributing the error to phlogiston, or epicycles. There may be an error in my feelings about the transplant case, but it is not yet predictable. I cannot distinguish between a missing errplanation and a true inconsistency.

iii. Intuitions

If ethical frameworks are abterpretations of our moral intuitions, then there is a sense in which no ethical framework can be generally true – our moral intuitions do not always satisfy the axioms of preference, and cannot be consistently interpreted.

There is another sense in which there is a generally true ethical framework for any possible set of moral intuitions: there is always one satisfactory function, and it is the identity.

Primeness as an analogue.

The Stopped Clock Problem

[Unusually for me, I actually wrote this and published it on Less Wrong first. I’ve never reverse-cross-posted something to my blog before.]

When a low-probability, high-impact event occurs, and the world “got it wrong”, it is tempting to look for the people who did successfully predict it in advance in order to discover their secret, or at least see what else they’ve predicted. Unfortunately, as Wei Dai discovered recently, this tends to backfire.

It may feel a bit counterintuitive, but this is actually fairly predictable: the math backs it up on some reasonable assumptions. First, let’s assume that the topic required unusual levels of clarity of thought not to be sucked into the prevailing (wrong) consensus: say a mere 0.001% of people accomplished this. These people are worth finding, and listening to.

But we must also note that a good chunk of the population are just pessimists. Let’s say, very conservatively, that 0.01% of people predicted the same disaster just because they always predict the most obvious possible disaster. Suddenly the odds are pretty good that anybody you find who successfully predicted the disaster is a crank. The mere fact that they correctly predicted the disaster becomes evidence only of extreme reasoning, but is insufficient to tell whether that reasoning was extremely good, or extremely bad. And on balance, most of the time, it’s extremely bad.
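Here’s the back-of-the-envelope version in Python, using the made-up base rates above (0.001% clear thinkers, 0.01% reflexive pessimists); the exact numbers are placeholders, but the conclusion depends only on the ratio between the two groups:

    # Made-up base rates from the text above – placeholders, not real statistics.
    p_clear_thinker = 0.00001   # 0.001% reasoned their way past the wrong consensus
    p_pessimist     = 0.0001    # 0.01% predict the obvious disaster every single time

    # Simplifying assumption: both groups predicted this disaster, and essentially
    # nobody else did. Then, given that someone predicted it, the probability that
    # they are a clear thinker rather than a stopped clock is:
    p_good_given_predicted = p_clear_thinker / (p_clear_thinker + p_pessimist)
    print(f"{p_good_given_predicted:.0%}")   # ~9% – most successful predictors are cranks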

Unfortunately, the problem here is not just that the good predictors are buried in a mountain of random others; it’s that the good predictors are buried in a mountain of extremely poor predictors. The result is that the mean prediction of that group is going to be noticeably worse than the prevailing consensus on most questions, not better.


Obviously the 0.001% and 0.01% numbers above are made up. I spent some time looking for real statistics and couldn’t find anything useful; this article claims roughly 1% of Americans are “preppers”, which might be a good indication, except that it provides no source and could equally well just be the lizardman constant. Regardless, my point relies mainly on the second group being an order of magnitude or more larger than the first, which seems (to me) fairly intuitively likely to be true. If anybody has real statistics to prove or disprove this, they would be much appreciated.

Extracting Value from Inadequate Equilibria

[Much expanded from my comment here. Pure speculation, but I’m confident that the bones of this make sense, even if it ends up being unrealistic in practice.]

A lot of problems are coordination problems. An easy example that comes to mind is scientific publishing: everybody knows that some journal publishers are charging ridiculous prices relative to what they actually provide, but those journals have momentum. It’s too costly for any individual scientist or university to buck the trend; what we need is coordinated action.

Eliezer Yudkowsky talks about these problems in his sequence Inadequate Equilibria, and proposes off-hand the idea of a Kickstarter for Coordinated Action. While Kickstarter is a great metaphor for understanding the basic principle of “timed-collective-action-threshold-conditional-commitment”, I think it’s ultimately led the discussion of this idea down a less fruitful path because Kickstarter is focused on individuals, and most high-value coordination problems happen at the level of institutions.

Consider journal publishing again. Certainly a sufficient mass of individual scientists could coordinate to switch publishers all at once. But no matter what individual scientists agree to, this is not a complete or perfect solution:

  • Almost no individual scientists are paying directly for these subscriptions – their universities are, often via long-term bulk contracts.
  • University hiring decisions involve people in the HR and finance departments of a university who have no interest in a coordinated “stop publishing in predatory journals” action. They only care about the prestige and credentials of the people they hire. Publications in those journals would still be a strong signal for them.
  • Tenure decisions involve more peer scientists than hiring, but would suffer at least partly from the same issue as hiring.

What’s needed for an action like this isn’t a Kickstarter-style website for scientists to sign up on – it’s coordinated action between universities at an institutional level. Many of the other examples discussed in Inadequate Equilibria fit the same pattern: the problems with healthcare in the U.S. aren’t caused by insufficient coordination between individual doctors, they’re caused by institutional coordination problems between hospitals, the FDA, and government.

(Speaking of government, there’s a whole host of other coordination problems [climate change comes to mind] that would be eminently more solvable if we had a good mechanism for coordinating the various institutions of government between countries. The United Nations is better than nothing, but doesn’t have enough trust or verification/enforcement power to be truly effective.)


The problem with the Kickstarter model is that institutions qua institutions are never going to sign up for an impersonal website and pledge $25 over a 60-day campaign to switch publishing models. The time scale is wrong, the monetary scale is wrong, the commitment level is wrong, the interface is wrong… that’s just not how institutions do business. Universities and hospitals prefer to do business via contracts, and lawyers, and board meetings. Luckily, there’s still value to be extracted here, which means that it should be possible to make a startup out of this anyway; it just won’t look anything like Kickstarter.

Our hypothetical business would employ a small cadre of lawyers, accountants, and domain experts. It would identify opportunities (e.g. journal publishing) and proactively approach the relevant institutions through the proper channels. These institutions would sign crafted, non-trivial contracts bound to the success of the endeavour. The business would provide fulfillment verification and all of the other necessary components, and would act as a trusted third-party. The existence of proper contracts custom-written by dedicated lawyers would let the existing legal system act as an enforcement mechanism. Since the successful execution of these contracts would provide each institution with significant long-term value, the business can fund itself over the long haul by taking a percentage of these savings off the top, just like Kickstarter.

This idea has a lot of obvious problems as well (the required upfront investment, the business implications of having its income depend on one or two major projects each year, the incentives it would have to manufacture problems, etc) but with a proper long-term-focused investor on board it seems like this could turn into something quite useful to humanity as a whole. Implementing it is well outside of my current skillset, but I would love to see what some well-funded entrepreneur with the right legal chops could make of something like this.

Thoughts?

Going Full Walden

[A couple of years ago I was feeling pretty misanthropic and sketched out some ideas for a post which has sat in my drafts folder ever since. It’s suddenly kinda relevant because of the pandemic, so I found the motivation to dust it off and finish it. Enjoy?

No, of course I don’t actually believe any of this. Sheesh.

I feel like this maybe needs a further disclaimer: this is an idea which should not be taken seriously. Treat it as a writing exercise instead. Caveat lector.]

Other people suck. A lot.

Not you of course. You, dear reader, are the exception that proves the rule. But you know who I’m talking about – all those other people you know who are lazy, or inconsiderate, or rude. The ones who promise they’ll do something and then… don’t. The so-called “friends” who are anything but. The people who lie, or cheat, or steal. The “everybody else” in the world you just can’t trust.

It’s enough to make you want to escape civilization entirely, go off on your own in the woods. To be like Thoreau, and find your own personal Walden. After all, we don’t actually need other people, do we? Sure, our lives right now depend on supply chains and infrastructure and all that jazz, but robots can do most of that now, and yelling at the delivery guy to leave it on the porch doesn’t really count as human interaction. Or something like that.

But enough with the moping about – let’s take an actual look at what it would be like to… oh wait. That’s what we’re already all doing right now, more or less. Social distancing, social isolation, po-tay-to, po-tah-to. Hmm…

Next question then: what are the pros and cons of human interaction in the modern world? Obviously, historically, we really did need each other in a concrete way. Tribes provided food, and shelter, and protection. Going it alone had really bad odds, and it wasn’t typically possible to convince a tribe to support you without you supporting them back in some way. Whether you wanted to or not, you were pretty much forced into taking the bad of the tribe along with the good.

Today, however, we’ve abstracted a lot of that messy need away, behind money, and economics, and the internet. I can make money on Mechanical Turk without ever interacting with a person, and I can spend that money on food (UberEats), shelter (AirBnB), and protection (taxes) the same way. We can truly be homo solitarius. So what would it take to convince you that really, the benefits of other people don’t outweigh the costs? That, from a utilitarian perspective, we should all go Full Walden?

Well to start, other people suck. A lot.

I feel like I’m repeating myself, so let’s skip forward. Even when other people don’t actively suck, they’re still really messy. Human relationships are constantly shifting arenas of politics, dominance hierarchies (insert obligatory ironic lobster metaphor), and game theory, and trying to stay on top of all of that can be exhausting. This may seem counter-intuitive, but if you’re working on a project that will really help other people, then imagine how much more time and energy you’ll have for that project when you don’t have other people in your life anymore!

Now, maybe you’re willing to put up with that mess because you think that people, and human relationships, have some intrinsic value. Fine. But people are weird about that. In surveys, Americans consistently rate family (which typically consists of the other people we’re closest to) as the most important source of meaning in their lives. And yet revealed preferences tell a different story. Americans are working more than ever. Every day they spend eight hours working, three hours on social media, and a measly 37 minutes with their family. Maybe we say they’re valuable, but the way we spend our time doesn’t back that up.

If other people really aren’t that valuable to us, as our revealed preferences would attest, and their suckiness costs us a non-trivial amount of energy and creates risk, then the default position should be that other people are threats. They’re unpredictable, might seriously hurt us, and probably won’t help us much if at all… sounds like the description of a rabid dog, not our ideal of a human being. Going Full Walden starts to seem like a good deal. In this world, we should assume until proven otherwise that interacting with another person will be a net negative. People are dangerous and not useful, and so avoiding them is just a practical way to optimize our time and our lives.

The counter-argument, of course, is that we’re not quite that advanced yet. Sure you can kinda make it work with Mechanical Turk and UberEats and all the rest, but as soon as you have to call a plumber or a doctor, you’re back to dealing with other people. You can get remarkably far with no human contact, but you still can’t get all the way, and if you try then you’ll be woefully underprepared when you do have to enter the real world again. Even Thoreau didn’t spend his two years at Walden completely alone.

And besides, even if it is temporarily optimal to go full Walden, it’s not clear what the psychological implications would be. For better or for worse we seem to have evolved to live in social communities, and total isolation seems to drive people crazy. Weird.

Anywho, this is kinda rambly, seems like a good place to stop.

What is a “Good” Prediction?

Zvi’s post on Evaluating Predictions in Hindsight is a great walk through some practical, concrete methods of evaluating predictions. This post aims to be a somewhat more theoretical/philosophical take on the related idea of what makes a prediction “good”.

Intuitively, when we ask whether some past prediction was “good” or not, we tend to look at what actually happened. If I predicted that the sun would rise with very high probability, and the sun actually rose, that was a good prediction, right? There is an instrumental sense in which this is true, but also an epistemic sense in which it is not. If the sun was extremely unlikely to rise, then in a sense my prediction was wrong – I just got lucky instead. We can formalize this distinction as follows:

  • Instrumentally, a prediction was good if believing it guided us to better behaviour. Usually this means it assigned a majority probability to the thing that actually happened regardless of how likely it really was.
  • Epistemically, a prediction was good only if it matched the underlying true probability of the event in question.

But what do we mean by “true probability”? If you believe the universe has fundamental randomness in it then this idea of “true probability” is probably pretty intuitive. There is some probability of an event happening baked into the underlying reality, and like any knowledge, our prediction is good if it matches that underlying reality. If this feels weird because you have a more deterministic bent, then I would remind you that every system seems random from the inside.

For a more concrete example, consider betting on a sports match between two teams. From a theoretical, instrumental perspective there is one optimal bet: 100% on the team that actually wins. But in reality, it is impossible to perfectly predict who will win; either that information literally doesn’t exist, or it exists in a way which we cannot access. So we have to treat reality itself as having a spread: there is some metaphysically real probability that team A will win, and some metaphysically real probability that team B will win. The bet with the best expected outcome is the one that matches those real probabilities.
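One way to make this concrete (my framing, with an assumed 70% win probability for team A) is via a proper scoring rule like the Brier score: whatever the true probability p is, the prediction that minimizes your expected score is exactly q = p, not the “instrumentally perfect” q = 1.

    true_p = 0.7   # assumed "metaphysically real" probability that team A wins

    def expected_brier(q, p=true_p):
        # Expected Brier score of predicting q when the event truly occurs with probability p.
        return p * (1 - q) ** 2 + (1 - p) * q ** 2

    for q in (0.5, 0.7, 0.9, 1.0):
        print(q, round(expected_brier(q), 3))

    # q = 0.7 gives the lowest expected score (0.21); betting everything on the team
    # that actually wins (q = 1.0) does worse in expectation (0.3).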

While this definition of an “epistemically good prediction” is the most theoretically pure, and is a good ideal to strive for, it is usually impractical for actually evaluating predictions (thus Zvi’s post). Even after the fact, we often don’t have a good idea what the underlying “true probability” was. This is important to note, because it’s an easy mistake to make: what actually happened does not tell us the true probability. It’s useful information in that direction, but cannot be conclusive and often isn’t even that significant. It only feels conclusive sometimes because we tend to default to thinking about the world deterministically.


Eliezer has an essay arguing that Probability is in the Mind. While in a literal sense I am contradicting that thesis, I don’t consider my argument here to be incompatible with what he’s written. Probability is in the mind, and that’s what is usually more useful to us. But unless you consider the world to be fully deterministic, probability must also be in the world – it’s just important to distinguish which one you’re talking about.

The FRACTAL Model

I was thinking about relationships and playing around with silly acronyms and came up with the following. It is by no means true, or useful, but I thought I’d share. One could say that a good relationship is fractal, meaning that it is built on:

Fun
Respect
Alignment
Care
Trust
Arousal
Limerence

Don’t read anything into the order, fractal was just a much better word than… cratfal. Or catrafl. A cat-raffle, now there’s an idea.

Pop quiz! What would you say the fractal model misses?

It’s Not About The Nail

[This is hardly original; I’m documenting for my own sake since it took so long for me to understand.]

There’s an old saw that when a woman complains she wants sympathy, but when a man hears a complaint, he tries to solve the problem. This viral YouTube video captures it perfectly:

Of course it’s not strictly limited by gender, that’s just the stereotype. And the underlying psychological details are fairly meaty; this article captures a lot of it pretty well for me.

I’ve known about all this for a long time now, and it’s always made sense at a sort of descriptive level of how people behave and what people need. But despite reading that article (and a good reddit thread) I’ve never really understood the “why”. What is the actual value of listening and “emotional support” in these kinds of scenarios? Why do people need that? Well, I finally had it happen to me recently when I was aware enough to notice the meta, and thus write this post.

I now find it easiest to think about in terms of the second-order psychological effects of bad things happening. When a bad thing happens to you, that has direct, obvious bad effects on you. But it also has secondary effects on your model of the world. Your mind (consciously or subconsciously) now has new information that the world is slightly less safe or slightly less predictable than it thought before. And of course the direct, obvious bad effects make you vulnerable (not just “feel” vulnerable, although normally that too – they make you actually vulnerable because you’ve just taken damage, so further damage becomes increasingly dangerous).

Obviously sometimes, and depending on the scenario, the first-order effect dominates and you really should just solve that problem directly. This is what makes the video so absurd – having a nail in your head is hard to beat in terms of the first-order effects dominating. But in real versions of these cases, sometimes the second-order effects are more significant, or more urgent, or at the least more easily addressable. In these cases it’s natural to want to address the second-order effects first. And the best way to do that is talking about it.

Talking about a problem to somebody you have a close relationship with addresses these second-order effects in a pretty concrete way: it reaffirms the reliability of your relationship in a way that makes the world feel more safe and predictable, and it informs an ally of your damage so that they can protect you while you’re vulnerable and healing. But of course you don’t accomplish this by talking directly about the second-order problem. The conversation is still, at the object level, about the first-order problem, which is why it’s so easy to misinterpret. To make it worse, the second-order problems are largely internal, and thus invisible, so it’s easy for whoever you’re talking to to assume they’re “not that bad” and that the first-order problem dominates, even when it doesn’t.

Working through this has given me some ideas to try the next time this happens to me. At a guess, the best way to handle it is to open the conversation with something like “I need you to make me feel safe” before you get into the actual first-order problem, but I guess we’ll see.

Fast Takeoff in Biological Intelligence

[Speculative and not my area of expertise; probably wrong.]

One of the possible risks of artificial intelligence is the idea of “fast” (exponential) takeoff – that once an AI becomes even just a tiny bit smarter than humans, it will be able to recursively self-improve along an exponential curve and we’ll never be able to catch up with it, making it effectively a god in comparison to us poor humans. While human intelligence is improving over time (via natural selection and perhaps whatever causes the Flynn effect) it does so much, much more slowly and in a way that doesn’t seem to be accelerating exponentially.

But maybe gene editing changes that.

Gene editing seems about as close as a biological organism can get to recursively editing its own source code, and with recent advances (CRISPR, etc) we are plausibly much closer to functional genetic manipulation than we are to human-level AI. If this is true, humans could reach fast takeoff in our own biological intelligence well before we build an AI capable of the same thing. In this world we’re probably safe from existential AI risk; if we’re both on the same curve, the only thing that matters is who gets started first.

There are a bunch of obvious objections and weaknesses in this analogy which are worth talking through at a high level:

  • The difference between hardware and software seems relevant here. Gene editing seems more like a hardware-level capability, whereas most arguments about fast takeoff in AI talk about recursive improvement of software. It seems easy for a strong AI to recompile itself with a better algorithm, whereas it seems plausibly more difficult for it to design and then manufacture better hardware.

    This seems like a reasonable objection, though I do have two counterpoints. The first is that, in humans at least, intelligence seems pretty closely linked to hardware. Software also seems important, but hardware puts strong upper bounds on what is possible. The second counterpoint is that our inability to effectively edit our software source code is, in some sense, a hardware problem; if we could genetically build a better human, capable of more direct meta-cognitive editing… I don’t even know what that would look like.
  • Another consideration is generation length. Even talking about hardware replacement, a recursively improving AI should be able to build a new generation on the order of weeks or months. Humans take a minimum of twelve years, and in practice quite a bit more than that most of the time. Even if we end up on the curve first, the different constant factor may dominate.
  • We don’t really understand how our own brains work. Even if we’re quite close to functional genetic editing, maybe we’re still quite far from being able to use it effectively for intelligence optimization. The AI could still effectively get there first.
  • Moloch. In a world where we do successfully reach an exponential take-off curve in our own intelligence long before AI does, Moloch could devour us all. There’s no guarantee that the editing required to make us super-intelligent wouldn’t also change or destroy our values in some fashion. We could end up with exactly the same paperclip-maximizing disaster, just executed by a biological agent with human lineage instead of by a silicon-based computer.

Given all these objections I think it’s fairly unlikely that we reach a useful biological intelligence take-off anytime soon. However if we actually are close, then the most effective spending on AI safety may not be on AI research at all – it could be on genetics and neuroscience.

COVID-19

Just in case you’ve been living under a rock (but checking my blog?), the worst pandemic in a generation is gripping the world. If you’re looking for the bare minimum of what you should do:

  • Stay home. Do not leave your home except to buy food or medication.
  • Wash your hands regularly. Properly. With soap.
  • Don’t touch your face.
  • Take it seriously. People you know will be dead before it’s over.

That’s pretty much it really.


I wanted that version to be punchy, so I simplified a little bit. Here’s a few elaborations:

  • Technically it’s fine to leave your home as long as you:
    • Stay 6 feet away from other people at all times.
    • Avoid enclosed or poorly ventilated spaces.
    • Don’t touch anything that other people have touched.
  • It’s possible that nobody you know will die from this, if:
    • You are a hermit who doesn’t know anybody to begin with.
    • You live in China, South Korea, or Japan. Those three countries are the only ones that have successfully contained the outbreak.

For a more in-depth look at the situation we’re in and possible outcomes I would recommend The Hammer and the Dance.

For statistics I would recommend WorldOMeter. Though be aware that with delays in incubation and delays in testing, any numbers are likely to be a week or more out of date. At least 4x any number you see.

For more information on your local situation and laws, check with your local government; I don’t know where you live. But do be aware that government response has been really really bad in most parts of the world (again excepting China, South Korea, and Japan). Take it more seriously than your government does.

For general advice on planning for disasters, I recommend this fantastic guide. It’s a bit late for a lot of the advice now, but some of it is still useful, and a lot of it will be useful if you survive this round.

International Conflict X-Risk in the Era of COVID-19

Jeremy Hussel had a great comment pointing out something which is easy to forget – major disasters often have multiple quasi-independent causes. Many things go wrong all at once, and any safeguards are overwhelmed by the repeated issues. COVID-19 could clearly be one of those root causes. What might be others?

Another clear source of turmoil for the western world right now is domestic politics. America has a historically unpredictable president and is heading into a divisive election year where the two candidates are both likely to be very old. The UK is finally going to leave the EU and hasn’t yet struck a deal to determine what that actually means. Canada (where I live, though less critical on the world stage) was in the middle of its own domestic crisis around Indigenous land rights and infrastructure projects before that got overshadowed by COVID-19 – our railroads, and with them parts of our supply chain, had already been shut down for weeks by protesters.

A third source of problems might be the “oil war” between OPEC and Russia, but I don’t know enough about that to really write about it usefully.

With all that said, the thing that I am most afraid of right now is China. China has been very aggressive on the world stage in the last couple of days, and I fully expect them to continue that pattern. Why wouldn’t they? Just as their country is recovering from the virus and starting to pick back up, the crisis in America and Europe is still growing. They are feeling strong while Western democracies are weak, divided, and looking inwards, and we should fully expect them to take advantage of that power imbalance in the short term to do things like finally and properly annexing Hong Kong (predict 50% that by the time COVID-19 has run its course in North America, Hong Kong has lost whatever quasi-independence it might have had).

The question is how far they will go, and how will we (our governments) react? In normal times I would expect them to be cautious but I would also expect a cautious response from western governments. With the current volatility in the American system and the antagonism built up over the previous Chinese-American trade war, there is substantial risk of something escalating out of control. A full military conflict between world powers at this point in time would truly be something else going terribly, terribly wrong.