-
Offense and defense - meditations on Khorne
Technology makes humans very good at killing other humans. Your average baseline human is already quite good at killing things, but tool use makes us truly dangerous. However, humans are not nearly as good at not being killed: technology has done far less to improve our defensive abilities, even against individual attackers. This is a problem.
With no tools at all, an attacker needs to catch you and physically overpower you in order to kill you. You can defend against this by running away, or by being physically stronger.
With a sword, an attacker just needs to catch you. You can defend against this by running away, or with a sword of your own, a shield, or armour.
With a bow, an attacker can kill you from a distance. You can defend against this by running further away, or with a shield or armour.
With a gun, the range increases again, and many kinds of armour become much less effective (although bullet-proof vests aren’t bad).
So even today defense isn’t doing great. You can wear a bullet-proof vest at all times and live in a state of hyper-alertness to threats, but for most of us our main defense is that nobody is trying to kill us right now. Even for the most paranoid, all a determined attacker has to do is hit an unprotected area when you are not expecting it (which long-range weapons make easier).
Even worse, most of the improvements in offense also make it easier to kill lots of people. One person with a gun can kill a lot more people more easily than one person with a sword. And someone is going to try and do that.
-
Linear preferences and Pascal's Wager
Expected value is the de facto standard way of extending preferences over outcomes to preferences over choices under uncertainty. If you have some cardinal valuation of outcomes, then you just multiply the value of each possible outcome by its probability and sum, and that gives you the expected value of the choice.
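Concretely, writing $v$ for the valuation and $p_i$ for the probability that choice $c$ leads to outcome $o_i$:

$$EV(c) = \sum_i p_i \, v(o_i)$$

So a 50% chance of an outcome valued at 10 and a 50% chance of one valued at 0 gives a choice with expected value 5.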
That’s great and all, but it’s philosophically interesting to think about whether we need to directly justify preferring choices with higher expected value, or whether we can deduce it from a simpler set of axioms.
I want to focus here on a claim that I think I heard Amanda MacAskill make1: if you accept “dominating” improvements as preferable, you also have to prefer improvements in expected value.
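(To pin down terms: “dominating” here presumably means state-wise dominance, i.e. choice $A$ dominates choice $B$ if, writing $a_i$ and $b_i$ for their outcomes in state $i$ (which occurs with probability $p_i$), $v(a_i) \ge v(b_i)$ in every state, with strict inequality in at least one. One direction of the connection is immediate from the formula above, since expectation is monotone:

$$EV(A) - EV(B) = \sum_i p_i \, \bigl( v(a_i) - v(b_i) \bigr) \ge 0$$

The interesting claim is that the implication also runs the other way: accepting dominating improvements as preferable is enough to commit you to preferring improvements in expected value.)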
-
Giving What We Can Charities Update 1
I’ve recently been helping Giving What We Can update their charity recommendations. We’re going to publish the results as a series of blog posts, and the first, on DMI (Development Media International), is up now!
TL;DR: they’re an excellent organization who’ve actually been running a randomized controlled trial on their interventions, and it’s looking pretty good.
-
Types and readability
(or: why I won’t write anything longer than a page in a dynamically-typed language)
Reading and understanding code is a huge part of being a developer.
When I’m at work, I probably spend most of my time reading code, rather than writing it. Pretty much anything you do that involves existing code will require you to read some of it. At some point, you’re going to need to answer a question like:
- What can I do with X?
- What kind of thing is X?
- How do I do X with Y?
and then you’re going to need to read some code. If you use, fix, or interface with existing code, then these questions are always going to crop up.1
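To make that concrete, here is a small illustrative sketch (the module and all the names are made up, not from any particular codebase). In a language with explicit types, the signature of a function already answers most of these questions before you read a line of its body.

```haskell
-- A made-up example: a tiny user directory.
module Lookup where

import           Data.Map (Map)
import qualified Data.Map as Map

newtype UserId = UserId Int deriving (Eq, Ord, Show)
newtype Email  = Email String deriving (Show)

data User = User
  { userName  :: String
  , userEmail :: Email
  } deriving (Show)

-- The signature alone tells you what kind of thing each argument is,
-- what you get back, and that the lookup can fail - without reading
-- the body, or any of the call sites.
lookupEmail :: UserId -> Map UserId User -> Maybe Email
lookupEmail uid users = userEmail <$> Map.lookup uid users
```

In a dynamically-typed language the same function might well be a one-line `lookup_email(uid, users)`, and answering “what kind of thing is `users`?” means chasing call sites or trusting the docstring.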
The amount of reading that you need to do will vary depending on what you’re trying to do and how much you already know about the code in question. If you’re fixing a small bug in code that you wrote yourself, chances are that you won’t need to do much reading. If you’re making extensive changes to someone else’s code, then you may well need to read and understand a significant fraction of it.
Even when you’re writing totally fresh code, you’re likely to need to do some reading. Once you’ve put down a project for a while, it’s pretty easy to forget the details of even code that you wrote yourself. Hopefully it comes back to you a bit quicker, but there will still be corners that are as fresh to you as if they were written by a stranger.
Given that reading code is such a big part of my experience of being a programmer, I think it’s pretty important to figure out the factors that make reading and understanding code easier or harder. Let’s start with an example.
-
Obviously, that’s not an exhaustive list! ↩
-
Exploiting coordination problems for fun and profit
Coordination problems1 are hard to solve. We know this, but how can we use that to make money?
Well, a classic coordination problem presents people with a choice between conforming and defecting. You pay a small cost for conforming, and a large cost for defecting unless all the other participants defect as well. Structures with “network effects” behave like this. If you leave but everyone else stays, then you suffer from being excluded from the network; but if everyone leaves then you’re not going to miss out on the latest gossip/music/gerbil video.
So, a recipe for exploiting coordination problems goes like this:
- Make a network
- Encourage people to join
- Unilaterally impose a small cost for participating in the network
Then members are faced with a choice between paying up (and retaining the network benefits) or leaving - which is only profitable if everyone else does the same.
In modern parlance, this translates to:
- Start a social media company
- Offer it for free to get users
- Start charging once you have a lot of users
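At that point each user faces exactly the choice described above. Here’s a toy sketch of that decision (the value function and the numbers are invented, just to show the shape of the problem): with plenty of other members around, paying a small fee beats leaving; with nobody else left, it doesn’t.

```haskell
-- A toy model: the value function and fee are made up.
module Network where

-- Hypothetical value a member gets from a network of n people:
-- more members means more value, with diminishing returns.
networkValue :: Int -> Double
networkValue n = 5 * log (fromIntegral n)

-- Once a fee is imposed, a member who expects `others` other people to
-- stay compares "pay and keep the network" against "leave and get nothing".
worthPaying :: Double -> Int -> Bool
worthPaying fee others = networkValue (others + 1) - fee > 0

main :: IO ()
main = do
  print (worthPaying 2 999999)  -- True: everyone else stays, so pay up
  print (worthPaying 2 0)       -- False: the network is dead, so leave
```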
I think this makes partial sense of the otherwise baffling mania for startups with no revenue but a large user base. What they sell is not a revenue stream, but a network of users who are ripe to be extorted via this sort of coordination problem.
-
A coordination problem is any problem where the optimal solution for all participants requires some or all of the participants to choose the same strategy. ↩