
Yes, and

There's a fun improv theatre game called "yes, and", designed to teach people the right way to influence a scene without ruining it. It's tempting, when you start, to try to control a scene; your partner says "let's go to space", but you had a great idea for a medical scene, so you say "no way, space is too cold, let's stay here in the hospital" and now you've ruined their idea. Worse still, maybe they respond with "but space is great" and now you're having a boring argument on stage.

Instead, you want to build on what is there already. They say "let's go to space" and you respond "yes, and quickly, doctor, because those astronauts will die without our help!" Thanks to both of your ideas, you now have an interesting scene about space doctors that both of you can contribute to. You can't control the scene; it's more that you guide it in the direction you want it to go: always forward, never backwards; always adding, never taking away.

You do this to make a good scene, but it's worth wondering why that is a quality of a good scene. Do our preferences have something more universal to say about adding vs removing? I wrote before about how it's hard to remove an association, and about the way exhaustive knowledge is required to disprove anything. I think these effects combine to make negatives unpalatable, whether or not they are useful. Not just for people, but as a general rule. Nature abhors a negative.

Another interesting example of this is distributed systems. It's very easy to write a gossip protocol where peers just spam facts at each other and merge in the facts they receive. In that sense, you can say that no peer has incorrect information, they just don't have all the correct information yet. Eventually, as the system converges, correct information will spread to every peer, and peace and correctness will reign throughout your distributed system.
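
To make this concrete, here's a minimal sketch of an additive gossip peer in Python; the Peer class and the five-peer setup are mine, purely for illustration. State is just a set of facts, and merging is set union, so a peer can only ever gain information:

    import random

    class Peer:
        def __init__(self):
            self.facts = set()

        def merge(self, incoming):
            # Additive merge: a peer can only ever gain facts, never lose
            # them, so exchanging state always moves peers towards agreement.
            self.facts |= incoming

    peers = [Peer() for _ in range(5)]
    peers[0].facts.add("9am: sunny")
    peers[1].facts.add("9:30am: cloudy")

    # Spam facts at random neighbours until every peer converges
    while len({frozenset(p.facts) for p in peers}) > 1:
        sender, receiver = random.sample(peers, 2)
        receiver.merge(sender.facts)

    print(peers[0].facts)  # every peer ends up with both facts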

But that's only true when your information is additive, when it contributes more facts rather than taking some away. The weather at 9am was sunny. Yes, and the weather at 9:30 was cloudy. But how do you delete information? How do you say "wait, no, the weather at 9am was actually rainy"? Your system is no longer correct vs more correct; it's correct vs incorrect, and that's much harder to deal with. Whether it's un-committing something from Git, actually deleting a document from CouchDB, or taking information off the internet, distributed deletion isn't like distributed addition. It's hard.
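
The usual workaround is to make deletion itself additive: instead of removing a fact, you add a "tombstone" record that marks it dead, which is roughly what CouchDB does. Here's a sketch with an invented last-write-wins merge; the deletion gossips around like any other fact, at the cost of being remembered forever:

    def merge(state, incoming):
        # state maps key -> (timestamp, value); a value of None is a
        # tombstone. Last write wins, so newer information replaces older.
        for key, (ts, value) in incoming.items():
            if key not in state or state[key][0] < ts:
                state[key] = (ts, value)

    weather = {"9am": (1, "sunny")}
    merge(weather, {"9am": (2, None)})  # "wait, no, forget the 9am reading"

    live = {k: v for k, (ts, v) in weather.items() if v is not None}
    print(live)  # {} -- deleted, but the tombstone itself lives forever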

And the most gossipy distributed system of all is, of course, people. We copy ideas like crazy from each other, constantly broadcasting and absorbing everything we learn. But what about something we un-learn? How often do you go around telling people "wow! I just realised I'm wrong about something"? Probably never, or at least very rarely. Un-learning isn't fun, negative information isn't interesting, and telling someone they're wrong is the absolute opposite of "yes, and".

Which brings us back to why we don't like negatives, and why "no, but" isn't a fun improv game. I believe this follows from the fact that the way our brains store information is fundamentally additive. We can't unassociate, so we have to associate a negative. We can't easily forget, so we just learn to feel bad about remembering. Our mechanism for forgetting is thus essentially painful. The same mechanism that teaches us to recoil from a hot iron teaches us to recoil from a bad memory. It hurts to be wrong.

This is why atheism, skepticism, and other negative movements struggle to gain acceptance. You can't tell a religious person there's no God and have that mean anything, because the closest additive analogue is "keep your existing beliefs and also feel bad about them". And that's why the modern fight against post-truth is so hard, because it's always easier to add falsehoods than fight them, it's always easier to "yes, and Obama is a secret Kenyan poisoning us with autism vaccines" than it is to "no, nothing about what you have said is remotely true in any way".

What's the solution? Well, if you want to change minds, perhaps the best way is to add conflicting information. Instead of saying "there's no God", you say "here are some compelling ideas that will eventually come into conflict with a belief in God". The rationality movement is, in a sense, an attempt to do this. This is a long game that relies on setting up an eventual paradox that you hope will be resolved in your favour.

But it's worth wondering if you always need to change minds. If your issue isn't with religion, but with current religious practice, it would be far easier to replace "no, but the universe is interesting anyway" with "yes, and God is in all things, and all our religions are imperfect attempts to understand that true God, who is the universe itself". Perhaps ideologically atheism is preferable to pantheism, but from a utilitarian perspective it seems pretty clear that a world full of pantheists would be pretty similar to a world full of atheists, and probably easier to achieve.

So for long-term convincing, I believe adding conflicting information is the best way to achieve an eventual subtraction of ideas. But for important things on a short timeline, perhaps it's best to swallow your pride and find a way to say "yes, and" to bad beliefs, and thereby gain some measure of influence over them.

Being vs doing

I've been thinking about the nature of programming, specifically the abstractions that you build everything else out of. I've argued before that computers are special because they represent an abstract unit of action. But the formalisms of computer science often quickly turn into mathematics, and nowhere is this more true than in functional programming.

In functional programming, the core primitive is the function. Church's lambda calculus is capable of expressing any computation in terms of functions, and is perfectly viable as a model of computation. However, a function isn't a unit of action. It doesn't do anything; it describes things and the mappings between them. Functional programming takes computation and turns it into another kind of mathematics: nouns rather than verbs, definitions rather than actions.
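
You can see this in miniature with Church numerals, which translate straight into Python. A number here isn't something that does; it is the definition "apply f that many times":

    # Numbers defined purely as functions: "three" doesn't do anything,
    # it *is* the definition "apply f three times".
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))

    three = succ(succ(succ(zero)))
    print(three(lambda x: x + 1)(0))  # 3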

You can see how far from a unit of action the function is by looking at how functional programming approaches sequencing. How do you say "do this 10 times"? You simply define your 10th step in terms of your 9th step, your 9th step in terms of your 8th step, and so on. Thus, to turn your question into your answer, the 10 steps must be evaluated in order. Anything essentially non-definitional is handled by widening your definitions, up to and including defining the world outside of your program as part of your program.
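
As a small illustration, here's "do this 10 times" both ways in Python (do_this is just a stand-in for the actual work): first as a chain of definitions that evaluation has to unwind, then as a plain sequence of actions:

    def do_this(state):  # a stand-in for the actual work
        return state + "."

    # Definitionally: step(10) is defined in terms of step(9), and
    # evaluating the definitions unwinds them in order.
    def step(n):
        if n == 0:
            return "start"
        return do_this(step(n - 1))

    # Actionally: no definitions, just "do it, ten times, in sequence".
    def run():
        state = "start"
        for _ in range(10):
            state = do_this(state)
        return state

    assert step(10) == run()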

That's not to say these functional abstractions don't work, or can't be extremely elegant when your problem is definitional in nature. However, what if your problem is not definitional? What if it is essentially actional, and best modelled in terms of steps or sequences? In that case, these abstractions become nothing better than glue code designed to paper over the impedance mismatch between your problem and the tools you're using to solve it. It's easy to mock Java programmers for working around every deficiency in their language with more patterns and more classes, but why is doing the same with more functions any better?

The Church–Turing thesis (or, more precisely, the proven equivalence of Turing machines and the lambda calculus) tells us that imperative and functional programming are equally expressive. And yet when we turn to theory, functional programming is seen as more rigorous, more serious, somehow higher and better than imperative programming. Why? I believe it's just more compatible with existing mathematics. It's easier to define things about computation if your computation is also made of definitions. It's not the only way, though, and more actional formalisms exist, like process calculus, state machines, and the original Turing machines.

It's interesting to ponder what the equivalent of a modern high-level functional programming language built around actions would look like. Of course, many popular programming languages are imperative, but even so, one of the first things introduced to imperative programming languages is functions, and with them recursion, statelessness, and other functional concepts. Even without functional programming, it's easy to end up in a kingdom of nouns. What would it look like to be fundamentally verby?

I think one of the key hints that you have found such a language is that, much as you need extra layers of abstraction to express actions in functional programming, a truly actional language would take extra steps to express functions. It is telling, perhaps, that in Forth and other concatenative languages, the most difficult thing to express is mathematics. Could it be that concatenative programming is the path to a language of action?
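
For flavour, here's a toy concatenative evaluator in Python; my own sketch, nothing like a real Forth implementation. A program is a sequence of words, every word is an action on a shared stack, and even arithmetic has to be done by doing:

    def evaluate(program, words):
        stack = []
        for token in program.split():
            if token in words:
                words[token](stack)       # every word *does* something
            else:
                stack.append(int(token))  # literals push themselves
        return stack

    words = {
        "dup": lambda s: s.append(s[-1]),
        "+":   lambda s: s.append(s.pop() + s.pop()),
        "*":   lambda s: s.append(s.pop() * s.pop()),
    }

    print(evaluate("3 dup * 4 dup * +", words))  # [25], i.e. 3*3 + 4*4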

Small bravery

Well, I'm about halfway through my month of honesty, and so far it has been far more difficult than getting up early. Why is this?

One reason is that, unlike getting up early, which is fairly simple as far as rules go, this requires constant vigilance. Not volunteering information except when it suits me is a habit, and breaking it requires creating a newer, stronger habit of truth-telling. But forming that habit takes continuous attention. I have to listen carefully to hear that quiet voice saying "hmm, I wonder if I should say..." and jump in before it concludes "nah, better not" and disappears.

But the other thing is that, even when I can think of what to say, honesty is very challenging. When I started, I wrote that the opposite of honesty isn't deceit, but cowardice. Which is another way of saying that honesty is a kind of bravery. Even when I know exactly the truth that I want to speak, I can't help thinking about the consequences. Even if they're not big consequences, even if they're as simple as knowing that the conversation will be marginally more complicated as a result. Continuing despite those consequences requires bravery, even if just a little.

This kind of small bravery is very hard to maintain. Oh, sure, we all have glorious acts of one-off courage in us. Maybe we'll jump in front of a train to save a child, but what if we're just in the carriage watching someone get hassled? It's so much easier when there is a big decision with big consequences, something we can build up to and conquer once and for all. But most things are just lots of little decisions, each one an opportunity for small bravery, and some of them you have to make over and over again.

And that's the difficulty I have. One act of bravery is easy; constant vigilance, and the courage to act on it, is hard.

Archivism

This honesty thing has got me thinking. Why write? Why put prototypes online? Why tweet about my feelings or observations? Why bother with any of it?

For finished projects, the answer is fairly simple: I put things out there because I think they're valuable, because I want people to appreciate them, or because I want them to achieve some result in the outside world. But there are a lot of things that don't fall into one of those categories. Some of it I do, not because I have an outcome in mind, or because I want people to see it, or even because I think it's particularly good. Some stuff I do just because I want to.

Writing is like this when I just have something in my head that wants to get out into words. Prototypes are like this when I have something I want to try, some little problem to solve, or just when I feel like causing trouble with my keyboard. When I do things for an audience, I keep the audience in mind. When I do them for myself, I'm only thinking about what suits my needs.

So why bother to put them online at all if they're for me? Well, if I only publish things that are valuable to me, I could be missing a lot that's valuable to others. The classic Twitter criticism is that it's just a bunch of people talking about what they had for breakfast, and who cares? Well, someday there's going to be a breakfastologist who desperately needs to track 21st century cereal trends, and when they find Twitter they're going to fall to their knees and weep with joy. Faced with the immeasurable breadth of human interest, why not publish everything you have, just in case?

That's archivism. Put it all out there and let the future sort it out. Found something interesting? Publish it! Learned something new? Publish it! Working on something? Publish it! Sure, it might seem useless to you, but don't give in to the hubris of thinking that your understanding of uselessness is universal. I guarantee that for every unanswered help thread there's someone who found the answer, but thought it wouldn't be interesting to anyone else.

Prototype wrapup #39

Last week was mostly milights, and this week was even more milights. I got a new remote and wifi bridge that I'm hoping to use to finish descrambling the protocol, but in the meantime I wanted to use the wifi bridge to do some fun stuff. So mostly this was code to get colour data from OS X onto my lights.

Monday

The first thing I wanted was to match the colour temperature of my lights to that of my screen. I use f.lux to set the colour temperature for my monitors, so I was hoping to get the information out of that. Unfortunately, f.lux doesn't have an API, so I went looking for ways to get the information from OS X directly. My first attempt was to read the ICC profile out of the display using Python/PyObjC and parse it using PIL's ICC support. Unfortunately, the image profile didn't seem to contain the information I was looking for.
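
The shape of that first attempt was roughly this, assuming PyObjC's Quartz bindings and Pillow are available (CGColorSpaceCopyICCProfile is the old CoreGraphics call for pulling raw ICC bytes off a display):

    import io
    from PIL import ImageCms
    from Quartz import (CGDisplayCopyColorSpace, CGMainDisplayID,
                        CGColorSpaceCopyICCProfile)

    # Grab the main display's colour space and its raw ICC profile data
    space = CGDisplayCopyColorSpace(CGMainDisplayID())
    icc = CGColorSpaceCopyICCProfile(space)  # CFData of raw ICC bytes

    # Hand the bytes to PIL's ICC support and poke around
    profile = ImageCms.ImageCmsProfile(io.BytesIO(bytes(icc)))
    print(ImageCms.getProfileDescription(profile))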

Tuesday

Attempt 2 involved a lot of looking through ICC data. I eventually realised that what I was looking for wasn't the "media white point" or the "chromatic adaptation", which are both ways in which the white point can be set, but another, third way, called the "vcgt", or Video Card Gamma Table. I pulled a parser with vcgt support out of pypng and used that to read the data, though I had to parse the data manually anyway (there are two different kinds, table-based and formula-based). Finally, I had my data. I then had to convert the RGB values to colour temperatures, which I did using Python's super-complicated colour library. I think there's probably an easier way to do that, but it worked well enough, and I managed to get a value I could pipe to another command which set the display temperature. The updating was still a bit strange, though, I think because f.lux doesn't always update the ICC profile when it makes changes.
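
The conversion step looked roughly like this, assuming the colour-science package ("colour" on PyPI) and treating the maximum gamma-table gains as an sRGB white point; the input values here are made up:

    import colour

    # Example per-channel maxima pulled from a vcgt (invented values)
    rgb = [1.0, 0.86, 0.73]

    XYZ = colour.sRGB_to_XYZ(rgb)              # decode sRGB, convert to XYZ
    xy = colour.XYZ_to_xy(XYZ)                 # project down to chromaticity
    cct = colour.xy_to_CCT(xy, "McCamy 1992")  # approximate colour temperature
    print(round(cct), "K")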

Wednesday

Cleanup time. I suspected that there was a way to get the vcgt without having to parse the ICC data. I eventually found CGGetDisplayTransferByTable. There's also CGGetDisplayTransferByFormula but it straight up doesn't work. I only wanted the max gamma values anyway, so the table version was fine. I then proceeded as before but, to cut down on Python's ferocious startup time, I made the scripts stream updates instead of sending one per invocation. Finally, I could click around in f.lux and watch my lights update. Victory!
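
The streaming version has roughly this shape; set_temperature here is a stand-in for whatever actually talks to the bridge:

    import sys

    def set_temperature(kelvin):
        # Stand-in for the real call that talks to the wifi bridge
        print("-> lights:", kelvin)

    # One long-lived process: read a stream of values, one per line,
    # rather than paying Python's startup cost for every single update.
    for line in sys.stdin:
        line = line.strip()
        if line:
            set_temperature(int(line))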

Friday

Fresh off my high from making colour temperatures work, I forged ahead with part 2 of the plan: getting RGB values from my monitor onto my lights. With a bit more background in using PyObjC it was easier this time, though I still spent a while figuring out how to make graphics contexts work. My approach was to take a screenshot, draw it onto a 1x1 canvas, and then read the pixel data from that canvas. This worked really well, and was fast enough to pull data in close to real time when I tested it. I actually ran into more issues with the lights than with OS X, because they have fairly limited bandwidth via the bridge. I added some optimisations to avoid resending repeated colour commands, and that helped a bit. Also, the hue settings on the lights are, uh, only vaguely related to actual hues. I ended up doing a quadratic regression to approximate what the colours should actually look like, which worked okay, but I probably need to do it again with more data. Still, broadly speaking, it was a success.
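
For what it's worth, the one-pixel averaging trick can be sketched with Pillow standing in for the PyObjC graphics-context version; resizing a screenshot down to 1x1 makes the scaler do the averaging for you:

    from PIL import Image, ImageGrab

    screenshot = ImageGrab.grab()  # full-screen capture via the OS
    r, g, b = (screenshot.convert("RGB")
               .resize((1, 1), Image.LANCZOS)  # scaler averages every pixel
               .getpixel((0, 0)))
    print(r, g, b)                 # the screen's average colour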