Find something you love to do and you’ll never have to work a day in your life.
— author unknown
I've spent a long time pursuing work I love, and one thing I can say unequivocally is that quote is full of shit. If anything, I've learned that doing what you love, far from making the work/life distinction irrelevant, makes it crucially important. Unlike an office job, or even work that follows you home, when you do what you love, your work is your life, and that's enormously dangerous.
The biggest problem is not expecting it to feel like work. Long ago, I assumed that if I just let myself do whatever I wanted, I would naturally gravitate towards the things I love to do. While this might be true in the large, it was a disaster in the small. There are too many small distractions and disturbances. I would wake up wanting to work on my project, but then I'd get an email that put me in a bad mood, or a new video game would come out, or one of my housemates would want to chat. Pow, day ruined. It took me a long time to realise that I needed more consistency.
But this, too, has flaws. I had longstanding reservations about stability and its effect on creativity, as well as the way obligation spreads and the feeling of never being done. In practical terms, my writing habit, which after a year and a half had devolved into repeatedly falling behind and catching up, finally led me to accept defeat. A heavy blow, given that it was my longest-running and, at least in raw output, most successful habit.
Ultimately, I think the source of these problems is that I had designed most if not all of these systems without any provision for not doing them. "Write every day" also meant making a sacrifice every day. Sometimes that sacrifice was small, other times it was big. Sometimes I wanted to write, and other times I was more or less forcing my hands onto the keyboard. In other words, it was hard work with no time off.
Okay, so don't work hard with no time off, news at 11. But behind every retroactively obvious solution is a problem-solving weakness worth learning from. You wouldn't think it would take me so long to figure out I needed a better work-life balance, and the fact that I did indicates some secondary problem that made this problem harder to solve. In this case, the secondary problem was believing that work I love would make balance less necessary. Unfortunately, if the work you love still feels like work (which I believe it must if you want to be consistent), then you still need time off from it.
Worse still, the power of habit isn't a magic bullet. A habit makes work easier, but you also have to put effort into maintaining the habit. Any time you get knocked off course and your habit falters, it requires extra effort to get back into it. That's still less effort in the long run, but only if your habits are fixed. But what if every time your habits stabilise, you add new ones? What if you do something really nuts and try a new habit each month? Habits can do a lot, but they are also work, and you also need time off from them.
Which brings me to the last question: what does time off look like when you do what you love? After all, isn't it the kind of thing you would do after hours if you had a different job? In fact, that's one of the reasons it's so easy to believe you don't need time off, because you would also do it for fun. But it's important to consider that doing the same thing for different reasons can lead to different results.
Your work is something you push yourself to do even if you don't feel like it at the time. It's something you strive towards as being greater than your transient moods or random distractions throughout the day. I've heard it said that truly great work performs itself through you, which seems like a good metaphor. You serve it, not the other way around. But that's exactly what makes it work, not leisure; you're optimising for the work, not for yourself. Of course, the work is what you love, but that doesn't mean it will always make you happy.
So time off when you're doing what you love is nothing more complex than forgetting all the rules and letting yourself do whatever you want. Sure, maybe that means you'll spend the day coding anyway, but maybe you won't be in the right mood, or a new video game will have come out, or one of your housemates will want to chat. And that's okay too. After all, it's your time off: not from doing things you love, but from the discipline of doing them whether you feel like it or not.
The relay board was pretty simple once I had everything wired up. I wanted to use it to turn my TV on and off, so I really just needed the stock ESP8266 webserver. Unfortunately, I wasted a ridiculous amount of time on what turned out to be a bug in its HTTP handling. It expects camel-cased HTTP header names, but the spec says they're case-insensitive. Everything else sends them camel-cased anyway, except the fetch API. The mDNS/Zeroconf/Bonjour implementation also seems a bit buggy, and CORS is on by default, which is a bit of a security hole. I got it going, but I can't escape the sinking feeling that I'm yet another contributor to the Internet of Shitty Things.
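To make the bug concrete, here's a sketch (in Python, with names of my own invention, not the ESP8266 library's API) of how header lookup should behave: names compare case-insensitively, so a lowercased `content-type` from the fetch API matches a camel-cased lookup just fine.

```python
def get_header(headers, name):
    """Look up an HTTP header, comparing names case-insensitively
    as the HTTP spec requires (RFC 7230, section 3.2)."""
    target = name.lower()
    for key, value in headers.items():
        if key.lower() == target:
            return value
    return None

# The fetch API sends lowercase names; a camel-case lookup still works.
get_header({"content-type": "application/json"}, "Content-Type")
```

A lookup that compares names byte-for-byte, like the one I hit, returns nothing here, and you spend an evening staring at packet captures.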
For lights this week, I just wanted to make a simple command line utility so I could control my room lights from my computer. This was a relatively by-the-numbers job, but I did get to have some fun realising that, yes, my quadratic fit from last week was actually necessary to get the hues right. Ugh. Still, it's pretty fun to type "lights rgb #FF0000" and yell "ROXANNE" as loudly and off-key as possible.
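The colour-parsing half of a tool like that is only a few lines. This is a toy sketch of my own, not the actual utility, and it stops short of the quadratic hue correction, which lives in last week's code:

```python
def parse_hex(colour):
    """Turn a CSS-style hex string like '#FF0000' into an
    (r, g, b) tuple of ints in the range 0-255."""
    colour = colour.lstrip("#")
    return tuple(int(colour[i:i + 2], 16) for i in (0, 2, 4))

parse_hex("#FF0000")  # (255, 0, 0) -- cue the yelling
```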
There's a fun improv theatre game called "yes, and", designed to teach people the right way to influence a scene without ruining it. It's tempting, when you start, to try to control a scene; your partner says "let's go to space", but you had a great idea for a medical scene, so you say "no way, space is too cold, let's stay here in the hospital" and now you've ruined their idea. Worse still, maybe they respond with "but space is great" and now you're having a boring argument on stage.
Instead, you want to build on what is there already. They say "let's go to space" and you respond "yes, and quickly, doctor, because those astronauts will die without our help!" Thanks to both of your ideas, you now have an interesting scene about space doctors that both of you can contribute to. You can't control the scene; it's more that you guide it in a direction you want it to go: always forward, never backwards; always adding, never taking away.
You do this to make a good scene, but it's worth wondering why that is a quality of a good scene. Do our preferences have something more universal to say about adding vs removing? I wrote before about how it's hard to remove an association, and also the way that exhaustive knowledge is required to disprove anything. I think these effects combine to make negatives unpalatable, whether or not they are useful. Not just for people, but as a general rule. Nature abhors a negative.
Another interesting example of this is distributed systems. It's very easy to write a gossip protocol where peers just spam facts at each other and merge in the facts they receive. In that sense, you can say that no peer has incorrect information, they just don't have all the correct information yet. Eventually, as the system converges, correct information will spread to every peer, and peace and correctness will reign throughout your distributed system.
But that's only true when your information is additive, when it contributes more facts rather than taking some away. The weather at 9am was sunny. Yes, and the weather at 9:30 was cloudy. But how do you delete information? How do you say "wait, no, the weather at 9am was actually rainy"? Your system is no longer correct vs more correct; it's correct vs incorrect, and that's much harder to deal with. Whether it's un-committing something from Git, actually deleting a document from CouchDB, or taking information off the internet, distributed deletion isn't like distributed addition. It's hard.
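A minimal sketch of the additive case (a grow-only set of facts, not any particular protocol) shows why it converges so easily: the merge is just set union, which is commutative, associative, and idempotent, so every peer reaches the same state no matter what order the gossip arrives in. Deletion has no such operation; the usual workaround is to add a "tombstone" fact, which is, of course, just more addition.

```python
def merge(local, incoming):
    """Additive gossip merge: facts only ever accumulate, so any
    sequence of merges converges to the union of all observations."""
    return local | incoming

a = {("09:00", "sunny")}
b = {("09:30", "cloudy")}

# Order and repetition don't matter: both peers reach the same state.
merge(merge(a, b), a) == merge(b, a)  # True
```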
And the most gossipy distributed system of all is, of course, people. We copy ideas like crazy from each other, constantly broadcasting and absorbing everything we learn. But what about something we un-learn? How often do you go around telling people "wow! I just realised I'm wrong about something"? Probably never, or at least very rarely. Un-learning isn't fun, negative information isn't interesting, and telling someone they're wrong is the absolute opposite of "yes, and".
Which brings us back to why we don't like negatives, and why "no, but" isn't a fun improv game. I believe this follows from the way our brains store information, which is fundamentally additive. We can't unassociate, so we have to associate a negative. We can't easily forget, so we just learn to feel bad about remembering. Our mechanism for forgetting is thus essentially painful. The same mechanism that teaches us to recoil from a hot iron teaches us to recoil from a bad memory. It hurts to be wrong.
This is why atheism, skepticism, and other negative movements struggle to gain acceptance. You can't tell a religious person there's no God and have that mean anything, because the closest additive analogue is "keep your existing beliefs and also feel bad about them". And that's why the modern fight against post-truth is so hard, because it's always easier to add falsehoods than fight them, it's always easier to "yes, and Obama is a secret Kenyan poisoning us with autism vaccines" than it is to "no, nothing about what you have said is remotely true in any way".
What's the solution? Well, if you want to change minds, perhaps the best way is to add conflicting information. Instead of saying "there's no God", you say "here are some compelling ideas that will eventually come into conflict with a belief in God". The rationality movement is, in a sense, an attempt to do this. It's a long game that relies on setting up an eventual paradox that you hope will be resolved in your favour.
But it's worth wondering if you always need to change minds. If your issue isn't with religion, but with current religious practice, it would be far easier to replace "no, but the universe is interesting anyway" with "yes, and God is in all things, and all our religions are imperfect attempts to understand that true God, who is the universe itself". Perhaps ideologically atheism is preferable to pantheism, but from a utilitarian perspective it seems pretty clear that a world full of pantheists would be pretty similar and probably easier to achieve.
So for long-term convincing, I believe adding conflicting information is the best way to achieve an eventual subtraction of ideas. But for important things on a short timeline, perhaps it's best to swallow your pride and find a way to say "yes, and" to bad beliefs, and thereby gain some measure of influence over them.
I've been thinking about the nature of programming, specifically the abstractions that you build everything else out of. I've argued before that computers are special because they represent an abstract unit of action. But the formalisms of computer science often quickly turn into mathematics, and nowhere is this more true than in functional programming.
In functional programming, the core primitive is the function. Church's lambda calculus is capable of expressing any computation in terms of functions, and is perfectly viable as a model of computation. However, a function isn't a unit of action. It doesn't do something, it describes things and the mappings between them. Functional programming takes computation and turns it into another kind of mathematics; nouns rather than verbs, definitions rather than actions.
You can see how far from a unit of action the function is by looking at how functional programming approaches sequencing. How do you say "do this 10 times"? You simply define your 10th step in terms of your 9th step, your 9th step in terms of your 8th step and so on. Thus, to turn your question into your answer the 10 steps must be evaluated in order. Anything essentially non-definitional is handled by widening your definitions, up to and including defining the world outside of your program as part of your program.
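A toy illustration of that difference, in Python since it can play both roles: the imperative version is an explicit sequence of actions, while the recursive version defines "n times" in terms of "n - 1 times", and the sequencing falls out of evaluation order.

```python
def repeat_imperative(action, n):
    # A sequence of actions: do it, then do it again, n times.
    for _ in range(n):
        action()

def repeat_recursive(action, n):
    # A definition: "doing it n times" is "doing it n - 1 times,
    # then once more". Order emerges from unwinding the definition.
    if n > 0:
        repeat_recursive(action, n - 1)
        action()
```

Both produce ten actions, but only the first says so directly; the second makes you reconstruct the sequence from the definition.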
That's not to say these functional abstractions don't work, or can't be extremely elegant when your problem is definitional in nature. However, what if your problem is not definitional? What if it is essentially actional, and best modelled in terms of steps or sequences? In that case, these abstractions become nothing better than glue code designed to pave over the impedance mismatch between your problem and the tools you're using to solve it. It's easy to mock Java programmers for working around every deficiency in their language with more patterns and more classes, but why is doing the same with more functions any better?
Church and Turing showed that the lambda calculus and Turing machines compute exactly the same functions, so imperative and functional programming are equivalently expressive. And yet when we turn to theory, functional programming is seen as more rigorous, more serious, somehow higher and better than imperative programming. Why? I believe it's just more compatible with existing mathematics. It's easier to define things about computation if your computation is also made of definitions. It's not the only way, though, and more actional formalisms exist, like process calculi, state machines, and the original Turing machines.
It's interesting to ponder what the equivalent of a modern high-level functional programming language built around actions would look like. Of course, many popular programming languages are imperative, but even so, one of the first things introduced to an imperative language is functions, and with them recursion, statelessness, and other functional concepts. Even without functional programming, it's easy to end up in a kingdom of nouns. What would it look like to be fundamentally verby?
I think one of the key hints that you have found such a language is that, much as you need extra layers of abstraction to express actions in functional programming, a truly actional language would take extra steps to express functions. It is telling, perhaps, that in Forth and other concatenative languages, the most difficult thing to express is mathematics. Could it be that concatenative programming is the path to a language of action?
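As a hedged sketch of what "verby" means here (a toy interpreter of my own, not real Forth): every word is an action on a shared stack, a program is just a sequence of words, and composing two programs is concatenation. Notice that even simple arithmetic has to be phrased as actions.

```python
def run(program, stack=None):
    """Evaluate a toy concatenative program: each word acts on a
    shared stack; anything that isn't a word pushes itself."""
    stack = [] if stack is None else stack
    words = {
        "dup": lambda s: s.append(s[-1]),
        "+": lambda s: s.append(s.pop() + s.pop()),
        "*": lambda s: s.append(s.pop() * s.pop()),
    }
    for word in program:
        if word in words:
            words[word](stack)
        else:
            stack.append(word)
    return stack

run([3, "dup", "*"])  # squaring, phrased as actions: [9]
```

Running two programs back to back gives the same stack as running their concatenation; that's the sense in which composition of actions is the default, and a function-like definition is the thing that takes extra machinery.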
One reason is that, unlike getting up early, which is fairly simple as far as rules go, this requires constant vigilance. Not volunteering information except when it suits me is a habit, and breaking it requires creating a newer, stronger habit of truth-telling. But forming that habit takes continuous attention. I have to listen carefully to hear that quiet voice saying "hmm, I wonder if I should say..." and jump in before it concludes "nah, better not" and disappears.
But the other thing is that, even when I can think of what to say, honesty is very challenging. When I started, I wrote that the opposite of honesty isn't deceit, but cowardice. Which is another way of saying that honesty is a kind of bravery. Even when I know exactly the truth that I want to speak, I can't help thinking about the consequences. Even if they're not big consequences, even if they're as simple as knowing that the conversation will be marginally more complicated as a result. Continuing despite those consequences requires bravery, even if just a little.
This kind of small bravery is very hard to maintain. Oh, sure, we all have glorious acts of one-off courage in us. Maybe we'll jump in front of a train to save a child, but what if we're just in the carriage watching someone get hassled? It's so much easier when there is a big decision with big consequences, something we can build up to and conquer once and for all. But most things are just lots of little decisions, each one an opportunity for small bravery, and some of them you have to make over and over again.
And that's the difficulty I have. One act of bravery is easy; constant vigilance, and the courage to act on it, are hard.