I've written a bit before about new kinds of performance made possible by the internet. And in that vein I was thinking a bit today about the future of debate. Sadly, formal debating seems to be basically irrelevant these days. Outside of political debates, which tend to be fairly heavy on rhetoric and light on substance, there isn't much of a public debate scene. And what little there is is often dominated by personality and spectacle, rather than ideas and good argument. Worse still, the closest we get to taking advantage of the enormous interactive scale of the internet is taking the same debate format and streaming the video to the internet.
My idea for doing better is called The Internet Debates. The debate has two teams and an impartial moderator. The moderator can decide the format, but let's assume four rounds of 7 minutes per team, for a total of just under an hour of content for the whole debate. In reality, the debate itself would be constructed over eight weeks because, instead of being live, each round would be a video collectively created and edited from the combined abilities of the internet. Or, at least, anyone on the internet who wants to participate.
A team's round has two parts. First, anyone who wants to can submit content for the round: usually short (under a minute) videos making a particular point. Anyone who has submitted a video can vote on everyone else's to help rank them. In the second part, anyone can submit a candidate edit combining enough of that content to fill the 7-minute round. The candidate edits are also voted on by anyone who has submitted, and the winning edit becomes the team's video for that round.
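The submit-then-vote mechanics are simple enough to sketch. Here's a toy version of the ranking rule, with made-up names and clips, where only people who submitted something get a vote:

```python
from collections import Counter

# Hypothetical round state: who submitted which clip, and who voted for what.
submissions = {'alice': 'clip_a', 'bob': 'clip_b', 'carol': 'clip_c'}
votes = [('alice', 'clip_b'), ('bob', 'clip_c'), ('carol', 'clip_b'),
         ('mallory', 'clip_b')]  # mallory never submitted anything

# Only people who submitted a clip are eligible to vote.
eligible = set(submissions)
tally = Counter(clip for voter, clip in votes if voter in eligible)

# Rank clips by eligible-vote count; mallory's vote is ignored.
ranked = [clip for clip, _ in tally.most_common()]
print(ranked)  # → ['clip_b', 'clip_c']
```

The same rule would apply unchanged to the second part, with candidate edits in place of clips.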
In a sense, it's pretty similar to Kasparov v World, but World v World and tweaked to suit the format. Hopefully, much like in the Kasparov game, strong voices on each team would rise to the top and there would be a healthy discussion about how best to present the arguments. At the end, the moderator would declare a winner, but you'd also do a pre- and post-poll delta winner to figure out which team changed the most minds.
I think it would be particularly interesting watching the debate develop over the course of a few weeks. Perhaps people would be driven by feeling that the side they believe most strongly isn't being represented well, and go from watching to participating. It seems like the kind of thing you could get very invested in.
I was very frustrated today by power-saving technology. I thought it would be a nice idea to hook up my TV, sound system, and selected other electrical macguffins to a master/slave powerboard. That way, I turn on the TV, everything comes on at once. I turn off the TV, everything turns back off again. More convenient and super environmentally conscious/dolphin-friendly. How could I lose?
Well, it turns out that all this environmental friendliness is starting to trip over itself, because most devices now start up in standby mode. So you can't just power them on to power them on, you have to power them on and hit a button on the remote. Some devices, by pure dumb luck I assume, will accept the switch already being pressed in when they start. If that's the case, you can hack the behaviour you want if you figure out a way to hold the button in permanently. I may or may not have a G-clamp affixed to the sound system in the loungeroom.
However, the amplifier in my room was less cooperative. I spent some time searching for a way to trick it into powering on when it powered on, to no avail. Everything seemed hopeless. In a gesture akin to burning a goat carcass, I even emailed the manufacturer's product support address. I have not yet received the bland non-reply I know is my due, but when it arrives I'll be sure to assume a suitable pose of supplication, head towards the Philippines, and pray that I never sink so low again. However, while drowning my sorrows in the depths of old product manuals, I made a curious discovery.
The amplifier has two remote-control ports, one out and one in, so that you can synchronise it with other hardware made by the same company. Of course, I don't have any hardware made by the same company, but, hey, a control port is a control port. The ports were standard 3.5mm audio jacks, so I figured I could break that out to a couple of wires, and then... learn enough signal processing to reverse-engineer it, I guess? I was actually at a bit of a loss about how to proceed. The public transport-themed Bus Pirate seemed like a pretty good bet, but I'd have to order it and I still wasn't confident I'd necessarily get it to work.
But then it suddenly hit me. If it looks like an audio jack, maybe it sounds like an audio jack! I don't need to understand the protocol, I just need to be able to replay it. So I plugged some earphones into the IR-out port and damn near blew my eardrums. Turns out digital signals are not designed for casual listening. Still, that's definite progress! I eventually managed to hook up a USB sound card and a line cable, recorded the "sound" of me turning the amplifier off and on, swapped the cable to the IR-in port and played it back.
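If you're wondering what a signal like that even is, audio-wise: a digital control signal is basically an on/off pulse train, and you can render one into a WAV file with nothing but Python's standard library. The timings and the helper below are made up for illustration; the real signal was simply recorded off the port, not synthesised.

```python
import struct
import wave

SAMPLE_RATE = 44100  # samples per second

def pulse_train_to_wav(pulses, path, rate=SAMPLE_RATE):
    """Render a list of (level, duration_ms) pulses as a mono 16-bit WAV.

    `level` is 0 or 1: a stand-in for whatever the real protocol encodes.
    """
    frames = bytearray()
    for level, duration_ms in pulses:
        amplitude = 30000 if level else 0      # near full-scale, hence the volume
        n = int(rate * duration_ms / 1000)     # samples for this pulse
        frames += struct.pack('<h', amplitude) * n
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# An entirely made-up "power on" pulse train, not the real protocol.
pulse_train_to_wav([(1, 5), (0, 5), (1, 10), (0, 5), (1, 5)], 'power_on.wav')
```

Played through earphones, a square-ish wave like this is exactly the kind of thing that damn near blows your eardrums.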
And, would you believe it? It worked! I don't think in my entire life I've ever had something deserve to work less and still work. I am gobsmacked by my own hillbilly ingenuity. I am now controlling my amplifier by playing sounds at it from a tiny ARM box that detects when the TV is turned on via HDMI-CEC. I put the remote control audio files up on Github. So if you have an HK-series amplifier, or you just want to hear what a remote control sounds like, that might be useful to you.
But, uh, maybe turn your volume down first. Remote controls are loud.
I don't remember where – Covey, maybe? – but I once read that it's much less cognitive effort to be idealistic than pragmatic. For example, if your answer to the question "is lying ever the right thing to do?" is "sometimes", then every time a situation comes up where you might lie you have to think about it. Is this time the time I should lie? What about now? Do I have enough information to be sure? Whereas if your answer is "no, never", then it becomes very simple. Should I lie in this situation? No, because I never lie.
Obviously, it comes at the cost of inflexibility; if you never lie, you do actually lose out on opportunities where lying could be advantageous. Maybe for some things that's worth it, and I'm not sure if lying is one of them. I've personally noticed, though, that every degree of freedom I give myself is an extra bit of cognitive load that I have to endure in order to get the right result. Sometimes, like when designing something new, it can pay off, but with habits I've found inflexibility has serious benefits. But, still, how do you know which is best?
Covey (probably)'s answer is that that's what you call character: the things you're willing to give up being able to change situationally, where you'd be happy to say "I will never lie", or "I will never steal". Perhaps there aren't that many of those things, and that seems fine; each one is a fairly significant sacrifice. But there is also a benefit, in that each one also frees you from a class of things you need to consider in a situation. In essence, the more character you have, the easier your decisions will be, and the fewer situations in which you will be able to achieve an optimal result.
I think, to an extent, everyone has things they think of as character traits; if you were asked "what's your character", I'm sure you could come up with something. But I wonder how many of us would be happy being pinned to a specific character, or would be willing to say that we act in-character all of the time. And if there are some qualities that you admire, some things that you would be proud to have as a character, how hard would it be to commit to them? And would it make your life easier to do so?
A while back I read the most amazing NASA report. It was just after Lockheed Martin dropped and broke a $200+ million satellite. The sort of thing that you might consider fairly un-NASA-like given their primary mission of keeping things off the ground. They were understandably pretty upset and produced one of the greatest failure analyses I've ever seen.
It starts by saying "the satellite fell over". So far so good. Then "the satellite fell over because the bolts weren't installed and nobody noticed". Then "nobody noticed because the person responsible didn't check properly". Then "they didn't check properly because everyone got complacent and there was a lack of oversight". "Everyone got complacent because the culture was lax and safety programs were inadequate". And so on. It's not sufficient for them to understand only the first failure. Every failure uncovers more failures beneath it.
It seems to me like this art of going meta on failures is particularly useful personally, because it's easy with personal failures to hand-wave and say "oh, it just went wrong that time, I'll try harder next time". But NASA wouldn't let that fly (heh). What failure? What caused it? What are you going to do differently next time? I think for simple failures this is instinctively what people do, but many failures are more complex.
One of the hardest things to deal with is when you go to do things differently next time and it doesn't work. Like you say, okay, last time I ate a whole tub of ice cream, but this time I'm definitely not going to. And then you do, and you feel terrible; not only did you fail (by eating the ice cream), but your system (I won't eat the ice cream next time) also failed. And it's very easy to go from there to "I must be a bad person and/or ice cream addict". But What Would NASA Do? Go meta.
First failure: eating the ice cream. Second failure: the not-eating-the-ice-cream system failed. Okay, we know the first failure from last time, it's because ice cream is delicious. But the second failure is because my plan to not eat the ice cream just didn't seem relevant when the ice cream was right in front of me. And why is that? Well, I guess ice cream in front of me just feels real, whereas the plan feels arbitrary and abstract. So maybe a good plan is to practice deliberately picking up ice cream and then not eating it, to make the plan feel real.
But let's say that doesn't work. Or, worse still, let's say you don't even actually get around to implementing your plan, and later you eat more ice cream and feel bad again. But everything's fine! You just didn't go meta enough. Why didn't you get around to implementing the plan? That sounds an awful lot like another link in the failure chain. And maybe you'll figure out why you didn't do the plan, and something else will get in the way of fixing that. The cycle continues.
The interesting thing is that, in a sense, all the failures are one failure. Your ice cream failure is really a knowing-how-to-make-ice-cream-plans failure, which may itself turn out to be a putting-aside-time-for-planning failure, which may end up being that you spend too much time playing golf. So all you need to do is adjust your golfing habits and those problems (and some others, usually) will go away.
I think to an extent we have this instinct that we mighty humans live outside of these systems. Like "I didn't consider the salience of the ice cream" is one answer, but "I should just do it again and not screw it up" is another. That line of thinking doesn't make any sense to me, though; your system is a system, and the you that implements it is also a system. Trying to just force-of-will your way through doesn't make that not true, it just means you do it badly.
To me that's the real value of going meta: you just keep running down the causes – mechanical, organisational, human – until you understand what needs to be done differently. Your actions aren't special; they yield to analysis just as readily as anything else. And I think there's something comforting in that.
There's a great quote, sometimes attributed to Kelvin, but apparently fabricated from things said by one or more other people, that goes "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement." Of course, this was in the late 1800s, just before the discovery of relativity, nuclear physics, quantum mechanics, subatomic particles, black holes and the big bang theory. So I guess you could say that turned out to be a bit short-sighted.
Though, when you think about it, what is the answer? Will we ever know everything? I think the instinctive answer is "no", because the universe is too big, and to an extent maybe we want it to be too big. But, if you follow that through for a second, how could it possibly be true? Purely from an information theory perspective, there's no way you can encode an infinite amount of information in a finite space, so, worst case, there must be a finite description of the observable universe.
That description could be as large as the universe itself, if the universe were purely structureless and random. I'm not even sure if that's possible; if the universe used to be smaller and denser, the information now can't be greater than the information then unless we assume some external source injecting information in. Regardless, the universe seems to have structure – in fact, a lot of structure – so I can't see any reason it won't, eventually, be completely described. I think, at some point, we will know everything.
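The structureless-and-random case is easy to demonstrate with an ordinary compressor: random bytes admit no shorter description, while structured data collapses to almost nothing. A toy sketch, where the 100 KB "universes" are obviously just stand-ins:

```python
import os
import zlib

N = 100_000  # bytes in each toy "universe"

# A structureless universe: uniformly random bytes.
random_universe = os.urandom(N)
# A structured universe: one short pattern, repeated.
structured_universe = b'galaxy! ' * (N // 8)

compressed_random = zlib.compress(random_universe, level=9)
compressed_structured = zlib.compress(structured_universe, level=9)

print(len(compressed_random))      # roughly N: the description is as big as the thing
print(len(compressed_structured))  # a few hundred bytes: structure compresses away
```

The more structure the universe has, the shorter its complete description can be; the compressed size is a crude stand-in for that description length.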
And where does that leave us? I mean, what do we do when the universe's mysteries are completely explained to us? Perhaps it will all seem pointless then. But on the other hand, there are a lot of domains where people effectively know everything now and it doesn't seem to bother them. It's possible to know everything about a given programming language, for example, or bicycle repair. I don't think people who use programming languages or repair bicycles are filled with existential dread. Or, at least, not because of the completeness of their knowledge. And many fields seem to just generate an infinite stream of new things.
In the end, I suppose I'm making an argument that essential complexity is finite, but I don't think the same is true of accidental complexity. I read an Iain Banks book where a super-advanced species lived only for a kind of futuristic Reddit karma. Maybe that's where we'll end up.