I've had an Arduino ECG shield kicking around for ages that I've been meaning to try out, so I finally gave it a go. The example sketch worked great, and it just dumps data via serial, so I wrote some Rust to read the serial data and print it. I spent a while trying to figure out the right way to do packet parsing in Rust before eventually giving up and just printing the raw data. The data looked cool, though Rust's immature library ecosystem probably makes it the wrong tool for this sort of problem compared with, say, Python.
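The shield's actual serial format isn't described in the post, so purely as a sketch of the packet parsing that got skipped, here's what it might look like in Python. The frame layout here is made up for illustration: a 0xAA start byte, a one-byte payload length, the payload, then a one-byte XOR checksum.

```python
# Hypothetical packet parser for a framed serial stream.
# Assumed (invented) frame layout: 0xAA start byte, 1-byte payload
# length, payload bytes, then a 1-byte XOR checksum over the payload.

from functools import reduce

def parse_frames(buffer: bytes):
    """Yield payloads of well-formed frames, skipping garbage bytes."""
    i = 0
    while i < len(buffer):
        if buffer[i] != 0xAA or i + 1 >= len(buffer):
            i += 1
            continue
        length = buffer[i + 1]
        end = i + 2 + length + 1  # header + payload + checksum
        if end > len(buffer):
            break  # incomplete frame; wait for more data
        payload = buffer[i + 2:i + 2 + length]
        checksum = buffer[end - 1]
        if reduce(lambda a, b: a ^ b, payload, 0) == checksum:
            yield payload
            i = end
        else:
            i += 1  # bad checksum: resync one byte at a time

# Two valid frames with a junk byte between them.
stream = bytes([0xAA, 2, 0x01, 0x02, 0x03]) + b"\xff" + bytes([0xAA, 1, 0x7F, 0x7F])
print([list(p) for p in parse_frames(stream)])  # [[1, 2], [127]]
```

In practice the bytes would come from the serial port rather than a fixed buffer, but the resync-on-bad-checksum loop is the part that's fiddly to get right in any language.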
I wanted to try out some Elm again, after my previous attempts turned out fairly unimpressively. Elm has since ditched its effect model for a new system, so I figured that would be fun. I actually felt like I was fighting things less this time, so it's nice to see the language mature. This was just a simple toy to generate random numbers (impure!) and send them over websockets (even more impure!). It worked well, and I had a much better experience than before. Elm still really likes starting you off with a simple mode with fewer options, which is just confusing because you outgrow it so quickly and then have to rewrite everything.
I recently got some milight smart bulbs and was hoping to control them using one of the many embedded computers I have lying around. Unfortunately, the protocol has some scrambling that's just complex enough to be annoying to hobbyists, while basic enough not to add any actual security. Oh, hardware manufacturers, never change. So I figured I'd put on my reverse-engineering hat and have a crack at it. First step was to pull together the various bits of information about the new bulbs and how to communicate with them. This culminated in a patch to the openmili library, which let me dump out the data.
Next up is the unscrambling. I've never really done much of this, but I figured I'd just look for patterns. Each message is 9 bytes. Bytes 1-6 and 8 seem random, but they change together: if one matches between two messages, they all match. Bytes 7 and 9 only match when you reset the remote, making byte 7 likely a counter and byte 9 likely a checksum. I collected enough data to find a way to predict byte 2 given byte 1. I think that technique should eventually allow descrambling the whole thing, but I still have to figure out how the counter byte changes. For now, this prototype is just the code for predicting that second byte.
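The actual scrambling relationship isn't given in the post, but the pattern-hunting approach it describes can be sketched: collect captured messages, tabulate which byte-2 value appears alongside each byte-1 value, and predict from that table. The "captures" below are made-up placeholders (byte 2 is just byte 1 XOR a fixed key), standing in for whatever the real scrambler does.

```python
# Sketch of the pattern-hunting approach: learn a byte-1 -> byte-2
# lookup table from captured 9-byte messages, then use it to predict.
# The capture data is invented, not real milight traffic.

from collections import defaultdict

def learn_byte2_table(messages):
    """Map each observed byte-1 value to the byte-2 values seen with it."""
    table = defaultdict(set)
    for msg in messages:
        table[msg[0]].add(msg[1])
    return table

def predict_byte2(table, byte1):
    """Predict byte 2 from byte 1; None if byte 1 was never observed or
    was seen with more than one byte-2 value (no stable pattern yet)."""
    candidates = table.get(byte1, set())
    return next(iter(candidates)) if len(candidates) == 1 else None

# Hypothetical captures: here byte 2 = byte 1 XOR 0x5A, a stand-in rule.
captures = [bytes([b, b ^ 0x5A] + [0] * 7) for b in (0x10, 0x2F, 0x83)]
table = learn_byte2_table(captures)
print(hex(predict_byte2(table, 0x10)))  # consistent pairing -> 0x4a
print(predict_byte2(table, 0x99))       # never observed -> None
```

The nice thing about this shape is that it degrades honestly: until the table has seen a byte-1 value paired with exactly one byte-2 value, it refuses to guess.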
I hear a lot lately about Big Data, the process of collecting lots and lots of information and then figuring out how to find meaning in all of it. This is an approach that has become popular in (especially tech) businesses, in science, and even in personal management. When it comes to data, bigger is better. Why have megabytes when you can have gigabytes, and why gigabytes when you can have terabytes? Just think of all the great science you can do with all that data!
Only, I'm not sure it really is science. I mean, one of the fundamental tenets of science is that you test theories with evidence. That means the theory has to come before the evidence does. If you do the experiment first and then figure out the theory afterwards, you fairly quickly fall into the famed phlogiston school of "oh what a coincidence, my theory also explains this evidence". In other words, you're not using the evidence to test the theory, you're using theory to describe the evidence.
Even when they work, there is something fundamentally missing from data-driven approaches; no matter how sophisticated, prediction isn't understanding. Let's say you have a powerful weather prediction model, trained on the biggest Big Data your big Big Data data warehouse can hold. It can tell you with 99.9% accuracy the weather tomorrow based on a thousand different dimensions of input. But do you actually know how the weather works? Have you learned anything about fluid dynamics? Can you turn your predictions into understanding?
I think the essential conflict is that understanding means less data. A terabyte of random-looking numbers can be replaced with a one-line formula if you know the rule that generated them. An enormously sophisticated and complex model can be replaced with a simple one if you figure out what the underlying mechanics are. The standard model of particle physics can fit on a t-shirt. If your model doesn't, either it's more complex than particle physics or you just don't understand it very well.
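To make the "terabyte replaced by a one-line formula" point concrete, here's a toy illustration. The generator chosen here (a linear congruential generator with arbitrary parameters) is my stand-in for "the rule"; the point is only that once you know the rule and the seed, the stored stream is redundant.

```python
# Toy illustration: a long stream of random-looking numbers collapses to
# a one-line rule plus a seed once you know how it was generated.

def rule(x):
    return (1103515245 * x + 12345) % (2 ** 31)  # the whole "dataset" in one line

# Materialise a chunk of the stream, as if it were collected data...
data, x = [], 42
for _ in range(10_000):
    x = rule(x)
    data.append(x)

# ...then show the stored numbers are redundant: rule + seed regenerates
# them exactly, so "understanding" replaces storage.
regenerated, x = [], 42
for _ in range(10_000):
    x = rule(x)
    regenerated.append(x)
print(data == regenerated)  # True
```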
Now, you might say particle physics is a bad example. The Large Hadron Collider records terabytes of data for every particle collision, surely that's Big Data? That actually gets to the core of the issue; it's not that analysing lots of data can't be useful, but it's a means, not an end. Nobody at CERN started with "hey let's just smash a bunch of particles together and record it all, maybe we'll find some science in there". Theory came first, and the LHC is the experiment that comes after. If there was an easier experiment, some way that didn't require all that data and all that expense, you think anyone would bother with particle colliders?
The most wonderful skill in all of science is to take a complex question and turn it into a simple question, and then use a simple answer to solve both. Big Data is too often a way of answering complex questions without making them simple. What you get is a complex answer, when what you really wanted was a simple one. Sometimes you might need a lot of data to answer a simple question, but often you only need a little, and I think it'd be good to see more hype for little data.
Today I'm happy to release Automata by Example, a new project where you can build your own cellular automata by just clicking around on a grid. Each click creates new cellular automata rules that lead to large scale changes across the entire grid.
I think of this project as being vaguely a successor to The Sound of Life and Scrawl, two previous projects concerning cellular automata and iterative line construction respectively. The difference here is that instead of the system being controlled directly, it's controlled by rules that you generate.
The technique, which I think of as rule generation, is something a little like direct manipulation, but with an additional generalisation step. You click the pixel you want, but instead of just setting the pixel directly, the system figures out a rule that would set that pixel and applies that rule globally. In other words, you determine the rule from the action, then use the rule to apply more actions.
It seems to work really well, provided your system is simple enough to turn rules into actions easily. The 3x3 neighbourhood grid (inspired by Conway's Game of Life) turns out to be great for this. Each click captures the 3x3 neighbourhood at the time of the click and uses that to generate a cellular automata rule that toggles the cell under the mouse and all others with the same neighbourhood.
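The rule-generation loop described above can be sketched in a few lines. This isn't the project's actual code; the grid size, wrap-around edges, and rule representation are assumptions for illustration, but the core idea is as described: a click records the 3x3 neighbourhood rather than setting the cell, and each step toggles every cell whose neighbourhood matches a recorded pattern.

```python
# Sketch of "rule generation": a click captures the 3x3 neighbourhood of
# the clicked cell as a toggle rule, and each step applies all rules to
# every cell with a matching neighbourhood. Representation is assumed.

SIZE = 8

def neighbourhood(grid, r, c):
    """The 3x3 block around (r, c) as a tuple, wrapping at the edges."""
    return tuple(
        grid[(r + dr) % SIZE][(c + dc) % SIZE]
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
    )

def click(grid, rules, r, c):
    """A click doesn't set the cell directly: it records the current
    neighbourhood pattern as a new global toggle rule."""
    rules.add(neighbourhood(grid, r, c))

def step(grid, rules):
    """Toggle every cell whose neighbourhood matches a recorded rule."""
    return [
        [grid[r][c] ^ (neighbourhood(grid, r, c) in rules) for c in range(SIZE)]
        for r in range(SIZE)
    ]

grid = [[0] * SIZE for _ in range(SIZE)]
rules = set()
click(grid, rules, 3, 3)   # all-dead neighbourhood -> rule matches everywhere
grid = step(grid, rules)
print(sum(map(sum, grid)))  # every cell toggled on: 64
```

The example also shows why the technique generalises so aggressively: clicking one cell of an empty grid captures the all-dead neighbourhood, so the resulting rule fires on every cell at once.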
This representation makes construction rules easy, but it's quite hard to be exhaustive. For example, if you want to match all cells with 4 neighbours you have to explicitly create every permutation as a rule. Still, there are a lot of fascinating patterns and interesting automata you can make with very few rules.
I've been thinking about something interesting in the drawing space lately. There are various different options for constructing shapes with perfect proportions, I even worked on a demo along those lines myself. However, there's something inherently mechanical and kind of inhuman about these approaches. You can make a square, sure, but all the squares look the same; there's no variation, no nuance.
The only reason you need construction techniques is because of the limitations of our hands and our tools. We can't easily draw straight lines or circles freehand because our limbs don't naturally move in straight or circular patterns. We can't really change our hands (yet), but we can change our tools.
What if we made tools that were more assistive in nature? A ruler is perfectly rigid, but what if you had a magnetic pen and put a metallic grid underneath the paper? It wouldn't force you to draw a straight line, but it would mean you naturally move more linearly. That means your lines will still be, broadly speaking, straight lines, but still with some human variation.
A more circular version of this could perhaps be done with a complex arrangement of gears and pulleys that change the fundamental mechanics of your pencil's movement. So you can draw straight lines, but the gears naturally try to pull you into spirals. A perfectly straight line, in this world, would still be kind of circular.
There is of course, the spirograph, which is a rigid version of this, but I think it'd be worth making something with spirograph-like behaviour designed to adjust your drawing rather than control it. A cyborg spirograph.
Last time I was doing some silly projects and experiments. Over the last couple of weeks I kept the same creative vibe going and worked on some new things to post. I also added Racket to the list of languages I've made prototypes in, which feels pretty good.
I've been working on an idea for neural network-driven music. Since it doesn't need to be, uh, strictly scientific I've been playing around with different ways to do it. I had an older version where you drew the graph manually, but I realised it would be more fun to just play notes and have that automatically build and train a network. To kick that off I figured I'd make a simple web piano. Took a bit of messing around to get the oscillators not to pop, but worked great in the end.
This was some supporting code for Etaoin Shrdlu. To get everything to work I needed to do some data processing. I could have just done it fairly quickly in CoffeeScript or Python or something, but I figured it'd be an interesting chance to try a new language. I'd just watched a neat video about Racket so I gave that a try. I hadn't used a Lisp before, but it was pretty painless and I got used to the parentheses quickly enough. Still not sure about all the closing parens being on the same line though, that's weird.