A classic piece of software lore is the "10X programmer". That is, the difference in productivity between programmers is not just a bit, it's a whole order of magnitude; the best programmers are 10 times more productive than the worst. It's something I'd heard repeated a lot of times, and I kind of assumed it was yet another poorly-supported software development factoid. However, it's actually backed up by a fair bit of evidence, at least by the standards of software development. In fact, the difference may be as high as 20 times.
But those numbers are just nuts. I mean, an average person can probably do a 100m sprint in 15-20 seconds. The current world record is just under 9.6 seconds. If sprinting had the same variation as software, the fastest time would be 10 seconds and the slowest would be 1.6 minutes. And, let's be clear, the studies were not comparing amateurs to professionals; the variation was between professional programmers. Professional sprinters, as opposed to amateurs, are generally expected to run a time close to 10 seconds, so the equivalent variation in sprinting would be something like 1.05X.
Numbers that high suggest something beyond just differences in how long it takes to put code into a computer. 10X says to me that there's an exponential hidden in there somewhere; something that happens early in a project that reinforces itself throughout the life of the project. That something can't be writing code, which is hard but not that hard, but it does sound a lot like design: making decisions about code. Each decision compounds with each new decision, and those decisions have a drastic impact on the productivity of the project.
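To make the compounding intuition concrete, here's a toy model (the numbers are entirely made up for illustration): if each decision makes all future work a constant factor harder, the cost grows exponentially with the number of decisions.

```javascript
// Toy model: each decision multiplies all future effort by `factor`.
// The specific numbers are invented purely for illustration.
const effortMultiplier = (factor, decisions) => factor ** decisions;

// Twenty-five decisions, each making things just 10% harder,
// compound to roughly a 10X difference overall.
effortMultiplier(1.1, 25); // ≈ 10.8
```

No single decision looks like a disaster; the order-of-magnitude gap only appears over the whole chain.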
So I don't think the 10X effect is because good programmers work faster, but rather that they work in a way that avoids making more work. Bad programmers make bad decisions. Those decisions make everything else take longer, and they compound together with future bad decisions. A good programmer can avoid getting into that situation in the first place but, crucially, a good programmer deep into a badly designed program is not going to be 10X anything.
Which means that a better way to think about it is 1/10X. That is, good programmers are able to make things 1/10th as difficult for themselves, and thus do 1/10X the work. Chuck Moore controversially claimed that he could solve a problem in colorForth using 1% of the code it would take someone else to solve it in C. But note the crucial distinction: he didn't say he would take someone else's C and rewrite it in Forth to be 1% of the size. Rewriting it would make it a bit smaller, sure, but the biggest difference is that he would redesign it.
It's worth thinking beyond software, too. Any situation where your decisions compound with themselves would show similar variation. And the crucial thing is that someone looking and seeing only the most recent decision would say "you're not doing anything that much better, your problems are just easier to solve". But you need to look at the entire chain of decisions; each problem is a consequence of the decisions that come before it. The real trick is to avoid letting your problems become difficult in the first place.
This was my second run at trying out different languages for my superlative number market idea. This time I tried Rust, and I found it a lot more approachable than Pony. I managed to get a working web service that could store and retrieve a message, though I confess at a certain point I lost my grip on the memory management and just arbitrarily sprinkled ref/mut/&/move/as_ref/unwrap/Mutex around until it worked. I can see how the model could be very powerful once I've internalised it.
I find myself writing a really basic jQuery knockoff in one line at the top of my files a lot. jQuery's huge and I really just want a little bit of sugar over the DOM, which, in modern browsers at least, is surprisingly bearable. I thought it'd be pretty fun to try to make a modern jQuery small enough to fit in a tweet. Unfortunately I only got it down to 162 characters, but it can query for 1 or multiple elements and create documentFragments from html strings, which is pretty good considering.
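For flavour, here's roughly the kind of helper I mean — a hypothetical sketch rather than the actual 162-character version, expanded for readability:

```javascript
// Hypothetical sketch of a tweet-sized jQuery-alike (not the actual
// 162-character version). Takes a CSS selector or an HTML string;
// `ctx` defaults to the page's document in a browser.
const $ = (sel, ctx = document) =>
  sel[0] === '<'
    // HTML string -> DocumentFragment
    ? ctx.createRange().createContextualFragment(sel)
    // Selector -> the single matching element, or an array of them
    : (els => els.length === 1 ? els[0] : [...els])(ctx.querySelectorAll(sel));
```

That's most of what I actually use jQuery for, in a couple of lines of modern DOM API.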
This was MeDB, the first part of my stats resurgence. It's a little plumbing utility to load data into InfluxDB for use as a personal metrics database. I got as far as recording public GitHub stats and CouchDB documents from my website, but I designed it to be extensible for more plugins and things. At some point I'll probably clean it up a little and drop it all on npm as a real project.
This was the second part, Monotonic, which pulls data out of InfluxDB and puts it into my CouchDB. Nothing terribly fancy here, but I needed it for the stats to work. Much like MeDB, this is designed to be extensible so I can add more stats later.
This one turned into a full-blown project called pullitzer. Basically I wanted to mirror some of my GitHub repos to my server as a kind of ghetto deployment system. I got to spend a bit of fun time messing around with HMACs and things, but mostly it was pretty straightforward. It's now powering WTFISydJS, all part of my master plan to remove all manual intervention from updating that site.
This one got way out of hand. I'd had an idea for doing Markov chain music for a while, and I wrote a bit of code but never really got stuck into it. This time I did, but it turns out splitting all the samples out, although quite relaxing, was incredibly time consuming. I totally blew my time estimate doing it, but I ended up with a fun demo and something to put up on GitHub. What I'd really like to do in a future prototype is add some nice graphics like I did with the later Audiomata prototypes.
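The core idea is simple: record which sample tends to follow which, then walk that table randomly. A minimal sketch (hypothetical; not the prototype's actual code):

```javascript
// Build a first-order Markov chain: for each sample, record which
// samples followed it in the training sequence.
function buildChain(sequence) {
  const chain = {};
  for (let i = 0; i < sequence.length - 1; i++) {
    (chain[sequence[i]] = chain[sequence[i]] || []).push(sequence[i + 1]);
  }
  return chain;
}

// Pick a random successor of the current sample.
function next(chain, current, rand = Math.random) {
  const options = chain[current] || [];
  return options[Math.floor(rand() * options.length)];
}
```

Repeatedly calling `next` produces music that locally resembles the training material but wanders globally.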
This was an idea that rose out of some quibbles I had with Redux's actions. Everything just seemed so wordy. I thought it'd be interesting to see if I could make something a bit nicer. Architecturally, Redux is a lot of fun to work with. It's not that there's anything amazingly mindblowing in there, it's just good design and good code. The end result of my tinkering ended up being about 30 lines, which is very nice and largely a result of having a decent system to build on top of.
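As an illustration of the kind of wordiness I mean — this is a hypothetical helper, not my actual 30 lines — you can derive Redux action creators from names instead of hand-writing a type constant and a creator for each one:

```javascript
// Hypothetical sketch: generate Redux-style action creators (and
// their type constants) from a list of names.
function makeActions(...types) {
  const actions = {};
  for (const type of types) {
    const creator = payload => ({ type, payload });
    creator.type = type; // expose the constant for use in reducers
    actions[type] = creator;
  }
  return actions;
}
```

So `makeActions('addTodo', 'removeTodo')` replaces two type constants and two hand-written creators in one line.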
So I did well this week, but I still feel squarely in the yellow zone. I still blew through my time limits a few times, and if things had gone even a little bit differently I might not have made it. So for the meantime I'm going to keep at it, keep committing, and work at it until it gets easier. Until next week!
Over time, it seems like we're gaining more and more control over our lives. While a lot of that can be attributed to political conditions favourable to personal freedom, I think the real driver of control these days is technology. For example, it used to be that you had very little choice about who you interacted with on a daily basis. The people you lived near were your people, and, like them or not, they were who you'd spend time with. But now between cities, private transport, internet shopping and communication technology, most of the time you don't need to interact with anyone you don't choose to.
Partly as a consequence of this, you also have a lot more control over the ideas and information you're exposed to. News and other media are a lot more decentralised and personalised than they used to be, with the consequence that instead of reading a national or regional newspaper and listening to the radio, you tend to read hyper-specialised information from newsletters, websites, podcasts and social media. Sometimes this is done by a specific personalisation system (eg Facebook's News Feed or Reddit's subreddit filtering), but it's also just the de facto consequence of having more and finer-grained choices.
And the last one is that we have more control over what we do. The rising tide of education (and formal education alternatives), social mobility and individual power means that it's much more feasible to aim for not just work, but vocation and passion. Whereas once upon a time you would take the first good job you could get your hands on and spend a career under the wing of one corporate entity, modern workers have more power and flexibility to make work work for them.
So we have ever-increasing control over who we interact with, what information we see, and what we do with our time. Those are all good things! While there are perfectly valid arguments about too much choice being mentally taxing, and our increase in personal power outstripping our society's ability to set social norms around it, I think those factors are not enough to really swing the equation: more control means we get better outcomes.
However, there is one important caveat that I think is underappreciated: that power is a lot of responsibility. I don't mean moral responsibility, though I did write about that earlier. I mean that if you have complete control over your life, then the quality of that life is completely up to you. So if you're not very good at choosing good people to interact with, good information to consume, or good activities to pursue, there's nothing stopping you from ending up in a fairly terrible situation. And, even in a less extreme sense, the more control you have, the more you rely on your own judgement.
I would never suggest giving up the important gains in control we've made, but I would suggest being aware of that significant limitation. If there's something better out there, something so good you'd never think to look for it, complete control guarantees you won't find it. For that reason, I think it can be worth deliberately giving up control in limited ways. Talking to strangers (or letting them talk to you), taking on an activity you would normally never do, and exposing yourself to strange and uncomfortable sources of information are all ways you can do this.
More generally, it's dangerous to put yourself in a position where you are betting on the completeness of your present understanding of the world. It's not so much that you might turn out to be wrong, it's that you'd never know.
There's an old internet adage known as the robustness principle. It says "be conservative in what you do, be liberal in what you accept". That is, you should attempt to act according to specification, but be flexible and recover from non-spec behaviour from others. It seems like a good idea at first, but then the non-spec behaviour that you accept inevitably becomes a new de facto spec. My favourite example of this is the 3.5mm audio jack.
In many real-life situations, I think people try to follow a similar robustness principle. They are more than willing to accept a certain level of deviance in others, but are conservative in expressing their own differences. So at a job you might pretend to be interested in work that bores you, or on a date you might avoid mentioning that a significant part of your life revolves around collecting Pokemon cards. In a sense, this is trying to follow a particular social specification to maintain compatibility with others.
But I'm not sure that is actually the right strategy in the long term. After all, if you make a point of not mentioning how you differ from the standard, all that means is that you will end up with people and situations that aren't actually compatible with you. If Pokemon collecting is that large a part of your life then you're attracting people who aren't compatible with you by pretending to be compatible with them. And, the reverse problem, other people who share your passion for card collecting may pass you over because you didn't say anything.
In that sense, I think compatibility is symmetrical; in every way that you're incompatible with someone or something, they're also incompatible with you. So homogenising away your differences just hides that incompatibility behind a veneer of superficial compatibility, much like the audio jacks with a zillion different pin layouts, or the early days of the web with wildly different rendering behaviour in different browsers.
That's not to say there's never any reason to feign compatibility. If you're going for quantity over quality, and you're happy to engage at a superficial level, it makes sense. Much like political figures hide their rough edges to appeal to the masses, maybe you can gain more popularity by being more compatible. There's even a theory that the web really took off for that reason, because people could just write what they wanted and browsers would kind of figure it out. But then, inevitably, that generation of amateur web developers learned bad habits and became incompatible with the actual standard.
So perhaps it's worth being a bit more liberal in what you do, and conservative in what you expect. That might not win you as many friends, but the ones you do make will be actually compatible with who you are. That seems to me a better kind of robustness.
With rare exception, software never seems to be complete. Donald Knuth famously gives TeX version numbers that asymptotically approach pi. The last major version was 3, and it's currently on 3.14159265. There will be no more major versions, no new features. Each subsequent version will only include (more and more minor) bugfixes. That is to say, Knuth considers TeX to be done, and is now only pursuing closer and closer approximations of correctness.
I think it's useful to think about software in terms of the Platonic ideal. In the real world, of course, you run into a lot of problems trying to define a perfect abstract chair, but there is absolutely such a thing as a perfect abstract algorithm. When you're implementing Quicksort your implementation is an approximation of that algorithm. And, in a sense, all sorting algorithms are approximations of an ideal abstract sort, which returns correctly sorted elements with the minimum possible resources.
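For instance, this simple non-in-place Quicksort is one concrete approximation of the abstract sort: its output is correct, but its liberal array copying puts it well short of the ideal's minimum resource use.

```javascript
// A simple, non-in-place Quicksort: correct output, but it copies
// arrays freely, so it only loosely approximates the ideal sort's
// minimal resource use.
function quicksort(arr) {
  if (arr.length <= 1) return arr;
  const [pivot, ...rest] = arr;
  return [
    ...quicksort(rest.filter(x => x < pivot)),
    pivot,
    ...quicksort(rest.filter(x => x >= pivot)),
  ];
}
```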
Even for less abstract problems, it can be meaningful to use Platonism to think about software. You can define a perfect calculator as one that includes every one of a set of defined mathematical operations, executes them in a defined (bug-free) way, and operates using the minimum resources to achieve that. In a sense, all testing (and in particular formal verification) relies on this idea of an abstract ideal program that your program can be measured against.
However, the more your software tries to do, the more complex that Platonic ideal is. It's rare that a piece of software will be as simple as a calculator; usually the requirements will be in some way defined by the needs of real people, which means that the software model also needs to include a user model, and if it is commercial software, the software model is dependent in some way on a business model. These extra models are what give rise to user acceptance testing and behaviour-driven development.
In the extreme, your software's requirements can become so complex that its abstract ideal is a system of infinite size. Perhaps that sounds hyperbolic, but actually it's not so uncommon. When you define software by the features it has, ie: does x, does y, does z, it's going to be finite. But systems are more often defined in terms of goals like "people can say anything and our software will do it", or "be able to access all the content on the internet". Apple's Siri and Google Chrome are both implementations of an infinitely large abstract system.
How can you tell? The question is, when would you stop adding features? What would your software have to do to be finished? Siri will never stop adding features because we will never stop having things we want to do. Chrome will never be finished because there will always be more new kinds of content on the internet. The ultimate end goal of both systems is a fully general system capable of doing anything: a Platonic Black Hole.
If you're making a system like that, it's not necessarily a bad thing, but it does make your situation kind of unstable. While other software will slow down as it asymptotically approaches completeness, yours will just keep growing and growing forever. The eventual fate of the finite system is that the marginal cost of additional work on it drops below the marginal benefit of ever-tinier improvements. At that point, you can step away from the keyboard; it's done.
But the eventual fate of the infinite system is that it gets so large that it can't adapt to future change, and another, newer infinite system takes its place.