The Thrilling Adventures of Lovelace & Babbage

23 Jan 2017

Tori got me The Thrilling Adventures of Lovelace & Babbage by Sydney Padua for Christmas; I’d seen a few scattered excerpts, but didn’t really know anything about the thing as a whole. Turns out it’s rather good. The comics are entertaining, imaginative and well executed, but the real treat is the notes. Padua goes into detail about the characters, history and technology in an informal and engaging way, often exploring surprising tangents. Definitely recommended.

Now Playing

22 Jan 2017

A very brief post to note that I’ve added a /now page. This is something of an experiment; time will tell if I consistently update it, or it languishes unloved for a while and then gets shoved down the memory hole.

How To Get Ahead In Pro Computing

27 Nov 2016

If you follow things in the Mac world, you can’t have missed the hand-wringing regarding the Mac Pro. The 5K iMac is an excellent desktop all-in-one, and the MacBook Pro line just had a substantial update (albeit overdue, and not without issues). The Mac Pro, on the other hand, has had no updates since it was unveiled to great fanfare in 2013. This lack of attention has led to speculation that Apple is no longer interested in Pro desktops. Marco’s widely-circulated post does an excellent job of explaining why this would be a bad thing.

One thing that has led to pessimism regarding the Mac Pro’s future is the news that Apple is out of the stand-alone display business. That move makes more sense for a company that doesn’t make desktops that need external monitors. However, it raises an interesting possibility. What if the next Mac Pro, or whatever replaces it, doesn’t connect to a monitor at all?

The key insight is that the machine that’s doing your intensive, pro-level computation (and generating all the noise, and slurping the power) does not need to be the one you’re looking at and touching. The user interface, including hardware like the monitor and keyboard, is provided by an iMac or MacBook, and the Pro box just needs to worry about the low-level, high-performance calculation. Dividing the responsibility like this allows each part to make different trade-offs. It frees the user-facing part from trying to dissipate heat from demanding components, and frees the computation part from concerns of ergonomics.

This path is well-trodden for larger teams; animators and developers share farms of servers to render graphics and compile code. A headless Mac Pro would provide similar benefits to individuals. It could do this over the network, being essentially a miniature, personal server farm, but Apple could instead (or as well) use a faster, more direct connection. Their enthusiastic support for Thunderbolt 3 may not just be to get rid of big, ugly legacy ports.

If you relax the constraint that the thing providing computational grunt has to be a fully fledged general purpose computer, you could do this today. Thunderbolt is essentially an external PCIe bus, so a high-end graphics card in a suitable enclosure could provide plenty of additional OpenCL cores. If you want something more like traditional CPUs, Intel’s Xeon Phi would work similarly.

The big stumbling block in this is application support; these thousands of cores won’t do much good if they sit idle while Lightroom grinds away on your internal i7. However, Apple has a good track record in this regard. Things like Grand Central Dispatch show that they have both the technical chops to design a usable API for complex tasks like parallel programming, and the organisational will to aggressively push its adoption.

As a final cherry on the cake, a headless Mac Pro would also answer the question of doing “real” (in this case, computationally intensive) work on an iPad Pro. An external box would allow developers to create hybrid applications that served demanding use cases without losing the things that make an iPad an iPad. The Surface Studio seems like a compelling form factor, but in Apple’s world it needs to be iOS, not a Mac. A headless Mac Pro (combined with an even larger iPad) would achieve this1.

I’m not saying that we’re about to see the release of such a Mac Pro — predicting future Apple products is a game for fools and clickbaiters. However, it would seem to allow them to support “Pro” users (not forgetting their own developers and designers) without compromising the commercially more relevant consumer products. Whatever happens, if they don’t do something, they’ll end up ceding the space to a competitor. If they no longer care about it, then perhaps that’s a good thing.

(The heading image is Caravaggio’s Judith Beheading Holofernes.)

  1. One interesting but solvable problem that this raises is Thunderbolt, which is an Intel technology. Might we see an x86 iPad before an ARM Mac? [back]

Ain't Got Nothing

14 Nov 2016

At work, we use web technology for something a little unusual: delivering real-time augmented reality to operating rooms to support image-guided surgery. We’re constantly looking for ways to stay on top of the complexity involved, whilst still being able to add new functionality. To this end, we’ve just migrated our code base from vanilla JavaScript to TypeScript, Microsoft’s statically typed variant that’s just hit 2.0. All in all, I’ve been very pleased with the results.

For those of you not familiar with it, TypeScript is essentially a superset of ECMAScript 6, with a type system added on. The decision to base the language on ECMAScript, rather than just using it as a compilation target, gives a far clearer migration path — you can simply change the extension on your existing source file and start adding type annotations. Compared to the prospect of a complete rewrite, this makes migrating an existing code base a far less… courageous endeavour.
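
To make that concrete, here’s a rough sketch of what the process looks like. The names (Matrix4, applyTransform) are illustrative, not taken from our actual code; the point is that the body stays as plain JavaScript and only the signature gains annotations.

    // Before: plain JavaScript in transform.js (hypothetical file).
    // After renaming it to transform.ts, annotations can be added a piece at a time.
    interface Matrix4 {
      elements: number[]; // 16 values, column-major
    }

    // Only the signature changed; the body is the original JavaScript.
    function applyTransform(point: [number, number, number], m: Matrix4): [number, number, number] {
      const [x, y, z] = point;
      const e = m.elements;
      return [
        e[0] * x + e[4] * y + e[8]  * z + e[12],
        e[1] * x + e[5] * y + e[9]  * z + e[13],
        e[2] * x + e[6] * y + e[10] * z + e[14],
      ];
    }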

Another benefit is that you get to use all of the nice features of ES6 (lexical let and const, arrow functions, default arguments) without having to worry about support on various platforms (the TypeScript compiler takes care of it). On the other side of the scales, it’s true that TypeScript inherits all of JavaScript’s flaws. However, I quite like JavaScript, especially the recent versions. Moreover, the vast majority of the issues that remain (numerical representation, limited tooling, anaemic standard library) are characteristics of the environment rather than the language itself.
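
For instance (a trivial, made-up example), all of the following compiles down to whatever target you’ve configured, ES5 included:

    // let and const are block-scoped; const can't be reassigned.
    const SCALE = 2;
    let total = 0;

    // Arrow function with a default argument.
    const scaleAll = (values: number[], factor: number = SCALE) =>
      values.map(v => v * factor);

    for (const v of scaleAll([1, 2, 3])) {
      total += v; // 2 + 4 + 6 = 12
    }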

The idea of glomming a type system onto an existing dynamic language is one that could go horribly wrong, but TypeScript does a very good job of it (unsurprisingly, given its pedigree). Firstly, the type system is gradual; by default, things get the permissive any type, so your unmodified JavaScript will work out of the box. Once you’ve annotated everything, you can turn this default off to keep everyone honest.
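
In other words (again, an illustrative sketch rather than real code), the untyped version compiles unchanged, and flipping the default off later flags it for you:

    // With the default settings, unannotated parameters are implicitly `any`,
    // so this unmodified JavaScript compiles as-is.
    function volume(width, height, depth) {
      return width * height * depth;
    }

    // With "noImplicitAny": true in tsconfig.json (or --noImplicitAny on the
    // command line), the version above becomes a compile error, nudging you
    // towards the annotated form:
    function volumeTyped(width: number, height: number, depth: number): number {
      return width * height * depth;
    }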

This doesn’t mean you have to have annotations for every function and variable, though; like any modern language worth its salt, it uses type inference to fill in the blanks. The type system also supports algebraic types, including (in TypeScript 2.0) discriminated unions. If you tend towards a functional rather than object oriented style anyway, the result is something that feels very much like an ML for the working programmer.
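
Here’s a small, made-up example of a discriminated union; the kind field is the tag the compiler uses to narrow the type in each branch:

    // The string literal `kind` property is the discriminant.
    type Shape =
      | { kind: "circle"; radius: number }
      | { kind: "rect"; width: number; height: number };

    function area(shape: Shape): number {
      if (shape.kind === "circle") {
        // Narrowed to the circle variant: radius is available, width isn't.
        return Math.PI * shape.radius * shape.radius;
      }
      // Narrowed to the rect variant.
      return shape.width * shape.height;
    }

    // Type inference fills in the rest: `a` is inferred as number.
    const a = area({ kind: "rect", width: 3, height: 4 });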

While TypeScript has a lot of details that I like, there’s one feature in particular that has won me over: strict null checks1. This means that, if the type system says that something is a string, it’s actually a string; it can’t be null (or undefined). This eliminates a pervasive flaw in JavaScript, and indeed almost all other languages that use reference types (Tony Hoare called it the “billion dollar mistake”). If a null value is a possibility, you have to account for that, or your code won’t type check. The impact of this seemingly simple change is immense; in fact, I’d recommend adopting TypeScript on the strength of this feature alone.
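
A quick sketch of what that looks like in practice (illustrative, not from our code):

    // With --strictNullChecks, `string` no longer includes null or undefined;
    // if null is a possibility, it has to be stated in the type.
    function describe(label: string | null): string {
      // Calling label.trim() here without the check would be a compile error,
      // because label might be null.
      if (label === null) {
        return "(unlabelled)";
      }
      return label.trim();
    }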

The only significant snag we hit in moving to TypeScript is the build process. We try and keep third party dependencies to an absolute minimum, which meant we’d previously been able to get away with a very minimal build system based on RequireJS. With TypeScript introducing a mandatory build step, we decided to bite the bullet and go with something more fully featured.

We picked webpack, and while we have a system that works, I’ve not been particularly impressed. On the plus side, it’s allowed us to adopt SCSS, and supports things like source maps. However, it’s a prime example of the JavaScript community’s tendency to have hundreds of micro-dependencies. This isn’t a deal breaker for us in this case, as they’re development dependencies rather than runtime ones, but it is still far from ideal. Related to this, much of the functionality (even basic things, such as returning a failing exit code if one of your build steps doesn’t work) is in the form of third party plugins. The TypeScript plugin we’re using seems to be having some kind of configuration turf war with the compiler’s own configuration system, which means we can’t benefit from a lot of the deep tool support that the latter provides.

The specific issues we’re seeing are relatively minor, and I expect that we’ll solve them with a little tweaking. The underlying problem will still be there: there seems to be an unnecessary amount of accidental complexity to solve a relatively simple problem. This isn’t just webpack, though — we went with it because it looked like the least bad JavaScript build system. If we replace it with anything, it’ll be good old-fashioned make (“the worst build system, except for all the others”).

When compared to more established languages like Python, the JavaScript landscape is, to put it mildly, somewhat turbulent. Innovation and pace are valued over stability and rigour. However, as the web platform is used for more and more critical things, this is starting to change. TypeScript is definitely a move in the right direction.

  1. Strict null checking was introduced in TypeScript 2.0, but as it will almost certainly break existing code, it’s off by default. You can enable it with the flag --strictNullChecks, and I highly recommend that you do. [back]

In The Abstract

18 Aug 2016

In Under the Radar #37, David and Marco discuss code reuse, and the benefits of abstraction. They come down solidly on the side of YAGNI — You Ain’t Gonna Need It. In other words, it’s usually not worth putting extra effort into making something more general than it needs to be right now, as you’re unlikely to correctly predict the way in which it will need to generalise (if it ever does).

This is certainly not a straw man; it’s a trap I’ve often seen people fall into, myself included. However, I’ve also seen people go too far in the opposite direction, viewing any abstraction or generalisation as a problem.

This made me look at my own use of abstraction, and I found myself thinking of the Field Notes brand tagline:

“I’m not writing it down to remember it later, I’m writing it down to remember it now.”

Similarly, I tend not to create abstractions to cope with future cases, but in order to understand the code that’s already there. Some people understand things best when they’re concrete and explicit, but I’m not one of them. I need to systematise things, to put them in a framework and create rules. This is one of the reasons I’m attracted to programming. The framework I construct to understand isn’t just a model of something; it can be the thing.

Abstraction, of course, can have numerous other benefits — brevity, clarity, and yes, even reuse. For me, though, it’s first and foremost a tool for understanding what the hell’s going on.
