How To Get Ahead In Pro Computing

27 Nov 2016

If you follow things in the Mac world, you can’t have missed the hand-wringing regarding the Mac Pro. The 5K iMac is an excellent desktop all-in-one, and the MacBook Pro line just had a substantial update (albeit overdue, and not without issues). The Mac Pro, on the other hand, has had no updates since it was unveiled to great fanfare in 2013. This lack of attention has led to speculation that Apple is no longer interested in Pro desktops. Marco’s widely-circulated post does an excellent job of explaining why this would be a bad thing.

One thing that has led to pessimism regarding the Mac Pro’s future is the news that Apple is out of the stand-alone display business. That move makes more sense for a company that doesn’t make desktops that need external monitors. However, it raises an interesting possibility. What if the next Mac Pro, or whatever replaces it, doesn’t connect to a monitor at all?

The key insight is that the machine that’s doing your intensive, pro-level computation (and generating all the noise, and slurping the power) does not need to be the one you’re looking at and touching. The user interface, including hardware like the monitor and keyboard, is provided by an iMac or MacBook, and the Pro box just needs to worry about the low-level, high-performance calculation. Dividing the responsibility like this allows each part to make different trade-offs. It frees the user-facing part from trying to dissipate heat from demanding components, and frees the computation part from concerns of ergonomics.

This path is well-trodden for larger teams; animators and developers share farms of servers to render graphics and compile code. A headless Mac Pro would provide similar benefits to individuals. It could do this over the network, being essentially a miniature, personal server farm, but Apple could instead (or as well) use a faster, more direct connection. Their enthusiastic support for Thunderbolt 3 may not just be to get rid of big, ugly legacy ports.

If you relax the constraint that the thing providing computational grunt has to be a fully fledged general purpose computer, you could do this today. Thunderbolt is essentially an external PCIe bus, so a high-end graphics card in a suitable enclosure could provide plenty of additional OpenCL cores. If you want something more like traditional CPUs, Intel’s Xeon Phi would work similarly.

The big stumbling block in this is application support; these thousands of cores won’t do much good if they sit idle while Lightroom grinds away on your internal i7. However, Apple has a good track record in this regard. Things like Grand Central Dispatch show that they have both the technical chops to design a usable API for complex tasks like parallel programming, and the organisational will to aggressively push its adoption.

As a final cherry on the cake, a headless Mac Pro would also answer the question of doing “real” (in this case, computationally intensive) work on an iPad Pro. An external box would allow developers to create hybrid applications that served demanding use cases without losing the things that make an iPad an iPad. The Surface Studio seems like a compelling form factor, but in Apple’s world it needs to be iOS, not a Mac. A headless Mac Pro (combined with an even larger iPad) would achieve this1.

I’m not saying that we’re about to see the release of such a Mac Pro — predicting future Apple products is a game for fools and clickbaiters. However, it would seem to allow them to support “Pro” users (including, not forgetting, their own developers and designers) without compromising the commercially more relevant consumer products. Whatever happens, if they don’t do something, they’ll end up ceding the space to a competitor. If they no longer care about it, then perhaps that’s a good thing.

(The heading image is Caravaggio’s Judith Beheading Holofernes.)

  1. One interesting but solvable problem that this raises is Thunderbolt, which is an Intel technology. Might we see an x86 iPad before an ARM Mac? [back]

Ain't Got Nothing

14 Nov 2016

At work, we use web technology for something a little unusual — delivering realtime augmented reality to operating rooms to support image-guided surgery. We’re constantly looking for ways to stay on top of the complexity involved, whilst still being able to add new functionality. To this end, we’ve just migrated our code base from vanilla JavaScript to TypeScript, the statically typed variant from Microsoft that’s just hit 2.0. All in all, I’ve been very pleased with the results.

For those of you not familiar with it, TypeScript is essentially a superset of ECMAScript 6, with a type system added on. The decision to base the language on ECMAScript, rather than just using it as a compilation target, gives a far clearer migration path — you can simply change the extension on your existing source file and start adding type annotations. Compared to the prospect of a complete rewrite, this makes migrating an existing code base a far less… courageous endeavour.
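
To make that concrete, here’s an invented snippet (not from our code base): the untyped function is already valid TypeScript once the file is renamed, and the annotated version is the same code after a pass of adding types.

    // Plain JavaScript: rename the file from .js to .ts and it compiles
    // as-is (unannotated parameters default to the permissive `any` type).
    function scale(point, factor) {
      return { x: point.x * factor, y: point.y * factor };
    }

    // The same code after a pass of adding annotations.
    interface Point {
      x: number;
      y: number;
    }

    function scaleTyped(point: Point, factor: number): Point {
      return { x: point.x * factor, y: point.y * factor };
    }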

Another benefit is that you get to use all of the nice features of ES6 (lexical let and const, arrow functions, default arguments) without having to worry about support on various platforms (the TypeScript compiler takes care of it). On the other side of the scales, it’s true that TypeScript inherits all of JavaScript’s flaws. However, I quite like JavaScript, especially the recent versions. Moreover, the vast majority of the issues that remain (numerical representation, limited tooling, anaemic standard library) are characteristics of the environment rather than the language itself.
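
As a small, made-up example, the following uses all three of those features and compiles down to plain ES5 without any further thought on my part:

    // Block-scoped bindings, an arrow function and a default argument,
    // all compiled down to ES5-compatible JavaScript by tsc.
    const TAU = 2 * Math.PI;

    function circumference(radius: number, scale = 1): number {
      return TAU * radius * scale;
    }

    const radii = [1, 2, 3];
    const results = radii.map(r => circumference(r));
    console.log(results);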

The idea of glomming a type system onto a formerly dynamic language is one that could go horribly wrong, but TypeScript does a very good job of it (unsurprisingly, given its pedigree). Firstly, the type system is gradual; by default, things get the permissive any type, so your unmodified JavaScript will work out of the box. Once you’ve annotated everything, you can turn this default off to keep everyone honest.
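
A quick sketch of how that plays out in practice (the functions are invented for illustration):

    // With the default settings, an unannotated parameter is implicitly
    // `any`, so existing JavaScript compiles without complaint.
    function describe(thing) {
      return "We got: " + thing;
    }

    // Compiling with `tsc --noImplicitAny` turns that permissiveness off;
    // the version above then becomes an error until a type is supplied.
    function describeStrictly(thing: string): string {
      return "We got: " + thing;
    }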

This doesn’t mean you have to have annotations for every function and variable, though; like any modern language worth its salt, it uses type inference to fill in the blanks. The type system also supports algebraic types, including (in TypeScript 2.0) discriminated unions. If you tend towards a functional rather than object oriented style anyway, the result is something that feels very much like an ML for the working programmer.
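
For instance (shapes being the textbook example, rather than anything from our application), a discriminated union and a switch on its tag give you compiler-checked pattern matching in all but name:

    // A discriminated union: the `kind` field tells the compiler which
    // variant it is looking at, and it narrows the type in each branch.
    interface Circle {
      kind: "circle";
      radius: number;
    }

    interface Rectangle {
      kind: "rectangle";
      width: number;
      height: number;
    }

    type Shape = Circle | Rectangle;

    function area(shape: Shape): number {
      switch (shape.kind) {
        case "circle":
          return Math.PI * shape.radius * shape.radius;
        case "rectangle":
          return shape.width * shape.height;
      }
    }

    // Inference fills in the rest: `shapes.map(area)` is a number[]
    // without any annotation on the result.
    const shapes: Shape[] = [
      { kind: "circle", radius: 1 },
      { kind: "rectangle", width: 2, height: 3 },
    ];
    console.log(shapes.map(area));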

While TypeScript has a lot of details that I like, there’s one feature in particular that has won me over: strict null checks1. This means that, if the type system says that something is a string, it’s actually a string; it can’t be null (or undefined). This eliminates a pervasive flaw in JavaScript, and indeed almost all other languages that use reference types (Tony Hoare called it the “billion dollar mistake”). If a null value is a possibility, you have to account for that, or your code won’t type check. The impact of this seemingly simple change is immense; in fact, I’d recommend adopting TypeScript on the strength of this feature alone.
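
A minimal sketch of the difference this makes, assuming strict null checks are switched on (the lookup function is a stand-in, not real code):

    // Under --strictNullChecks, `string` no longer quietly includes null
    // or undefined; a possibly-absent value must say so in its type.
    function shout(message: string): string {
      return message.toUpperCase() + "!";
    }

    // A stand-in for a lookup that can fail.
    function findLabel(id: number): string | null {
      return id === 0 ? "origin" : null;
    }

    const label = findLabel(42);
    // shout(label);             // error: 'string | null' is not assignable to 'string'
    if (label !== null) {
      console.log(shout(label)); // fine: the check narrows `label` to 'string'
    }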

The only significant snag we hit in moving to TypeScript is the build process. We try and keep third party dependencies to an absolute minimum, which meant we’d previously been able to get away with a very minimal build system based on RequireJS. With TypeScript introducing a mandatory build step, we decided to bite the bullet and go with something more fully featured.

We picked webpack, and while we have a system that works, I’ve not been particularly impressed. On the plus side, it’s allowed us to adopt SCSS, and supports things like source maps. However, it’s a prime example of the JavaScript community’s tendency to have hundreds of micro-dependencies. This isn’t a deal-breaker for us in this case, as they’re development dependencies rather than runtime ones, but it’s still far from ideal. Related to this, much of the functionality (even basic things, such as returning a failing exit code if one of your build steps doesn’t work) is in the form of third party plugins. The TypeScript plugin we’re using seems to be having some kind of configuration turf war with the compiler’s own configuration system, which means we can’t benefit from a lot of the deep tool support that the latter provides.

The specific issues we’re seeing are relatively minor, and I expect that we’ll solve them with a little tweaking. The underlying problem will still be there: there seems to be an unnecessary amount of accidental complexity to solve a relatively simple problem. This isn’t just webpack, though — we went with it because it looked like the least bad JavaScript build system. If we replace it with anything, it’ll be good old-fashioned make (“the worst build system, except for all the others”).

When compared to more established languages like Python, the JavaScript landscape is, to put it mildly, somewhat turbulent. Innovation and pace are valued over stability and rigour. However, as the web platform is used for more and more critical things, this is starting to change. TypeScript is definitely a move in the right direction.

  1. Strict null checking was introduced in TypeScript 2.0, but as it will almost certainly break existing code, it’s off by default. You can enable it with the flag --strictNullChecks, and I highly recommend that you do. [back]

In The Abstract

18 Aug 2016

In Under the Radar #37, David and Marco discuss code reuse, and the benefits of abstraction. They come down solidly on the side of YAGNI — You Ain’t Gonna Need It. In other words, it’s usually not worth putting extra effort into making something more general than it needs to be right now, as you’re unlikely to correctly predict the way in which it will need to generalise (if it ever does).

This is certainly not a straw man; it’s a trap I’ve often seen people fall into, myself included. However, I’ve also seen people go too far in the opposite direction, viewing any abstraction or generalisation as a problem.

This made me look at my own use of abstraction, and I found myself thinking of the Field Notes brand tagline:

“I’m not writing it down to remember it later, I’m writing it down to remember it now.”

Similarly, I tend not to create abstractions to cope with future cases, but in order to understand the code that’s already there. Some people understand things best when they’re concrete and explicit, but I’m not one of them. I need to systematise things, to put them in a framework and create rules. This is one of the reasons I’m attracted to programming. The framework I construct to understand isn’t just a model of something; it can be the thing.

Abstraction, of course, can have numerous other benefits — brevity, clarity, and yes, even reuse. For me, though, it’s first and foremost a tool for understanding what the hell’s going on.

HyperDev, HyperCard and a Small Matter of Programming

12 Jun 2016

HyperDev, a new web development product from Fog Creek, looks very promising. It aims to remove the friction from creating a web application by getting rid of all the incidental complexity usually associated with development. To quote from Joel’s introductory post:

Step one. You go to hyperdev.com.

Boom. Your new website is already running. You have your own private virtual machine (well, really it’s a container but you don’t have to care about that or know what that means) running on the internet at its own, custom URL which you can already give people and they can already go to it and see the simple code we started you out with.

All that happened just because you went to hyperdev.com.

Notice what you DIDN’T do.

  • You didn’t make an account.
  • You didn’t use Git. Or any version control, really.
  • You didn’t deal with name servers.
  • You didn’t sign up with a hosting provider.
  • You didn’t provision a server.
  • You didn’t install an operating system or a LAMP stack or Node or operating systems or anything.
  • You didn’t configure the server.
  • You didn’t figure out how to integrate and deploy your code.

This is a big deal. If you’ve been programming for a while, you cease to notice the rituals and rain dances required to get things working. You build up mental calluses, and after a while forget that the accidental complexity is even there. To someone just getting started, this is a major impediment1. HyperDev shows that it’s an unnecessary one.

Crucially, HyperDev isn’t a toy environment, a sandbox for learning the basics. It produces real applications, running on a real platform2. You can take what you’ve created, deploy it elsewhere, and expand it in any way you choose. This turns it from a dead end into a launch pad.

All this makes it interesting as a platform for end user development — allowing users who aren’t developers, and don’t necessarily want to be, to use web technology to solve their own problems. However, while they’ve removed most of the rough edges, one thing remains that makes it less than perfect for this use case: programming itself.

Consider two end-user development systems (perhaps the only two) that have seen widespread success: HyperCard3 and Excel4. Both are notable in that you can start using them interactively and directly, and immediately get value from them without programming at all. Excel is a useful tool even if you just type figures into a grid and make charts. Not everyone needs to bother with formulae, and even fewer with Visual Basic, but those facilities are there as and when you need them. Similarly, HyperCard allows you to start with a simple graphical editor, and move on to links and then scripts.

In the case of Excel, the overall architecture is geared towards the simple case, and it shows the strain when pushed into more complex use. HyperCard had a better model in this regard, but the vagaries of the market meant it never got a chance to keep pace with changes in the technology landscape. HyperDev’s story for progression is stronger than either, at the expense of missing out the first step.

In summary, HyperDev does a fantastic job of removing unnecessary friction from traditional development. It’s shaping up to be a useful tool in its own right, and it also acts as an existence proof to make us reexamine the assumptions embedded in our existing environments. In terms of empowering end users to solve their own problems, though, there’s still more to explore.

The title of this post is a reference to A Small Matter of Programming, by Bonnie Nardi, a great book covering the motivation and theory behind end-user programming.

  1. Some regard this as a badge of honour, a shibboleth, a bouncer on the door keeping out the riffraff, and see it as a good thing. It isn’t. [back]

  2. Well, Node.js. [back]

  3. I assume that the name “HyperDev” is a nod in this direction. [back]

  4. Fog Creek founder Joel Spolsky worked on Excel when at Microsoft; I’d be interested to hear how his experiences there shaped the development of HyperDev. [back]

Web Fonts

9 Dec 2015

One of the things I enjoy doing with this site is tweaking the design. I’ve previously written about the goals I’m aiming towards when I do this. Along with the publication of this post, I’ve made another change in order to get closer to the ideal described there; I’ve introduced web fonts.

In that previous post, I was less than complimentary about web fonts:

No web fonts for body text. This not only adds another resource to the page load, but unlike an image or style sheet it renders the text itself invisible until it arrives. Until someone works out a way around this, I’m steering clear of web fonts for sites with casual visitors.

This is all still factually correct, but I’ve reconsidered how it relates to my goal of a fast, lightweight site that works well for drive-by readers. Firstly, I realised that the biggest non-content resource on all pages was the logo, which is just text. If I could replace this with real text in a font of my choice, I could eliminate the image entirely while still retaining visual control.

The second aspect, the invisible text while loading, remains, but isn’t the problem I thought it was. Even in my original objection I left myself an out — the body text is still in reliable old Georgia, and so will not be affected. A brief delay in the rendering of headings and so on seems acceptable, as long as it really is brief. On a lot of sites that use web fonts, it isn’t — it’s often noticeable, and occasionally a real impediment to getting to the content. Why do I think I can do better?

The main problem isn’t web fonts themselves (which are just a resource like any other), but rather the way they’re often implemented. Services like Adobe TypeKit and Google Fonts offer a wide range of high-quality fonts, hosted on their own servers. This makes for painless integration — just add a single stylesheet and you’re done — and allows them to manage licensing and pricing, but comes at a cost. Third party resources like this not only put you at the mercy of someone else’s operations and priorities, but can also torpedo performance by requiring a separate HTTP connection and DNS lookup. For reasons of both performance and pigheadedness, I ruled such services out.

The alternative is to host the font yourself, just like a stylesheet or image. There’s no technical trick to this, but it requires a font licensed in such a way that allows redistribution. Unless you’re willing to pay far more than I can justify, this rules out most well known fonts from major foundries (who, not unreasonably, want to charge for their work). However, in parallel to the software world, there are fonts with more open licenses which would suit my purposes. After a bit of searching on FontSquirrel, I came across the rather nice Cooper Hewitt, a font that the Cooper Hewitt Smithsonian Design Museum commissioned for their own branding and then released under an open license. The light weight fit with the look I was aiming for, and the integration was straightforward. I also took the opportunity to replace some image links with text, meaning that there are now no non-content images on the site.

So, what’s the end result? Looking at the previous post, which is short but not unusually so:

  • Before: six resources, totaling 65.3KB.
  • After: four resources, totaling 34.0KB, a little over half the size.

So, not only do web fonts make the site prettier and more accessible, they substantially reduce both the size and number of resources. This translates into genuine performance gains for anyone visiting the site. When I started looking at web fonts, my aim was to improve the appearance without increasing load times, so such a marked improvement is a very nice bonus.

Update

Having inlined the font stylesheet into the main one, the page is now down to three resources — the HTML, a single stylesheet, and the font itself. The total size remains about the same.
