Tori got me The Thrilling Adventures of Lovelace & Babbage by Sydney Padua for Christmas; I’d seen a few scattered excerpts, but didn’t really know anything about the thing as a whole. Turns out it’s rather good. The comics are entertaining, imaginative and well executed, but the real treat is the notes. Padua goes into detail about the characters, history and technology in an informal and engaging way, often exploring surprising tangents. Definitely recommended.
A very brief post to note that I’ve added a /now page. This is something of an experiment; time will tell if I consistently update it, or it languishes unloved for a while and then gets shoved down the memory hole.
If you follow things in the Mac world, you can’t have missed the hand-wringing regarding the Mac Pro. The 5K iMac is an excellent desktop all-in-one, and the MacBook Pro line just had a substantial update (albeit overdue, and not without issues). The Mac Pro, on the other hand, has had no updates since it was unveiled to great fanfare in 2013. This lack of attention has led to speculation that Apple is no longer interested in Pro desktops. Marco’s widely-circulated post does an excellent job of explaining why this would be a bad thing.
One thing that has led to pessimism regarding the Mac Pro’s future is the news that Apple is out of the stand-alone display business. That move makes more sense for a company that doesn’t make desktops that need external monitors. However, it raises an interesting possibility. What if the next Mac Pro, or whatever replaces it, doesn’t connect to a monitor at all?
The key insight is that the machine that’s doing your intensive, pro-level computation (and generating all the noise, and slurping the power) does not need to be the one you’re looking at and touching. The user interface, including hardware like the monitor and keyboard, is provided by an iMac or MacBook, and the Pro box just needs to worry about the low-level, high-performance calculation. Dividing the responsibility like this allows each part to make different trade-offs. It frees the user-facing part from trying to dissipate heat from demanding components, and frees the computation part from concerns of ergonomics.
This path is well-trodden for larger teams; animators and developers share farms of servers to render graphics and compile code. A headless Mac Pro would provide similar benefits to individuals. It could do this over the network, being essentially a miniature, personal server farm, but Apple could instead (or as well) use a faster, more direct connection. Their enthusiastic support for Thunderbolt 3 may not just be to get rid of big, ugly legacy ports.
If you relax the constraint that the thing providing computational grunt has to be a fully fledged general purpose computer, you could do this today. Thunderbolt is essentially an external PCIe bus, so a high-end graphics card in a suitable enclosure could provide plenty of additional OpenCL cores. If you want something more like traditional CPUs, Intel’s Xeon Phi would work similarly.
The big stumbling block in this is application support; these thousands of cores won’t do much good if they sit idle while Lightroom grinds away on your internal i7. However, Apple has a good track record in this regard. Things like Grand Central Dispatch show that they have both the technical chops to design a usable API for complex tasks like parallel programming, and the organisational will to aggressively push its adoption.
As a final cherry on the cake, a headless Mac Pro would also answer the question of doing “real” (in this case, computationally intensive) work on an iPad Pro. An external box would allow developers to create hybrid applications that served demanding use cases without losing the things that make an iPad an iPad. The Surface Studio seems like a compelling form factor, but in Apple’s world it needs to be iOS, not a Mac. A headless Mac Pro (combined with an even larger iPad) would achieve this.1
I’m not saying that we’re about to see the release of such a Mac Pro — predicting future Apple products is a game for fools and clickbaiters. However, it would seem to allow them to support “Pro” users (including, not forgetting, their own developers and designers) without compromising the commercially more relevant consumer products. Whatever happens, if they don’t do something, they’ll end up ceding the space to a competitor. If they no longer care about it, then perhaps that’s a good thing.
(The heading image is Caravaggio’s Judith Beheading Holofernes.)
One interesting but solvable problem that this raises is Thunderbolt, which is an Intel technology. Might we see an x86 iPad before an ARM Mac? [back]
For those of you not familiar with it, TypeScript is essentially a superset of ECMAScript 6, with a type system added on. The decision to base the language on ECMAScript, rather than just using it as a compilation target, gives a far clearer migration path — you can simply change the extension on your existing source file and start adding type annotations. Compared to the prospect of a complete rewrite, this makes migrating an existing code base a far less… courageous endeavour.
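To make that migration path concrete, here’s a minimal sketch (the function and its types are invented for illustration, not taken from any particular code base). The “before” version is plain JavaScript that compiles unchanged once the file is renamed to `.ts`; the “after” version adds annotations incrementally:

```typescript
// Before: plain JavaScript. Renaming the file from .js to .ts is
// enough for the TypeScript compiler to accept it as-is.
//
//   function total(prices, taxRate) {
//     return prices.reduce((sum, p) => sum + p, 0) * (1 + taxRate);
//   }

// After: the same function with type annotations added. The compiler
// will now reject calls like total("oops", 0.2) at build time.
function total(prices: number[], taxRate: number): number {
  return prices.reduce((sum, p) => sum + p, 0) * (1 + taxRate);
}
```

Nothing else about the code needs to change at this stage; annotations can be added file by file as time allows.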
Another benefit is that you get to use all of the nice features of ES6 (lexical scoping with let and const, arrow functions, destructuring and so on), even when targeting older JavaScript engines.
The idea of glomming a type system onto a formerly dynamic language is one that could go horribly wrong, but TypeScript does a very good job of it (unsurprisingly, given its pedigree). Firstly, the type system is gradual; by default, things get the permissive any type, so you can tighten up a code base incrementally rather than all at once.
This doesn’t mean you have to write annotations for every function and variable, though; like any modern language worth its salt, it uses type inference to fill in the blanks. The type system also supports algebraic types, including (in TypeScript 2.0) discriminated unions. If you tend towards a functional rather than object oriented style anyway, the result is something that feels very much like an ML for the working programmer.
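A minimal sketch of what that ML-ish style looks like in practice (the `Shape` type here is invented for illustration). The `kind` field discriminates the union, the compiler narrows the type in each branch, and the return type of `area` is inferred rather than written out:

```typescript
// A discriminated union: the literal `kind` field tags each variant.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

// Switching on `kind` narrows `shape` in each branch, so the right
// fields are available without casts. The return type is inferred.
function area(shape: Shape) {
  switch (shape.kind) {
    case "circle":
      return Math.PI * shape.radius * shape.radius;
    case "rect":
      return shape.width * shape.height;
  }
}
```

The compiler will also flag a non-exhaustive switch here if a new variant is added to `Shape`, which is much of what makes the pattern feel ML-like.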
The only significant snag we hit in moving to TypeScript was the build process. We try to keep third party dependencies to an absolute minimum, which meant we’d previously been able to get away with a very minimal build system based on RequireJS. With TypeScript introducing a mandatory build step, we decided to bite the bullet and go with something more fully featured.
We ended up with make (“the worst build system, except for all the others”).
In Under the Radar #37, David and Marco discuss code reuse, and the benefits of abstraction. They come down solidly on the side of YAGNI — You Ain’t Gonna Need It. In other words, it’s usually not worth putting extra effort into making something more general than it needs to be right now, as you’re unlikely to correctly predict the way in which it will need to generalise (if it ever does).
This is certainly not a straw man; it’s a trap I’ve often seen people fall into, myself included. However, I’ve also seen people go too far in the opposite direction, viewing any abstraction or generalisation as a problem.
This made me look at my own use of abstraction, and I found myself thinking of the Field Notes brand tagline:
“I’m not writing it down to remember it later, I’m writing it down to remember it now.”
Similarly, I tend not to create abstractions to cope with future cases, but in order to understand the code that’s already there. Some people understand things best when they’re concrete and explicit, but I’m not one of them. I need to systematise things, to put them in a framework and create rules. This is one of the reasons I’m attracted to programming. The framework I construct to understand isn’t just a model of something; it can be the thing.
Abstraction, of course, can have numerous other benefits — brevity, clarity, and yes, even reuse. For me, though, it’s first and foremost a tool for understanding what the hell’s going on.
This site is maintained by me, Rob Hague. The opinions here are my own, and not those of my employer or anyone else. You can mail me at firstname.lastname@example.org, and I'm robhague on Twitter. The site has a full-text RSS feed if you're so inclined.
Body text is set in Georgia or the nearest equivalent. Headings and other non-body text are set in Cooper Hewitt Light. The latter is © 2014 Cooper Hewitt Smithsonian Design Museum, and used under the SIL Open Font License.
All content © Rob Hague 2002-2015, except where otherwise noted.