The iOS Mail app is great in many ways, but it unfortunately leads you to commit one of the cardinal sins of emailing:

Specifically, when you reply to a message, Mail places the insertion point and signature above the quoted text of the message to which you’re replying. This leads you to write your reply before everything, which, as all right-thinking people know, is beyond the pale.

In order to solve this, I’ve come up with a script called TidyMail, which can be downloaded from the GitHub project page. This adds a signature where you want it, and performs other tidying (such as removing the signatures from quoted messages) as it goes. To use it, you turn off the signature in Mail, and instead insert the standard signature leader (a line containing only “--”) where you want the signature to be inserted. Everything below this line will be deleted, which is useful if you only want to quote the start of a message.
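The core of the idea can be sketched in a few lines of Python. This is only an illustration, not the real TidyMail (which lives on GitHub and handles full messages, headers and all, on standard input and output); the `SIGNATURE` text and function name here are my own inventions for the sake of the example.

```python
# Illustrative sketch only; the signature text is an assumption, and the
# real TidyMail script does considerably more than this.
SIGNATURE = "--\nAdam\n"

def tidy(body: str) -> str:
    """Replace everything from the signature leader onwards with the
    real signature, discarding any quoted text below the leader."""
    lines = body.splitlines()
    for i, line in enumerate(lines):
        if line.strip() == "--":  # the signature leader
            return "\n".join(lines[:i]) + "\n" + SIGNATURE
    return body  # no leader found: leave the message untouched
```

So a reply written above the leader keeps its text, gains the signature, and loses any quoted material the leader was placed above.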

There is one remaining question: how do you invoke the script to tidy up outgoing messages? iOS is notably (perhaps notoriously) stringent about what it allows to run on the device, so invoking the script on the client is not an option. Fortunately, I run my own mail server, which gives me a way in. There are various possibilities, but I’ve settled on configuring my MTA, Exim, to run the script as part of its remote SMTP transport. This is easy to do by adding the following line to the “remote_smtp” section:

transport_filter = /etc/exim4/tidymail.py
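For context, the transport ends up looking something like this (the driver line is standard; any other options you already have in the section stay as they are):

```
remote_smtp:
  driver = smtp
  transport_filter = /etc/exim4/tidymail.py
```

Exim hands the filter the full outgoing message (headers and body) on standard input, and the filter must write the transformed message to standard output, exiting zero on success.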

This works reasonably well, but has the drawback that it won’t apply the filter to mail delivered locally. This is fine for my use, as I’m the only one using the server, but your mileage may vary. I imagine there are better ways to hook the script into Exim, but I’m not all that familiar with the package. If you have any suggestions, I’d be delighted to hear from you.

There we go; it’s a slightly convoluted system, but it’s been working well for me for a couple of years. If you use iOS or a similar mail client, have your own server, and dislike top-posting, then you might find it useful too.

Of Gardens and Sandboxes

Yesterday, Matthew Bloch, MD of Bytemark Hosting1, tweeted:

next stop: no installations on your own computer from outside the walled garden. A free desktop is much more important.

He was referring to Apple’s announcement that they’ll soon be requiring all apps sold via the Mac App Store to implement sandboxing. Broadly speaking, this strictly limits what an individual app can do once installed - for example, it can’t access files outside its own area without going through the standard open and save dialogs. Mac OS X’s younger sibling, iOS, has had even stricter sandboxing in place since it first allowed third-party applications. It also has another property, namely that the App Store is the only way to install applications under iOS. Matthew’s worry is that Apple’s adoption of one iOS policy (sandboxing) on the Mac suggests that they’re likely to adopt another, and forbid installation of non-App Store apps.

Personally, I think the two are separate issues. Sandboxing is a sensible policy in its own right. It’s a key component of the success of the iOS App Store - the fact that you don’t need to worry about the developer’s incompetence or malice before downloading that 69p impulse buy. Relatively few applications actually need to do things that sandboxing prevents, like alter system settings. For the rest, sandboxing allows the ease-of-use and convenience that iOS users have grown used to.

It’s not like sandboxing is even a new idea. Not only did Java applets (remember those?) give it a go in the late 90s, but Mac OS X’s own version has a lineage stretching back to chroot jails, introduced to Unix in 1979. The idea that every program a user runs has to be able to do anything that user can is not an absolute, and there’s nothing inherently wrong with a user choosing to restrict the facilities available to specific programs. All modern systems do this to an extent, by restricting root privileges to programs explicitly granted them. Sandboxing could be thought of as simply a more fine-grained version of the same idea.

Restricting installation to only the official App Store, on the other hand, represents a significant break from the status quo. At present, if I want to install a random piece of software from some shady corner of the internet on my Mac, I am free to do so, and live with the consequences. Channelling all installations through the App Store (and Apple’s somewhat mercurial review process) doesn’t give me this option. For many users, who just want to get work done, and don’t want to worry about maintaining their computer, this might be a reasonable trade-off, but for others (such as myself), it is not. However, it’s important to remember that this isn’t an all-or-nothing distinction. For example, I’d want Galcon Fusion to be sandboxed, but still build Emacs myself. If I lose the option of installing arbitrary software, though, the range of things I could do on Mac OS as opposed to another system would be severely curtailed. I don’t think there’s much risk of this happening, for two reasons.

Firstly, the people that such a move would alienate - developers, designers, and those who see themselves as power users - are the sort of people who are willing to pay a premium for high-end hardware. Whilst not as numerous as general consumers, this group must still be a significant source of profit for Apple, and so they’d need a good reason to abandon them. Furthermore, many of these users are the very people who keep the App Store stocked with high-quality products. Driving them away would, in my opinion, be a significant own goal.

Secondly, and more importantly, why do they need to? Apple already sell a walled-garden OS for people who want it, and the devices that use it sell pretty much as fast as Apple can manufacture them. They don’t need to sneak a walled garden in via the back door, because they’ve already kicked down the front. My expectation is that Mac OS will be increasingly seen as a “Pro” platform, and iOS as the way for general consumers. This is arguably no bad thing, as it allows people to use computers for what they actually want to do without having to worry about servicing and maintaining them (I touched on this in a previous post). As John Gruber points out, the Mac/iOS division allows Apple to cater for those of us who want the flexibility offered by a general purpose system, whilst not burdening everyone with the attendant complexity. And if at some point they choose to stop catering for us, there are plenty of others who will.

Update: Matthew has responded on his blog. He makes some interesting points. Most importantly, he points out that Apple already charges developers $99 per year. Root access could be restricted to these developers (and anyone who paid the fee could join them), whilst not providing it for regular users. The latter would effectively be handing over both the hassle and the control of auditing the software on their machine to Apple, a bargain that many people would consider reasonable (and one that most people already make with, for example, their email).

The fallout for developers would be that they could no longer rely on end users being able to install software themselves, and therefore everything would have to go through the App Store, with Apple taking their 30% cut. For many small developers, this again is a good bargain - the simplicity of the App Store, plus the trust users have in it, are worth Apple’s cut (and the small but non-zero risk of running into problems with the review process). However, as Matthew points out, big developers such as Adobe and Microsoft are less likely to be happy, and could go as far as abandoning the platform.

That’s a key reason I don’t think that Apple are likely to go down this route. Over the last decade, Mac OS X has gone from having the scantiest selection of third-party software to having arguably the best. Part of this has come from the kinds of small commercial developers that are a natural fit to the App Store, but it also comes from big-name applications like Office and Creative Suite. Losing these applications would be a significant blow to the platform, and I’d be surprised if the additional App Store revenue would make up for it. Indeed, it’s not at all clear that there would be any additional revenue, given that the change would result in users leaving a Mac OS that no longer met their needs.

More importantly, the end result of this game makes no sense. Why would Apple turn Mac OS into another iOS, when they already have iOS? If people are choosing between a hassle-free/locked-down device (depending on your perspective) and a more flexible/fragile one, why would Apple not want a product on both sides? The only reason would be that one side doesn’t make enough money to justify pursuing. I don’t think that’s the case, but if and when it is, I wouldn’t expect them to turn Mac OS into iOS. They’ll just stop selling it.

In conclusion, I don’t expect Apple to remove the ability to install arbitrary applications from Mac OS in the foreseeable future, not through any moral compunction or fear, but simply because I don’t think it would increase their profits in the long term. I could be wrong about this; Apple clearly know their business better than I do. We’ll see.

  1. Who are, as I believe I’ve mentioned before, highly recommended if you’re in the market for hosting.

Give Me Inconvenience, Or Give Me Death!

In this week’s episode of The Pod Delusion, there’s a piece contributed by my brother, entitled “Death of the Operating System”. In it, he talks about the rise of “walled garden” operating systems such as Apple’s iOS, and what this means for “general purpose” operating systems such as Mac OS X, Windows and Linux1. Once walled gardens are the norm, he suggests, general purpose OSs will come to be viewed by many as being only useful for illegal purposes, and eventually will become illegal themselves.

One point on which I agree is that walled gardens are going to become more prevalent. In a recent article, John Gruber makes the point that the recently-announced Windows 8 is a flawed response to the iPad2, as it includes the ability to run the existing Windows interface, and the applications that go with it, essentially unmodified. The iPad, on the other hand, started with a completely blank slate, with no attempt at compatibility with the pre-touchscreen world. This may, at first glance, seem like a weakness, but it is, Gruber argues, key to one of the major strengths of the platform – simplicity. He’s talking about the UI, but the point applies equally to the installation of software.

Even if you discount the configure-make-install dance that’s familiar to anyone who builds their own software on Unix-like systems, installing and updating software is a pain in the arse. Systems vary in how well they handle it - Mac OS X beats Windows, and both are in turn beaten by Debian - but even if the normal install channels work well, anyone but an expert has a hard time keeping track of exactly what’s been done. This is compounded by the tendency, permitted by the general purpose operating system, for all and sundry to roll their own installation and update infrastructures. Worse, once you’ve given permission for a piece of software to install things, it’s easy for malicious software to creep in, necessitating yet more installation and tending of security software.

Most people simply don’t want this hassle. They just want to read their email, and check their Facebooks, and go on The Google. Maybe catapult the occasional bird at a tower of pigs. A walled garden – if it’s well-tended – takes the responsibility for managing things like updates and installation, leaving the user to simply choose the applications they want from a list (if that). This brings the device closer to an information appliance, as described by Donald Norman in The Invisible Computer. With its over-the-air backups and syncing, iOS 5 is a significant step in this direction - it’s increasingly feasible for someone to entirely forgo owning a general purpose computer like a PC, as all their needs are fulfilled by walled garden devices.

Peter’s belief is that, when this is the norm, and owning a general purpose computer is a marginal pursuit, politicians playing to the peanut gallery will seek to ban it in the same way that they banned handguns after Dunblane. While PCs aren’t as obviously deadly as pistols, the twin modern-day bogeymen of terrorists and paedophiles might make them a convenient target when Something Must Be Done.

He draws an analogy with gun ownership in the United States, but I think this is a red herring. The second amendment isn’t in the Bill of Rights by chance; the right to bear arms is intrinsically bound up in the genesis of that country. As Sarah Palin recently pointed out (albeit in her usual ham-fisted, truthy way), the American revolution succeeded in no small part due to the fact that the citizens of the nascent republic were armed. As a result, gun ownership is seen by many Americans as a key component of liberty, and no amount of Wacos and Columbines are going to override that.

In Britain, with no such historical context, governments have more latitude to pass whatever gun control laws they see fit. However, even after tragedies such as Dunblane, and the attendant media outcry, this hasn’t led to an outright ban on firearms. Whilst you can’t buy a handgun or an assault rifle, it’s still relatively straightforward to buy and own a shotgun. The reason for this is obvious; shotguns have, to borrow a phrase from the Betamax case, substantial non-infringing uses (specifically, game hunting and pest control). Handguns, on the other hand, have essentially no other use than to injure or kill other human beings3.

General purpose operating systems clearly fall into the former category. They can be used to hack into a nuclear power station’s control system, or clandestinely distribute images of child abuse, but they can also be used to sequence genomes, or administer complex financial instruments, or develop the processor for your next phone. They’re also vital as the back end for all of the web applications and cloud services that are the bread and butter of your walled garden devices. Crucially, and unlike sports shooting with handguns, these activities make a lot of money. A hell of a lot of money. Successive governments, with their talk of creative and knowledge economies, and their laser-like focus on STEM4 education, recognise this, and there’s no way that they’d kill the goose that keeps laying golden eggs so that a junior minister can have a favourable news cycle.

However, there is another possibility. You need a licence to own a shotgun. What if you needed one to own a non-locked-down computer? This couldn’t happen today - too many companies rely on selling products to computer owners - but in the future, when the man on the WiFi-enabled Clapham Omnibus is satisfied with just his iPad, it’s possible. The problem with such a move is that much of the innovation in computing comes from individuals and small companies, precisely because the barriers to entry are so low. Any country that implemented such a scheme would see a dramatic chilling effect in its software sector at least. Few governments would want this, but it’s a subtle enough point that they might blunder into it by accident. Fortunately, technology companies have in recent years learnt not to be so shy and retiring when it comes to lobbying for their own interests.

It’s also worth considering that the idea of “owning a computer” needn’t be limited to buying a box and plugging it in in the spare bedroom. Even if we reach the stage where Ken Olsen’s widely-quoted5 utterance is true, and there is no reason for any individual to have a (general purpose) computer in their home, that doesn’t mean they disappear entirely. With ever-improving connectivity, the device that does your computing doesn’t necessarily have to be the thing you’re staring at and prodding. There are significant advantages to your general purpose computer being cossetted in a data centre somewhere, where it can have air conditioning and a backed-up power supply, and make all the noise it wants. This doesn’t necessarily mean ceding control entirely; for example, I rent a virtual server from Bytemark6, over which I have free rein. I get the benefits of their fast internet connection and other infrastructure, whilst retaining control of my software, and importantly, my data.

This leads on to a potentially more troubling aspect of Apple’s recent WWDC announcements; the dominance of the cloud. It raises the question: how happy are you about giving control of your data to a single hardware company? My answer would be: slightly happier than I am about giving it to a single advertising company, but still far from ecstatic. However, that’s an issue for a whole other post.

  1. Peter draws the distinction between “walled garden” systems, where all software must be installed through channels sanctioned by the OS vendor, and “general purpose” systems, where, once the original OS is installed, the user can install additional software as they see fit. This isn’t the terminology I would’ve chosen, but it’ll do for the discussion at hand.

  2. He doesn’t really address the question of whether it’s aiming to be a response to the iPad. Microsoft’s internal fractiousness and lack of a coherent, clearly communicated vision makes this far from obvious.

  3. The only non-violent use I can think of is sports shooting, but I remain to be convinced that this needs to be done with real handguns using live ammunition.

  4. Science, Technology, Engineering and Maths.

  5. Widely, and accurately, quoted, but generally misinterpreted. The context of the quote suggests that Olsen was talking about home automation, not personal computers. Snopes has more.

  6. Highly recommended, by the way.