Comprehensive Data Archive Network

A video showcasing the Wolfram Language has been doing the rounds. It makes for an impressive demo, though not impressive enough to persuade me to commit to a proprietary programming language that only runs on a single vendor’s platform. Nonetheless, it shows off a lot of interesting features.

A major one is the integration of curated data sets of real-world knowledge, meaning that, unlike most programming environments, the Wolfram Language comes with some knowledge of the world built in. The video includes an excellent demo of this, starting with a list of South American countries:

Getting a list of countries in the Wolfram Language

Wolfram then goes on to look up the flags of these countries, pull out the dominant colours, and so on. While these are all things that could be done in almost any language, you’d have to find the country list from somewhere, massage it into a useful format, and import it into your code. Having this kind of general data there at your fingertips removes a point of friction for many tasks, and seeing the demo got me thinking.

One of the big practical advances in programming that has occurred since I’ve been doing it seriously is the rise of the package manager. Starting with the venerable CPAN for Perl, almost every language worth its salt has a more or less well-developed package system - Ruby Gems, Python’s pip, npm on Node.js, and so on. This made me wonder: why isn’t there such a thing for data?

Off the top of my head, what I’m after would have the following characteristics:

  • Language-independent. If I need to tie an R list of countries to a Python list of flags, we’re pretty much back to square one.
  • Standardised format. XML would be an obvious choice, as would JSON. The actual details would be hidden behind language-specific implementations - as a programmer, you’d just see objects.
  • Distributed. You should be able to pick and choose data sources, in the same way you can choose APT repositories in Debian. There would be a few well-known, trusted sources to get you started, but you could add in more specialised collections.
  • Addressable. You should be able to refer to data by natural, symbolic names (e.g. “Countries”, “Mountains”) or by specific URIs (“http://example.com/geography/countries/v1.1”). Handling this sensibly in the context of the previous point is probably the hardest aspect.
  • Versioned. Data should be cached and versioned, and there should be some kind of simple scheme for getting aggregate data.

With all this in place, I could just fire up my Python (or Ruby or Node) shell and do the following:

>>> import cdan
>>> cdan.get("Countries", "Capitals")
[ ("Afghanistan", "Kabul"), ...]

These are just rough ideas, with a lot of hand-waving, but I think there’s something there. Moreover, I’d be amazed if I was the first person to think of this. Far more likely is that something similar already exists and I’ve just not come across it.

That’s where you come in, dear reader. If you know of something like this that I’ve missed, please let me know via Twitter or mail, and I’ll update this post with any interesting links.

Update: Thanks to @semapher for pointing me at the Open Knowledge Foundation. In particular, their CKAN project looks a step in the right direction, but it’s not quite in the form described above. Worth keeping an eye on, though.

Progress

Almost thirty years ago, in 1984, I got my first personal computer (shared, of course, with my brother) - an Acorn Electron. It had an 8-bit 6502 CPU running at 2MHz, 32K of RAM, much of which was used for the screen, and could display graphics in eight colours on an analogue TV, once you’d retuned a spare channel1. It opened up the world of arcade clones (I grew up on Snapper and Boxer, and wouldn’t play Pac-Man and Donkey Kong until years later), text adventures, and of course programming - first in BASIC, then in 6502 assembler. We loved it.

Twenty years ago, I had an Amiga 1200 (still shared). A 32-bit 68020 at 14MHz. 3.5” floppy disks instead of cassette tapes. Hardware-accelerated graphics in hundreds of thousands of colours2. Gorgeous, deep games like Frontier and Beneath A Steel Sky. In every way, it made the Electron look like a relic of a bygone age (which, by that time, it was).

Ten years ago, I was writing up my thesis on an Apple iBook. 800MHz PowerPC G4, 512MB of RAM. Graphics that the Amiga could only dream of, and in a sleek, portable package to boot. A capacious hard drive and wireless network connection. It was not only an incredibly powerful computer in its own right, it was part of a global network that was scarcely imaginable a decade before.

Today, I’m writing this on a MacBook Pro, which is… not all that different, to be honest. Somewhat faster, somewhat more storage, nicer screen, but it can’t do anything fundamentally different to the iBook I was using in 2004. Has progress stalled? Has the personal computer had its day?

It’s certainly true that there haven’t been any revolutions on the scale of the move from cassette to disk to mass storage, or orders-of-magnitude leaps in processing power or graphical ability. More importantly, the fundamental capabilities are the same. Ten years ago, I had a portable, Internet-connected computer with a powerful processor, plentiful on-board storage, a decent screen and a keyboard. Today, I have the same thing.

But.

I also have a smartphone, which allows me to check email, browse the web, and perform a plethora of other tasks wherever and whenever I want.

And I have an iPad that gives me a decent chunk of the laptop’s functionality in a lighter, more comfortable package, with a battery that lasts all day.

And I have a virtual server that provides a permanent presence on the network, without relying on my flakey domestic electricity supply and data connection.

And I have a Raspberry Pi, which lets me tinker at the lowest level of both software and hardware (or hook it up to a TV and watch videos if I’m feeling less tinkery).

And I have a host of other more specialised devices that let me read books, watch films, and listen to music in ways that didn’t even exist a decade ago.

Some might argue that these aren’t PCs - indeed, some of them are often referred to as “Post-PC” devices - but I think that requires a definition of PC that’s unnecessarily restrictive. My smartphone, for example, is most certainly a computer, and both physically and functionally it’s the most personal one I’ve ever owned.

The personal computer isn’t dead. In fact, it’s doing better than ever. More people are using computers, in more ways and more often, than ever before. All that has changed is that the PC is no longer a single machine that does everything, closeted away in a spare room (or plugged into the TV via an RF modulator). Ever-improving technology, coupled with the interoperability provided by the Internet and the web, has made it easy to have many different devices, each tailored to specific needs.

In The Invisible Computer, Donald Norman relates that, in the early twentieth century, a home might have a single electric motor, with numerous attachments to adapt it to specific tasks (sewing, grinding meat, churning butter). As motors became cheaper and more reliable, they proliferated, and each device (blender, vacuum cleaner, sewing machine) would have its own. At this point, users no longer see the motor, just the device and the task - I’m not using the motor, I’m cleaning the floor.

Norman suggests that computers will go through the same trajectory, and indeed this is what’s happening. He describes it as computers disappearing, and being replaced by “information appliances”. I’m not convinced by this distinction; to me, computers are simply becoming more prolific, more competent, and above all more personal. That’s progress.


Many thanks to the Centre for Computing History for providing the Electron and Amiga images. They’ve just opened up a museum in Cambridge, with exhibits ranging from early punch card machines and minicomputers to the home micros of the 80s and beyond. Much of the vintage hardware is up and running, so you can see if you’re still any good at GoldenEye or try to remember some BASIC. Well worth a visit if you’re in the area.


  1. A TV would typically have eight channels - mapped to physical buttons with associated tuning knobs - which left plenty spare as there were only four broadcast channels at the time.

  2. Well, 256 in sensible display modes - to get more, you needed to employ the CPU-intensive trick of Hold-and-Modify (HAM).

Dunce, Monkeyboy

A little while ago, I wrote a piece about Apple, the App Stores, and the restrictions they place on developers. The main question was: would users be able to install software outside the official App Store? I suggested that Apple are unlikely to impose this restriction on OS X, for two reasons. Firstly, it would put OS X at a significant disadvantage compared to other desktop operating systems (most significantly, Windows), and secondly, they already have a very successful App Store-only platform, namely iOS. However, another interesting development in this area has cropped up, and from an unexpected direction.

It has emerged that the upcoming version of Microsoft’s free development tools, Visual Studio Express, will only support the development of Metro apps. There are two things that are significant about Metro in this context. Firstly, it only has access to a limited subset of the Windows API, making it in some ways closer to iOS than OS X. Secondly, and more significantly, Metro apps can only be installed from the Windows App Store1, which requires (as with Apple’s stores) a paid developer account.

This is an interesting move on Microsoft’s part, and (as the title may suggest), I don’t think it’s a smart one. Granted, the strategy has worked well for Apple on iOS, but that was starting from a clean slate. Microsoft’s situation is very different; they’re trying to move their existing platform from an open to a curated model (while simultaneously moving to a pared down UI). If they want users to make the switch, they’ll need a healthy software ecosystem on the new platform, which in turn means they need to court developers.

As a company, Microsoft has long had a reputation for encouraging and supporting developers (their CEO is noted for his quiet enthusiasm on the matter). However, this move seems to be something of a slap in the face, at the very time they’re asking developers to stick their necks out and commit to a new, untested platform. This won’t affect the big guys much - they’re already paying to be in the developer program, and for the Professional versions of Visual Studio - but it hits right at the heart of the small-scale developers that are such a large part of the success of iOS. It’s hard to see it as anything other than a colossal own goal.

However, I don’t use Windows at home, and at work it’s mainly there to run SSH and VNC. I don’t really have any stake in the success or failure of the platform. The question I’m interested in is: what does this mean for the Mac App Store? Will Microsoft’s move embolden Apple to lock down OS X and ban third-party installation2?

I think the answer is no. My original reasoning stands; given that they’re very definitely keeping iOS and OS X as distinct systems (albeit with a fair degree of cross-pollination in terms of features and UI), there’s no need to impose the same limits on each. In this regard, they can have their cake and eat it. It may even further spur their growth in the PC market, if Microsoft press ahead with their developer-hostile policy. However, if I’m wrong, and the move succeeds in fostering Metro growth without alienating the Windows developer community, Apple might be encouraged to follow suit. We’ve certainly not seen the last move in this game.

Update: It seems that Microsoft has relented, and there will be a version of Visual Studio Express that targets non-Metro applications. This is a very sensible move, and shows a commendably responsive attitude on Microsoft’s part. It’s easy to talk the developer-relations talk, but they appear to be genuinely walking the walk as well, at least in this instance.


  1. For consumers, at least; enterprise versions of Windows 8 permit side loading of apps. However, this is intended for internal or bespoke apps, or those sold in large business-to-business contracts, and in any case isn’t an option on the consumer and small business versions of Windows.

  2. Note that I don’t believe that the Gatekeeper feature in Mountain Lion is a move in this direction. For one thing, the user can opt out. More importantly, it represents Apple making a conscious effort to extend some of the security benefits of the curated model to third parties. If they were planning to funnel everyone towards the App Store, it would have made more sense to do nothing.

Aim High

In the light of the Raspberry Pi project’s attempt to reintroduce kids to the idea that a computer is something you program, I thought I’d share with you something from my own childhood. Specifically, a book:

Unlike more modern children’s books, it doesn’t state a target age, but I believe it’s aimed at junior school children1. As you can see, it’s illustrated with lots of little cartoon robots acting out the various operations that occur in a computer. And it teaches you machine code.

I can imagine some readers reaching for their pedantic hats. “This man is a fool,” they’re thinking, “who clearly doesn’t know what ‘machine code’ is. He obviously means ‘assembly language’.”

Au contraire2.

You see, in the mid-eighties, computers weren’t quite so impressively specced as they are today. Unless you had the Rolls Royce of home computers, the BBC Micro3, you wouldn’t have had the luxury of an assembler. You had a BASIC interpreter, and you considered yourself lucky. The book provides a listing for you to type in. This isn’t an assembler, of course: it’s just a program that allows you to type in bytes, in hex, to be poked into consecutive locations in memory. Once you’ve learnt hex on page 11, of course.
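
The idea is easy enough to sketch in a modern language. The book’s listing was, of course, in BASIC, so the Python below is purely an illustration of the principle: read bytes in hex and write them to consecutive memory locations.

# A rough sketch of what the book's hex-loader listing did, written in Python
# for illustration. The 64K bytearray stands in for the machine's real RAM,
# and the start address is picked arbitrarily for the example.
memory = bytearray(65536)

def hex_loader(start_address):
    """Read hex bytes from the keyboard and 'poke' them into consecutive locations."""
    address = start_address
    while True:
        entry = input("%04X> " % address).strip()
        if not entry:                                # blank line to finish
            break
        memory[address] = int(entry, 16) & 0xFF      # the equivalent of POKE address, value
        address += 1

hex_loader(0x0E00)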

In 48 uncrowded, extensively illustrated pages, it goes from explaining things like binary and the difference between RAM and ROM, through addressing modes and registers, to writing real (if simple) machine code programs. In two different architectures, on half a dozen mutually incompatible computers. This is an impressive scope for such a slim book, made even more ambitious when you consider it’s aimed at children.

The important message is that this level of depth doesn’t scare kids off; instead, they lap it up (some of them, at least). By all means provide simpler material for those without the time, inclination or ability to tackle the complicated stuff. Just make sure you provide something challenging to keep the interest of those who can.


  1. I think I picked up this copy at a library sale when I was ten or eleven, but I could be out a year or two either side.

  2. A fool, no. Pretentious, possibly.

  3. Or its younger sibling, the Acorn Electron, which was a cheaper, cut-down version for those who weren’t rich enough to afford a BBC Micro, but weren’t cool enough to own a Spectrum. Guess what I had. (Actually, it was a fantastic machine that managed to get the vast majority of the features of the Beeb into a more affordable package for the home market.)

Raspberry Pi Launched

Today saw the official launch of the Raspberry Pi low-cost computer. Although there have been a couple of hiccups due to the distributors not expecting the phenomenal levels of demand1, it’s still a big day for the project, and a testament to the months and years of the hard (and unpaid) work of everyone at the Raspberry Pi Foundation2.

However, it’s worth remembering that the project is not about cheap computers for hobbyists (although that’s a useful side-effect); it’s about giving kids the tools with which to learn to program. To that end, here’s a video from the BBC’s Rory Cellan-Jones (who has also written a great article about the launch):


  1. Personally, I don’t think these teething troubles are a big deal, given the level of interest in such a small organisation. What has irked me somewhat is the level of vitriol directed at the Foundation. This has ranged from the merely thoughtless to the utterly vile, with occasional racist and homophobic language tossed in for good measure. It smacks of both a lack of respect and a sense of entitlement that would be comical if they weren’t so deeply unpleasant. Liz has been handling this with more patience and good grace than I think I would have been able to muster in the circumstances.

  2. I don’t have any formal connection to the Foundation myself; I’m just an enthusiastic supporter.

Ritchie and McCarthy

In the past two weeks, there has - justifiably - been a lot of coverage of the death of Steve Jobs. However, two other computing pioneers have also died during that time. Dennis Ritchie and John McCarthy were not household names in the same way, but their contribution was arguably more fundamental. Whilst Jobs’s work made modern computing accessible, Ritchie’s and McCarthy’s made it possible. They created, respectively, C and Lisp, which are essentially the Greek and Latin of programming1.

In a field which values the new over the old to an almost pathological extent, these two languages have lasted for decades. If someone embarked on a new software project tomorrow using, say, PL/I, it would seem like an obtuse exercise in nostalgia. Use C, on the other hand, and nobody would raise an eyebrow. Lisp’s influence is less obvious, but arguably more profound. Whilst you can still use it in unadulterated form, relatively few people do. However, the popular dynamic languages that all the cool kids are using - Python, Ruby, JavaScript - are essentially Lisp in new clothes2.

The origins of the two languages are instructively different. C was created to solve an immediate practical problem - writing Unix in a portable way. Moreover, it was targeting relatively modest hardware3, the PDP-11. It therefore lacks some of the bells and whistles that, even in 1973, “real” languages were thought to need. However, instead of limiting its scope, this resulted in an elegant simplicity that has seen it adapt to four decades of hardware improvements whilst many “real” languages have fallen by the wayside.

In contrast, Lisp wasn’t intended to be a language to solve a particular practical problem. It wasn’t intended to be a programming language at all. McCarthy had written a paper outlining a theoretical system for formally describing algorithms. Fortuitously, he didn’t make the “theoretical” part clear to one of his grad students, who promptly went off and implemented it on the department’s computer, creating the first Lisp interpreter4. What had started as a mathematical model of programs turned out to be an amazingly flexible and powerful way to actually write them.

What both languages have in common is that they each have a small, conceptually coherent core. Because it’s small and coherent, the programmer is able to internalise the basic concepts of the language. This done, they’re free to concentrate on the task at hand, rather than the details of the language. This is also why they’ve lasted so long - simple, clear ideas date far less quickly than specific technologies.

You might wonder why, in a post about two men who have recently died, I’ve not said much about their lives. The reason is simple; I didn’t know them, and so I’ll leave the biography to those who did. Like millions of other people, though, I do know their work. Both have had an enormous impact on a field that touches the lives of almost everyone, and will continue to do so for a long time to come.


  1. Which makes Scheme, of course, Church Latin.

  2. Smug Lisp Weenies, as they are known, would claim that these more modern languages are merely partial, cargo-cult facsimiles of the real thing. There’s something in this, but they also succeed in ways that Lisp doesn’t. That’s a subject for a different day.

  3. Not, as I said in the original version of this post, a PDP-7; porting the nascent operating system from the PDP-7 to the PDP-11 was one factor that led to the development of C.

  4. Wikipedia’s Lisp page cites the following from Hackers & Painters by Paul Graham. Unfortunately, I don’t have my copy to hand to find the original source.

    McCarthy said: “Steve Russell said, look, why don’t I program this eval…, and I said to him, ho, ho, you’re confusing theory with practice, this eval is intended for reading, not for computing. But he went ahead and did it. That is, he compiled the eval in my paper into IBM 704 machine code, fixing bugs, and then advertised this as a Lisp interpreter, which it certainly was. So at that point Lisp had essentially the form that it has today…”

VNC on the Raspberry Pi

Here’s a quick demo I’ve put together, showing the RaspPi running as a VNC client:

The setup is a slightly unusual one; instead of the client connecting to the server, it runs in listen mode, and the server initiates the connection. This allows the RaspPi to be used as a kind of shared, network display. The software on the RaspPi end is TightVNC, which is available in the Debian ARM repository. On the server side, I used Vine Server, as the built-in server on Lion doesn’t support reverse connections (if you want to use the normal client/server arrangement, the built-in server is fine).

The reason that VNC supports this back-to-front arrangement is interesting in itself. ORL (later AT&T Labs Cambridge), the birthplace of VNC, also took a significant interest in location-aware computing, firstly with the Active Badge, which could track users to the granularity of a room, and then with the Active Bat, which could do so to within a few centimetres. I worked at the lab for a year before starting my PhD (which they sponsored), on the project exploring the sort of applications you could build with this sort of technology.

One of the simplest and most effective was display teleporting. If you wanted to show someone your screen, you just held your Bat (or, in earlier versions, your Badge) against their monitor and pushed a button. Hey presto, via the magic of VNC reverse connection, there it was. Even though this application hasn’t exactly become ubiquitous, the reverse connection mode that was added to VNC to support it has remained a standard and useful feature. The location-aware technology lives on as well.

Mmmm... Pie...

I’ve been lucky enough to get my hands on one of the Raspberry Pi alpha boards. I have various plans for things to try with it, but I thought I’d post some initial thoughts about the experience of using the board, and the project as a whole.

If you’ve not heard the name before, it’s a charity that aims to “promote the study of computer science and related topics, especially at school level, and to put the fun back into learning computing”. To this end, they’re developing a $25 computer. Not a toy, or a working model, but a real computer that’s cheap enough to give to kids without worrying about them breaking it. This isn’t just pie in the sky1, either: they’ve already produced two generations of prototype hardware, and should be on course to have the final version in production, at the target price2, before the end of the year.

The board, pictured below, has at its heart a Broadcom system-on-a-chip based around an ARM CPU, with 128MB or 256MB of RAM stacked on top. There’s very little else on the board - the slightly more expensive version has a single-chip USB hub and Ethernet adapter, but that’s pretty much it. This simplicity is how they’ve managed to hit the ambitious price point.

Raspberry Pi Alpha Board

This kind of SoC is more often found in set-top boxes and the like, so it has reasonably good graphics and sound. It really is a complete system - you just plug a keyboard into the USB port, a TV (or monitor) into the HDMI, and you’re away.

On first booting the board, I was taken back. Not to primary school and the BBC Micro, but to my first exposure to Linux at University. The disk image I have boots to a console, and I found myself dredging up knowledge of things like runlevels and virtual consoles that I hadn’t used for years. My soft hands suitably re-calloused, I proceeded to poke around, install various packages (including, I admit, X Windows and LXDE), and generally have a play. The current OS image is based on Debian, and so hundreds of packages are only an apt-get away. The list isn’t as comprehensive as it is for x86, but there are still plenty of interesting packages (such as TightVNC) to install.

The thing that struck me as I used the Raspberry Pi was how familiar everything is. This isn’t some kind of esoteric embedded system, but a real computer, with a real OS, and standard languages and compilers right there, running on the board itself. Granted, it’ll never challenge the latest MacBook Pro in terms of raw horsepower, but for the intended use - programming education - it’s more than enough.

You could, of course, stick Firefox on the device, and get a dirt-cheap, tiny, low-power client for the web. There are also myriad things that become feasible when you can get a computer for the sort of money that Currys would, not too long ago, have charged for an HDMI cable - smart screens, tiny servers, automation, and many others. You could even stick LibreOffice on it and equip a school computer lab for a fraction of today’s price. However, doing so would miss the point.

The project stems in large part from the observation that fewer of the eighteen-year-olds arriving at Cambridge to read Computer Science had a good grasp of how computers actually work3. This is not, of course, to say that they’re any less bright, but just that their exposure to computers has been markedly different from that of those of us who grew up in the eighties (or even the nineties).

There are numerous factors behind this change, but a big one is the rise of the family computer. This is a device that has a myriad of uses - banking, shopping, gaming, watching TV - which means that parents are a lot less inclined to let an inquisitive seven-year-old poke around in its innards. A Commodore 64 or Acorn Electron, by contrast, was useful for… pretty much nothing, out of the box. Even getting a game to load involved reading the manual and getting some mysterious incantation just right.

This initial uselessness is not only part of the charm of early home computers, but also a key to their educational value. To make them do anything of interest, you had to program them at least a little, and this gave curiosity a way in. Even copy-typing listings from a magazine4 gives you the idea that, far from being some kind of magic, this box is something that could be understood if you put your mind to it.

Modern computers are increasingly black boxes, with a big implied or explicit “no user serviceable parts inside” notice discouraging people from wondering how they work. This is appropriate for tools that are increasingly central to our lives5, but raises the question of where the next generation of programmers and hardware engineers is going to come from. By providing a cheap, open machine that kids can tinker with to their heart’s content, Raspberry Pi are providing an answer. It’s not the complete solution, but it’s a big part of it. What they need now are software and ideas, and if you’re a programmer or teacher (or both) that’s something you can help with. You can also help by buying one of the final boards under their buy one, give one scheme. I’ll certainly be getting one or two.


  1. Sorry.

  2. There are actually two models (with a naming scheme that might seem familiar) - the model A for $25, and the model B for $35, which adds extra RAM, Ethernet, and an additional USB port. The alpha board corresponds to the model B.

  3. Eben, one of the founders of the project, and I were both involved in interviewing prospective Computer Science undergraduates, for different colleges.

  4. Coverdisks? Cassettes? Checksums? Luxury! When I were a lad, we had copious syntax errors, and we were glad of it…

  5. This is one of the reasons I tend to buy Apple hardware these days - I’m willing to sacrifice some of the ability to upgrade and repair the kit myself for the integration and simplicity.

Give Me Inconvenience, Or Give Me Death!

In this week’s episode of The Pod Delusion, there’s a piece contributed by my brother, entitled “Death of the Operating System”. In it, he talks about the rise of “walled garden” operating systems such as Apple’s iOS, and what this means for “general purpose” operating systems such as Mac OS X, Windows and Linux1. Once walled gardens are the norm, he suggests, general purpose OSs will come to be viewed by many as being only useful for illegal purposes, and eventually will become illegal themselves.

One point on which I agree is that walled gardens are going to become more prevalent. In a recent article, John Gruber makes the point2 that the recently-announced Windows 8 is a flawed response to the iPad, as it includes the ability to run the existing Windows interface, and the applications that go with it, essentially unmodified. The iPad, on the other hand, started with a completely blank slate, with no attempt at compatibility with the pre-touchscreen world. This may, at first glance, seem like a weakness, but it is, Gruber argues, key to one of the major strengths of the platform – simplicity. He’s talking about the UI, but the point applies equally to the installation of software.

Even if you discount the configure-make-install dance that’s familiar to anyone who builds their own software on Unix-like systems, installing and updating software is a pain in the arse. Systems vary in how well they handle it - Mac OS X beats Windows, and both are in turn beaten by Debian - but even if the normal install channels work well, anyone but an expert has a hard time keeping track of exactly what’s been done. This is compounded by the tendency, permitted by the general purpose operating system, for all and sundry to roll their own installation and update infrastructures. Worse, once you’ve given permission for a piece of software to install things, it’s easy for malicious software to creep in, necessitating yet more installation and tending of security software.

Most people simply don’t want this hassle. They just want to read their email, and check their Facebooks, and go on The Google. Maybe catapult the occasional bird at a tower of pigs. A walled garden – if it’s well-tended – takes on the responsibility for managing things like updates and installation, leaving the user to simply choose the applications they want from a list (if that). This brings the device closer to an information appliance, as described by Donald Norman in The Invisible Computer. With its over-the-air backups and syncing, iOS 5 is a significant step in this direction - it’s increasingly feasible for someone to entirely forgo owning a general purpose computer like a PC, as all their needs are fulfilled by walled garden devices.

Peter’s belief is that, when this is the norm, and owning a general purpose computer is a marginal pursuit, politicians playing to the peanut gallery will seek to ban it in the same way that they banned handguns after Dunblane. While PCs aren’t as obviously deadly as pistols, the twin modern-day bogeymen of terrorists and paedophiles might make them a convenient target when Something Must Be Done.

He draws an analogy with gun ownership in the United States, but I think this is a red herring. The second amendment isn’t in the Bill of Rights by chance; the right to bear arms is intrinsically bound up in the genesis of that country. As Sarah Palin recently pointed out (albeit in her usual ham-fisted, truthy way), the American revolution succeeded in no small part due to the fact that the citizens of the nascent republic were armed. As a result, gun ownership is seen by many Americans as a key component of liberty, and no amount of Wacos and Columbines is going to override that.

In Britain, with no such historical context, governments have more latitude to pass whatever gun control laws they see fit. However, even after tragedies such as Dunblane, and the attendant media outcry, this hasn’t led to an outright ban on firearms. Whilst you can’t buy a handgun or an assault rifle, it’s still relatively straightforward to buy and own a shotgun. The reason for this is obvious; shotguns have, to borrow a phrase from the Betamax case, substantial non-infringing uses (specifically, game hunting and pest control). Handguns, on the other hand, have essentially no other use than to injure or kill other human beings3.

General purpose operating systems clearly fall into the former category. They can be used to hack into a nuclear power station’s control system, or clandestinely distribute images of child abuse, but they can also be used to sequence genomes, or administer complex financial instruments, or develop the processor for your next phone. They’re also vital as the back end for all of the web applications and cloud services that are the bread and butter of your walled garden devices. Crucially, and unlike sports shooting with handguns, these activities make a lot of money. A hell of a lot of money. Successive governments, with their talk of creative and knowledge economies, and their laser-like focus on STEM4 education, recognise this, and there’s no way that they’d kill the goose that keeps laying golden eggs so that a junior minister can have a favourable news cycle.

However, there is another possibility. You need a licence to own a shotgun. What if you needed one to own a non-locked-down computer? This couldn’t happen today - too many companies rely on selling products to computer owners - but in the future, when the man on the WiFi-enabled Clapham Omnibus is satisfied with just his iPad, it’s possible. The problem with such a move is that much of the innovation in computing comes from individuals and small companies, precisely because the barriers to entry are so low. Any country that implemented such a scheme would see a dramatic chilling effect in its software sector at least. Few governments would want this, but it’s a subtle enough point that they might blunder into it by accident. Fortunately, technology companies have in recent years learnt not to be so shy and retiring when it comes to lobbying for their own interests.

It’s also worth considering that the idea of “owning a computer” needn’t be limited to buying a box and plugging it in in the spare bedroom. Even if we reach the stage where Ken Olsen’s widely-quoted5 utterance is true, and there is no reason for any individual to have a (general purpose) computer in their home, that doesn’t mean they disappear entirely. With ever-improving connectivity, the device that does your computing doesn’t necessarily have to be the thing you’re staring at and prodding. There are significant advantages to your general purpose computer being cosseted in a data centre somewhere, where it can have air conditioning and a backed-up power supply, and make all the noise it wants. This doesn’t necessarily mean ceding control entirely; for example, I rent a virtual server from ByteMark6, over which I have free rein. I get the benefits of their fast internet connection and other infrastructure, whilst retaining control of my software, and importantly, my data.

This leads on to a potentially more troubling aspect of Apple’s recent WWDC announcements; the dominance of the cloud. It raises the question: how happy are you about giving control of your data to a single hardware company? My answer would be: slightly happier than I am about giving it to a single advertising company, but still far from ecstatic. However, that’s an issue for a whole other post.


  1. Peter draws the distinction between “walled garden” systems, where all software must be installed through channels sanctioned by the OS vendor, and “general purpose” systems, where, once the original OS is installed, the user can install additional software as they see fit. This isn’t the terminology I would’ve chosen, but it’ll do for the discussion at hand.

  2. He doesn’t really address the question of whether it’s aiming to be a response to the iPad. Microsoft’s internal fractiousness and lack of a coherent, clearly communicated vision makes this far from obvious.

  3. The only non-violent use I can think of is sports shooting, but I remain to be convinced that this needs to be done with real handguns using live ammunition.

  4. Science, Technology, Engineering and Maths.

  5. Widely, and accurately, quoted, but generally misinterpreted. The context of the quote suggests that Olsen was talking about home automation, not personal computers. Snopes has more.

  6. Highly recommended, by the way.

Life

I’ve been messing around with JavaScript and the new HTML5 canvas element. After a couple of random experiments, I decided that I needed a well-defined goal, and I picked Conway's Game of Life. Here’s the result:

You'll need to turn on JavaScript (and have a recent, canvas-supporting browser) to see this.

Hopefully, the interface should be relatively self-explanatory (see the Wikipedia page linked above for details of the game itself). The Save button produces a string representing the game board; to go back to a previous state, paste such a string into the box and hit Restore. The whole thing should work in recent versions of Firefox, Chrome, Safari and Opera. It won’t work in IE, as that browser doesn’t support canvas.
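
The core update rule is simple enough to sketch in a few lines. The actual implementation is in JavaScript, drawing to the canvas; the version below is just the same algorithm written out in Python for clarity, with the board as a list of lists of 0s and 1s and anything beyond the edges treated as dead.

def step(grid):
    """Return the next generation of a grid of 0s and 1s (cells off the edge count as dead)."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # count the live neighbours of cell (r, c)
            n = sum(grid[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols)
            # a live cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

Each generation is computed in full from the previous one before any cell changes, which is why the sketch builds a new grid rather than updating in place.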

Things I’ve learnt in doing this:

  • The interface to canvas works pretty well, and I’ve not (yet) found any major gotchas between the browsers that support it.
  • JavaScript is a surprisingly good (and fun) language, especially if you stick to the good parts.
  • JavaScript performance varies noticeably between browsers; in particular, Firefox (3.6) seems slower than the others. My hunch is that the difference is in the optimisation of JavaScript’s somewhat unorthodox handling of arrays - this is something I’ll have to look into.
  • The game works, glacially slowly, on my iPhone 3G, but the editing (which uses onclick) doesn’t. I might fix this.

The code is up on GitHub.

Update: I’ve added this entry as my first “thing” on Flattr. Be gentle with me.

Death by typing

When I first published the previous entry, I’d mistyped the quote in the title as “You Killed Anne L. Retentive With A Type?” I corrected this, but then an alternative script for the comic in question popped into my head. “Hang on,” I thought, “hasn’t Dilbert just grown all sorts of groovy, funky Web 2.0 shenanigans that allow you to do those ‘mash up’ things the kids are talking about these days?” (Free tip: if you want to read Dilbert without all of the extraneous bells and whistles, try http://dilbert.com/fast/.)

It turns out that you can produce modified versions of comics, but only in fairly limited ways. However, I do still have the GIMP:

deathbytyping.png

(Original strip at Dilbert.com)

AppJet, or "You killed Anne L. Retentive with a typo?"

Tori and Ellie both warned me off it, which pretty much ensured that I would end up reading Midnight’s Children by Salman Rushdie. I’ve not found it nearly as heavy going as predicted, but I did notice a couple of, well, idiosyncrasies that grate after a while. And I don’t like grating. Fortunately, at the pub on Sunday, we came up with a solution.

Even more fortuitously, an interesting way to implement said solution drifted across my radar a few days later. AppJet are a start-up that produce a really rather good web application platform. They’ve recently come to wider attention with their real-time collaborative editor EtherPad, but I thought I’d start with something more modest. So, without further ado:

The Comma Appeal

Read on for my thoughts on the AppJet platform itself. In short, it seems brilliant. I’ve only had limited experience with it so far, but even after a brief exposure it’s clear they’ve got a lot right. First off, it’s an online application, but you can start using it instantly - no sign-up, no e-mail back-and-forth, just click on the “create an app” link and start coding.

The coding itself is in JavaScript, a language that after a long time in the wilderness is now back in favour. This gives the whole thing the same sort of feel as client-side JavaScript, but augmented with a bunch of libraries to allow access to server-side features. So far, so familiar - there are a number of frameworks, platforms and doohickeys that provide similar facilities. Where AppJet shines, though, is the web-based IDE. The basic setup splits the window in half; the left hand side contains your JavaScript (and HTML and CSS) code, and the right hand side is a preview of the app in development. Whenever you want to see the fruits of your labour, hit the reload button and the preview is updated to reflect your changes.

This isn’t, on the face of it, an earth-shattering idea, but it leads to an incredibly fluid development process. Combine this with the easy creation and distribution tools, and the claims that it’s “the easiest way to program, host, and share your own web app” start to look fairly close to the truth. Even better, the back end is available for download, so you can run it on your own server and avoid putting your application and data at the mercy of the cloud if you want.

So, a very positive first look overall. However, it is only a first look, so maybe the early promise won’t pan out when you try and use it to build something more substantial. That certainly seems to have happened with EtherPad - apparently, they had to make substantial upgrades to the platform in order to get it working as well as they wanted (these changes are due to be released soon). Either way, I’ll definitely be keeping an eye on the platform to see what happens.

Oh, and Anne L. Retentive? Well…