Discussing this after the recent debate that involved big names such as Linus Torvalds and Miguel de Icaza may seem a little inappropriate, but I guess I'll have to chalk this up to my usual laziness about writing things up when I think of them, instead of waiting for them to become fashionable.

Introduction

The reason why the topic has recently re-emerged (as it periodically does) is a write-up by the aforementioned Miguel de Icaza, titled What Killed the Linux Desktop.

According to the author of the rant, there are two main reasons. The first was a general disrespect for backwards compatibility, in the name of some mystical code purity or code elegance:

We deprecated APIs, because there was a better way. We removed functionality because "that approach is broken", for degrees of broken from "it is a security hole" all the way to "it does not conform to the new style we are using".

We replaced core subsystems in the operating system, with poor transitions paths. We introduced compatibility layers that were not really compatible, nor were they maintained.

paired with a dismissive attitude towards those for whom this interface breakage caused problems. The second problem, still according to de Icaza, has been incompatibility between distributions.

Miguel de Icaza then compares the Linux failure with the success of Apple and its operating system, which apparently did things “the way they should have been done” (my words), highlighting how Mac OSX is a UNIX system where things (audio, video, support for typical content formats) work.

Karma whoring

There's little doubt that “Linux on the desktop” is a hot topic that easily polarizes and attracts bandwagoning. By mentioning three of the most famous problematic areas in the most widely adopted Linux distribution(s) (audio support, video support and support for proprietary audio and video formats), de Icaza basically offered the best bait for “me-too”-ing around the Internet.

And unsurprisingly this is precisely what happened: a lot of people came up with replies to the article (or references to it) containing little more than “yeah! exactly! these were exactly the problems I had with Linux!”.

An optimist could notice how many people have reacted this way, combine them with those that have reacted the opposite way (“BS, I never had a problem with Linux!”), and be happy about how large the pool of (desktop) Linux users and potential users is. On the other hand, the whole point of the article is to (try and) discuss the reasons why many of these are only potential users, why so many have been driven off Linux despite their attempts at switching over to it.

Linus, Linux and The CADT model

The first point of de Icaza's critique is nothing new. It's what Jamie Zawinski coined the term CADT, Cascade of Attention-Deficit Teenagers, for. However, the way in which de Icaza presents the issue has two significant problems.

One is his use of «we», a pronoun which is somehow supposed to refer to the entire Linux developer community; someone could see it as a diplomatic way of not coming out with the specific names of the developers and projects that break backwards compatibility every time (which would be ‘bad form’), while at the same time counting himself personally among the people that did so.

The other is how he tries to trace the dismissive attitude back to Linus Torvalds, who by position and charisma may be considered the one that «sets the tone for our community», assuming that Linus (and the kernel development community) feeling free to break the internal kernel interfaces even at minor releases somehow gives userspace developers the entitlement to do the same with external interfaces.

These two points have sparked a debate in which Linus himself (together with other important Linux personalities) intervened, a debate that has made the news. And the reason the debate sparked is that these two points are among the most critical issues indicating what's wrong with the article. Since in the debate I find myself in the opposite camp from Miguel de Icaza (and, as I found out later, mostly in Linus' camp), I'm going to discuss this in more detail, in a form that is more appropriate for an article than for a comment, as I have found myself doing so far.

Kernel, middleware and user space

I'm going to start this explanation with a rough, inadequate but still essential description of the general structure of a modern operating system.

First of all, there's the kernel. The kernel is a piece of software that sits right on top of (and controls and receives signals from) the hardware. It abstracts the hardware from the rest of the operating system, and provides interfaces to allow other pieces of the operating system to interact with the hardware itself. Linux itself is properly only the kernel, which is why a lot of people (especially the GNU guys) insist on calling the whole system GNU/Linux instead; after all, even Android uses the Linux kernel: it's everything else that is different.

By application one usually means the programs that are executed by the user: web browsers, email clients, photo manipulation programs, games, you name it. These user space applications, which are what users typically interact with, don't usually interact directly with the kernel themselves: there's a rather thick layer of libraries and other programs that ease the communication between user space applications and the kernel. Allow me to call this layer ‘middleware’.

Example middleware in Linux and similar systems includes the first program launched by the kernel once it has finished loading (typically init), the C library (libc, in Linux often the one written by the GNU project) and the things that manage the graphical user interface, such as the X Window System (these days typically provided by the X.org server in Linux).

All the components of the various layers of the operating system must be able to communicate with each other. This happens through a set of interfaces, which are known as APIs (Application Programming Interfaces) and ABIs (Application Binary Interfaces), some of which are internal (for example, if a kernel module has to communicate with something else inside the kernel, it uses an internal kernel API) while others are external (for example, if the C library needs to communicate with the kernel, it does so using an external kernel API).
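
To make the layering concrete, here is a minimal sketch in C of the two routes an application can take to the kernel: the normal one through the C library (the middleware), and the raw system call underneath it, which is the kernel's external interface. (This is purely illustrative; real applications almost always go through the C library.)

    #define _DEFAULT_SOURCE
    #include <unistd.h>      /* write(): the C library wrapper */
    #include <sys/syscall.h> /* SYS_write: the kernel's external entry point */
    #include <string.h>

    int main(void)
    {
        const char *via_libc = "hello through the C library\n";
        /* Normal route: application -> middleware (libc) -> kernel. */
        write(STDOUT_FILENO, via_libc, strlen(via_libc));

        const char *via_kernel = "hello through the raw system call\n";
        /* Direct route: application -> kernel, invoking the external
         * kernel interface without the libc wrapper. */
        syscall(SYS_write, STDOUT_FILENO, via_kernel, strlen(via_kernel));

        return 0;
    }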

Interface stability and application development

Let's say that I'm writing a (user space) application: a photo manipulation program, an office suite, whatever. I'm going to develop it for a specific operating system, and it will be such a ‘killer app’ that everybody will switch to that operating system just for the sake of using my application.

My application will use the external interfaces from a number of middleware libraries and applications (for example, it may interface with the graphics system for visualization, and/or with the C library for file access). My application, on the other hand, does not care at all if the internal interfaces of the kernel, or of any middleware component, change. As long as the external interfaces are frozen, my application will run on any future version of the operating system.

A respectable operating system component never removes an interface: it adds to them, it extends them, but it never removes them. This allows old programs to run on newer versions of the operating system without problems. If the developers think of a better way to do things, they don't change the semantics of the current interface; rather, they add a new, similar interface (and maybe deprecate the old one). This is why Windows API calls have names with suffixes such as Ex (for ‘extended’). This is why we still have the (unsafe) sprintf alongside the (safe) snprintf in the POSIX C library specification.
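
The sprintf/snprintf pair makes this concrete: the safer interface was added alongside the old one instead of replacing it, so code written against either keeps compiling and running. A minimal sketch:

    #include <stdio.h>

    int main(void)
    {
        char buf[16];

        /* Old interface: still present, with its original semantics
         * (no bounds checking; the caller must guarantee the buffer fits). */
        sprintf(buf, "%d", 42);

        /* New interface: added alongside the old one, with an extra
         * parameter bounding the write. Old callers are unaffected. */
        snprintf(buf, sizeof buf, "%d", 42);

        puts(buf);
        return 0;
    }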

Let me take the opportunity to highlight two important things that come from this.

One: the stability of internal interfaces is more or less irrelevant as far as user space applications are concerned. On the other hand, stability of external interfaces is extremely important, to the point that it may be considered a necessary condition for the success of an operating system.

Two: it may be a little bit of a misnomer to talk about interface stability. It's perfectly ok to have interfaces grow by adding new methods. What's important is that no interface or method is removed. But we'll keep talking about stability, simply noting that interfaces that grow are stable as long as they keep supporting ‘old style’ interactions.

Are Linux interfaces stable?

Miguel de Icaza's point is that one of the main reasons for the failure of Linux as a desktop operating system is that its interfaces are not stable. Since (as we mentioned briefly before) interface stability is a necessary condition for the success of an operating system, his reasoning may be correct (unstable interfaces imply unsuccessful operating system).

However, when we start looking at the stability of the interfaces in a Linux environment we see that de Icaza's rant is misguided at best and intellectually dishonest at worst.

The three core components of a Linux desktop are the kernel, the C library and the X Window System. And the external interfaces of each of these pieces of software are incredibly stable.

Linus Torvalds has always made a point of never breaking user space when changing the kernel. Although the internal kernel interfaces change at an incredible pace, the external interface is a prime example of backwards compatibility, sometimes to the point of stupidity. { Link to round table with Linus mentioning examples of interfaces that should never have been exposed or had issues, but were still kept because programs started relying on the broken behavior. }

A prime example of this interface stability is given by the much-critiqued sound support, which is an area where the Linux kernel has had some drastic changes over time. Sound support was initially implemented via the ironically-named Open Sound System, but this was replaced not much later by the completely different Advanced Linux Sound Architecture; yet OSS compatibility layers, interfaces and devices have been kept around since, to allow old applications using OSS to still run (and produce sound) on modern Linux versions.
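
In fact, the old OSS programming interface still works today wherever the compatibility layer is available. The following sketch plays a one-second tone the way a pre-ALSA application would have (it assumes /dev/dsp exists, which on a modern system typically requires the OSS emulation module, e.g. snd-pcm-oss, to be loaded):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    int main(void)
    {
        int fd = open("/dev/dsp", O_WRONLY); /* the classic OSS device */
        if (fd < 0)
            return 1; /* no OSS device: compatibility layer not loaded */

        int fmt = AFMT_S16_LE, channels = 1, rate = 8000;
        ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
        ioctl(fd, SNDCTL_DSP_SPEED, &rate);

        /* One second of a crude 200 Hz square wave. */
        short sample[8000];
        for (int i = 0; i < 8000; i++)
            sample[i] = (i / 20) % 2 ? 8000 : -8000;
        write(fd, sample, sizeof sample);

        close(fd);
        return 0;
    }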

This, by the way, explains why Linus was somewhat pissed off at de Icaza in the aforementioned debate: if developers in the Linux and open source worlds were to learn anything from Linus, it should have been to never break external interfaces.

Another good example of stability is given by the GNU C Library. Even though it has grown at an alarming pace, its interface has been essentially stable since the release of version 2, 15 years ago, and any application that links to libc6 has forward compatibility essentially guaranteed, modulo bugs (for example, the Flash player incorrectly used memcpy where it should have used memmove, and this resulted in problems with audio in Flash movies when some optimizations were made to the C library; this has since been fixed).
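
The distinction that bit the Flash player is easy to demonstrate: memmove is defined for overlapping source and destination, while memcpy has never promised to handle overlap, even if a given implementation happened to get it right. A minimal sketch:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char a[] = "abcdef";
        char b[] = "abcdef";

        /* Overlapping regions: shift the string left by one character.
         * memmove is defined for overlap and reliably yields "bcdef". */
        memmove(a, a + 1, 6);
        printf("%s\n", a);

        /* The same call with memcpy is undefined behavior: it may seem
         * to work with one implementation (a simple forward byte copy)
         * and silently corrupt data with another (say, an optimized
         * backward copy), which is essentially what the glibc
         * optimization exposed in Flash. */
        memcpy(b, b + 1, 6); /* do NOT rely on this */
        printf("%s\n", b);

        return 0;
    }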

But the most amazing example of stability is the X Window System. This graphical user interface system is famous for having a client/server structure and being network transparent: you can have ‘clients’ (applications) run on one computer and their user interface appear on another computer (where the X server is running). X clients and server communicate using a protocol that is currently at version 11 (X11) and has been stable for 25 years.

The first release of the X11 protocol was in 1987, and an application that old would still play fine with an X11 server of today, even though, of course, it wouldn't be able to exploit any of the more advanced and sophisticated features that the servers and protocol have been extended with. Heck, Linux didn't even exist 25 years ago, but X.org running on Linux would still be perfectly able to support an application written 25 years ago. How's that for stability?
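
As an illustration, the following Xlib client, a sketch of the canonical ‘hello window’ program, uses only calls that were already part of Xlib in the X11R1 days, and still compiles and runs against a current X.org server with cc hello.c -lX11:

    #include <X11/Xlib.h>
    #include <stdlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL); /* connect to the X server */
        if (!dpy)
            return EXIT_FAILURE;

        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         10, 10, 200, 100, 1,
                                         BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == KeyPress) /* any key quits */
                break;
        }

        XCloseDisplay(dpy);
        return EXIT_SUCCESS;
    }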

If the three core components of a Linux desktop operating system have been so stable, why can Miguel de Icaza talk about “deprecating APIs”, “removing functionality” and “replacing core subsystems”, and still be right? The answer is that, of course, there have been some very high-profile cases where this has happened.

The prime example of such developer misbehavior is given by GNOME, a desktop environment: something that sits on top of the graphical subsystem of the operating system (X, in the Linux case) and provides a number of interfaces for applets and applications to present a uniform and consistent behavior and graphical appearance, and an integrated environment to operate in.

Applications can be written for a specific desktop environment (there is more than one available for Linux), and for this it's important for the desktop environment (DE, for short) to provide a stable interface. This has not been the case with GNOME. In fact, the aforementioned CADT expression was invented specifically for the way GNOME was developed.

We can now start to see why Linus Torvalds was so pissed off at Miguel de Icaza in the mentioned debate: not only is the Linux kernel one of the primary examples of (external) interface stability, so that trying to trace CADT back to Linus is ridiculous, but GNOME, of which Miguel de Icaza himself has been a prominent figure for a long time, is the primary example of interface instability.

The «we» Miguel uses to refer to the open source and Linux community as a whole now suddenly sounds like an attempt to divert the blame for a misbehavior from the presenter of the argument himself to the entire community, a generalization that has no basis whatsoever, and that most of all cannot hold up Linus as the exemplum.

Ubuntu the Breaker

Of course, the GNOME developer community is not the only one suffering from CADT, and in this Miguel is right. Another high-profile project that has shown very little sensitivity to the problem of backwards compatibility, in the name of “the new and the shiny”, is Ubuntu.

This is particularly sad because Ubuntu started with excellent premises, promising to become the Linux distribution for the ‘common user’, and hence the Linux distribution that could make Linux successful on the desktop. And for a few years it worked really hard, and with some success, in that direction.

But then something happened, and the purpose of Ubuntu stopped being to provide a solid desktop environment for the ‘common user’, and started being to serve as a playground for trying exciting new stuff. However, the exciting new stuff was brought forward without solid transition paths from the ‘old and stable’, with limited if any backwards compatibility, and without any consolidation process that would get the exciting new stuff actually working before it gained widespread usage.

This, for example, is the way PulseAudio was brought in, breaking everybody's functioning audio systems, and plaguing Ubuntu (and hence Linux) with the infamous reputation of not having a working audio system (which it does have: ALSA). Similar things happened with other important subsystems, such as the alternatives to the traditional System V init (systemd and upstart); then with the replacement of the GNOME desktop environment with the new Unity system; and finally with the ‘promise’ (or should we say threat) of an entirely new graphical stack, Wayland, to replace the ‘antiquated’ X Window System.

It's important to note that none of these components is essential to a Linux desktop system. But since they've been forced down the throat of every Ubuntu user, and since Ubuntu has already gained enough traction to be considered the Linux distribution, a lot of people project the abysmal instability of recent Ubuntu developments onto Linux itself. What promised to be the road to success for Linux on the desktop became its worst enemy.

Common failures: getting inspiration on the wrong side of the pond

There's an interesting thing common to the people behind the two highest-profile failures in interface stability in the Linux world: their love for proprietary stuff.

Miguel de Icaza

Miguel de Icaza founded the GNOME project (about which we've said enough bad things for the moment), as well as the Mono project, an attempt to create an open-source implementation of the .NET Framework.

His love for everything Microsoft has never been a mystery. Long before this recent rant, for example, he blamed Linus for not following Windows' example of a stable (internal) kernel ABI. At the time, this was not because it ‘set the wrong example’ for the rest of the community, but because it allegedly created actual problems for hardware manufacturers that didn't contribute open source drivers, thereby slowing down Linux adoption due to missing hardware support.

As you can see, the guy has a pet peeve with Linus and the instability of the kernel ABI. When history proved him wrong, with hardware nowadays gaining Linux support very quickly, often even before release, and most vendors contributing open source drivers (more on this later), he switched his rant to the risible claim that instability of the kernel ABI set ‘a bad example’ for the rest of the community.

It's worse than that, in fact, since the stability of the Windows kernel ABI is little more than a myth. First of all, there are at least two different families of Windows kernels, the (in)famous Win9x series and the WinNT series. In the first family we have Windows 95, Windows 95 OSR2, Windows 98, Windows ME (that family is, luckily, dead). In the second family we have the old Windows NT releases, then Windows 2000 (NT 5.0), Windows XP (NT 5.1), Windows Vista (NT 6.0) and Windows 7 (NT 6.1). And not only are the two kernel families totally incompatible with each other, there are incompatibilities even within the same series: I have pieces of hardware whose Windows 98 drivers don't work with any other Win9x kernel, earlier or later, and even within the NT series you can't just plop a driver for Windows 2000 into Windows 7 and hope it'll work without issues, especially if it's a graphics driver.

However, what Windows has done has been to provide a consistent user space API (Win32) that essentially allows programs written for it to run on any Windows release supporting it, whether of the Win9x family or of the WinNT family.

(Well, except when they cannot, because newer releases sometimes introduced incompatibilities that broke older Win32 applications, hence the necessity for things such as the “Windows XP” emulation mode present in later Windows releases, an actual full Windows XP install running within Windows, somewhat like WINE in Linux; and let's not talk about how the new Metro interface in the upcoming Windows 8 is going to be a pain for everybody. We'll talk about these slips further down.)

But WINE and Mono will be discussed in more detail later on.

Mark Shuttleworth

Mark Shuttleworth is the man behind Canonical and ultimately Ubuntu. Rather than a Microsoft fan, he comes out more on the Apple side (which is where Miguel de Icaza seems to have directed his attention now too). It's not difficult to look at the last couple of years of Ubuntu transformations and note how the user interface and application behavior have changed away from a Windows-inspired model to one that mimics the Mac OSX user experience.

This is rather sad (someone could say ‘pathetic’), considering Linux desktops have had nothing to envy Mac OSX desktops for a long time: in 2006 a Samba developer was prevented from presenting on his own computer, because it was graphically so much better than what Mac OSX had to offer at the time.

But instead of pushing in that direction, bringing progressive enhancements to the existing, stable base, Ubuntu decided to stray from the usability path and shift towards some form of unstable ‘permanent revolution’ that only served to disgruntle existing users and reduce the appeal that could have further increased its user base.

The number of Ubuntu derivatives that have started gaining ground simply by being more conservative about the (default) choice of software environment should be ringing all possible alarm bells, but apparently it's not enough to bring Canonical back on the right track.

The fascination with proprietary systems

So why are such prominent figures in the open source world so fascinated with proprietary operating systems and environments, be it Microsoft or Apple? That's a good question, but I can only give tentative answers to it.

One major point, I suspect, is their success. Windows has been a monopolistically dominant operating system for decades. Even if we only start counting from the release of Windows 95, that's still almost 20 years of dominance. And the only thing that has managed to make a visible dent in that dominance has been Apple's Mac OSX. There is little doubt that Apple's operating system has been considerably more successful than Linux in gaining ground as a desktop operating system.

While there's nothing wrong with admiring successful projects, there is something wrong in trying to emulate them by simply ‘doing as they do’: even more so when you fail completely at doing what they really did to achieve success.

Windows' and Mac OSX's success has been driven (among other reasons which I'm not going to discuss for the moment) by a strong push towards consistency between different applications, and between the applications and the surrounding operating system. It has never been because of this or that specific aesthetic characteristic, or this or that specific behavior; it has been because all applications behaved in a certain way, had certain sets of common controls, etc.

This is why both operating systems provide extensive guidelines describing how applications should look and behave, and why both operating systems provide interfaces to achieve such looks and behavior: interfaces that have not changed with time, even when they have been superseded or deprecated in favour of newer, more modern ones.

Doing the same in Linux would have meant defining clear guidelines for application behavior, providing interfaces to easily follow those guidelines, and then keeping those interfaces stable. Instead, what both GNOME (initially under Miguel de Icaza's guidance) and Ubuntu (under Mark Shuttleworth's) did was try to mimic this or that (and actually worse: first this, then that) behavior or visual aspect of either of the two other operating systems, without any well-defined and stable guideline, and without stable and consistent interfaces: they tried to mimic the outcome without focusing on the inner mechanisms behind it.

In the meantime, every other open source project whose development hasn't been dazzled by the dominance of proprietary software has managed to chug along, slowly but steadily gaining market share whenever the proprietary alternatives slipped.

One important difference between dominant environments and underdogs is that dominants are allowed to slip: dominants can break the user experience, and still be ‘forgiven’ for it. Microsoft has done it in the past (Ribbon interface anyone? Vista anyone?), and it seems to be bound to do it again (Metro interface anyone?): they can afford it, because they are still the dominant desktop system. Apple is more of an underdog, and it's more careful about changing things that can affect the user experience, but they still break things at times (not all applications written for the first Mac OSX release will run smoothly, or at all, on the latest one). But the underdogs trying to emulate either cannot afford such slips: if they're going to be as incompatible as the dominants are, why shouldn't a user stick with the dominant one, after all?

Linux and the desktop

And this leads to the final part of this article, beyond a simple critique of Miguel de Icaza's article. Two important questions arise here. Can Linux succeed on the desktop? And: does it actually matter?

Does it matter?

There has been a lot of talk, recently, about whether the desktop operating system concept itself is bound to soon fall into oblivion, as other electronic platforms (tablets and ‘smart’ phones) rise into common and widespread usage.

There is a reason why the so-called ‘mobile’ or ‘touch’ interfaces have been appearing everywhere: the already mentioned Metro interface in Windows 8 is a bold move in the direction of convergence between desktop, tablet and mobile interfaces; Mac OSX itself is getting more and more similar to iOS, the mobile operating system Apple uses on its iPhones and iPads; even in the Linux world, the much-criticized Unity of the latest Ubuntu, and its Gnome Shell competitor, are efforts to build ‘touch-friendly’ user interfaces.

Unsurprisingly, the one that seems to be approaching this transition better is Apple; note that this is not because the Mac OSX and iOS user interfaces are inherently better, but simply because the change is happening gradually, without ‘interface shocks’. And there are open source projects that are moving in the same direction in the same way, even though they don't particularly try to mimic the Apple interface.

The most significant example of an open source project that is handling the desktop/touch user interface convergence more smoothly is KDE, a desktop environment that in many ways has often tried (albeit sadly not always successfully) to be more attentive to user needs. (In fact, I'd love to rant about how I've always thought that KDE would have been a much superior choice to GNOME as the default desktop environment for Ubuntu, and about how history has proven me right, but that would probably sidetrack me from the main topic of discussion.)

If everything and everyone is dropping desktops right and left and switching to phones and tablets, does it really matter if Linux can become ‘the perfect desktop operating system’ or not?

I believe it does, for two reasons, a trivial one and a more serious one.

The trivial reason is that Linux, in the sense of specifically the Linux kernel, has already succeeded in the mobile market, thanks to Android, which is built on a Linux kernel. I'm not going to get into the debate on which is better, superior and/or more successful between Android and iOS, because it's irrelevant to the topic at hand; but one thing can be said for sure: Android is successful enough to make Apple feel threatened, and to make them stoop to the most anticompetitive practices and underhanded assaults they can legally (and economically) afford to avert such a threat.

But there is a more serious reason why the success of an open Linux system is important: when the mobile and tablet crazes have passed, people will start realizing that there were a lot of useful things their desktops could do that their new systems cannot do.

They'll notice that they can't just plug a TV into their iPad and watch a legally-downloaded movie, because the TV will not be ‘enabled’ for playback. They'll start noticing that the music they legally bought from online music stores will stop playing, or just disappear. They'll notice that their own personal photos and videos can't be safely preserved for posterity.

They will start noticing that the powerful capability of personal computers to flatten out the difference between producer and consumer has been destroyed by the locked-in systems they've been fascinated by.

The real difference between the information technology up to now and the one that is coming is not between desktop and mobile: it's between open and locked computing.

Up until now, this contrast has been seen as being about “access to the source”: ‘proprietary’ software versus open source software. But even the closed-source Windows operating systems allow the user to install any program they want, and do whatever they want with their data; at worst, they allow you to replace the operating system with a different one.

This is exactly what is changing with the mobile market: taking advantage of the perception that a tablet or smartphone is not a computer, vendors have built these systems to prevent users from installing arbitrary software and doing whatever they want with their data. But the same kind of constraints are also being brought onto the desktop environment. This is where the Mac OSX app store comes from, and this is why Microsoft is doubling their efforts to make Windows 8 unreplaceable on hardware that wants to be certified: Secure Boot is a requirement on both mobile and desktop systems that want to claim support for Windows 8, and on the classical mobile architecture (ARM) it must be implemented in such a way that it cannot be disabled.

Why this difference between ARM and non-ARM? Because non-ARM for Windows means the typical Intel-compatible desktop system, and this is where the current Linux distributions have waged war against the Secure Boot enforcement.

And this is specifically the reason why it's important for an open system to be readily available, up to date and user-accessible: it offers an alternative, and the mere presence of the alternative can put pressure on keeping the other platforms more open.

And this is why the possibility for Linux to succeed matters.

Can Linux succeed?

From a technical point of view, there are no significant barriers to widespread adoption of Linux as a desktop operating system. The chicken-and-egg problem that plagued it in the beginning in terms of hardware support (it doesn't have much support, so it doesn't get adopted; it's not adopted, so it doesn't get much support) has long been solved. Most hardware manufacturers acknowledge its presence, and whether by direct cooperation with kernel development, by providing hardware specifications, or by providing closed, ‘proprietary’ drivers, they allow their devices to be used with Linux; even though Linux is far from being the primary target for development, support for most hardware comes shortly after, if not before, the actual hardware is made available.

There are exceptions, of course. NVIDIA, for example, is considered by Linus Torvalds the single worst company the kernel developers have ever dealt with, due to its enormous reluctance to cooperate with open source. The lack of support (in Linux) for the Nvidia Optimus dual-card system found in many modern laptops is a result of this attitude, but Linus' publicity stunt (“Fuck you, Nvidia!”) seems to have moved things in the right direction, and Nvidia is now cooperating with X.org and kernel developers to add Optimus support to Linux.

In terms of software, there are open source options available for the most common needs of desktop users: browsers, email clients, office suites. Most of these applications are in fact cross-platform, with versions also available for Windows and Mac OSX, and the number of people using them on those operating systems is steadily growing: for them, a potential transition from their current operating system to Linux will be smoother.

Some more or less widespread closed-source applications are also available: most notably, the Skype VoIP program (even though its recent acquisition by Microsoft has been considered by some a threat for its continuing existence in Linux) and the Opera web browser.

The WINE in Linux

There are, however, very few if any large commercial applications. A notable exception is WordPerfect, for which repeated attempts were made at a Linux version. Of the three attempts (version 6, version 8, and version 9), the last is a very interesting one: rather than producing a native Linux application, as was the case for the other two versions, Corel decided to port the entire WordPerfect Office suite to Linux by relying on WINE, an implementation of the Win32 API that tries to allow running Windows programs under Linux directly.

The choice of using WINE rather than rewriting the applications for Linux, although tactically sound (it made it possible to ship the product in a considerably shorter time), was considered by many a poor one, with the perception that it was a principal cause of the perceived bloat and instability of the programs. There are however two little-known aspects of this choice, one of which is of extreme importance for Linux.

First of all, WordPerfect Office for Linux was not just a set of Windows applications that would run under WINE in an emulated Windows environment: the applications were actually recompiled for Linux, linking them to Winelib, a library produced by the WINE project specifically to help port Windows applications to Linux. The difference is subtle but important: a Winelib application is not particularly ‘less native’ to Linux than an application written specifically to make use of the KDE or GNOME libraries. Of course, it will still look ‘alien’ due to its Windows-ish look, but no less alien than a KDE application on a GNOME desktop or vice versa, especially at the time (2000, 2001).
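
To give an idea of what ‘recompiled against Winelib’ means, here is a sketch: plain Win32 source code built as a Linux binary with the winegcc wrapper from the WINE development tools, rather than compiled on Windows and run through the WINE loader. (The exact invocation and package names vary by distribution and WINE version; something like winegcc -mwindows hello.c -o hello is the general idea.)

    /* hello.c: ordinary Win32 code, compiled natively on Linux
     * against Winelib instead of being shipped as a Windows .exe. */
    #include <windows.h>

    int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev,
                       LPSTR cmdline, int show)
    {
        MessageBoxA(NULL, "Hello from a Winelib application",
                    "Winelib", MB_OK);
        return 0;
    }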

The other important but little-known aspect of Corel's effort is that it gave an enormous practical push to the WINE project. At the time when the port was attempted, the WINE implementation of the Win32 API was too limited to support applications as sophisticated as those of the WordPerfect Office suite, and this led Corel to invest in and contribute to the development of WINE. The results of that sponsorship are quite evident when the status of WINE before and after the contribution is considered. Since Corel was also trying its own hand at distributing Linux itself, with what was later spun off as Xandros, the improved WINE benefited them well beyond the ability to support the office suite.

In the Linux world, WINE is a rather controversial project, since its presence is seen as an obstacle to the development of native Linux applications (which in a sense it is). However, I find myself more in agreement with the WINE developers, seeing WINE as an opportunity for Linux on the desktop.

It's not hard to see why. Desktop users mostly don't care about the operating system; they could be running PotatoOS for all they care, as long as it allows them to do what they want to do, and what they see other people doing. What users care about are applications. And while it's undoubtedly true that for many common applications there are open source (and often cross-platform) alternatives that are as good as, when not better than, the proprietary applications, there are still some important cases where people have to (or want to) use specific applications which are not available for Linux, and possibly never will be. This is where WINE comes in.

Of course, in some way WINE also encourages ‘laziness’ on the part of companies that don't want to put too much effort into porting their applications to Linux. Understandably, when Linux support is an afterthought it's much easier (and cheaper) to rely on WINE than to rewrite the program for Linux. And even when getting started for the first time, it might be considered easier to write for Windows and then rely on WINE than to approach cross-platform development with some other toolkit, be it Qt, whose licensing for commercial applications makes it a pricey option, GTK, whose Windows support is debatable at best, wxWidgets, one of the oldest cross-platform widget toolkits, or any less-tried option. In some sense, the existence of WINE turns Win32 into a cross-platform API, whose Windows support just happens to be vastly superior to that of other platforms.

It's interesting to observe that when LIMBO was included in the cross-platform Humble Indie Bundle V, it caused a bit of an uproar because it wasn't really cross-platform, as it relied on WINE. Interestingly, Bastion, which builds on top of the .NET Framework (and thus uses the aforementioned Mono on Linux), didn't cause the same reaction, despite being included in the same bundle. Yet, on critical analysis, an application written for the .NET Framework is not any more native to Linux than one written for the Win32 API.

If anything, the .NET Framework may be considered “not native” on any operating system; in reality, it turns out to be little more than a different API for Windows, whose theoretical cross-platformness is only guaranteed by the existence of Mono. It's funny to think that Mono and its implementation of the .NET Framework are seen in a much better light than WINE and its implementation of the Win32 API, even though in all respects they are essentially the same.

The lack of commercial applications

In some way, what Miguel de Icaza's rant tried to address was specifically this problem of missing commercial applications, on the basis that no applications implies no users, and therefore no success on the desktop market. While the instability of the interfaces of some high-profile environments and the multitude of more or less (in)compatible distributions are detrimental and discouraging for commercial developers, the overall motivations are much more varied.

There is obviously the software chicken-and-egg problem: Linux doesn't get widespread adoption due to the lack of applications, and applications don't support Linux because it doesn't have widespread adoption.

Another important point is the perceived reluctance of Linux users to pay for software: since there are tons of applications available for free, why would Linux users think about buying anything? Windows and Mac OSX users, on the other hand, are used to paying for software, so a commercial application is more likely to be bought by a Windows or Mac user than by a Linux user; this further reduces the relevance of the potential Linux market for commercial interests.

This line of reasoning is quite debatable. The Humble Indie Bundle project periodically packages a number of small cross-platform games which users can buy by paying any amount of their choice, and the statistics consistently show that even though significantly more bundles are sold for Windows than for Linux, the average amount paid is distinctly higher on Linux than on Windows. In other words, while still paying what they want, Linux users are willing to pay more on average, which totally contradicts the perception about Linux users and their willingness to pay.

There's more: if piracy is really as rampant on Windows (starting from the operating system itself) as many software companies want us to believe, one should be led to think that Windows users are not particularly used to paying: rather, they are used to not paying for what they should be paying for, in sharp contrast with Linux users, who would rather choose applications and operating systems they don't actually have to pay for. In a sense, choosing free software rather than pirated software becomes an indicator of user honesty, if anything. But still, the perception of Linux users as tightwads remains, and hinders the deployment of applications for the platform.

It's only at this point, in third place so to speak, that technical reasons, such as the instability of interfaces or the excessive choice in distributions and toolkits, become an obstacle. Should we target Linux through one of the existing cross-platform toolkits, or should we go for a distinct native application? Should this target a specific desktop environment? And which toolkit, which desktop environment should be selected?

However, the truth is that these choices are not really all that important. For example, Skype simply opted for relying on the Qt toolkit. Opera, on the other hand, after various attempts, decided to go straight for the least common denominator, interfacing directly with Xlib. And of course, for the least adventurous, there's the possibility of going with WINE, in which case just contributing to WINE to help it support your program might be a cheaper option than porting your program to Linux; this, for example, is the way Google decided to go for Picasa.

Finally, of course, there are applications that will never be ported to Linux, Microsoft Office being a primary example. And for these there is sadly no direct hope.

Pre-installations

There is one final issue with Linux on the desktop, and this is pre-installation. Most users won't go out of their way to replace the existing operating system with another one, because, as already mentioned, users don't usually care about the operating system.

This is the true key to the desktop for Linux: being the default operating system on machines as they are bought. However, none of the major desktop and laptop computer companies seem particularly interested in making such a bold move, or even in making an official commitment to full Linux support for their hardware.

One notable exception in this has been Asus, whose Eee PC series initially shipped with Linux as the only option for the operating system, even though later strong-arming from Microsoft led to it shipping both Linux and Windows machines (with the Windows ones having inferior technical specifications, to comply with Microsoft's request that Windows machines shouldn't cost more than Linux ones).

It's interesting, and a little sad, that only smaller vendors sell desktops, laptops and servers with Linux preloaded (see for example the Linux Preloaded website). The question is: why don't major vendors offer it as an option? And if they do, why don't they advertise the option more aggressively?

I doubt it's a matter of instability. It wouldn't be hard for them to limit official support to specific Linux distributions and/or versions that they have verified to work on their hardware: it wouldn't be any different from the way they offer Windows as an option. And this makes me suspect that there's something else behind it; is Microsoft back to their tactics of blackmailing vendors into not offering alternatives?