The Sorry State of Open Source Today

April 13, 2007

Published simultaneously in The Jem Report and on beranger.org. See the page footer for the legal notice.

Foreword

I have been using open source software since the beginning of 1995. I started with Linux (Slackware, after an initial apprenticeship with SLS), moved on to some FreeBSD and NetBSD, then continued with several Linux distributions. What a choice! The future was bright, as the Linux kernel experienced a lot of improvements, the number of distributions skyrocketed, advanced desktop environments appeared, and the StarOffice suite metamorphosed into OpenOffice.org, a very decent alternative to Microsoft Office.

In those days, the possible adoption of Linux or of FreeBSD wasn't hindered by any patent issues: the SCO Group was still living off software, not off lawsuits, nobody was questioning the possible inclusion of Microsoft software patents in the Linux kernel, and the restricted multimedia codecs weren't an issue either (what codecs?!), as the desktop was not that multimedia-oriented, after all. And Linus Torvalds was still living in Helsinki.

We're now more than a decade past the moment when I judged open source to have gained a decisive momentum — 1996-1997, when Slackware was the reference, Red Hat was "the other choice", KDE and GNOME were just emerging, Walnut Creek was selling CD-ROMs, and SunSITE mirrors were the home of most of the relevant software. The worst thing that happened was that Yggdrasil Linux died. But the Earth kept spinning…

Ten years ago, Perl was the cherished jewel; now the gourmets are enjoying Python, the teens are caught up in the PHP mania, and the most trendy are going Mono. The question is no longer Emacs or vi?, but SUSE or Ubuntu?

Actually, Linux is now 15 years old, and Debian Etch has just been released. Novell made a deal with Microsoft and no Armageddon followed, so why do I believe that open source is in a sorry state?

As with many other things in life, gradual changes are never properly noticed; then comes a day when something like 9/11 or Katrina happens, and we're all bewildered: how could this happen? Were we that blind?

We usually are. We’re just humans.

1. The kernel and friends, take one

I remember how much I loved the Linux kernel 1.2.13, and how surprised I was by the stability of the "unstable" 1.3.18. In times when being conservative in software was still a good thing, I wondered why `ls' didn't have `--color' by default in FreeBSD. I wanted the changes to come faster.

Now I wouldn't feel the same. The first shock was the buggy Linux kernel 2.0.0, and the weird dance of the fixes: 2.0.7 was better, but wait, it then got worse, so 2.0.34 was really the one to use. Or maybe not. Still, due to the improvements in the kernel, and to the inertia of the mind, I thought it was progress anyway.

I have the strange feeling that the GPL vs. BSD flame wars were milder in those times. Or maybe I was just younger.

With the 2.6 kernel, the dichotomy between "stable" and "unstable" branches was declared history. In a world of increased complexity, the kernel team would supposedly only issue "stable releases". How humble of them. A fourth version number was nevertheless added, to allow severe fixes to be marked.

Three years after 2.6 was released, some people still consider it too buggy for production use, although 2.6.9 was a very solid release, as users of Red Hat's Enterprise Linux 4 and of its clones can witness. Patrick Volkerding eventually decided that the first post-11.0 Slackware would only feature a 2.6 kernel, as slackware-current dropped the 2.4 line. A tremendous change for some of us.

People unhappy with the 2.6 kernels objected that not having a stable API is harmful for the sustainable development of drivers. Nota bene: it's only about a stable Application Programming Interface, not about an Application Binary Interface (ABI). No matter. Greg Kroah-Hartman expressed the kernel team's belief that a stable API would be nonsense. I couldn't tell whether this decision was made to discourage binary-only kernel modules, or whether it was just another irresponsible decision made in good faith.

As one of the main complaints, and an alleged reason for not using Linux on the desktop, was the lack of hardware support for some newer devices, we have to notice that, while unable to influence hardware vendors, the kernel developers made a tremendous effort to support more and more devices, and Greg K-H was one of the people responsible for that, especially on the USB side.

Every once in a while, "progress" rhymes with "breakage", and every now and then either the end-user or the makers of a distribution have had to fix some udev rules. As proof that it isn't just me, Pat has provided some udev-alternate-versions with Slackware. You can't say Pat doesn't know what he's doing, can you?
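To give an idea of what "fixing some udev rules" means in practice, here is a minimal sketch of a persistent network-naming rule of the kind users have had to rewrite after an update; the MAC address and file name are made up for the example, and the exact match key (ATTR versus the older SYSFS) depends on the udev version you're running:

    # /etc/udev/rules.d/70-persistent-net.rules (illustrative only)
    # Pin the card with this MAC address to the name eth0, so an update
    # or a second NIC doesn't silently rename it.
    SUBSYSTEM=="net", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"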

Progress comes at a cost, and unexpected breakage is one of them. Is this acceptable in a world where OSS adoption relies on the "It Just Works" metaphor? Am I trying to say there is something wrong with changes in the kernel? No, I am not. Except that I suppose you remember Andrew Morton saying that the "2.6 kernel is slowly getting buggier" (at LinuxTag 2006). To some extent, Linus seemed to agree.

2. The bugs, in the open

Except for the added freedom (not without some strings attached in the case of the GPL), open source software is supposed to provide you with some welcome advantages over closed-source software.

Say, once the code is in the open, bugs can easily be noticed, and the necessary fixes and cleanup come easily. Well, at least in theory, as today's code is complex.

Security fixes do indeed benefit from having the code in the open, but this also has a price: security advisories are issued more often than ever, as everyone can dig for weaknesses. Hackers don't have to try blind attacks anymore. Therefore, once a security patch is issued, the system administrator should really, really apply it ASAP. We're living in a highly networked world, never foreseen by the Sci-Fi writers of the '60s and '70s.

Surprisingly enough, this never prevented some stunning security holes from popping up because of hilariously simple coding errors: 13 years after the rlogin -froot remote authentication bypass vulnerability, the mostly unused Telnet daemon had a terrible bug in Solaris 10/11, just a couple of months ago.
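For the record, triggering the Solaris hole (CVE-2007-0882) required no skill whatsoever; the "-f" option smuggled through telnet is handed to login(1), which treats it as "already authenticated". The host name below is obviously a placeholder:

    # One line gives you a root shell on an unpatched Solaris 10/11 box:
    telnet -l "-froot" victim.example.com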

The affected Telnet daemon is derived from BSD source code, and while Solaris was traditionally a proprietary OS, starting with version 10 you can get its source code from OpenSolaris.org.

Another proof that the OSS mantra does not always have the expected outcome is OpenOffice.org. I am using it almost every day, and it is indeed a good office suite. Yet the fact that its Bugzilla is public not only allows me to file bugs with them (you could never do such a thing with Microsoft Office!), but also to notice how many old bugs are still unfixed.

Let's admit that a public bug tracking system leads to better feedback, and to better project management from the QA standpoint, with the side effect of having zillions of bugs reported, many of them duplicates or NOTABUG. Is this a good enough reason for not fixing some everlasting OpenOffice.org 2 bugs, such as the lack of an easy way to change the default paper size from Letter to A4 throughout OOo and have it stick (Bug #39733), or sticking with the design flaw that limits a paragraph to 65,534 characters, as if it were running under Windows 3.1 (Bug #17171)?

I don't think this is a good excuse. Notice that even though OOo is open source, nobody is going to fork it just to fix such annoying bugs. Once a product goes mainstream, it's almost as if it becomes proprietary: at least for the sake of compatibility, everybody goes with the flow.

So much for the myths of freedom in an open source world.

3. Our friends, the software patents

While some people believe that the factors slowing down the mass adoption of Linux and of other open-source operating systems are the lack of drivers for some of the devices used with Windows, the lack of enough games under Linux, or even the apparent difficulty of setting it up on some platforms, all I can see as a major hindrance is this one: software patents.

While the United Kingdom's PM had a prompt response to a petition, just to confirm that «the Government remains committed to its policy that no patents should exist for inventions which make advances lying solely in the field of software», and that «although certain jurisdictions, such as the US, allow more liberal patenting of software-based inventions, these patents cannot be enforced in the UK», the major Linux actors are U.S.-based (Red Hat, Novell), and even the FreeBSD Foundation is an American entity (a notable exception: OpenBSD is a proud product of Calgary, AB, Canada).

Therefore, the mainstream Linux distributions cannot afford to include patent-encumbered code, no matter what third-party extra packages you, as a non-American end-user, might add.

This wasn't such a big problem a decade ago, but it is now. Except for patent-free multimedia formats such as Ogg/Theora, the vast majority of the most popular audio and video formats are either covered by one or more patents, or not open at all: MP3, MPEG, WMA, WMV, AVI. Almost 100% of websites will only provide media in formats like WMV or MP3, which is a delicate matter for the enterprise Linux user, especially in the United States.

But wait! Aren't patents a capitalist mechanism meant to protect innovation and to provide the necessary incentive to keep it happening?

A patent was indeed supposed to protect the creator and to reward creativity. The U.S. Constitution states, in Article I, Section 8, Clause 8: «To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.» Abraham Lincoln, himself a patentee, said: "The patent system added the fuel of interest to the fire of genius."

That was back then, over 200 years ago, before software was born. It's the fault of the American mentality, beholden to the idea that "what was said by the Founding Fathers of the Nation should not be questioned, the same way we don't question the existence of God." Indeed, the U.S. Constitution (with its Amendments) is like a second Bible to Americans. Contrary to the very basic principles of democracy, there is this religious feeling that the Constitution should never change.

This makes the U.S. Patent and Trademark Office practically an institution appointed by the Constitution. And nobody will ever manage to make it refuse to register software patents!

Maybe sticking to the tradition is a good thing. What makes a software patent so different?

The first possible answer is a rational one: some people consider software a collection of mathematical algorithms and data structures, and none of these are patentable under the current law. The USPTO is, however, part of the Department of Commerce, and it seems they understand neither mathematics nor programming. As the number of patents granted in recent years increased exponentially, they were obviously poorly examined, so that both patents close to the perpetuum mobile and software patents on linked lists were granted (PN/7028023, PN/5263160, PN/5446889, PN/5671406, PN/5950191, PN/6301646, PN/6581063, PN/6687699, PN/6760726). Notice how the undertrained patent examiners are easily fooled by the magic wording "method and apparatus".

The second answer is an experiential one: software patents are bad because we can see how detrimental they are to the advance of IT!

A recent situation involved some of the mainstream distros (openSUSE, Fedora) disabling a ClearType-like sub-pixel hinting method in the open-source FreeType2 library: openSUSE Hobbled by Microsoft Patents. While the details of the issue are beyond the scope of this discussion, it reminded me of a third possible argument: why should the end-user bother with patents included in the software he's using?

Since the most absurd lawsuits of the last two millennia (SCO vs. IBM, SCO vs. Novell, SCO vs. DaimlerChrysler, and SCO vs. AutoZone) proved that common sense and the U.S. legal system have not met yet, some vendors of open source software and services became aware that corporate customers might feel unsafe using open-source software, because as long as the code is widely available, potentially patent-infringing code snippets can be identified at any moment.

This is only one of the reasons last year's Novell-Microsoft interoperability agreement and patent covenant was signed: Novell wanted to give the customers of its commercial Linux solutions a feeling of safety: no, they will never be sued by Microsoft for any alleged patent that Linux might infringe!

Given the number of lines of code in Linux, nobody can tell how many algorithms might be covered by a U.S. patent belonging to a Fortune 500 company. As a matter of fact, Steve Ballmer has explicitly stated that Linux infringes Microsoft patents.

And here we go back to the previous question: why should the end-user be liable for a patent possibly infringed by the software he is using, be it free or commercial software, as long as it was bought and used in good faith?

Common sense and the logic of the judicial system don't mix that often. They might say: because the software is open source, you have had the means to become aware that it infringes some patents! Along with the vendor (if any), you are liable.

This quick reasoning might work for situations where you buy a pack of cocaine labeled "Soap" and then claim innocence because what you bought is soap! Of course nobody would buy that defense, because it's trivial to notice that you have bought not soap but cocaine, and possessing it is a clear violation of the law.

But is the end-user of a software product responsible for examining it for possible patent violations? Just how far can we take the absurdity?

Following the same rationale that says Linux users should be liable for patents infringed by Linux, consider the following case: you buy a car manufactured by Ford, and it happens that the car includes a special screw that is patented by Toyota and illegally used by Ford. Should you, the end user, go to jail for the patent infringed by a screw present in the car you bought?

Don’t let an American attorney of law read this: he might answer “yes”.

4. Devil’s advocate: what if…

Let's play the devil's advocate for a change. Let's forget how important open-source software is to us, and how annoying it is to have it restricted by software patents. What if software patents are necessary?

You might have read Jem Matzan’s interview with the patent attorney Jack Haken. Should you have done so, here’s a catch phrase you can’t miss: «The TRIPS treaty properly requires that countries cannot use their patent laws to discriminate in favor of one field of technology over other technologies. Thus, the same patent protection needs to be available for a technologic device, for example an anti-skid brake actuator in an automobile, even after a competitor ports an original hardware logic implementation into firmware.»

Quite a valid argument. The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) is a treaty that sets down minimum standards for intellectual property (IP) regulations. As it's administered by the World Trade Organization (WTO), your country most likely has to apply it, so face it: it is the law.

The actual requirement is stated as follows at Art. 27(1): «patents shall be available for any inventions, whether products or processes, in all fields of technology, provided they are new, involve an inventive step and are capable of industrial application.»

Would software then be a "technology" like all the others, and would the lack of patentability of software be a discrimination that infringes TRIPS?

Are algorithms “inventions”? Or are they rather “non-technical”, thus not in a “field of technology” and outside the scope of TRIPS?

Judging by the number of software patents issued in the United States, it seems that we could take “software is technology” for an answer.

Judging by the previously mentioned position of Downing Street, software is unlikely to be a technology, as long as it is not patentable.

The European Patent Convention (EPC) defines what inventions are not patentable in Article 52, paragraph 2: «(a) discoveries, scientific theories and mathematical methods; (b) aesthetic creations; (c) schemes, rules and methods for performing mental acts, playing games or doing business, and programs for computers; (d) presentations of information.»

To me, it's even redundant, as "programs for computers" (i.e. software) involve both mathematical methods and "schemes, rules and methods for performing mental acts"; it's just that you don't perform them mentally, you use a computer for that. By this definition, performing the mental acts means using your brain as a finite state automaton, hence the use of hardware is only an equivalent.

The UK Patent Office is subject to the Patents Act 1977, which incorporates EPC’s Article 52 accurately.

Why is the United States interpreting TRIPS differently? Why are you, as an American, more likely to accept a software patent?

Wikipedia quotes Howard T. Markey (chief judge of the United States Court of Customs and Patent Appeals and later of the Court of Appeals for the Federal Circuit) as having listed the four primary incentives embodied in the patent system: (1) the incentive to invent in the first place; (2) the incentive to disclose the invention once made; (3) the incentive to invest the sums necessary to experiment, to produce, and finally get the invention on the market; (4) the incentive to design around and improve upon earlier patents.

Let's prove that software patents actually deter innovation, instead of stimulating it as intended!

Following the same rationale as for technology patents, open source software should be patented, otherwise what is the incentive to (2) "disclose the invention once made", and to (3) "invest the sums necessary to produce, and get the invention on the market"?

Decades of unpatented open source software are the best proof that the patent incentive is not necessary for software.

Decades of closed-source software are also proving that for some companies, the patent incentive is not enough.

Software is really different.

Given the exponential increase in the number of patents granted every year, the current patent system might be a hindrance for industrial inventions altogether. Instead of providing the incentive to invest in R&D, the huge number of valid patents increases the costs significantly. Before trying to design and manufacture a simple item such as a toothpaste container, you must pay attention to the registered patents and to the patent applications under examination: you might need dozens of engineers and patent attorneys just for that!

Simply put: you might be able to design a simple item like a container in 10 minutes, and you might be ready for production the next day. All you want is to pack some toothpaste and sell it; you're not going to patent anything. We're not in the Stone Age anymore: creations that would qualify as inventions could even be made by kids!

But no, you have to pay tribute to the patent system. As the USPTO grants patents on just about anything, the cost of developing even the simplest items gets higher and higher: patent attorneys are expensive.

Two thousand years ago, Tacitus wrote: «The more corrupt the state, the more numerous the laws.» Had he lived today, I bet he would say: «The more corrupt the Establishment, the more numerous the patents.»

Patent systems that were designed in the pre-IT era are definitely obsolete. We can see they're impractical even for industry; what to say, then, about software patents? Is there anyone who can examine the millions of lines of code in a project and identify the applicable patents?

«Consistency is contrary to nature, contrary to life. The only completely consistent people are the dead.» — Aldous Huxley, in Do What You Will (1929).

The consistency of the U.S. patent system might lead to the death of open source. Reforming it is only for the best. After all, we're talking about a country "by the People, for the People". Or is it a "corrupt state"?

I wonder whether someone has patented the idea of using a plastic washbowl to cover your head when it rains and you don’t have an umbrella. I guess I have to pay a patent attorney.

5. Detrimental to Linux at large

Let's put aside all the heated, outraged reactions of the open source community to the "betrayal" committed by Novell when it signed the patent covenant with Microsoft. I am on the side of those who believe it was indeed a betrayal, but this is not the point.

The point is that the patent cooperation agreement could be seen by some as beneficial for Linux, because "hey! Microsoft has just acknowledged that Linux is a real competitor!". As long as Microsoft agreed not to sue the customers of Novell for the «use of specific copies of a Covered Product as distributed by Novell or its Subsidiaries (collectively "Novell") for which Novell has received Revenue (directly or indirectly)», more and more users would have the confidence to buy Novell's Linux solutions, hence Linux would benefit!

Flawed argument. And a very poor one.

Indeed, the Linux solutions you buy from Novell will make you immune to patent claims. But the use of the free counterpart, that is openSUSE, does not give you immunity! (To make things even weirder, the individual contributors to openSUSE are still protected, whereas Novell as a company can still be sued by Microsoft, and vice-versa.)

Worst of all: this creates a unilateral competitive advantage for Novell in the Linux world. As with any other monopolistic practice (let's say oligopolistic, counting Microsoft as well), this is only beneficial to Novell Linux (SLED, SLES) and Novell Linux alone! Once again, only the commercial solutions qualify.

This is sabotaging the advancement of Linux as an open-source project, because open source is about choice, and the Novell-Microsoft agreement is about suppressing the competition, be it Red Hat or the 100% free Linux distributions.

Even worse, if that's still possible: this undermines the very spirit of open and FREE software, as long as Linux distributions you don't pay for are excluded from any protection. By signing this covenant, Novell acquiesced that Linux users can be sued (otherwise, why protect them?), and this is going to create panic and recession in the Linux world.

Thank you, Novell. You are really a community leader. When you lost Jeremy Allison and Guenther Deschner from the Samba team, it was certainly because people adore seeing how patents are used as competitive tools in the free software world.

6. The lost battle of the GPLv3

I am one of those who believe that the BSD licenses and the closely related MIT/X11 license are the true promoters of freedom in software, not the GNU General Public License (GPL). Some may argue that the status of Linux today, as compared to *BSD, is the best proof that copyleft licenses in general, and the GPL in particular, are the only licenses that generate progress in the FOSS world.

I will not waste my time with endless arguments leading nowhere, but I will note that in the GPL, Freedom Three is an obligation! As stated by Richard M. Stallman, «Freedom Three is the freedom to help build your community by publishing an improved version so others can get the benefit of your work.» All the freedoms that define the GPL are offered by the BSD-style licenses too, but the GPL makes an obligation out of this one: you must give away the modified code, should you want to redistribute the built binaries.

This is not the best definition of freedom in my book. What resembles the GPL the most is the way the Communist governments in Eastern Europe started their social reforms after 1945, inspired by the Bolshevik ideology: only the poor should have access to the available resources, the rich shouldn't have anything, and they should go to jail and to detainee camps!

The way the GPL denies any proprietary usage of the licensed code looks really Bolshevik. The BSD license makes no such discrimination.

But the most useless part of the whole process of creating a GPL version 3 is the futile attempt to prohibit "TiVoization", DRM, Novell-Microsoft-like agreements, binary-only drivers, and whatever else would make RMS happier.

A few optimists like Bruce Perens happen to believe that GPLv3 will force Microsoft to give away, to all GPL software, the right to use any of their patents that are presently in GPL software distributed with SUSE. I wouldn't be that optimistic, but I am not a lawyer. Novell is not that pessimistic about GPLv3 either, which makes me stick to my beliefs.

Instead of spending so much energy on the contradictory GPLv3 (which is unlikely to be adopted by the Linux kernel, as the kernel is licensed "GPLv2" only, not "GPLv2 or later", and the majority of the kernel team is against going with GPLv3), the Free Software Foundation should have focused on better ways to fight the real enemy: software patents.

Speaking about open source and the FSF: does anyone have any news of the legendary and illusory GNU Hurd operating system? I guess not. What a triumph of the FSF…

7. The business model

I will not try to analyze the business model behind open-source software like the Mozilla Firefox web browser (which receives a tiny amount each time a Google search is made using its search box), nor that of OpenOffice.org.

The open source operating systems are our main interest. Except for the *BSD family, whose members are either backed by 501(c)(3) non-profit organizations like The FreeBSD Foundation or The NetBSD Foundation, or are the task of individuals like Theo de Raadt for OpenBSD and Matt Dillon for DragonFly BSD (by the way, your donations to either of them are appreciated), the 500+ Linux distributions fall roughly into two main categories: the vast majority of the distributions are made by enthusiasts, for enthusiasts, while a certain number of them are mainstream distros, supposed to be trustworthy and polished enough to satisfy both the corporate-minded user and the home user.

As perceived by Christopher Negus in The Linux Bible 2007, the mainstream Linux distros are the following: Red Hat Enterprise Linux, Fedora Core, Debian, SUSE (SLED, SLES, openSUSE), KNOPPIX, Yellow Dog Linux, Gentoo, Slackware, Linspire and Freespire, Mandriva, Ubuntu (and Kubuntu, Edubuntu).

Out of these, the ones having, or trying to have, a business model are: the Red Hat family, Novell's SUSE, Linspire, Mandriva, and Ubuntu.

From this short list, Ubuntu has the most controversial business model. Supported by the South African billionaire Mark Shuttleworth and his company Canonical (headquartered in the Isle of Man, a tax haven), Ubuntu is based on Debian (it has even taken some of Debian's developers) and has gained an unexpected momentum, especially because of the ShipIt program: CDs were shipped free of charge to anyone wishing to receive them, worldwide.

While this was contested by some as creating an unfair competitive advantage for Ubuntu (no other distro can afford such an enterprise), the overall outcome is positive for the mass penetration of Linux as a whole.

Nevertheless, despite its relatively recent offering of commercial support, and the release of a Long Term Support version last year, there is no proof that Canonical has a positive balance sheet. Ubuntu still relies on the wealth of a billionaire, which is hardly a business model.

Novell, Linspire and Mandriva have had disappointing yearly and quarterly results lately, to the best of my knowledge. As Mandriva's CEO François Bancilhon briefly put it before presenting Mandriva's business model, «Further to my knowledge, the only company making money is Red Hat.»

It is indeed very difficult to be profitable by selling support services associated with free, open-source software. The individual customer is unlikely to be willing to pay, and the corporate penetration of Linux is mediocre, especially in North America, where Red Hat is about the only Linux trusted for large-scale deployments, with Novell as a second choice.

Mandriva is left as the only purely European Linux vendor, struggling for profitability, while its recent releases, 2007 and 2007.1 Spring, seem to be less buggy than usual.

8. The package management

Along with the response time in providing security fixes and the supported lifetime, this is a factor often neglected by the "distros for the enthusiasts". As Linux is generally insensitive to the worms and viruses that threaten Windows systems, most home users only care about a newer version of a package when it concerns an application of interest to them. Errata and patches are not their concern. This also allows them to switch distros just because a given release of a certain distro has better hardware support.

Responsible use of a Linux system should consider that (again, along with the errata response time) the reliability of the package management system is of the utmost importance, as the integrity and the availability of the system depend on it. The quality of the provided packages is also important, otherwise automated system updates might break the system (this happens more than once in a while with Ubuntu, which calls into question its suitability for the enterprise).

The good news is that most of the systems use either the Debian APT system (apt-get & friends, Synaptic, aptitude) or RPM-based utilities like yum, Yumex, and YaST. The over-hyped SMART package manager is able to deal with several types of packages and repositories, yet I have difficulty understanding the high level of praise it gets. Because, let's face it: the alleged "RPM hell" is a myth. There is also a "DEB hell" (I have been there myself), and the real problem with unresolved dependencies is the lack of quality of the repositories. This "repository confusion" is even easier to reach by users who follow the friendly advice from the countless Ubuntu-related forums and blogs and add dozens and dozens of unreliable third-party repositories.
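For those who have only ever touched one side, the day-to-day workflow is practically the same in both worlds; the package name below is just a placeholder:

    # Debian/Ubuntu (APT): refresh the package lists, then upgrade everything,
    # with dependencies resolved automatically.
    apt-get update && apt-get upgrade
    apt-get install some-package

    # Red Hat/Fedora/CentOS (yum): the equivalent operations.
    yum update
    yum install some-package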

The bad news is that some distributions seem to take a masochistic pleasure in regularly breaking their package management tools.

This happened in the most unpleasant manner with openSUSE: shifting focus away from the traditional YaST-based package management, Novell brought in the alternative libzypp-based generation of Zen tools. When first released with openSUSE 10.1, they achieved the rare feat of having both package management systems more broken than functional. Things were mostly fixed later, but some disappointed customers left SUSE (that was before the Novell-Microsoft agreement), as it's hard to send a worse message than "we don't know what package manager we want to offer you, and we don't know how to make them work either". Simplicity also suffered from the increasing number of package management tools: rug, zypper, zen-updater, zen-remover, zen-installer…

Fedora Core introduced the new generation of GUI tools, Pup (updater) and Pirut, with FC5. They were both very buggy, and while things have improved in the meantime, their design is horrendous. To me, Pirut is more of a joke than a package manager. The GUI for yum that I liked, namely the version of Yumex that came with CentOS 4 (yumex-1.0.2), was replaced in newer Red Hat distros by either the much slower yumex-1.2.2 or by the unfinished, buggy yumex-1.9.5. Definitely some steps backwards.

Mandriva had the misfortune of shipping its 2007 release with a broken RPMdrake. The command-line tools worked fine, but RPMdrake 2007 was systematically breaking the list of the available, not-installed packages. RPMdrake was fixed several times, and broken about as many times. The last time I checked, it was working fine though.

Apparently, the APT tools are still one of the best choices. Newbies can enjoy the extremely friendly Synaptic, while more advanced users can use the CLI. Well, it's known that the intelligence of Synaptic is limited: no matter how pleasant its interface might be (with a lot of information for the advanced users too!), the only tool that can fix all the problems is the ncurses-based aptitude. Which is horrendous from the usability point of view.

And you guessed right: instead of extending Synaptic with the intelligence bits from aptitude, the Debian people decided to declare aptitude the default package manager. Isn't this a charming decision?

Some will declare the Smart Package Manager the savior. I don't think so: it is marginal, and the case studies that claim to prove its superiority are unconvincing.

9. What does it mean to be stable

Being stable might just mean running without crashes, but it can also refer to a lifetime model involving a STABLE branch, and one or more testing, unstable, or -current evolving branches.

Usually, small distros can't be trusted to have enterprise-grade stability, mainly because of the scarcity of resources, such as manpower. There is also another rule of thumb that usually works: the "original" distro is less buggy than its derivatives.

This holds especially true for Slackware, which is practically bug-free (not counting the -current branch). The derived distros (SLAX, Wolvix, Vector, Zenwalk, etc.) provide some extra polish, customizations, extra packages, and… bugs.

Even Debian’s “testing” branch is usually more stable than Ubuntu’s releases.

What is rather strange is that some distributions have chosen to be "unstable by choice": Sidux, more recently Arch… In a world where the increasing complexity of software makes bug management more and more difficult, choosing to be unstable just for the sake of being "on the bleeding edge" is not my cup of coffee.

This kind of instability doesn't mean the distribution should not behave as expected. Open source has brought the ability to always have the latest version of a software package, either by installing a binary package provided by the distro makers, or by building it yourself from sources. Having an entire distro kept permanently up-to-date is desired by many individual users, and it's also a distinct possibility: free of charge for most. Does this mean it is a desirable approach from a corporate point of view?

It’s rather the other way around. In a business environment where stability and system availability are of the utmost importance (some people really think of “business continuity”, not of spinning cubes), the primary choices are something like Red Hat Enterprise Linux, Debian stable, SLED and SLES, Mandriva Corporate Server and Desktop. What do all of them have in common?

A deliberate approach to stability, that is. For instance, when RHEL4 was released, only OpenOffice.org 1.1 was available. Throughout its supported lifetime of 7 years, RHEL4 will never get OpenOffice.org 2.x, regardless of the improvements it could bring. Only security patches and bug fixes will be applied. When deemed necessary, selected new features are backported to older releases, but the general rule is: do as much as possible to avoid breaking what already works. It might be annoying to be unable to use the new ODT document format, but at least consistency is ensured within a multinational company that uses a given release of an enterprise distro. On occasion, some applications do get upgraded to the next major version: this happened with Firefox 1.0.x going to Firefox 1.5.x with Update 3 of RHEL4.

Stability doesn't mean obsolescence. With RHEL (but also with other enterprise distros), relevant improvements are backported to the existing base system when they're essential for customers. During the lifetime of RHEL4, the included kernel 2.6.9 (a very stable one, BTW) will never get a version upgrade. When Dell included in its Precision 390 workstations the BCM5754-based Broadcom Ethernet controller, which only started to be supported with kernel 2.6.17, Red Hat backported the corresponding changes to the existing 2.6.9 kernel, thus making it possible for RHEL4 to be certified for the Dell products that use the BCM5754. You don't have to worry when buying a new Dell computer: it will work with your software.

This business definition is despised by the regular Linux user, and this might explain why the Arch Linux aficionados were so vocal in defending their "rolling-release, never to reach 1.0" development model.

From a corporate standpoint (and corporate adoption is essential for the survival of Linux!), using Linux doesn't only mean browsing the Internet, listening to MP3s, watching DVDs, and burning CDs/DVDs.

10. Eye-candy: competing with Vista?

This is something I never thought I would experience with something like Linux: deliberately going for shallowness, eye-candy, the superficial.

As a "semi-veteran" in computing, I was expecting Linux to try to prevail through quality and performance, if not through simplicity too (K.I.S.S. is one of the principles of UNIX). This used to happen 10 years ago; why isn't it possible anymore?

Maybe the first sign was when FreeBSD tried to mimic Linux and to be “more user-friendly”. It was rather wrong, and I can only hope they realized the mistake. The gradual recovery of the once legendary FreeBSD stability can be seen with 6.1 and 6.2. Hopefully the trend will not be reversed again.

The point of no return was brought by Ubuntu. More and more IT-illiterate users, some of them unable to find their way in Windows if an icon is moved from its usual place, were now "into the Linux thing". Ubuntu was already a polished, user-oriented Debian; now the demand was to make it even more user-friendly.

What's almost tragic for the accuracy-minded is that in various Ubuntu forum threads over the past 2 years I could read plenty of plainly wrong explanations and speculations regarding the cause of some annoyances, and about "how Linux works". The democratization of access to UNIX-like systems has engendered a lot of garbage from people who know nothing, yet believe they have found the Holy Grail. You shouldn't mistake an Ubuntu forum for Wikipedia.

When polls on popular Linux sites ask "What is the most newbie-friendly distro?", you know you are in the 21st century, when people need nice GUIs, no matter what bugs are hidden underneath.

Improving the user experience is not a bad choice per se; focusing excessively on eye-candy is, however, wrong. No common sense can explain how so many people believe they actually need those dozens of SuperKaramba widgets that make their desktop look like an Airbus cockpit. No common sense can explain why so many people claim they need unstable 3D desktops like Beryl/Compiz, believing that having the application windows on the faces of a spinning cube, or having "wobbly" windows, actually helps them.

I have heard some rational voices too, who confirmed that most of the 3D innovations actually distract them, thus decreasing their productivity. But, you know, you can't fight the masses of teenagers who can't tell the difference between the games they play and a productivity desktop.

Some of those youngsters happen to be Linux developers. Can you trust such childish developers to create quality software?!

Sometimes, you can.

Don't misunderstand me. I have had rather pleasant experiences with both Compiz and Beryl: they worked. AIGLX doesn't always work better than XGL, but at least it doesn't break 3D rendering for games and for anything else that needs it: OpenGL screen savers, animation and CAD/CAE programs, that sort of thing. Functionality has been sacrificed for cosmetics, and totally unnecessary cosmetics at that. I am happy I don't have to use them, but I am unhappy to see that RHEL5 ships with Compiz and AIGLX.

The cherry on top of the cake is the bizarre idea that Linux has to compete with Vista. As if Vista weren't an aberrant OS anyway, the American way of thinking sees everything through a comparison. Linux is not supposed to gain momentum "because it is good", but because "Vista has GUI feature X, and Linux does it better".

And what is Vista, if not eye-candy, GUI, and useless resource consumption? If this is where Linux is heading…

While many minimalist window managers still exist, the very small group of users who believe in the power of simplicity is a shrinking one…

11. The security model

If "everything is a file" in UNIX is common knowledge, it is also common knowledge that the security model in Linux and BSD follows the UNIX model, not the old Windows approach. Constantly using the system as the privileged user is not an option.

A first attempt to make the user's life easier was made by Freespire. Most readers should know that the root account is disabled in Ubuntu, the user needing to gain temporary superuser privileges through the sudo command. Well, as allowed by the architecture of sudo, Freespire is configured to accept sudo without requiring the user to enter any password! (Yes, there is an infamous NOPASSWD somewhere in /etc/sudoers.)
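For the curious, a sudoers entry of roughly this shape is all it takes; I haven't copied Freespire's actual file, but the NOPASSWD tag is the culprit:

    # /etc/sudoers (illustrative sketch, not Freespire's exact file)
    # Any member of the admin group may run any command as root,
    # without ever being asked for a password.
    %admin ALL=(ALL) NOPASSWD: ALL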

Given that a similar mode of operation is available in Vista too, with "Admin Approval Mode" enabled (you could just press ENTER or click a button to accept administrative tasks), I will refrain from declaring Freespire's defaults to be inspired by Windows 95. They can easily be changed anyway.

A much more severe breach of the traditional UNIX security model was brought by the (otherwise very promising) Pardus Linux.

As part of their set of innovative features, a new concept was built around the new configuration manager COMAR: the first user (the one who installed the system) is granted special administrative privileges never seen before. He is practically half-root, because he can perform a wide range of administrative tasks (adding/removing packages, starting/stopping services or the firewall, etc.) without ever being asked for a password!

The Pardus developers explained that this feature could be made optional only with the next release; however, this doesn't change anything at all: the evil was done. For the sake of the user's convenience, basic Linux principles were broken. Should people get used to this, they will ask other distributions to provide them with such a feature.

Sadly, whereas even Microsoft tried to improve the security in Vista and to educate the home user not to use an administrative account, there is a Linux project trying to do exactly the opposite.

Requiring either the user password or the root password before performing administrative tasks was already possible through sudo (also kdesu, gksudo) or an appropriate PAM-based authentication (consolehelper). Granting a user the right to "sudo" without a password was already possible too, although hardly a good choice. A possibly good feature would be to configure sudo to accept a few trivial tasks, such as changing the system time (not critical for home use), without a password, but not more.
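Such a compromise is trivial to express in sudoers; here is a minimal sketch (the group name and the command paths are assumptions, adjust them for your system):

    # /etc/sudoers (sketch)
    # Let ordinary users adjust the clock without a password, and nothing more;
    # everything else still goes through the normal authentication.
    %users ALL=(root) NOPASSWD: /bin/date, /sbin/hwclock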

Irresponsible approaches have turned the highly praised (even by me) Pardus into a black sheep from a security standpoint. At times, open source rhymes with thoughtless design and severe flaws.

Just forget about the name of the distro I just mentioned, although it is going to be a trend-setter. In a few years, by popular request, half of the 500+ Linux distros will have their security features perverted.

12. Hype vs. real needs

Once a business, always a business: Linux is subject to the inflation of buzzwords and false "needs" that you encounter with anything marketable. As Miguel de Icaza told Linux Format two years ago, «Linux used to be hip, but it's lost its hipness. The new thing is Mono – Mono is hip!» So Mono is the new kid in town.

Have you noticed how the IT industry doesn't seem to know where to head? The enterprise stack includes J2EE, but it also includes .NET. Expensive Oracle deployments and endless SAP implementations are still selling, but the quality of the software is no better than in the times of COBOL. The more complex a software infrastructure is, the more painful it will be, yet the ROI is hardly a certainty. Dozens and dozens of CRM and ERP solutions exist, but I have never met a satisfied customer so far.

This is when buzzwords sell best. Because not everyone was happy with Java, and because the .NET technology, which relies on a Common Language Infrastructure (CLI) pioneered by Microsoft, is now an ECMA standard (ECMA-335, ISO/IEC 23271), someone thought that an open-source clone of .NET, namely Mono, would be the best choice for running .NET client and server applications on Linux, Solaris, Mac OS X, Windows, and Unix at large (Mono is a project sponsored by Novell).

Are we really depending on a Microsoft concept to save the enterprise services in Linux? Some people don’t seem to be bothered by that.

Sadly enough, Mono is nowhere close to replacing enterprise Java. Nowhere close. After all these years of fuss and increasing enthusiasm, Mono has not replaced ASP.NET in the companies that use .NET on Windows platforms. The most prominent use of Mono on Linux is C# for standalone GUI projects, namely through Gtk#.

As a matter of fact, the most important result was that GNOME got contaminated with Mono. Having initially emerged because of licensing worries regarding KDE (so it was about freedom!), GNOME is now beholden to a Microsoft-inspired technology! Unbelievable, yet true.

So Mono has not fulfilled the initial expectations. On the contrary, it has made a lot of Linux users dependent on Mono and Gtk# (through Beagle, Tomboy, etc.), whereas excellent solutions for desktop tools already existed, e.g. PyGTK or PyQt.

While Mono is available in Red Hat's community distribution Fedora, the company's flagship Red Hat Enterprise Linux 5 does not include Mono! Is it because Red Hat does not venture into a "Novell business", is it because they can't go for a "Microsoft-designed business", or is it that I am not the only paranoid in town?

Another buzzword much cherished nowadays is virtualization. This is what I call the "Xen-mania".

The IT press is full of reports on how Xen performs under SUSE or with Red Hat. Everybody is "just trying it", yet very few people really need it. Red Hat was initially reluctant about Xen, dubbing it "not ready for production use" (or maybe that was because Novell was the first to offer it), but now they have simply given the market what the market wanted.

When you find a decent number of companies really using Xen, drop me a note. Xen is still a buzzword meant to boost sales, and this works very well in North America, where ingenuous CEOs and CTOs buy Red Hat because "it's corporate Linux", the same way they buy Oracle's broken Linux clone "because, oh, it's Oracle", yet they don't make use of either. They were buying IBM and Microsoft in the past, so now they're buying whatever is trendy. While this can be seen as a commercial success, it is not what Linux needs.

What is really used in the real world is not new technology: chroot jails in Linux, BSD jails, Solaris Zones. They just work, and they're cheaper in resources.
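A chroot jail, for instance, can be put together by hand in minutes; below is a bare-bones sketch (the paths and the library list are illustrative, use ldd to see what your binary actually needs):

    # Build a minimal root for the jail.
    mkdir -p /srv/jail/bin /srv/jail/lib
    cp /bin/sh /srv/jail/bin/
    # Copy the shared libraries the shell needs (list them with: ldd /bin/sh).
    cp /lib/libc.so.6 /lib/ld-linux.so.2 /srv/jail/lib/
    # Processes started this way cannot see anything outside /srv/jail.
    chroot /srv/jail /bin/sh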

On the other hand, while Novell is supporting the Mono project, they have dropped support for Hula, their own open source mail and calendar project. Novell was unable to manage such a simple project, while smaller companies had the skills to provide excellent groupware solutions like Zimbra and Scalix that can successfully replace Exchange (I liked Zimbra, and I came close to installing Scalix).

The crappy management at Novell can also be seen by looking at the most advanced mail client, Evolution. Advanced or not, Evolution lacks an elementary feature present for ages in all the Windows e-mail clients: a tooltip notification when new mail arrives, possibly providing details on the received messages.

It was suggested that I use third-party programs to be notified of new mail, but this is not the point. When a MUA collects the mail, it is supposed to notify me synchronously ("hey, I have new mail for you!"). The third-party solutions check for new mail asynchronously, which means the moment they notify me has nothing to do with the moment when Evolution actually collected my mail.

The lack of focus on basic user needs is deplorable. We can have spinning cubes in Linux, yet Evolution is unable to provide a visual notification when new mail arrives. Even Microsoft is better at that.

The IT press features a stunning mimicry: a recent article, describing how the Kubuntu-derived Pioneer Linux Basic Release 2 fails to differentiate itself enough to have a raison d'être, finds the "supreme argument": Automatix2 (a questionable concept in itself) doesn't miss a few packages, especially the virtualization product VirtualBox. I am positively sure that Al Bundy (part of the target group of Pioneer Linux Basic) can't live without virtualization.

At other times, the needs might be real, but the solutions will disappoint an uneducated user: I am thinking of the new Desktop Search metaphor.

Recent years have brought to the public Beagle, Strigi, Tracker, Pinot, Recoll, Doodle, and a few more Linux equivalents of the Windows-world Copernic Desktop Search, Blinkx, or Google Desktop. These are engines that use a daemon to periodically index the documents on your hard disk, so that a later search provides faster results than a more classic query using find, grep, locate, or the desktop's own "Search in Files" GUI tool.

What’s wrong with the new desktop search tools?

The first wrong move was to have the corresponding daemons started by default in some distros. This generates "mysterious" and intensive extra disk activity during idle periods. To make things worse, SUSE 10.0 replaced GNOME's own gnome-search-tool with a Beagle search in Nautilus, thus breaking the common usage pattern.

Any user expecting accurate results will be disappointed. The file you created or modified a few minutes ago, the new mail, and whatever else has not been reindexed will be missing from Beagle's result list, or will show the wrong contents. This is because, contrary to the good ol' grep, which searches every possible matching file, Beagle only looks in its database of indexed files (usually your home directory, plus your Evolution mail). When they get indexed, that is!
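The classic tools, by contrast, look at the live filesystem, so the file you saved a minute ago is always found; a couple of typical queries (the directory and the pattern are just examples):

    # Files under the home directory modified in the last 30 minutes.
    find ~ -type f -mmin -30
    # Every file below ~/Documents whose contents mention "invoice".
    grep -rl "invoice" ~/Documents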

Personally, I just cannot rely on results that depend on a database of (not yet) indexed data instead of a real search. These "approximate" searches have the advantage of speed, and they can take into account metadata from several types of files. The "meta-search" capabilities come with a price though: the price of ruining the image of Linux as an accurately working UNIX system.

The last time I used Beagle, I couldn't persuade it to reindex a file that had been modified two weeks earlier, so the search results showed a preview with non-existent contents.

All the wrong solutions to unnecessary needs seem to come from a Windows-like approach to computing.

13. Our friends are our foes

Sun Microsystems is a friend of the open source community, and a friend of yours. After all, aren't they the originators of OpenOffice.org? They have also decided to open-source Java under GPLv2, they have open-sourced Solaris, and now Ian Murdock (the creator of Debian) is with Sun, to help them understand what the general public expects from an operating system.

Well, maybe. I have not used OpenSolaris as embodied by GNU/Nexenta, and BeleniX was not very convincing, although it was fully functional. I have, however, read the Entitlement for Solaris Express, Developer Edition. They say it's an "OpenSolaris-based distribution", yet it's still partially proprietary. Fine so far, but have you noticed how outrageously threatening and absurd some provisions of this EULA are?

E.g.: «(iv) You will indemnify and defend Sun and its licensors from any claims, including attorneys' fees, which arise from or relate to distribution or use of Developed Programs to the extent these claims arise from or relate to the development performed by You. This Section 2.0 does not apply to the Sun Java etc.» Me, John Doe, defending and indemnifying Sun?! In a future life, maybe.

More threats: «11.2 It is understood and agreed that, notwithstanding any other provision of this Agreement, any breach of this Agreement may cause Sun irreparable damage for which recovery of money damages would be inadequate…» If money is not enough, maybe they want my life, right?

Or, something that not even Microsoft has thought of: «14.0 During the term of the SLA and Entitlement, and for a period of three (3) years thereafter, You agree to keep proper records and documentation of Your compliance with the SLA and Entitlement.» Wow. Three years after the 6 licensed months, the FBI, Interpol, the United Nations and the Martians will come to my door and arrest me for not having kept proper records to prove I am not guilty of any infringement of this EULA! A fine blend of freedom. Orwellian freedom, that is.

Also from our friends at Sun, there is a fine document: Solaris is a better Linux than Linux. I wouldn't mind if Solaris were a better Linux than Linux. Maybe it is, actually. What I do mind is the way they're trying to promote their OS: through lies. There is no way to have OS support fees of $10/mo for Solaris versus $106.67/mo for Linux! They simply took the most expensive RHEL option, as if there were no other choice!

With Linux, costs start at $0/mo, pass through $50/yr (SLED) or $80/yr (RHEL Desktop), and only go up to $2,499/yr for the most advanced service option available.

They manage to forget about the 10:1 cost ratio elsewhere on their web site, which says: «Solaris 10 Subscriptions up to 50% less than equivalent offerings from Red Hat.» Now, that's a different story.

So much for affinities in business. Or ethics, as the bare minimum. It's only business, dude.

14. You can leave your hat on

Believe it or not, some Linux actors aren't always entirely in good faith. For a fistful of money…

As IT professionals should know, when a certain OS platform is certified for a given database, this happens for a particular version of the OS, and for a particular version of the database server. This should be obvious anyway.

When Ubuntu 6.06 LTS was released in June 2006, Canonical immediately issued a press release in which, among the praised features (a LAMP solution 15 minutes away with Ubuntu Server Edition, support for up to 5 years), they said: «The Ubuntu server platform has been certified for IBM's DB2 and MySQL.»

At that time, the IBM Recommended and Validated Environments for DB2 9 did not mention any Ubuntu release, and the only relevant certification was in the Recommended and Validated Environments for DB2 V8.2. That is, DB2 v8.2 was "Validated" on Ubuntu 5.04 ("Recommended" means IBM actually builds, tests and deploys on them; "Validated" means IBM certifies full compatibility and gives official support).

And here comes the truth: "Ubuntu server" first appeared with version 5.10, so the claim about "the Ubuntu server platform being certified for IBM's DB2" can't refer to the 5.04 certification. Furthermore, there was no official record of IBM supporting DB2 on any newer Ubuntu release, so basically Canonical was lying: neither Ubuntu Server 6.06 LTS nor Ubuntu Server 5.10 was certified for DB2!

In the meantime, Ubuntu 6.06 LTS has been validated for DB2 9, but this happened several months later. The lie stood unnoticed (or nobody cared about the deceptive advertising). Mr. Shuttleworth simply ignored my questions on the matter. What would happen if all the mainstream Linux distros simply lied to increase their market share?

It's only a coincidence that, while I am writing this, Canonical has announced a tighter integration with DB2: the latest IBM database will now download and deploy easily from the Ubuntu desktop. This time entirely true, we hope.

Also blamable on Canonical is the use of the (partially) closed-source Launchpad for managing the bugs in Ubuntu. As if Bugzilla, Trac and the other open-source bug tracking systems were not good enough, or as if open source can't be trusted enough. All this, coming from the distro ranked #1 on DistroWatch! Far from being the right message to send to the community…

Another unexpected surprise comes from Novell, the open source company. While the EU Europass site offers its downloads as both Microsoft Word and OpenOffice 1.1 documents, the Novell Buying Programs page has all the price lists as XLS files! None of them is duplicated as either ODT or SXW. This speaks volumes about their “integration” with Microsoft. In the times of NLD9, they had the objective of having all Novell employees on Linux and OpenOffice by Q1 2005. Or maybe not.

A last glitch comes from the real #1, Red Hat itself. As part of their Patent Policy, there is a Patent Promise. It’s not difficult to notice that the BSD and MIT licenses are missing from the list of the Approved Licenses: «GNU General Public License v2.0; IBM Public License v1.0; Common Public License v0.5; Q Public License v1.0; Open Software License v1.1.»

Once again, a wrong message.

15. Documentation, at large

In the summer of 2005, Alan Cox admitted in an interview that we do indeed have reasons to be dismayed by the poor state of the documentation of most open source software. What is worse, this is unlikely to change: «I don’t think anyone’s going to do a grand initiative.»

As predicted, things have not changed since then. Currently, the best documentation on open-source software is The FreeBSD Handbook, and second-best is The OpenBSD FAQ.

Quality end-user Linux documentation is issued by Red Hat and by Novell, but it is often far from the quality of the old-style UNIX printed manuals. KDE and GNOME always ship with incomplete, obsolete and pretty useless help files (thank you, I know how to read a text label, maybe I wanted more), and Linux has gradually lost the classical respect for man pages, a long-time UNIX tradition.

For quality man pages, you need to go to the *BSD land.

Let’s switch now from the end-user or the system administrator to the guy who has to decide which Linux distro your company will buy and use. Is enough first-hand information available? You know, a few clicks away, as we are in a global Web world.

To our surprise, the major actors are disappointing in this department. Should you want to buy SLED, you can’t find out what exactly comes with it! In other words: what are you paying for? Is it more than a plain desktop? I dare you to find an easy link to that information. You have to go to the package list page to notice that the Samba server is included. Unfortunately, things are not any better with RHEL.

A conspiracy theory would say that the lack of proper documentation is deliberate.

When you purposefully design software that is difficult to use, lacks documentation, and/or is nearly impossible to install and configure, you are doing so to create opportunities to sell support contracts and consulting services. When the beauty of the GUI is only skin-deep, here comes the big, fat support contract.

Leaving conspiracy land, I bet both Novell and Red Hat suffer from Microsoft-like arrogance: we are #1, come and buy from us, you don’t really need all those details. Our solution is what you want anyway, let’s drop the technicalities. Or you can have one of our sales people call you.

Is it that hard to put up a page with all the relevant info, within easy reach? Or are the buyers deemed that stupid?

The pleasant surprise comes from the French Mandriva. Their Corporate Server 4 has a very practical specifications page, and the accompanying Product Sheet is the best product presentation I have seen in my whole life! All the relevant info you need to take a decision is there, in a good presentation, just the way you’d expect it to be. (Incidentally, it’s a good server indeed.)

Fortunately, there is life outside North America too, contrary to the common belief.

16. Fixing bugs by not fixing them at all

With the current complexity of software, having lots of unfixed bugs shouldn’t be a major concern. All the publicly-available Bugzillas are not there to show you how weak open-source software is; on the contrary, they allow for a better interaction with the developers. The fact that Microsoft and Oracle don’t have any bug tracking system open to the public does not mean their products have fewer bugs.

There are however some questionable reactions to some particular bugs.

Let’s take a popular media player for KDE: Kaffeine. For a long while, it was the default media player in many distributions, including Mandriva. Then it stopped being the default in Mandriva. The reason? An old bug that allowed Kaffeine to crash Konqueror under certain circumstances.

Thread-safe or not, a good design of Konqueror should not allow it to crash when a child process goes astray. Any crash of Kaffeine should be tolerated by Konqueror; otherwise it follows the Windows approach: when Explorer crashes, everything goes wild.

Things can be even worse. At some point, I managed to make Kaffeine crash the whole X server with a beta version of Kubuntu, which is definitely something that should never happen with Linux (it’s more like Windows Millennium Edition to me). The phenomenon stopped happening with a later update of the system, but nobody has a clue what happened and how it was possible.

The “embedded Kaffeine crash” bug was fixed mid-February, but only in theory. The fix involves the use of xcb (in xine), so the actual bug was not fixed (bug meaning improper design or improper error handling here). Obviously, the Kaffeine you might be using proudly carries the eternal bug.

Is this a good enough reason to change the default media player in a KDE version of the latest Mandriva from a KDE player (Kaffeine) to a GNOME player (Totem)? When the user requires a KDE environment, is it fair to provide him with a mixed environment?

Another questionable decision, also in the latest Mandriva, was to remove Klipper from the list of applets automatically launched with KDE. Once again, it has been said that Klipper (a clipboard manager) has a severe bug in relation to Konqueror. To avoid some unpleasant crash, it is not started by default.

Klipper is one of the small tools that make the difference when it comes to KDE vs. GNOME debates. Each and every KDE user will want it started, and will most likely start it manually on systems where it’s missing. Vanilla KDE starts it by default. The default KDE installations of both Red Hat Enterprise Linux 5 and Fedora Core 6 have Klipper started by default. Why then is Mandriva irritating its users, just to avoid some rare crashes?

Sweeping the dust under the carpet is one way. Another innocent way to avoid fixing a bug is to concoct an ugly workaround, as with a printing hack in KPDF 3.5.7. Not bad from a practical point of view, but it doesn’t address any bug. Nobody will ever try to fix the bug if an ugly hack is available, will they?

17. The Debian kindergarten

If Richard M. Stallman and the Free Software Foundation are known for having unmitigated views on what is “freedom as in free speech”, not “freedom as in free beer”, the Debian Free Software Guidelines are also known to take some extremist views with regard to what is free and what is not.

I used to call this license-Nazism. For instance, no matter that the GNU Free Documentation License is an FSF creation, Debian is more Catholic than the Pope, and only considers it conditionally free!

The famous democratically-organized, legally-aware, freedom-committed Debian Project has serious problems with the idea of trademark licensing and protection. It seems to have problems understanding its own license.

Last year, Linux New Media AG from Munich (the publishers of Linux Magazin, Easy Linux and Linux User) issued at CeBIT 2006 an extended version of Debian Sarge, compiled by German members of the Debian Project to include packages, or versions of them, not present in the official Debian Sarge: X.org, SpamAssassin, Firefox 1.5 and more (Sarge has Firefox 1.0.x and uses XFree86, not X.org).

This edition was still labeled “Debian Sarge”. And they had no problem with that!

To force a simile, it’s like you’re a customer of Mercedes Benz, and you notice that a modified car is still labeled “Mercedes” (and not “SsangYong, powered by Mercedes”, but simply “Mercedes”). And Mercedes is doing nothing about that issue. To them, it’s even OK to have the fake car labeled “Mercedes”, as long as it’s not labeled “Genuine Mercedes”. Wouldn’t you be offended that Mercedes doesn’t care about that? Would you keep buying from Mercedes, as long as they don’t care to protect their trademark? Probably not. How would you tell which Mercedes is genuine and which is not?

Debian fails to understand that even the user of a free product is still a customer. Red Hat understands that, and it would react in its defense if a modified Fedora Core 6 were still labeled “Fedora Core 6 Zod” instead of something like BLAG, for instance.

It should then come as no surprise that Debian failed to accept Mozilla’s Trademark Policy. While everybody else had no problems with it, the Debian people found it in violation of DFSG #8, so they had to come up with a whole different zoo to “fix” the bug: you no longer have Firefox, Thunderbird and SeaMonkey in Debian, but Iceweasel, Icedove and Iceape.

Would you rather have a web browser with 15% of the market, or 15 privately branded Firefox-based browsers, each with 1%? Apart from learning more about the Earth’s fauna, I can see why many people got angry about the renaming of the Mozilla products shipped with Debian.

Going to the kindergarten now. Dunc-Tank was an experiment to see how targeted fund raising could improve Debian, and whether Debian 4.0 Etch could be released on schedule, on the 4th of December 2006 (which of course did not happen).

It wasn’t all about money though. As the then DPL Anthony Towns revealed, «Steve Langasek, who is one of the release managers, was basically working 18 hours a day on his day job to do PHP coding, and he wasn’t having enough time after that to look into some of the release critical issues that we were having.» Just how bad can project management be, when you appoint as Release Manager someone who is already busy 18 hours a day with something else?!

It can be understood that the majority of Debian developers were not happy with the decision to have someone be paid for work the rest of them are doing for free and for fun. There is nevertheless a long way from disgruntlement to the kindergarten-like mutiny that was the Dunc-Bank, «an experiment to see how aggressive bug reporting can delay the release of Debian Etch. We hope that by finding more and more RC bugs in Debian we can delay Etch…»

If this is not deliberate sabotage in your book, we’re definitely playing different games.

The Dunc-Bank sabotage was initiated by Sam Hocevar, a well-known Debian developer. Then, Sam Hocevar was elected as the new DPL for 2007! It’s useless to ask “how many voted for Sam”, because the Debian elections use an advanced Condorcet voting system with Schwartz Sequential Dropping, to guarantee that the winner is the candidate that is the least hated, if I am allowed to put it this way. That means Sam was the best choice for Debian, as expressed by the voice of the Debian developers. In my opinion, Sam is the worst thing that could happen to Debian, and a clear sign that Debian is going nowhere.

This is suicide for Debian. Sheer suicide. As Sam was elected by the Debian developers, Vox Populi, Vox Dei.

Debian will continue largely unaffected by this decision, but electing the main saboteur as your leader ruins any last attempt to trust the Debian Project.

You will trust Debian even less once you find that, while its repositories include plenty of obsolete or orphaned packages, the Debian New Maintainer page currently lists 102 Applicants in Process, and an applicant can wait up to 814 days to get through all the procedures!

18. Freedom and myths

Our friends, the software patents, impose known restrictions on what a Linux distro can ship. The Digital Millennium Copyright Act makes its own contribution, so that the only legal solution (especially in the United States) to play proprietary media formats or zone-coded DVDs is to use commercial software on top of your free-of-charge Linux distro.

GNU/Linux (the way Richard Stallman likes to call it) is primarily about freedom, so that a Linux user should ideally refrain from playing MP3, MPEG, AVI, WMV or WMA files or streams. He or she should also not be using closed-source software such as Acrobat Reader, Flash, RealPlayer, or maybe Opera too.

I fully agree that everyone should be encoding in OGG instead of MP3, but what to do when this is not the case? Does everybody have such a “high conscience” as to accept only the utmost expression of freedom, and to simply refuse to watch the Web clips on sites like CNN? (Swfdec recently became able to play YouTube videos.)

There are not so many heroes in real life. Probably 90% of the individual users are using what should be called “illegal codecs” (some small distros not backed by any company even provide them out of the box), and some corporate users (and some of the individuals too!) have chosen to pay for the commercially-licensed players offered by Mandriva, Turbolinux, and Linspire.

Freedom is also a matter of choice. Practical choice, not revolutionary programs. This makes the “purified” gNewSense distro sponsored by the FSF a political product, and nothing more. Fortunately, the future Ubuntu 7.10 Gutsy Gibbon will feature a new flavour of uncompromising freedom that renders gNewSense pointless, for it «takes an ultra-orthodox view of licensing: no firmware, drivers, imagery, sounds, applications, or other content which do not include full source materials and come with full rights of modification, remixing and redistribution».

At the opposite end of the rainbow, another myth appeared. Click’N’Run, the service provided by Linspire, is supposed to soon offer a universal repository, starting with Debian, Fedora Core, Freespire, Linspire, openSUSE, and Ubuntu. Both free and commercial packages will be offered.

Providing packages for the whole Universe is not a trivial task at all, and one may wonder whether Linspire has the required manpower to ensure proper QA so that updating from CNR doesn’t break the system. Beyond that, there are some high expectations that will not be met.

A good deal of the users who currently rely on third-party repositories like the Penguin Liberation Front cherish the unexplained belief that CNR will rid them of all the annoyances they have been having, by providing them with packages not in the original distro of their choice.

Sure thing, this may happen, but for a price! What is currently patent-encumbered will fall under the commercial offerings umbrella: the “harmless” packages will be free, but the others might cost you fees like $49.95. Many users are not aware of that.

As the MP3 (MPEG-1 Audio Layer 3) patent status is the fuzziest of all, it is also the audio format that offers you more than a single choice. There are even two free ways to play MP3 legally.

The first one comes from Mandriva: all their distros are MP3-aware out of the box, even the Free download editions. As part of a commercial deal, they can offer you legally-licensed MP3 decoders for free.

The second choice is from Fluendo, whose Web Shop not only offers paid decoders for Dolby AC3, MPEG-4 Part 2 Video, MPEG-2 Video, Windows Media Video and Windows Media Audio (the complete set of playback plugins for GStreamer is priced at EUR 28), but also offers the MP3 plugin free of charge!
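
Once the Fluendo decoder is dropped into the GStreamer 0.10 plugin path, any playbin-based player (Totem or Rhythmbox, for instance) picks it up automatically. A quick command-line check, sketched here with a hypothetical file path:

    # list the MP3-capable GStreamer elements currently installed
    gst-inspect-0.10 | grep -i mp3
    # play a file through the automatic playbin pipeline
    gst-launch-0.10 playbin uri=file:///home/user/sample.mp3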

I bet everyone is willing to pay royalties for a mathematical method or for a file format. Feel free to do it.

There is one more thing that might make the defenders of freedom feel uneasy. Fedora had the intention to provide Totem with a CodecBuddy feature that would inform the users that restricted formats cannot be played, and direct them to the legal choices provided by Fluendo. As Red Hat and the Fedora project are American subjects, they cannot afford to be accused of contributory infringement of any kind. See a possible screenshot here.

And still, the MP3 playback plugin can be downloaded for free from Fluendo.

What is wrong with this approach? What would be extremely annoying to me (should CodecBuddy behave as described) is to see how Fedora, traditionally a freedom-oriented, neutral distro, would become nothing more than a front-end to the Fluendo Web Shop! I can predict that some FSF-enrolled people will stop using Fedora the very next day.

19. The kernel 2.6.20

Somewhere on the road from 2.6.19 to 2.6.20, a major change happened: «First, the big change is the migration away from the crusty old parallel ATA drivers to shiny new ones that use the same libata infrastructure as the SATA drivers. A side effect of this is that /dev/hda becomes /dev/sda. This isn’t a problem if you’re using ‘mount by label’ (which has been the default in Fedora since forever). If you aren’t, well, it’s going to be fun.»

Manually compiling a 2.6.20 kernel took some people by surprise, when «the harddrive had switched dev file from hda to sda and the DVD drive had taken the place of hda».

As a matter of fact, I have experienced some moody behaviors with Ubuntu Feisty (Beta):

  • kernel 2.6.20-12: IDE (PATA) /dev/hda goes /dev/sda
  • kernel 2.6.20-13: IDE (PATA) is seen again as /dev/hda
  • kernel 2.6.20-14: IDE (PATA) /dev/hda goes again /dev/sda

The /dev/hdb IDE CD-ROM is seen as /dev/scd0 by the unified driver.

My first reaction was to think that, starting with 2.6.20, the Linux kernel should have a new tagline: «How Do You Want Your Devices To Be Called Today?»

And indeed, forcefully treating a device as a different kind of device, and changing its identification in the process, is revolting: an IDE PATA hard-disk and an IDE CD-ROM would never be assigned SCSI-like device names in operating systems such as FreeBSD or Solaris!

At least, device names have a meaning in BSD-land.

I agree that using a SATA driver with a PATA disk works, but why force it? I thought the philosophy of forcing the users was a Microsoft one, not an open source concept. Even so, when Microsoft did a similar thing in Windows NT, replacing the “scsidisk” driver with “disk”, it went the other way around: a more generic name was adopted, which kept the statement true. Any SCSI disk is indeed a “disk”, but not all disks are SCSI disks!

Chuck Ebbert from Red Hat told me that dropping the old PATA driver was justified, as there were horrible IDE bugs with hot-pluggable drives. However, there is a problem: «The real problem is that SCSI only supports 15 partitions per disk while the old IDE driver supports more (31?) Nobody has an answer for that one…»

To me, this is just another proof that the decision to treat everything through the SCSI driver was a hasty one. Automated updates of a production system that result in hdX becoming sdX will break things like automated backups, mounts, NFS exports, and even RAID.

Mounting by label is supposed to be immune to device name changes, and Red Hat distros use it by default. There are, however, two cons to that answer.

First, some other distros (Ubuntu) use udev-provided UUIDs in GRUB instead of LABELs when running on the SATA driver (root=UUID=c18758e8-a5c2-4921-816a-f235542574a2). One size does not fit all.

Secondly, there is a lot of software that will still break, because it uses the more intuitive device names. Examples include raidtools (raidtab) and mdadm (mdadm.conf): they both use /dev names and will need manual fixes.
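
To illustrate what breaks and what survives, here is a sketch (the device names, labels and array layout are hypothetical; the UUID is the one quoted above):

    # /etc/fstab -- device-name entries silently break once hda becomes sda:
    /dev/hda1   /       ext3   defaults   1 1
    # label- or UUID-based entries survive the renaming:
    LABEL=/     /       ext3   defaults   1 1
    UUID=c18758e8-a5c2-4921-816a-f235542574a2   /home   ext3   defaults   1 2

    # /etc/mdadm/mdadm.conf -- raw /dev names still need a manual fix:
    DEVICE /dev/hda1 /dev/hdc1
    ARRAY /dev/md0 devices=/dev/hda1,/dev/hdc1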

The irresponsible changes in the Linux kernel contrast with the conservative approach of the *BSD family of operating systems, whose wisdom says that something that worked yesterday is not supposed to break tomorrow.

Unfortunately, the kernel team does its job and makes its decisions without first polling the mainstream distro makers, even though some of them (Red Hat, Novell) employ kernel developers. This is decentralized software development.

Coming after a 2.6.19 kernel that broke all distros older than ten months, I wonder whether you’re supposed to expect less every day when running Linux. I for one am expecting more.

Therefore, 2.6.20 might be the sign that you should switch to *BSD.

20. 2.6.16 and 2.4.35: does anyone care?

For the stability-minded people (remember, the Linux kernel does not have a stable API), it was established that 2.6.16 would be the stable kernel for 2 or 3 years. Still, no stable API/ABI for external modules!

The 2.4 kernel has a stable version in 2.4.34, maintained by Willy Tarreau. More than three years ago, Marcelo Tosatti (the 2.4 maintainer at the time) said that after 2.4.24 was released, the 2.4 kernel would go into maintenance mode, but changes continued to appear. Ironically, the 2.4 kernel is still used by the recently released Debian Etch, and Slackware 11.0 is the last Slackware release to have it by default (the next release will completely drop support for 2.4).

Some people seem to still care about 2.4. Does anyone care about the long-term supported 2.6.16? I am afraid not: RHEL5 shipped with 2.6.18 (and will stick to it for 7 years), and Debian Etch has chosen 2.6.18 too.

How telling of the good coordination between the major distributions and the kernel team!

21. KDE vs. GNOME

Go to any Linux forum or mailing list and you will be able to start a “KDE vs. GNOME” flamewar without the slightest effort.

When KDE was founded in 1996, the cross-platform Qt toolkit was chosen for its richness of features. Qt is still believed today to be the best toolkit available, although it’s worth mentioning that GIMP, the open-source rival of Adobe Photoshop, is using the GTK+ toolkit, which constitutes the foundation for “the other major desktop environment”, GNOME.

Due to some concerns expressed by the Free Software Foundation with regard to the then non-free Qt license, GNOME was launched in 1997 by the GNU project, to provide a really free desktop environment. In the meantime, Qt has changed its licensing model, and starting with 4.0 a GPL edition is available for Windows too, yet a “friendly competition” still opposes KDE and GNOME.

Miguel de Icaza, the initial GNOME project leader and the creator of the popular Midnight Commander clone of Norton Commander, is now more interested in the development of Mono, which explains in part the contamination of GNOME with Mono.

While KDE has decided to only provide the stable 3.5 branch (currently 3.5.6) with minor bug-fixes and improvements, as the upcoming KDE 4 will be a major breakthrough, GNOME makes slow but steady advancements. Nobody knows what an illusory GNOME 3.0 might bring (there are no plans for 3.0), but the traditionally under-featured GNOME seems to have reversed the trend: at least for the time being, it shows visible changes from one stable version to the next.

Red Hat was originally “GNOME and proud to be so”, whereas the European distros focused on KDE. It has been said that GNOME is business-oriented, and KDE is aimed at cutting-edge technology, but things have gradually changed in the last 2-3 years.

As a consequence, two traditionally KDE-centric Linux distributions that used to consider GNOME as “just an attachment” have decided to grant it a higher role, almost unbalancing themselves.

SUSE was the first to change its focus to GNOME, after having been acquired by Novell (GNOME is more popular in the United States, whereas KDE is more popular in Europe). At some point, panicked users thought that Novell had decided to support only GNOME as its official desktop. It was later clarified that both desktops are important to Novell, but in the meantime Hubert Mantel, co-founder of SuSE, had already left the company. He returned to Novell after the controversial Novell-Microsoft deal.

As the buyer of Ximian, Novell acquired the know-how to play an active role in the development of GNOME too. Evolution, now the default personal information manager and workgroup information management tool in GNOME, is an important asset of Novell’s as well.

Mandriva was also tempted by GNOME, and they have managed to provide a well-polished GNOME desktop, on a par with the KDE one.

The less positive part of the never-ending competition between KDE and GNOME is that it is not like the competition between other window managers or desktop environments (XFCE includes enough tools to be considered a desktop environment, and its latest file manager, Thunar, is excellent; yet it is unable to draw transparent icon labels on the desktop, which is surprisingly anachronistic nowadays). You will never see passionate quarrels on whether Fluxbox or WindowMaker “rulz”, but almost every KDE-GNOME dispute gets flaming and political.

It’s even easier now that GNOME has Mono.

As I have personally switched from FVWM and WindowMaker to GNOME, then to KDE two years later, I can tell it’s not that easy to make the switch.

Basic differences include the reversed order of the OK and Cancel buttons, or the immediate effectiveness of the changes in GNOME, versus the need to Apply them in KDE, all idiosyncrasies inherited from the widgets or from the philosophy of the respective desktop. As for the look and feel…

The default graphical theme in KDE looks more frivolous than in GNOME, and this is one of the reasons I rejected it for a long time. On the other hand, GNOME only started to matter since version 2.0, because the previous GTK+ 1.2 had some of the ugliest possible widgets.

An annoying issue appears when, in one desktop environment, you run an application that uses the widget library of the other: the lack of consistency. If you run KDE and use Evolution, it can look very plain, and it won’t be themed along with the rest of the desktop. Telling KDE to theme GTK+ applications to mimic the KDE style gives, however, better results than configuring Qt to have an acceptable look under GNOME.

Seven years ago, Richard Stallman declared, on Qt, the GPL, KDE, and GNOME: «GNOME and KDE will remain two rival desktops, unless some day they can be merged in some way.» FreeDesktop.org was then formed to encourage cooperation among open source desktops for the X Window System. So far, feasible projects such as the XDG menu specification or D-BUS have been adopted by the mainstream, whereas the illusory Portland initiative has failed to set “common interfaces for GNOME and KDE” beyond menus, MIME types and other small items. Important, but unimpressive from a distance.

Another salutary initiative, the Tango Desktop Project, was more successful in unifying the look of the individual icon sets as a first step to create a common look and feel.

A unified desktop might help a better mass adoption of Linux, and it would also end the rivalry between the fans of the two desktops. Once KDE 4.0 is released, I am afraid the “KDE vs. GNOME” flamewars will nevertheless be revived.

22. Alice in BSD-land

The BSD family of operating systems comes in fewer flavors than Linux. The major derivatives of 386BSD and 4.4BSD-Lite are, in chronological order, FreeBSD, NetBSD, and OpenBSD, with an honorable mention for DragonFly BSD. Desktop-oriented FreeBSD spin-offs like DesktopBSD and PC-BSD are still struggling to acquire a production-ready status, yet they provide a very pleasant desktop experience. Needless to say, BSD operating systems can run Linux 2.4 binaries natively.

My first encounter with FreeBSD was after I was introduced to Linux. I guess it was in the times of FreeBSD 2.0.5 and 2.1.5. The BSD-like init system of Slackware helped me to find my way in FreeBSD. I don’t remember very much of it, but I know I liked it. Maybe this was because it was really STABLE.

Even before that, I had the joy to discover NetBSD 1.0. It was small, it was interesting, and it looked promising.

At some point I was distracted by Red Hat 3.0.3 and 4.0, and I lost the focus on *BSD.

Created to provide software that may be used for any purpose and without strings attached, the FreeBSD Project had a clear license, no matter whether it’s the 4-clause or the 3-clause one. Simply put, I love the BSD license because you can summarize it to anybody in a simple way: “Do not claim that you wrote this, and do not sue us if it breaks.”

15 years after my first contact with the GNU General Public License, I still have problems finding the coherence in it. The never-ending debates on how the Novell-Microsoft deal squares with either GPLv2 or GPLv3 (draft), as well as the confusion that reigns over the legality of binary kernel modules (OK, if they’re illegal, why doesn’t anybody enforce the GPL?), prove that the GPL is a confusing license. The best digest of it would be: “the sources better be available somehow”, but the arbitrariness of the interpretation of concepts like the “written offer” and the ambiguity about what a “derived work” is have consumed so much energy that a supernova could have been born.

Unlike the governance model used for the Linux kernel, that of a benevolent dictator for life in which Linus Torvalds approves the modifications he likes, FreeBSD has a democratic model that uses a core team of 9 members, plus several other carefully sized teams. The governance model does matter, because the FreeBSD project is responsible for the whole operating system, not only for the kernel, as is the case with Linux. So no, FreeBSD does not have a closed development model, only a mature one.

Why are FreeBSD & friends less used than Linux? The discrepancy is even more visible among the general public, and the desktop usage of FreeBSD is surprisingly low, regardless of the fact that the popular cross-platform desktop environments work on *BSD just like they do on Linux.

The detractors of the BSD license blame it for the alleged slowdown of *BSD compared to Linux: when there is no guarantee that modified versions of your code will be returned to the community, fewer people are motivated to contribute. I have to confess that I fail to understand what this is all about: after all, you should write proprietary, closed-source code if having someone else use it is a problem to you. Yes, commercial usage should be allowed too, otherwise what’s the meaning of the “open” part in “open source”?! Or maybe, as Clinton once said, «it depends on what the meaning of “is” is.»

It’s rather that FreeBSD long resisted the temptation of making things “easier” in order to entice Windows users. It’s not elitism, as nobody said “we only need smart users”; it’s responsibility.

Unfortunately, starting with 5.0, the quality and stability of FreeBSD have deteriorated, or maybe it was my hardware that was less supported. Compromises have been made to match the popularity of Linux, and the price to pay was rather high.

Starting with 6.1 and 6.2, I felt like FreeBSD started to recover, and I hope the future will prove it a stable, rock-solid operating system once again.

The second-best is not second-best anymore. As Charles Hannum, one of the NetBSD founders, wrote last year, «The NetBSD Project has stagnated to the point of irrelevance. It has gotten to the point that being associated with the project is often more of a liability than an asset.»

I can’t make any judgments on that one, but I noticed that NetBSD 3.1 is a step back in terms of project management compared to 3.0. With 3.0, a second CD image was available, offering extra binary packages. This ISO file is no longer available for 3.1, and the FTP tree is messy and inconsistent. The i386 architecture seems quite neglected nowadays.

Fortunately, there is OpenBSD too. It is a hard nut, and a “take it or leave it” affair. Theo de Raadt, the founder and leader of OpenBSD and OpenSSH, is known to have a strong personality and a real repugnance for closed-source code. Unlike FSF’s Richard Stallman, Theo is credible in his approach, and doesn’t like any kind of strings attached, GPL included.

And for good reason! The viral character of the GPL means that everything that mixes with GPL code can only be licensed as GPL. As it was once said: the BSD license protects the freedom of the users, whereas the GPL protects the freedom of the code.

The traditionally cold relations between Theo de Raadt and the Linux community experienced a burst of bold reciprocal accusations when an OpenBSD developer was accused of GPL infringement and code stealing by the developer of a Broadcom bcm43xx driver licensed under the GPL (the full thread).

To make a long story short, the controversial driver code in OpenBSD was not meant to be put in the main code tree — although it was indeed publicly available, it was there just to be looked at and modified in place, so that a working driver could be created in the end. No “stealing” was ever intended, and getting your inspiration from GPL’ed code with regard to data structures, finite state machine models, and whatever else was not documented by Broadcom is not theft.

The dispute got uglier than necessary; however, simple conclusions like «The lesson is simple. Communicate and observe the licenses. There is no other way to put it.» are way too simple. The actual dispute, generated by the incompatibility of the GPL with the BSD license, looked more like a dispute over software patents, with the GPL playing the part of the patentee. There was a definite lack of common sense and humanity on the Linux/GPL side.

While it is the most secure operating system (in the default installation), OpenBSD might not be the best choice for everybody. It releases twice a year, with mathematical precision (unlike Fedora or Ubuntu), but this also means you should upgrade the system: older releases are not supported for long. The FAQ says that «old releases are typically supported up to two releases back.» This does not guarantee 12 months, and the vague phrasing makes people believe that OpenBSD is only supported for 6 months.

While you don’t “have to” upgrade every 6 months, it is certainly to your benefit to do so. There is great wisdom in this approach, because although upgrading is always a stress, OpenBSD advances in small, non-volatile steps that take the hassle out of upgrading. Each release is a large collection of small changes. Also, where other projects are consistently down to the wire with the release date, too frequently pushing an RC out the door as golden master simply because they have delayed things too long and need to start selling, OpenBSD usually has a release finished and ready to go almost a month before the release date.

OpenBSD is not a one-man project: there are some 170 contributors around Theo de Raadt. The engineering team is usually busy with creating drivers and reverse-engineering undocumented chips. Also, any external code passes severe quality checks before it can enter OpenBSD.

This is the team who brought to the whole world the open-source implementation of SSH: OpenSSH. And this is the team who created the OS with the most Spartan installer currently in use.

23. Shooting yourself in the foot

Shooting yourself in the foot is not unusual in the OSS world. If it’s not the future Debian Project Leader trying to delay the next stable release as a sign of rebellion, it’s Ubuntu breaking after a system upgrade. Or it’s Eric S. Raymond leaving Fedora after 13 years, because he cannot stand the alleged “incompetent repository maintenance” that «will condemn Fedora to a shrinking niche in the future.» This only makes sense when you find out that ESR joined the Freespire leadership board, so that joining the Ubuntu camp was only logical. And shameful.

The first major fork that bewildered me was the incredible story of X.org being born out of XFree86, over a change in the license of XFree86 4.4.0 that was considered incompatible with the GPL.

The XFree86 Project license 1.1 added the requirement that a proper acknowledgment (“This product includes software developed by The XFree86 Project, Inc (http://www.xfree86.org/) and its contributors”) appear either in the documentation or in the binary itself, “in the same place and form as other third-party acknowledgments”. This was considered by the Free Software Foundation to be a BSD-like advertising clause that made the license incompatible with the GPL.

The claimed license incompatibility was just the final straw in a strange history of stagnation, restricted access of the developers to the CVS (only a core team of 15 had commit rights), culminating with a vote on the self-disbanding of the Core Team, effective immediately!

A few distros have forked XFree86 from an older version (Debian Sarge is using such a fork), but practically everybody is using X.org now.

Another example of a severe self-shoot is the forking of cdrkit out of cdrtools, which was a result of Debian Bug #377109.

The root cause was the decision of Joerg Schilling to place cdrtools’ build system under the CDDL instead of the GPL. Sun’s CDDL is an OSI-approved free license, but (as we all know) the GPL doesn’t mix with anything else.

By following the long thread of comments on Debian Bug #377109, we can notice that Joerg Schilling has some excellent points every once in a while. For instance, he shows that Debian’s own Social Contract says, at section 9, that a license (here, the GPL) should not force the contamination of other software (here, of the CDDL-licensed build system with the GPL license). If Debian followed the strict GPL interpretation, then Debian would need to call the GPL clearly non-free, because it would then violate DFSG Section 9.

I don’t know of any impartial attorney at law who has examined the compatibility of the GPL with Debian’s Free Software Guidelines, as the two seem to contradict each other in such cases.

Furthermore, Joerg has raised one more legal objection, after «a 1.5 hour phone conference with the lawyer who created the CDDL and the Solaris chief engineer»: a European Author has the right to create more legal combinations than a US Author has. This is a result of the archaic US Copyright law. As per the German Copyright Law (which applies to Joerg’s licensed work), «the GPL cannot go beyond the boundaries of copyright law. If, according to copyright law two works are independent, then the GPL has nothing to say about it. … Schilling is claiming his Makefile, because it is a full program, is an independent work. That means the GPL is irrelevant until you prove it is not an independent work.»

The joys of having GPL as the most used open-source license, instead of BSD, MIT, CDDL…

And now, a very practical shooting: very recently, Olivier Blin from Mandriva witnessed how Apache 2.0 managed to inflict a DoS upon itself! While a bug in the respective build of the 2.6.17 kernel is not excluded, isn’t it nice to know that Apache 2.0 is able, under certain circumstances, to shoot itself with a Denial of Service? (Slackware comes with Apache 1.3.37, and OpenBSD comes with Apache 1.3.29.)

Not to mention the latest quiz: Are GPLv3 and Apache 2 incompatible?

24. The Awakening

Under Linux, BSD and everything UNIX, the high uptime was always one of the most valued assets, especially for servers. Windows users were traditionally enjoying the BSOD at random moments.

But they were also enjoying other features before Linux ever had them: hibernation (officially known as “suspend to disk”) and standby (officially known as “suspend to RAM”). Since a desktop user doesn’t need to have the computer running 24/7, yet would like to be able to “freeze” it and recover the saved state quickly, these were great features. Laptop users enjoy them even more, as power is a scarce resource when you’re on the road.

I remember how surprised I was to see that both hibernation and wake-up were practically instantaneous starting with Windows 2000, compared to the slow BIOS-based procedure used by Windows 98 on an old HP laptop. After almost a decade of “hibernation and awakening”, the availability and reliability of these features in Linux is one of the first things I look for after a new installation on any of my home systems.

Alas, the current status of the suspend features in the Linux kernel is best described by the word broken: Barely Reliable Overlooked Kernel Erratic Nap.

I always have problems when I change a distro, or even when I upgrade to the next version. With FC5 I had a reliable hibernation on the laptop. Ubuntu 5.04 managed to hibernate my desktop, but I lost this with 5.10. SuSE 9.3 hibernated my PC with kernel 2.6.11, but SuSE 10.0 broke it completely: kernel 2.6.13. FC6 restored the sleeping capabilities, but Pardus broke them again, while still being able to suspend to RAM on my laptop. On its side, Mandriva 2007 brought back the suspend to disk for my PC, and RHEL5 and Kubuntu 7.04 can cope with my laptop almost perfectly. In all this foolish saraband, I was never able to make anything go to sleep with either CentOS4, Debian Sarge or Slackware.

Yes, I know: the BIOS might be “defective” with regard to the DSDT, no matter that the kernel says it can cope with it: «ACPI (supports S0 S1 S3 S4 S5)». Patches are available, but is this really a task for the end-user? An ACPI DSDT patch in the initrd is a distinct possibility, however none of the distros I tried had it.

While the old power management interface, APM, was superseded by ACPI, which power management features will actually work with your computer and a given version of an open-source operating system can never be foretold.

There is more than one way to make your computer suspend: the swsusp support in the kernel, the suspend2 patch, and the userspace µswsusp. No one can tell you which of them should work with your computer.
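
For the record, with the in-kernel swsusp the suspend states can be triggered directly through sysfs, as root; whether the machine actually comes back is exactly the lottery described above (a minimal sketch of the standard /sys/power interface):

    # suspend to RAM (standby)
    echo mem > /sys/power/state
    # suspend to disk (hibernation, in-kernel swsusp)
    echo disk > /sys/power/state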

At the desktop environment level, there is also some duplication in the front-end helper applets that allow you to suspend: we can count KLaptop (dead but still walking in some distros), KPowersave, gnome-power-manager, the new kde-guidance-powermanager, and possibly some others. Unless the efforts to give Linux better power management across a broader range of hardware are better focused, it will still be regarded as a difficult operating system, built using a chaotic development model.

But why is it that we want to put our computers to sleep (err… not for good), instead of just shutting them down, then powering them up again? It might be about the booting speed.

I am personally satisfied with the speed my computers are booting. I don’t care that much if they take 30 or 60 seconds to get from death into X. Some other people do care.

The classical init system (sysvinit/sysv-rc) has been doing a good job of starting, supervising and stopping all the processes on Linux systems for 15 years now, not to mention that it had been doing its job on UNIX System V since 1983. When it comes to the speed of booting a modern Linux system, various approaches — usually involving parallel execution — have been tried in order to improve it: initng (Init Next Generation), runit (somewhat similar to BSD’s rc), pinit, and of course Ubuntu’s sexy upstart.

While progress is generally a positive idea, having several distros each starting their own init system is crazy: this is going to make the respective distros incompatible with other distros, and it will also render your previous knowledge of sysvinit useless. In the process, all the systems using the new and experimental init systems will definitely not be production-ready, as the risk of not booting correctly is significantly increased compared to those using the proven System V init.
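
To give an idea of how different the required knowledge becomes, here is roughly what an event-driven upstart job in /etc/event.d/ looked like on Ubuntu Feisty, compared with the traditional sysvinit arrangement; treat this as a sketch from memory, not a verbatim file:

    # /etc/event.d/tty1 -- upstart: declarative, event-based
    start on runlevel 2
    start on runlevel 3
    stop on runlevel 0
    respawn
    exec /sbin/getty 38400 tty1

    # sysvinit equivalent: a shell script in /etc/init.d/ plus
    # the S/K symlinks in /etc/rc2.d/, /etc/rc0.d/ and so on.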

When something is broken (power management), they don’t fix it. When something works (sysvinit), they try to break it. I am afraid this is the way Linux works nowadays. Maybe they need a different way of awakening to discover what is really important: through meditation (zazen).

25. Whereto?

I can already feel the trolls eager to throw around accusations of FUD. Not facing the truth and not accepting critiques would be prejudicial to FLOSS at large.

Narrow-minded readers might assert that it’s impossible for the writer of such a critical document to have a genuine interest in the development of open-source software. The same happened to Bernard-Henri Lévy when he published the book «American Vertigo: Traveling America in the Footsteps of Tocqueville». Some reviewers declared BHL must have some twisted logic, otherwise there wouldn’t be that «insurmountable paradox: on the one hand, a declaration of love for America every few pages; on the other, the content of the book, which is in sharp contrast with this declaration. Alas (or Dieu merci?), Lévy cannot escape his Frenchness.» Oh, really.

Criticizing the sorry state of the open source today does not mean closed source is better. Also, the current status of the open-source projects is not bad per se; it only looks bad when we put side by side the expectations we had a decade ago and the reality we are facing now. Things could (and should!) have been much better.

What are my hopes coming from?

As Linux is still the most prominent alternative to closed-source and proprietary software, I will start with some Linux opportunities.

Dell’s recent decision to ship desktops preloaded with Linux might boost the progress of Linux on the business desktop. As a European, I would like to see some Mandriva-powered systems that are easy to find worldwide; however, previous attempts with HP and Dell ultimately turned out to be huge failures.

The remaining RHEL clones, CentOS and Scientific Linux, should increase their cooperation and even merge, if deemed useful, to create a solid, free-of-charge enterprise offering. I started, however, to have some doubts about the future of RHEL after the wrong message given to the public by Matthew J. Szulik with the Red Hat Challenge. The competition is only open to «full-time, part-time, or executive student attending graduate business school or a graduate design school in pursuit of a Masters in Business Administration or similar degree», so it sounds like having a good Linux business plan is unrelated to technology, which is hard to swallow for a techie. In addition, the challenge is only open to certain countries, giving you the message that Red Hat is too lazy to deal with the contest and gambling rules of Québec, of half of Europe and of half of South America; yet the Microsoft Canada Holiday Greeting Card Contest 2004 was won in Québec, the winner of the Microsoft Future Pro Photographer Contest 2006 was from Romania, and the software design category of the Microsoft Imagine Cup was won in Italy, all of these being countries missing from RHAT’s list of qualifying places. It’s time for Red Hat to realize that the missing resource is not the MBAs, but some more common sense.

I hope that Mandriva will find its way towards a positive financial balance, and that the Corporate Desktop and the Corporate Server will gain momentum. There is no way I could agree with Linux.org in their view on rising and falling distributions.

I also hope that the unjustified aura around Linux will no longer hide *BSD, and that FreeBSD will improve its market share with time. With the backing of iXsystems, PC-BSD should continue to strive to package the power of FreeBSD into a more comfortable form, as a viable desktop alternative to Linux.

There is little chance that OpenBSD will ever change its installer, whose main asset seems to be its ability to fit on a single floppy; its claimed perfection through simplicity is diminished by a known issue mentioned by Jem Matzan in his OpenBSD 4.0 Crash Course, which notes at page 13: «the installer will crash if you try to use DHCP with more than one network interface; you can change this manually later if necessary.»

DragonFly BSD also has interesting potential; I wish them luck and more resources.

We might see improvements even from OpenSolaris, but this will be a long and winding road. Sun is still working to complete the open-sourcing of Java, and they might need a full mental rework to be able to provide less threatening SLAs and Entitlements for their commercial offerings (maybe they should fire some lawyers in the first place). Solaris Express Developer Edition is not that express, as its minimum system requirements are 768 MB of RAM and 14 GB of disk space (80 GB is the recommended size). Sounds like Vista to me.

And Slackware will always matter, as long as craftsmanship, commitment, and quality make a difference.

To contradict myself, I have written parts of this report on a Pardus system, and some final edits were done under Kubuntu 7.04, both distributions I said I would not use. Like I said in the opening, we’re all humans after all.

But I will never understand why IBM has not agreed to open-source OS/2. ■

LEGAL NOTICE: As an exception to the rest of the site, this article falls under the following specific legal terms: Copyright 2007 Radu-Cristian Fotescu and JEM Electronic Media, Inc. No reprints nor reposts without written permission from both copyright holders.