In the last year and a half, I thought I could stop the distro-hopping, but very recently it turned out that wasn’t the case. Since I never put in writing the criteria and thoughts behind my requirements for an OS, it’s time I do it now. This should also explain a bit more about my grumpiness and about my opinion that most software developers in the world of open source are almost completely deprived of judgment and common sense. This is not vanity; it just so happens that I’m right.

Preamble

Should any of the more seasoned sysadmins or software developers read this, they are entitled to different opinions, although I maintain that mine are more logical. But most Linux desktop users nowadays are gamers who weren’t even born when I first met Linux! They should STFU and go back to their Reddit, to their TikTok, or to their vanity YouTube channels.

I first encountered Linux in the winter of 1994-1995. It was a bunch of floppy disks (we called them diskettes) labeled SLS. Then I met a better distro, Slackware, but I can’t remember the version. What I do remember is that I used kernels 1.2.13 and 1.3.18 a lot. Back then, odd-numbered series such as 1.3 were “experimental” or “development”; nowadays, every single broken kernel is considered stable. I also fondly remember Slackware ’96 and kernel 2.0.36.

I then met FreeBSD and NetBSD. At the time, I really thought that NetBSD was going to rule the world, as it was supposed to be more portable and, even if back in the day we didn’t use containers, more suitable to containerization. Unfortunately, it was Linux that won. And this makes no sense. It won in the embedded field, such as in the automotive industry. Why Linux?! It’s too heavy. If FreeBSD was already suitable for servers in 1996, NetBSD should have been the OS of choice for embedded (once we dismissed QNX, which is unfortunately commercial-only). I also met OpenBSD, which fascinated me for a while.

Oh, and I also encountered Red Hat Linux, version 2.0 or 2.1. I even ordered the jewel case CD set of Red Hat 3.0.3 from Walnut Creek! (Otherwise, ftp.sunet.se and ftp.funet.fi were my friends.)

But some sort of indifference toward Linux and the *BSDs grew in me. To be honest, at some point, I became quite happy with Win98 SE. Meantime, after my hopes that OS/2 Warp 3 would gain momentum were crushed, I thought that the Windows path would be quite acceptable, especially as I liked the GUI metaphor introduced by Win95 and refined by Win98 and Win2k. I always used the classic Win95 look with WinXP, and even with Win7 from a certain point onwards. I was also a fan of IceWM for a while.

After years of apathy, I became once again enthusiastic about Linux once Ubuntu 4.10 was announced. I ordered 2 or 3 CDs, then I also ordered 5.04. Later, I suppose I just downloaded the ISOs and burned my own CDs.

I even managed to convert a retired grandma to Linux. She liked the GNOME games I showed her. She didn’t need much more than e-mail (Evolution) and a browser. (After a few version upgrades of Ubuntu on that PC, she eventually ended up with a Windows laptop. GNOME 2.32 was the last usable GNOME version, and I got frustrated with the sheer stupidity of both GNOME3 and Unity.)

Somewhere around 2007-2008, I was quite disillusioned with Linux (I still keep this relic from 2007). Whoever remembers my Planète Béranger blog from those times should also remember that I quarrelled with a few important FreeBSD developers. (Or rather, one of them called me names, which was a bit unusual for the time: Twitter wasn’t yet a cesspool, and people weren’t that aggressive online.)

But I also had some periods of enthusiasm. I loved CentOS 5, and I played with Scientific Linux and other clones, including the very short-lived StartCom Linux. (Later, I also used Stella.) Until August 2009, I maintained an EL5 repo called Odiecolon.repo; it last included 489 RPMs (i586), not counting the SRPMs.

For a while, and until September 2011, I also maintained an Unofficial Mageia Cauldron/2 Repository; it last included 100 RPMs (i586 and x86_64), not counting the SRPMs.

This aside, as a “professional distro-hopper,” I also remember how many distros of that time don’t exist anymore. And I kind of liked many of them. Who remembers KateOS, Wolvix, VectorLinux, UHU Linux, Frugalware Linux, Parsix, and many other defunct distros that looked promising? The original Zenwalk Linux was quite cool. The original Pardus Linux? I was a fan of it!

After one more period of apathy, I discovered Manjaro. I guess it was in 2014. One year later, I abandoned it.

Fast-forward… it’s still complicated.

The worst design decisions in Linux

● First of all, due to the stupid Linux kernel architecture, one cannot have a new driver for a new device unless they install a new kernel; a driver cannot simply be installed on top of an existing kernel, as has always been the case with Windows.

WTF. Since Windows 3.1 (I didn’t have much experience with 3.0), and even in MS-DOS, once you had new hardware that wasn’t supported by the original OS, you just inserted a floppy disk and installed a driver. Why on Earth would anyone require a new OS kernel for that?!

Now, there are third-party kernel modules available for RHEL, and maybe for Ubuntu LTS too (although Ubuntu has Hardware Enablement kernels for supporting newer devices), but this isn’t the way things are usually done in Linux. If your laptop isn’t properly supported, you need a newer kernel. How stupid is that?

This also means that, if you’re not running a rolling-release distro, then even when you’re running the latest version of your preferred distro and your hardware isn’t supported, you need another distro. This is even more idiotic.

This is one of the main reasons Linux will never gain momentum on the desktop, unless it’s used on older hardware that can’t run Win11, and for the use of grandma and grandpa: e-mail, web browsing, an office suite, a media player, stuff like that.

And I didn’t even mention Nvidia. There are many gamers in Linuxland, most of whom are using something that’s Arch-based, or maybe they’re using openSUSE Tumbleweed or Fedora.

● A second idiotic design decision is to compile all the packages for a fixed version of the system libraries, and to never have them updated during the lifetime of a point-release distro. (There are exceptions, but that’s the rule.)

What do I mean by “fixed” version? Let’s set aside the fact that the apps are not updated (in a non-rolling distro), so you need to wait 6 months to get a newer version of the distro, which includes new versions of some apps too. The problem is the dynamic linking.

There are cases where a certain package version a.b.c1 requires libfoobar >= x.y.z1. That’s fine. Except that it’s not fine when the next version of the package, a.(b+1).c2, requires the latest version of libfoobar, which can be x.(y+5).z2, and that distro version only has x.y.z1. But there’s worse: some packages require libfoobar = x.y.z. That exact version and nothing else! This is completely brain-dead, and yes, it does happen.
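To make the constraint styles above concrete, here’s a minimal sketch in Python; the library name (libfoobar) and all version numbers are made up for illustration:

```python
# Model of the dependency-constraint styles discussed above, using made-up
# version numbers. The distro froze libfoobar at 1.2.3 at release time.

def satisfies(available: tuple, op: str, required: tuple) -> bool:
    """Check an available version tuple against a '>=' or '=' constraint."""
    if op == ">=":
        return available >= required   # tuple comparison is lexicographic
    if op == "=":
        return available == required   # exact pin: anything else fails
    raise ValueError(f"unsupported operator: {op}")

distro_libfoobar = (1, 2, 3)

# Package a.b.c1: a sane lower bound; satisfiable on the frozen distro.
print(satisfies(distro_libfoobar, ">=", (1, 2, 3)))   # True
# Package a.(b+1).c2: built against a newer minor release; not satisfiable.
print(satisfies(distro_libfoobar, ">=", (1, 7, 0)))   # False
# The worst case: an exact pin on a version the distro will never ship.
print(satisfies(distro_libfoobar, "=", (1, 2, 4)))    # False
```

The first case is the only one a point-release distro can ever satisfy; the other two guarantee breakage during the distro’s lifetime.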

Either way, the fact remains that each and every software developer, unless they build for a specific distro (say, an LTS one), prefers to build against the latest and greatest version of the required libs. Why is that? Are they all running Arch or Tumbleweed?

There are fixes for that. They’re called Flatpaks, snaps, AppImages. And they’re the wrong solution! They can work as a last-resort solution for a few major apps that you really want to keep updated, but if you use such fixes excessively, your OS’s footprint becomes huge. Yes, SSD space is cheap enough, but I don’t like garbage. Such mechanisms recreate the Windows solution to the DLL hell problem, which was to install copies of all required DLLs in the folder of each program that required them, each DLL copy having a slightly different version! (Or to build those programs statically linked, which also increased their footprint. For some reason, this makes me think of Rust.)

At the extreme, one could use an immutable distro (immutable is the future!) and only add Flatpaks (Flatpaks and snaps are the future!). This is stupid, and I literally cannot understand how this could work for smaller apps (what do I do if I need a newer version of a tiny app such as featherpad, for which nobody bothers to create a Flatpak, and for which a large Flatpak wouldn’t even make sense?), or for system libraries (say, a newer Python that needs to be visible system-wide). Not to mention that Flatpaks are better suited for GUI apps, snaps are better for services and non-GUI apps, and AppImages aren’t great when it comes to theming. Oh, and as for their sandboxing: Flatseal is cool; snap connect is a PITA.

I just want my package manager and nothing else! But I cannot have what I want, unless I’m using a rolling-release distro.

As a side note, a few apps use a different updating mechanism: they install using a .sh (or even a .deb or an .rpm), and they either self-update, or they ask you to update them by installing a newer version the same way. This is the case, e.g., with the Calibre e-book reader and editor, and with some VPNs (I’m using PIA). This is a good solution as long as you trust those who made those scripts or packages. (But snaps too got infected, and infected again, so…)

● A third crappy design decision is now obvious: in Linux, unless you’re using Flatpaks, snaps, or AppImages (or the above mechanism), or unless you’re building from source, you cannot have a newer version of a single package. You have to get newer versions of all packages when you install a new point release! Or you will always have the latest versions of everything if you’re using a rolling-release distro! (Security patches don’t count as new versions.)

In contrast, there is much more freedom (sic) in Windows. You only upgrade the programs you want to upgrade, full stop. If you want to use an IrfanView from five years ago, you can just do that.

One may argue that, in a sense, Windows is as if it were immutable, with very few included programs (Notepad and Edge, mostly) and you install your programs from external sources, just like you’d do with Flatpaks. (Microsoft Store can be ignored.) Fair enough. But in Linux this isn’t the way an app reaches an installed OS!

For open-source apps, you get the source. Unless you’re using the AUR, it can be tedious (it really is!) to rebuild each and every app once it gets a new release. The developers of some apps also build packages for some major distros, but this isn’t practical either, unless they also create a repository: a PPA, a COPR, whatever suits your distro. But then you’d have to manually manage those repos with regard to what to upgrade and what not to. Tedious. And not everyone releases more than the sources anyway.

This is why distros have packages. Their own ones. But then you add extra repos supposed to be compatible, and in sync, and whatnot. And all hell breaks loose. Even zypper cannot always provide solutions to dependency hell!

Funny thing, though: unless I’m senile, there were fewer dependency breakages 15 years ago than there are now. Why is that? Is it the mere fact that so many software developers release almost daily, because they really, really must add a few new features, solve a few bugs, and create new ones, that has led to a complexity that can’t be managed anymore? Is everyone releasing too often, and is that all there is? (For the Linux kernel, the development pace is definitely crazy, and the quality went down the drain.)

The case with versioning and the non-rolling-release Linux distros

Here’s an image taken from Wikipedia’s article about Software versioning:

Let’s ignore secondary aspects such as Debian epochs, and focus on the relevant aspects. The most relevant aspect is:

  • The patch version is supposed to fix an important bug or a security vulnerability, so even a distro that doesn’t update a package during a point release should update to the next patch level. Most don’t.

Let me explain. In distros such as Debian stable, Ubuntu, RHEL and clones, you have packages versioned like this:

  • linux-6.1.27 (Debian 12), linux-5.15 (Ubuntu 22.04 LTS), linux-5.14 (EL9); thunar-4.16.10 (Ubuntu 22.04 LTS), thunar-4.16.8 (Debian 11).

How are those packages upgraded during the lifetime of such a point release?

  • linux-6.1 is an LTS kernel, and its current version is 6.1.90. Remember that “90” is the patch level in the standard semantic versioning convention? Well, who the fuck cares in Linux? Debian did not bump its kernel’s version from 6.1.27 to 6.1.90 as the upstream fixes were merged. Instead, its kernel, which is actually 6.1.90, is versioned 6.1.0-21. This is inept.
  • linux-5.15 is an LTS kernel, and its current version is 5.15.158. What do you think it’s called in Ubuntu? 5.15.0-111.121.
  • linux-5.14 is not an LTS kernel, but during its supported lifetime, its versioning went up to 5.14.21; after that, it was a distro’s task to backport the security fixes and whatever else they wanted to. But the current versioning in EL has the kernel as 5.14.0-427.16.1.
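To see how much information the distro schemes discard, here’s a small sketch (my own helper, not any distro’s tooling) that extracts the semantic-versioning patch level from a version string:

```python
import re

def patch_level(version: str) -> int:
    """Return the x.y.Z patch level from a version string such as '6.1.90'."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", version)
    if not m:
        raise ValueError(f"not a recognizable version: {version}")
    return int(m.group(3))

# Upstream LTS kernel: the patch level carries real information.
print(patch_level("6.1.90"))          # 90
# Debian's package for the very same kernel: the patch level is frozen at 0,
# and the real upstream state hides in the package revision ("-21").
print(patch_level("6.1.0-21"))        # 0
# Same story in Ubuntu, while upstream is actually at 5.15.158.
print(patch_level("5.15.0-111.121"))  # 0
```

In other words, from the version string alone, you simply cannot tell which upstream patch level a Debian or Ubuntu kernel corresponds to.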

Here’s how they explain this shit at Ubuntu:

Ubuntu kernel-release = 5.4.0-12.15-generic

  • kernel version is 5.4, which is identical to upstream stable kernel version
  • .0 is an obsolete parameter left over from older upstream kernel version naming practices
  • -12 application binary interface (ABI) bump for this kernel
  • .15 upload number for this kernel
  • -generic is kernel flavour parameter, where -generic is the default Ubuntu kernel flavour

Mainline kernel-version = 5.4.8

Marvelously stupid. Why couldn’t they have 5.4.8-generic or 5.4.8-1.1-generic? Instead, they fixed the patch level to 0, which is a lie, as upstream it’s actually 8. Who the fuck designed this shit? If they don’t trust Linus Torvalds, why the fuck are they using the Linux kernel? Why would anyone FAKE a software version?

Next thing, they’ll tell me, “Oh, but there are apps or 3rd-party kernel modules that REQUIRE the version to be EXACTLY 5.4.10!” First of all, if anyone puts such a shit in an app’s spec, then that person is an utter moron. As for kernel modules, they would require the EXACT version, including the ABI bump, meaning 5.4.0-12, so the argument doesn’t hold water. It could just as well be 5.4.8, so the real version would be known.

This parallel versioning is driving me crazy. You only get the real versioning in rolling-release distros. The other distros pretend they don’t change anything, not even the patch-level version, and they patch by adding digits afterwards: x.y.z-a.b.

This mental insanity can have side effects, and I’ll show you an example.

The CVE-2021-32563 case

I described the CVE-2021-32563 issue in May 2021, in the post “I had forgotten why I shouldn’t trust Ubuntu… but neither many other distros!”:

An issue was discovered in Thunar before 4.16.7 and 4.17.x before 4.17.2. When called with a regular file as a command-line argument, it delegates to a different program (based on the file type) without user confirmation. This could be used to achieve code execution.

This didn’t and still doesn’t look critical to me, but the two CVSS scores for the respective CVE were quite high:

Upstream, Thunar fixed the issue in 4.16.8, and fixes were made available even before the CVE was published on May 11, but what did Ubuntu do? At the time, they should have:

  • Backported the patch for thunar-1.8.14-0ubuntu1, probably as thunar-1.8.14-1ubuntu1 in 20.04 LTS Focal.
  • Upgraded thunar-4.16.6-0ubuntu1 to thunar-4.16.8-0ubuntu1 in 21.04 Hirsute.

What did they do from the above? Well, nothing. Ubuntu only pushed 4.16.8-1 into the development branch (the future 21.10 Impish). Similarly, Debian pushed Thunar 4.16.8 in unstable, then in testing. Did Debian patch Thunar in Debian 10 Buster, the stable version at the time? Nope. Never. They released with version 1.8.4-1, and it stayed this way!

This vulnerability (if it really is one) is easy to test. Just take a screenshot, save it on the Desktop, e.g. as img.png, then open a terminal on the Desktop, and invoke thunar img.png:

  • If Thunar is vulnerable, it will open the image in the default image viewer, no questions asked.
  • If Thunar has been patched, it will open the Desktop folder instead, because that’s where the image is.
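The manual experiment above can also be scripted; here’s a rough sketch of it (it assumes a desktop session, and it merely prints a hint when Thunar isn’t installed):

```python
# Rough sketch of the CVE-2021-32563 check described above.
# Assumes a desktop session; harmless when Thunar is absent.
import shutil
import subprocess
from pathlib import Path

img = Path.home() / "Desktop" / "img.png"
img.parent.mkdir(exist_ok=True)
img.touch()  # any regular file stands in for the screenshot

if shutil.which("thunar"):
    # Vulnerable Thunar: opens img.png in the default image viewer, no questions asked.
    # Patched Thunar: opens the Desktop folder instead.
    subprocess.Popen(["thunar", str(img)])
else:
    print("thunar not installed; nothing to demonstrate")
```

The same script, pointed at pcmanfm or pcmanfm-qt instead of thunar, demonstrates the behavior described below.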

But to get a better picture of the open-source world, here’s the cherry on top: PCManFM and PCManFM-Qt always had, and apparently always will have, this vulnerability! Invoke either of them instead of Thunar in the experiment above, and you’ll have the proof! Nobody files a CVE for them; none of their developers or users ever heard of this vulnerability in Thunar, so nobody checked whether their file manager was also vulnerable! Even if, to them, this behavior might be “by design” (hence a feature), they should have reconsidered.

Fucking morons. The same could be said about the distro maintainers. OK, backporting patches to 1.8.4 in Debian (while they were busy polishing the next stable version) and to 1.8.14 in Ubuntu (an LTS, though!) involves more effort, but upgrading from 4.16.6 to 4.16.8 in Ubuntu should have been easier!

Except that these maintainers never follow semantic versioning. They don’t upgrade to the real patch level! Instead, they only “upgrade” to their own small patches that have nothing to do with the upstream patches!

A case of double versioning

Someone asked me if I tried the e-book reader Alexandria. This is my answer:

Not yet. As I don’t want to use Flatpaks, I tried to install the .deb in Kubuntu 24.04, and it failed with unmet dependencies:
Depends: libwebkit2gtk-4.0-37 but it is not installable

Not to mention the stupidity of it requiring the exact version 4.0-37 instead of e.g. >= 4.0 or whatever it needs.

It would probably only work as a Flatpak or AppImage, or if built from source, but I refuse to use an app whose developer isn’t able to build a proper .deb. If at least there were a note specifying which distro and which version of the distro the .deb was built for.

I have to reach the logical conclusion that the developer is an idiot.

After a quick investigation, I found that libwebkit2gtk-4.0-37 is available as follows:

  • In Debian 12: as libwebkit2gtk-4.0-37_2.42.2-1~deb12u1
  • In Debian 11: as libwebkit2gtk-4.0-37_2.42.2-1~deb11u1
  • In Debian 10: as libwebkit2gtk-4.0-37_2.36.4-1~deb10u1
  • In Ubuntu 23.10: as libwebkit2gtk-4.0-37_2.44.0-0ubuntu0.23.10.1 (updated package)
  • In Ubuntu 22.04 LTS: as libwebkit2gtk-4.0-37_2.44.0-0ubuntu0.22.04.1 (updated package)
  • In Ubuntu 20.04 LTS: as libwebkit2gtk-4.0-37_2.38.6-0ubuntu0.20.04.1 (updated package)

Unfortunately, in Ubuntu 24.04 LTS, the package is libwebkit2gtk-4.1-0_2.44.0-2. Such versioning could only have been designed by morons. Either way, this package is no good for Alexandria.

In the version 4.0-37_2.44.0, what part is the most important: 4.0-37, or 2.44.0? Because the version 4.1-0_2.44.0 also contains 2.44.0, but otherwise it’s 4.1-0, not 4.0-37, so it’s no good. Once again, who designed this shit?
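For clarity, here’s how I’d mechanically split such a package string; the first part appears to be the API version plus the library soname, the second the upstream WebKitGTK release. The helper below is my own illustration, not any official tooling:

```python
def split_webkitgtk(pkg: str) -> tuple:
    """Split 'libwebkit2gtk-<api>-<soname>_<upstream>-<revision>' into
    the API/soname part and the upstream release version."""
    name, _, version = pkg.partition("_")
    api_soname = name.removeprefix("libwebkit2gtk-")
    upstream = version.split("-")[0]
    return api_soname, upstream

print(split_webkitgtk("libwebkit2gtk-4.0-37_2.42.2-1~deb12u1"))  # ('4.0-37', '2.42.2')
print(split_webkitgtk("libwebkit2gtk-4.1-0_2.44.0-2"))           # ('4.1-0', '2.44.0')
```

So two packages can carry the very same upstream release (2.44.0) and still be mutually incompatible, because the API/soname part differs.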

A quick search reveals plenty of cases of trouble caused by this stupid and confusing versioning, combined with the upgrade from 4.0-37 to 4.1-0 and with so many packages requiring the exact version 4.0-37 (apparently, the second version number, ranging from 2.38.6 to 2.44.0, is not relevant), such as:

Well, I’d say that WebKitGTK is a project broken by design. The latest stable version listed on their website is 2.44.2, which is the second part of the double versioning from above, so it’s this versioning that should have mattered, not 4.0-37 or 4.1-0! Morons.

Why not Fedora?

I have used Fedora, on and off, on several occasions. Usually with XFCE, later with KDE.

Most of the time, it can be said that the distro is stable enough and not buggier than Ubuntu. There are two problems with it, though.

First, regarding updating and upgrading, the policy is inconsistent.

The good part is that they seem to understand semantic versioning, and minor updates are generally available during the lifetime of a point release, i.e., a package in version x.y.z would get all the “y” updates that are released during the said period of time, as it should. Examples:

  • F35 was released with GNOME 41.0 and it got updated up to 41.6
  • F36 was released with GNOME 42.0 and it got updated up to 42.10
  • F35 was released with KDE 5.22.5 and it got updated up to 5.25.4
  • F36 was released with KDE 5.24.3 and it got updated up to 5.27.4

However, other packages don’t get even minor updates, but only patch updates. This is still fine.

The bad part is that the point-release model is a complete lie when it comes to the kernel. As far as the kernel is concerned, the only difference between a Fedora point release and Rawhide (or Arch, or Tumbleweed) is a certain delay; otherwise, Fedora will always upgrade to the latest kernel! How is this not a rolling-release distro, when the most important part of the OS, namely the kernel, is constantly upgraded? (The kernel doesn’t really follow semantic versioning anyway; in its case, the “minor” version is not minor at all!)

What’s worse is that Fedora never offers an LTS kernel, although even distros that are 100% rolling-release do: Arch and its derivatives offer linux-lts in addition to the rolling linux. Furthermore, EndeavourOS and Manjaro even have GUI tools to help you manage several kernel lines! While I was still using Manjaro, it offered: linux62 (6.2.0, experimental), linux61 (6.1.8, default), linux60 (6.0.19), linux60-rt (6.0.5 real-time), linux515 (5.15.90, LTS, “recommended”), linux515-rt (5.15.86 real-time), linux510 (5.10.165, LTS), linux54 (5.4.230, LTS), linux419 (4.19.271, LTS). I’m pretty sure Manjaro still offers the entire range of non-EOL’ed LTS kernels. If a rolling-release distro can do that, why can’t a point-release distro like Fedora do the same?

Because they couldn’t care less. Any Fedora release is “pretty much Rawhide, except in name”! As an example, F35 was released with kernel 5.14.10, and it went all the way to 5.16.16. Kernel 5.15 LTS? As soon as 5.16 was available, F35 jumped on it.

And here comes the second issue with Fedora: its kernels really suck. Of course, it depends on everyone’s hardware, but Fedora is the only distro that made two of my systems freeze twice after having upgraded the kernel! It was around F35-F36, if I remember correctly. All the (small) problems I had with Manjaro never included such a situation! Even when it broke GRUB, the kernel and the system were otherwise just fine.

I cannot recommend Fedora. It’s a great distro, except it isn’t.

The Arch way in Manjaro: replacing the kernel under my ass

Now, you might be curious why I stopped using Manjaro the last time I used it. Thanks for asking.

● But first, let me tell you about the Arch-created idiocy that I had to overcome in the first place. No, not the fact that Arch couldn’t maintain a proper installer; even Slackware’s ncurses-based installer, roughly 30 years old, didn’t require great maintenance, and it still works fine. It’s the obsession of many distros to invent a new package format and a new package manager.

Frankly, pacman is incredibly cretinous, with flags and options that are illogical; the proper combinations are difficult to identify and memorize, as their semantics are broken. In sudo pacman -Rsc, “R” is remove, “s” is recursive, and “c” is cascade, meaning that it does… what, exactly?! No other package manager is so confusing. What is sudo pacman -Rns doing? Then, sudo pacman -Rdd removes a package without checking for any dependencies, it just removes it (almost certainly breaking the system); and why the double “d”? Because a single -d only skips dependency version checks, while specifying it twice skips all dependency checks. Totally insane. Was Elon Musk the guy who designed pacman? It took me some time to accept that I had to use such a crappy package manager. Because it’s indeed stupid.

Any half-decent package manager should be able to solve the following types of package dependencies:

  • Installing a (meta)package such as mate-desktop brings as dependencies a number of other packages, such as caja (the file manager).
  • Installing caja brings as dependencies a few other (possibly optional) packages (say, extensions).
  • Should one now want to uninstall mate-desktop, this should remove all the packages that were installed as dependencies, including e.g. caja.
  • As caja is uninstalled, its dependencies should be removed too.
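The expected behavior can be modeled in a few lines; the package names are the ones from the example above, but the metadata (what depends on what, what was installed explicitly) is invented for illustration:

```python
# A toy model of "remove a package together with its now-orphaned automatic
# dependencies", i.e. the behavior described in the list above.
# Metadata is made up for illustration.

installed = {
    # name: (explicitly_installed, dependencies)
    "mate-desktop":       (True,  {"caja"}),
    "caja":               (False, {"caja-open-terminal", "caja-sendto"}),
    "caja-open-terminal": (False, set()),
    "caja-sendto":        (False, set()),
}

def remove(name: str, db: dict) -> None:
    """Remove `name`, then any dependency that was auto-installed and
    is no longer required by anything still installed."""
    _, deps = db.pop(name)
    for dep in sorted(deps):
        if dep not in db:
            continue
        explicit, _ = db[dep]
        still_needed = any(dep in d for _, d in db.values())
        if not explicit and not still_needed:
            remove(dep, db)

remove("mate-desktop", installed)
print(sorted(installed))  # []
```

apt tracks exactly this kind of information with its “automatically installed” flag (see apt-mark and apt autoremove); dnf and zypper do the equivalent.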

Simples, right? Synaptic can do that, and apt, dnf, and zypper have no problem with such an elementary scenario. Enter the wondrous world of Arch-made package management:

$ sudo pacman -S mate-desktop
. . . 
$ sudo pacman -R caja
checking dependencies…
error: failed to prepare transaction (could not satisfy dependencies)
:: removing caja breaks dependency 'caja' required by caja-image-converter
:: removing caja breaks dependency 'caja' required by caja-open-terminal
:: removing caja breaks dependency 'caja' required by caja-sendto
:: removing caja breaks dependency 'caja' required by caja-share
:: removing caja breaks dependency 'caja' required by caja-wallpaper
:: removing caja breaks dependency 'caja' required by caja-xattr-tags

Yeah, I cannot remove the file manager because its extensions require it! Pamac is as hopeless as pacman:

Obviously, I can’t remove mate-desktop either using Pamac, as caja requires it!

Totally unbelievable!

The good news is that Octopi is much smarter and it behaves as it should. Say I want to remove mate-desktop: it says it will remove all its dependencies:

Or maybe I only want to remove caja, in which case its dependencies should go too:

Octopi isn’t the only GUI package manager to behave this way: this is the definition of dependency management in any package manager! Any and all, EXCEPT pacman and Pamac!

I’m pretty sure this wasn’t a Manjaro-specific issue. I didn’t try it in pure Arch or in a direct derivative such as EndeavourOS or ArcoLinux, but pacman and Pamac being “children” of Arch, their behavior is designed upstream.

Now, on to the real reason I stopped using Manjaro, which also deterred me from using any other Arch-based distro, even those that are practically Arch, with no rebuilt packages and no delay.

I was happy with Manjaro, for quite some time (I even tried their testing branch, with good results). Until one day.

A minor thing that bugged me is that all Arch-based distros only keep one kernel from any given line. From a security standpoint, that makes sense: you’d want the latest security patches. But what if a kernel build is buggy? In most distros, the previous kernel builds are kept as backups:

  • Fedora keeps 3 kernels by default (in /etc/dnf/dnf.conf, installonly_limit=3 means DNF will retain the three most recent versions of any install-only package, such as the kernel packages).
  • openSUSE Tumbleweed keeps 2-3 kernels by default (in /etc/zypp/zypp.conf: multiversion.kernels = latest,latest-1,running).
  • Debian and Ubuntu keep 2 kernels (the current one and the previous one).

Not so in Arch world, where only the last kernel is kept for linux, and only the last kernel is kept for linux-lts. Still, not a major issue.

The major issue is that in Manjaro and Arch the currently running kernel is replaced while it’s running!

I wasn’t aware of that until I discovered that, after an update without a reboot, I couldn’t read exFAT-formatted USB flash drives anymore! uname -a still showed 6.1.7-1-MANJARO as the running kernel, but Manjaro Settings Manager showed 6.1.9-1 as the running kernel. The kernel’s modules were a mix-up, hence the lack of exFAT support.

I complained about this on Manjaro’s forums: when linux61 went from 6.1.7 to 6.1.9, the 6.1.7 kernel was simply deleted, replaced, while I was running it! All they had to do was this:

Literally the definition of updating.

Well, no. In other distros, an immediate reboot is usually not necessary when a kernel is updated, unless the update fixes an important security vulnerability. The running kernel is not touched! Any decent distro adds the new kernel and puts it first in the boot list. Services can be restarted, and other distros have tools to identify this situation (needrestart in Debian/Ubuntu, needs-restarting from yum-utils or dnf-utils).

System libraries are another issue, and yes, their update normally requires a reboot. Maybe Windows is taking the best approach here, by not updating any system files until after the system is rebooted. I don’t know why in 30 years of Linux, nobody thought of doing the same. AFAIK, Windows cannot delete or replace files that are in use. (Note that I currently hate Windows, but I’m just saying.)

Other updates that visibly require action are, e.g., those of X or DE components. A full reboot is not required: log off, log on, and that fixes it.

Note that, most of the time, when Pamac says that a reboot is required, it’s wrong. Most of the time, it updates binaries that are not running.

Here’s how Ubuntu checks for the need to reboot after an apt upgrade:

Scanning processes...
Scanning processor microcode...
Scanning linux images...

Running kernel seems to be up-to-date.
The processor microcode seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.

When a kernel needs to be upgraded:

When services need to be restarted:

Generally, when a restart is really needed:

Debian and Ubuntu actually make use of the needrestart package. From Quora:

Is it true that the only times Debian has to reboot, are when the kernel is updated?

Yes, but…

When a software component in your Linux (not just Debian) system is upgraded, any instance of that component still in use will need to be restarted, to use the new binaries.

Now, if that software component is a commonly-used shared library like libssl.so or libc.so (usually for security bugfixes), many running processes may have to be restarted. At that point, it’s often just easier and cleaner to reboot the entire system.

That said, at least for Debian and Ubuntu distributions, you should install the needrestart package to help minimize the number of reboots you need to do. It’s automatically launched at the end of each CLI-based upgrade, checking to see what changed, and popping up a dialog box allowing you to decide which services/containers/user sessions to restart, and/or a reboot if the kernel was upgraded.

You can also run it manually, to see if any specific processes need to be restarted, like your just-updated web browser:

$ needrestart -r i
Scanning processes…
Your outdated processes: chrome[17270, 17762, 17071, 17287, 17182, 17280, 17257, 17754, 17067, 27762, 17763, 17296, 17632, 18262, 17250, 17688, 17324, 17243, 17306, 17184, 17303, 17703, 27689, 17211, 17242, 17272, 17222, 17690, 2763, 17331, 17711, 17338, 27835, 17925, 17282, 17773, 17680], dropbox[5840, 4653], firefox[15560], ibus-ui-gtk3[2789], ibus-x11[2796], indicator-appli[2988], kerneloops-appl[2840], light-locker[2888], lxpanel[2806], lxpolkit[2813], lxsession[2675], lxterminal[3805], nacl_helper[17068], nm-applet[2854], openbox[2804], pasystray[2878], pcmanfm[2812], QtWebEngineProc[21880], thunderbird[5232], update-notifier[2866], Viber[21872], Web Content[16089, 15923, 15995, 15644], xfce4-power-man[2815], xpad[2883]

In openSUSE, you’ll be told when you need to reboot because libraries or running programs have been updated:

You can also check manually (e.g. with zypper ps), and you might find that a reboot is probably not necessary:

Either way, the running kernel will not be replaced under your ass!

In recent times, Fedora Linux doesn’t even replace your libraries until you reboot:

What’s a bit unsettling is how reminiscent this is of another OS:

As they explain in Restarting and Offline Updates:

A recurring question that goes around the internet is why Fedora Linux has to restart for updates. The truth is, Linux technically doesn’t need to restart for updates. But there is more than meets the eye.

Offline Updates is there to protect you.

The Linux Kernel can change files without restarting, but the services or application using that file don’t have the same luxury. If a file being used by an application changes while the application is running then the application won’t know about the change. This can cause the application to no longer work the same way. As such, the adage that “Linux doesn’t need to restart to update” is a discredited meme. All Linux distributions should restart.

The most common sign of instability is Firefox warning you. When Firefox detects updated packages, it will force you to restart the browser. Firefox can’t reliably run without completely restarting and it will therefore force you to restart.

Not every application can recover so gracefully, though, since most will just crash. Firefox might also still crash.

Fortunately, in RHEL’s clones, such as AlmaLinux, the reboot is not Windows-esque. What’s important to note is that the new kernel is only installed after the reboot, so if you want to use it, you need to reboot one more time. For some reason, in AlmaLinux 9 I sometimes had to reboot three times before being able to run the new kernel. Annoying, but less bothersome than having the kernel replaced under your ass. Either way, most distros install the kernel without a reboot, adding it as the newest entry; you then reboot only if you want to use the new kernel.
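To see the difference between the kernel you’re running and what’s installed on disk, something like this can be used on EL systems (grubby being EL’s boot-entry tool):

```shell
# The kernel currently running:
uname -r

# All installed kernel packages, most recently installed first:
rpm -q kernel --last

# The kernel that will be booted by default at the next reboot:
sudo grubby --default-kernel
```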

● This being said, I’ve seen proof of stupid design in Fedora too. For instance, uBlock Origin is a package in Fedora; you don’t just install it from within Firefox, or at least that was the case in F37 (why on Earth?):

So far, so good, but it was defined as a system package (whatever that means, or maybe every single package is now a system package in Fedora?), so updating uBlock Origin required a reboot!

It’s hard to imagine something more fucked-up!

● But back to Manjaro, there’s another debatable design decision:

Manjaro claims to be stable just by delaying packages for a week. This is not an approach a stable distribution would take at all!

If Manjaro were to be actually stable, it would need to hold back the AUR packages as well. It would have to maintain its own AUR, kept in sync with the Manjaro repos.

Say that a package in the AUR depends on a library, say libxyz, and libxyz is in the main repos, not in the AUR. The package is updated so that it relies on new features introduced in libxyz’s version 1.1; however, because Manjaro delays packages, libxyz is still at 1.0 in Manjaro. If you update the package, it will break, precisely because Manjaro holds back packages. So the only way Manjaro can be stable is by literally forking all the Arch-related repositories, including the AUR, and keeping them in sync.
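This kind of breakage is easy to spot with ldd. Here’s a hypothetical sketch, where somepkg and libxyz are made-up names standing for the AUR package and the delayed library:

```shell
# A hypothetical AUR-built binary linked against libxyz 1.1, while
# Manjaro still ships libxyz 1.0: ldd would show the unresolved
# dependency (e.g. "libxyz.so.1.1 => not found"):
ldd /usr/bin/somepkg | grep 'not found'

# Check which version of the library pacman actually installed:
pacman -Qi libxyz | grep -i version
```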

I’ll discuss something similar regarding the RHEL updates and EPEL, compared to CentOS Stream and EPEL Next. Until then, I’ll add to the above that I have used Chaotic-AUR with Manjaro, and I should have encountered this kind of issue: if Chaotic-AUR rebuilds the packages as soon as AUR is updated, and Manjaro delays some libraries by 2 weeks, things can get broken. However, I’ve had fewer such incidents than I expected.

The kind of up-to-dateness I used to require

Before going forward, I need to explain what I looked for in a distro in terms of what I wanted it to offer me, and how up-to-date it should have been if it wasn’t rolling-release.

As preliminary considerations, I invite you to read my previous post on KDE, especially these sections:

Now, once we know I prefer KDE, and we’re talking of KDE5, not KDE6, my requirements were as follows:

  • A distro that works well on all three computers I want to use Linux on, to simplify my life (why use several distros at once?).
  • A distro that has a fairly recent KDE version, if not the most recent one, as KDE has tons of changes every single week (take a look at This Week in KDE) that eventually land in an update.
  • A distro in which the other relevant software is recent enough, so I won’t be forced to use Flatpaks except in exceptional cases. Fortunately, some major pieces of software have their own repos, or there are alternate installation methods that solve this problem.

More recently, the “I want the most recent KDE” requirement changed to “I want to stick to KDE5 for some time, but it has to be (almost) the latest available one”; this automatically disqualified the rolling-release distros, plus Fedora, which forced KDE6 on its users. Nothing is solid enough in a point-zero version; KDE4 was almost a complete failure (that’s when I used XFCE); KDE5 became stable enough for my taste starting with version 5.4; I don’t see myself using KDE6 before 6.3.

As a side note, KDE neon, which at the time was based on Ubuntu 22.04 LTS, had too much outdated software, and only KDE was the latest one. Besides, I always found KDE neon to be somewhat on the buggy side, so it never came as a sound choice.

As for my hardware:

  • Acer TravelMate P645 (TMP645-S-51CH), MFG 2016/01, i5-5200U @2.20-2.70 GHz (TDP 15W), Intel HD Graphics 5500, 8GB RAM, SATA SSDs: 256GB Kingston (slow!) + 1TB Samsung 860 EVO, Realtek ALC282 audio, Intel 7265NGW WiFi/BT combo.
  • HP ProDesk 400 G6 Desktop Mini PC (44G38ES), MFG 2021/07, i5-10400T @2.00-3.60 GHz (TDP 35W), Intel UHD Graphics 630, 8GB RAM, SSDs: NVMe (Gen. 3) 256GB Kioxia (Toshiba) + SSD 2TB WD Red SA500, Realtek ALC3205 audio, Intel AX201 WiFi/BT combo.
  • Acer Aspire 3 A315-59-32MA (NX.KBCEX.003), MFG 2022/08, i3-1215U up to 4.40 GHz (2 p-cores) + 3.30 GHz (4 e-cores) (TDP 15W), 16GB DDR4, SSDs: NVMe (Gen. 3) 256GB SSD Kingston + NVMe (Gen. 4) 1TB SSD Kingston NV2, Realtek ALC256 audio, Mediatek MT7663 WiFi/BT combo.

The last one, a cheap and dirty laptop, has MT7663-based Wi-Fi and BT, which means that kernels such as 5.14 (EL9 clones) and 5.15 (Kubuntu 22.04 LTS) would not support it, but kernels such as 6.1 (kernel-lt from ELRepo for EL9) or 6.2 (Ubuntu 23.04) worked just fine. We’re talking about when I purchased that laptop, but EL9 still runs on 5.14, obviously. And Kubuntu 24.04 LTS is now available.

TBH, I still tried openSUSE Tumbleweed for a while. It’s a strong distro, and it doesn’t break that often, but no matter how hard I tried, I could not fall in love with it.

When AlmaLinux 9 seemed a great idea

This might surprise you, but AlmaLinux, despite being a clone of RHEL, in spite of the well-known scandal, and despite the difficulties of obtaining the source code, is actually a solid choice, even for desktops and laptops. I explained here why Rocky Linux is not something I could trust.

Arguments in favor of AlmaLinux 9:

  • It’s as mainstream as one could get. It’s RHEL for those who don’t want to pay for it. It’s stable. You’ll have major software applications built for it.
  • Whoever needs a newer kernel can use kernel-lt (6.1) and kernel-ml from ELRepo.
  • EPEL is a great source of extra software (more about it, below).
  • AlmaLinux Synergy brings even more software (e.g. dnfdragora).

If we consider the official EL9 packages as a sort of “semi-immutable” system, in the sense that, especially for people who don’t use GNOME, we don’t care that much about outdated packages, then we’ll discover EPEL to be quite an up-to-date repo, despite being built for an extremely conservative distro!

A few examples:

  • EPEL has constantly updated its KDE offerings, currently having 5.27.11, on par with Kubuntu 24.04 (which made an effort to ship a version newer than Debian sid’s!), whereas Debian 12 is stuck with 5.27.5 (sid: 5.27.10).
  • EPEL has many other packages in their latest versions, and in a timely manner. E.g. it has featherpad 1.4.1, whereas Debian 12 has 1.3.5. Upstream, featherpad 1.4.1 was released on June 12, 2023, and EPEL9 got it on August 4. In contrast, Debian only got it in sid on Sept. 6, and in testing on Sept. 12.

For some apps, EPEL is hit-or-miss. I had to ask them to include krename, and they did! As I said, AlmaLinux Synergy is another place where one can ask for the inclusion of extra packages.

All in all, with a couple of apps built from sources (XnConvert, XnViewMP), installed from RPMs (IBM Plex Fonts), or from the vendor’s specific offerings (BeyondCompare from RPM, Calibre from .sh), I could get most of what I wanted in AlmaLinux. What went wrong then, and when?

When the EL9 model really broke AlmaLinux for me

Stupid me, I never thought of this before. You see, when RHEL and its clones got updated, for instance from 9.2 to 9.3, or from 9.3 to 9.4, the kind of upgrade that takes place is totally different from, say, Debian getting updated from 12.3 to 12.4, or from 12.4 to 12.5.

In the case of Debian, the minor versions are there to offer you updated ISO files with all the patches to date. If you applied all the updates as they came, you’re good. There’s no real upgrade, just new ISOs, so fresh installs need not download gigabytes of updates. Ubuntu also releases updated media: 20.04 LTS is now 20.04.6, and 22.04 LTS is now 22.04.4.

But in EL, these point upgrades are real upgrades. Sort of. Because 99% of the packages are still the same as the latest updates from the previous point release, but some packages are upgraded to newer versions. Examples (not counting the upgrades mandated by security issues):

  • From 9.2 to 9.3, the updates included, e.g.: GCC went from 11.3.1 to 11.4.1, Valgrind from 3.19 to 3.21, elfutils from 0.188 to 0.189, Redis from 6.2.7 to 7.0.12, Apache HTTP Server from 2.4.53 to 2.4.57, and more.
  • From 9.3 to 9.4, the updates included, e.g.: PHP from 8.1.27 to 8.2.13, nginx from 1.22.1 to 1.24.0, MariaDB from 10.5.22 to 10.11.6, PostgreSQL from 15.6 to 16.1, and more.

Some exceptions (examples):

  • Node.js got updated from 16.19.1 through 16.20.2 during the lifetime of EL9.2 (with devel going from 18.12.1 through 18.18.2), so some packages really get upgrades (remember, to them even incrementing the minor version counts as an upgrade) during the same point-release. What criteria are used to select “the few chosen”?
  • Additions can happen, too. Most notably, in EL9 the default Python implementation is Python 3.9 (python3). However, since EL9.2, Python 3.11 is also available as python3.11, and since EL9.4, Python 3.12 as python3.12.

The problem with this model is that it creates a strange creature out of RHEL (and its clones):

  • As stable (or fixed) as Debian for 98-99% of the packages, which never get upgraded. If a package was included in version x.y.z, it will stay at version x.y.z, with patches added on top of z. Remember how such LTS distros don’t even bother to prefer an LTS kernel, and instead keep patching the one they released with? Well, the same happens to other packages too. Dumbos.
  • As volatile as any non-LTS release for 1-2% of the packages, which get upgraded to newer versions roughly twice a year.

That’s schizophrenia. And the impact hits whoever uses EPEL (not counting RPM Fusion), simply because there are no separate “EPEL9.3 for EL9.3” and “EPEL9.4 for EL9.4” repos. So during the transition from e.g. 9.3 to 9.4, this is what happens, in sequence:

  • RHEL 9.4 is released.
  • For a while, EPEL only contains packages built and tested for RHEL 9.3, so some breakage might exist for people running RHEL 9.4 and its clones.
  • EPEL rebuilds packages for RHEL 9.4. Now, people using RHEL 9.3 (or AlmaLinux 9.3, Rocky Linux 9.3) might experience some breakage in EPEL, because the older packages built for 9.3 are unfortunately no longer on the mirrors. Any fresh install of EL 9.3 will have major EPEL dependency issues, because EPEL only contains packages built for EL 9.4, with no fucking archived packages for EL 9.3, because it doesn’t keep branches!
  • EL9 clones release their 9.4 editions; AlmaLinux 9.4 was the first to be released. Everyone should upgrade their systems ASAP if they want to use EPEL!

Of course I need EPEL. That’s the source of KDE for EL, since those fucktards broke GNOME and stopped providing KDE.

But why push all those upgraded packages all of a sudden, instead of gradually? Using this schizophrenic model, you get the DRAWBACKS of both models:

  • The drawbacks of a LTS distro, which is to have most packages frozen in obsolete versions.
  • The drawbacks of point-release distros that release twice a year, which is to have the risk of getting broken systems twice a year!

Obviously, this risk is almost always related to using EPEL. Still, only a retard would design such a thing!

I have experienced EPEL breakages in the past, but I never thought this through. Indeed, most of them happened around the time EL incremented the minor version of the distro. I blamed EPEL, when I should have blamed Red Hat!

Remember that I mentioned a critical opinion about Manjaro, regarding their delaying of packages. When Manjaro is out of sync with the versions of the libs expected by the packages in the AUR (or by the real packages in Chaotic-AUR), the situation is similar to your installed EL (clone) being out of sync with the version EPEL has built packages for!

But there is a solution to this, only … nobody’s using it! It’s called … CentOS Stream! No kidding.

CentOS Stream is demonized because people don’t understand what it actually is. CentOS Stream is not like Fedora! CentOS Stream 9 is not the rolling-release path to CentOS Stream 10! Nope, it’s the rolling-release path to CentOS Stream 9.4, 9.5, and so on!

That means it will get those frigging updates as they appear, not all in bulk every six months. And there’s a special rolling-release of EPEL that is always in sync with CentOS Stream! It’s called EPEL Next (epel9-next).
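On CentOS Stream 9, enabling this pairing is straightforward; if I’m not mistaken, the epel-next-release package pulls in the extra repo definition:

```shell
# On CentOS Stream 9: enable EPEL plus its Stream-synced companion repo.
sudo dnf install -y epel-release epel-next-release

# Both the epel and the epel-next repos should now be listed:
dnf repolist | grep -i epel
```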

I understand that enterprise people don’t use CentOS Stream because it’s only supported for a shorter period, which isn’t known exactly beforehand. For instance, CentOS Stream 8 is supported for less than 5 years (released 2019-09-24, EOL 2024-05-31), and CentOS Stream 9 is estimated to have a supported lifecycle of more than 5 years (released 2021-12-03, EOL estimated for 2027). This is the only major drawback of CentOS Stream, not its “instability” due to “being ahead” of RHEL. It’s not that awesome, but there are some valid arguments in its favor.

Whatever the model, there are still 3 bad scenarios:

  • CentOS Stream can break if you’re using packages built specifically and tightly for a specific version of some library that’s in a version of RHEL, and Stream is ahead.
  • RHEL (and clones) can break if a minor version upgrade just happened, you did upgrade to it, but the very sensitive 3rd-party software did not yet. (3rd-party vendors would understandably be slower than EPEL.)
  • RHEL (and clones) just released a minor version upgrade, and you hold back for fear of either such a breakage or a general breakage of your system. But if you do not upgrade from e.g. 9.3 to 9.4, you will not get security updates; in fact, you won’t get any updates at all!

This is completely fucked-up! When e.g. a new Ubuntu release is out, be it a normal one or an LTS one, the previous one is still supported for a couple of months (much longer for LTS). Similarly, any Fedora release is supported until one month after the release of the second subsequent version (version x+2). But RHEL doesn’t support e.g. 9.3 for a single day after the release of 9.4! Same for EPEL. No branches, no nothing!

This made me stop unconditionally believing in AlmaLinux. The bad design of RHEL is the culprit, and I was an idiot for not realizing earlier how broken it is.

(Obviously, this affects all of RHEL’s clones. Reported on Rocky Linux: EPEL broken… now what? The right answer: “Suggest waiting until Rocky 9.4 is released. This is unfortunately what happens when there is a new RHEL release that EPEL follows. Nothing you can do until Rocky 9.4 is released.”)

How did I come to this realization? Just a couple of days before AlmaLinux 9.4 was released, I noticed that KDE cannot be installed from EPEL on a fresh install of 9.3. Thinking that EPEL has a temporary breakage, I filed a bug: EPEL Bug 2279322 – kf5-libkdcraw-23.08.5 is not installable, making “KDE Plasma Workspaces” not installable. Then, I added a post to AlmaLinux’s forum; the initial title was different, but now it reads FYI: EPEL9: “KDE Plasma Workspaces” only installable in 9.4, not in 9.3.

As a side note, in the process of discovering the truth (I hope this doesn’t sound like Jehovah’s Witnesses!), I ranted, and I vented my frustrations a lot in that forum thread. Some snowflake made the system hide the 17th post, then another admin restored it!

What makes me really furious about this versioning crap in EL and its clones is that I really had issues with such upgrades!

  • After upgrading from 9.2 to 9.3, my new laptop got a black screen with no way to continue to anything. My mini-PC, surprisingly, had no issues, despite having a lot more software installed on it!
  • After upgrading from 9.3 to 9.4, my new laptop got two problems. The easier one concerns LC_ALL and is not specific to EL, but the unsolvable one is that no keyboard with dead keys can be used anymore! In fresh installs, all keyboard layouts work, but in the upgraded system, no matter what I tried, the ” and ‘ cannot be entered at all in KDE (and neither can composed characters be obtained) using the US international keyboard with dead keys layout! (Even in KDE, dead keys work in Firefox, as it’s not a Qt app.)

This is utterly unacceptable. What is this, Ubuntu? I mean, how can we blame Ubuntu for being buggy, when this can happen in a distro that doesn’t update much of anything? I suppose the bug is in EPEL’s KDE, but it’s subtle enough to only happen on upgraded systems, not in new installs.

It took me some time to find the origin of the dead keys bug. You need to read the section below to understand everything.

LC_ALL and the fix I was looking for

Let’s move to a really minor issue, yet one that some people keep encountering across several distros. It usually appears on KDE systems, and in specific setups: those whose users choose a system language that’s different from the language spoken in the country they’re in, yet want to use that country’s other locale settings!

If you’re confused, think of this set of settings:

  • Language: US English (because I don’t want to be told to write colour, labour, metre).
  • Keyboard: US International with dead keys + German + Romanian.
  • Locale settings:
    • Euro for currency (Europe!)
    • Metric system for paper (Europe!)
    • Dot for decimal separator (not comma, despite being in continental Europe!)
    • dd-mm-yyyy for date (continental Europe!)

In Windows 7, I used to select “English (Ireland)” for the language, as a shortcut towards the proper currency, the metric system, and the date. Not optimal, but a quick fix. For a while, I also used the “UK Extended Layout” (English UKX), which allowed me to enter some French characters with the proper accents. Well, those were the days.

Back to KDE, here’s an older screenshot with my settings from Kubuntu:

And one from AlmaLinux:

The themes are similar because I use a separate home partition (a second SSD, actually), so my settings survive the distro-hopping.

Note that KDE is a bit buggy here: the currency is shown with the wrong decimal separator! The correct one is under “Numbers” and it will be honored by the system.

To be frank, the only distros that have shown no warnings or errors whatsoever were Manjaro and Fedora, if my memory serves me well. There can be two kinds of warnings.

First at the KDE level. Usually, “Cannot find an example for this locale”:

Then, the most annoying ones are at the command prompt, and they can be issued by just about any command:

bash: warning: setlocale: LC_ALL: cannot change locale (en_DE.UTF-8)
/usr/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_DE.UTF-8)
/usr/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_DE.UTF-8)
/usr/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_DE.UTF-8)

That one is really annoying! But whoever says (and they do!) that “en_DE is not a proper locale” should go fuck themselves, then go fuck themselves again, fucking retarded shitheads that they are!

Let me explain for such retards, because all kinds of wrong advice can be found on the Internet of Tards: a locale such as en_DE.UTF-8 has the following meaning:

  • The language to be used is the one from the first part, here English.
  • The specific localized parameter that receives this value, be it one of LC_NUMERIC, LC_TIME, LC_COLLATE, LC_MONETARY, LC_MESSAGES, LC_PAPER, LC_NAME, LC_ADDRESS, LC_TELEPHONE, LC_MEASUREMENT, LC_IDENTIFICATION, or maybe LC_ALL or LC_CTYPE, is to take the settings from the second part, here Germany (Deutschland).

Simply put, let’s live like in Germany, with the exception of the system language being English!

Unsurprisingly, most distros do not build all such combinations:

en_DK exists, despite English not being an official language in Denmark!

I still wonder how I managed not to encounter this situation on various occasions in the past. Most recently, AlmaLinux 9.3 KDE did not issue any LC_ALL warning, but updating to 9.4 and a new KDE build from EPEL led to such warnings. I suppose the EPEL team doesn’t always build the same locale sets?!

Either way, the solution is to generate the missing locale from the country’s settings.

● For RHEL clones and Fedora:

sudo dnf install glibc-locale-source
sudo localedef -i de_DE -f UTF-8 en_DE.UTF-8

Then check the result:

localectl list-locales | grep en_DE

The long version of the relevant command, should anyone be curious:

sudo localedef --inputfile=de_DE --charmap=UTF-8 en_DE.UTF-8

● For Debian, MX, Kubuntu, Manjaro, and more:

sudo localedef -i de_DE -f UTF-8 en_DE.UTF-8
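Checking the result works the same everywhere. Note that on Debian-based systems, a locale generated this way may be wiped when the locales package itself is upgraded, so the command might need re-running after such an upgrade:

```shell
# Verify that the custom locale is now known to the system:
locale -a | grep -i 'en_DE'
```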

Now the proper examples should be available:

The address is not the best example, but the idea is that it’s been generated

How about the dead keys not working in KDE’s apps, which I experienced on one laptop after upgrading from AlmaLinux 9.3 to 9.4 (an upgrade that also brought a new KDE from EPEL)?

I’m still not sure how the previous KDE build from EPEL didn’t have any problems with this setting, but the facts are these:

  • I didn’t do anything but upgrade.
  • The only GUI apps that failed to register the dead keys were KDE’s! (Even FeatherPad was affected, although it’s only Qt-dependent. But Firefox and everything GTK-based were just fine.)
  • I noticed that in Konsole the locale command still reported everything as en_DE.UTF-8, even after having changed some LC_* locales to C in System Settings, which is another way to get metric and European values for LC_MEASUREMENT, LC_PAPER, and LC_TIME.

The last point made me investigate. This KDE-only behavior was impervious to whatever changes I made to /etc/environment, /etc/locale.conf, /etc/locale.gen, or to my user’s profile files, so even if I put the standard en_US.UTF-8 everywhere, it didn’t matter! Heck, even resetting everything to default (American English, matching the language) in KDE’s System Settings didn’t change what locale reported in Konsole! WTF?!

The key was a parasite LC_ALL=en_DE.UTF-8 entry in ~/.config/plasma-localerc.

Who the fuck put it there?! The localedef command did not set any value for LC_ALL in plasma-localerc; I ran it again to make sure it didn’t do that. And the real problem is that KDE’s System Settings can only set individual LC_* values in plasma-localerc, but it cannot set or reset LC_ALL, which is beyond stupid! (At least, this is what happens in 5.27.11 from EPEL.)

Here’s the bad ~/.config/plasma-localerc that I had, even after having switched everything (but the language) to C in KDE:

[Formats]
LANG=en_US.UTF-8
LC_ADDRESS=C
LC_ALL=en_DE.UTF-8
LC_MEASUREMENT=C
LC_MONETARY=C
LC_NUMERIC=C
LC_PAPER=C
LC_TELEPHONE=C
LC_TIME=C
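
Since System Settings wouldn’t touch that entry, I had to remove it by hand. A one-liner such as this does it (then log out and back in for Plasma to pick up the change):

```shell
# Delete the parasite LC_ALL line from KDE's per-user locale config:
sed -i '/^LC_ALL=/d' ~/.config/plasma-localerc
```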

Here’s a good one, as it should have been:

[Formats]
LANG=en_US.UTF-8
LC_ADDRESS=de_DE.UTF-8
LC_MEASUREMENT=de_DE.UTF-8
LC_NAME=de_DE.UTF-8
LC_PAPER=de_DE.UTF-8
LC_TELEPHONE=de_DE.UTF-8
LC_TIME=en_DE.UTF-8

And, should I tinker a bit more with the regional settings in KDE, the result should include an unconfigured LC_ALL:

[ludditus@weed ~]$ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME=C.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=C.UTF-8
LC_NAME=C.UTF-8
LC_ADDRESS=C.UTF-8
LC_TELEPHONE=C.UTF-8
LC_MEASUREMENT=C.UTF-8
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=

Now I have finally fixed AlmaLinux 9.4 KDE on my laptop! 🙂

The funny thing is that occurrences of dead keys not working were usually reported in the past for GNOME or GTK (in Fedora 36; in Ubuntu 20.04; in Ubuntu Budgie 20.10), although occasionally there’s some guy saying that the issue only occurs on KDE. What I believe is that internationalization is a complete mess in Linux, even after all these years of UNICODE and shit. If it’s not ASCII or C, it’s complicated. What is this, 1972?

When the Linux kernel screws NTFS

Before commenting on other distros, let’s trash a bit the child of Linus Torvalds and his minions. Remember how I said that I used to be happy with kernels 1.2.13 and 1.3.18? Well, back then, NTFS wasn’t for everyone’s use—now it is.

I’m sorry to say, but the quality of the Linux kernel is abysmal. Given its complexity, I doubt that they really know what’s inside. It just happens to work while, on the one hand, being constantly updated with new features and drivers, and on the other hand, being constantly patched.

Take the K-series Keychron keyboards. I happen to own and use the Keychron K3 Ultra-slim Wireless Mechanical Keyboard (German ISO-DE layout), version 2.

Prior to Linux 5.19, the Fn keys weren’t working in Linux, because the kernel believed it to be a Mac keyboard, regardless of the mode the keyboard reported. On the other hand, starting with kernel 5.18, such a Bluetooth keyboard doesn’t reconnect after suspend. Restarting the BT service fixes the issue (there is a script that unloads the btusb module, restarts the Bluetooth service, and reloads the module again). This has been patched in kernel 5.19.2, but the regression showed up again in kernel 6.1.0.
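The workaround script mentioned above boils down to something like this (a sketch; the exact service name may vary per distro, bluetooth.service being the usual one):

```shell
#!/bin/sh
# Work around the post-suspend Bluetooth reconnection regression:
# unload the btusb module, restart the Bluetooth service, reload btusb.
sudo modprobe -r btusb
sudo systemctl restart bluetooth
sudo modprobe btusb
```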

Older regressions include the situation of my old laptop’s audio jack, which uses Realtek’s ALC282 chip. In mid-December 2020, a kernel patch was added for the Acer TravelMate P648/P658 series laptops, which only have one physical headset jack. My P645 has two separate physical jacks for headphones and microphone, but the new patch breaks it, because it singles out a P648/P658 laptop via a code line that also matches my P645, to which the patch shouldn’t be applied! (See chapter two of this post.) The way this shit is structured, I can’t find out which kernel version first included it, but ever since, Linux cannot detect when I insert a headphone jack in that laptop.

Every single time when they touch the kernel, they break something. Let me show you a big one!

I have a number of external SSDs and HDDs that were NTFS-formatted. They’re used to save comics, movies, music, e-books, stuff. Now I’ve migrated most of them to exFAT, but why do you think this happened?

Prior to kernel 5.15, the NTFS support was in userspace (FUSE). To access NTFS drives in AlmaLinux with the original 5.14 kernel, I had to install ntfs-3g from EPEL. This driver is slow, but reliable.
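For the record, on post-5.15 kernels you may have to be explicit about which driver you want, because the in-kernel driver registers the ntfs3 filesystem type. Pinning the FUSE driver looks like this (the device name, mount point, and UUID are placeholders):

```shell
# One-off mount that explicitly uses the FUSE driver (ntfs-3g),
# not the in-kernel ntfs3 driver:
sudo mount -t ntfs-3g /dev/sdb1 /mnt/external

# Or pin it permanently in /etc/fstab (the UUID is a placeholder):
# UUID=0123-ABCD  /mnt/external  ntfs-3g  defaults,uid=1000,gid=1000  0  0
```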

But then I installed the 6.1 kernel. And life went on as usual (I never had any issues with the 6.1 kernel line as far as ext4 and exFAT are concerned, and I still don’t).

At some point, I saved on an external NTFS drive a complex hierarchy of folders, the leaf ones having lots of files (comics, documentaries, magazines, etc.). It’s exactly for such folders, with up to 3,000 files, that I need the Compact List View in a file manager; given that Files (Nautilus) is the only one that lacks it, GNOME is unusable (why would I use it with another file manager, when I could use another desktop environment altogether?).

Well, as I later found, after having deleted the original files, the “copy” only included the folder hierarchy, with absolutely no files! And no error was displayed whatsoever.

It wasn’t a bug in Dolphin. It was in the kernel, which has included Paragon’s NTFS driver without enough testing. NTFS is not that important to Linux, after all.

Such a bug has been reported, but it was never officially acknowledged and never investigated.

From a ComputerBase forum thread from June 2023, translated from German:

In fact, the Linux kernel NTFS driver is obviously broken. By the way, it’s called “ntfs3”. It should not be confused with the old kernel driver “ntfs” (read only) or with the FUSE driver “ntfs-3g”. You can read more about it here:

https://www.reddit.com/r/archlinux/…3_driver_keeps_corrupting_ntfs_filesystem_on/
https://randthoughts.github.io/i-tried-paragons-ntfs3-and-it-didnt-go-well/
https://bugs.launchpad.net/ubuntu/+source/evince/+bug/2000626
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2019153

It is completely irresponsible to leave the driver in the Linux kernel. I have no idea how such a dangerous bug can survive for so long. Maybe the victims can’t be bothered to write a “bug report”? Apparently there is currently no one responsible for the module, because the manufacturer “Paragon” apparently withdrew after disclosing the sources.

PS: I also find the “corrupt” files thing a bit funny. 🙂

Let’s take a look at some such reports. On Reddit:

I have noticed for a while that ntfs3 driver sometimes corrupts the filesystem after a write or rename operations. Does not happen always, but from time to time. I can re-boot into Windows and “repair” the disk, and the files are back. I lost quite a few files before I noticed this.

When used with the old ntfs-3g fuse filesystem, I didn’t have such problems. I’ll revert back to using ntfs-3g as a short-term workaround.

A reply:

you are not the only one, yesterday this happened to me for the 2nd time, a minute everything was fine then the next one you notice one or several folders are missing, you check with ls and nothing yet you can cd into the folders if you know the global path. The solution? go to windows and use chkdsk. Its really frustrating. Also an hdd here.

From the same thread:

Same problem with 6.1 LTS. I had problems with ntfs-3g so I switched. And now it’s the revert: so I switched back to ntfs-3g.

Voilà. From I tried Paragon’s ntfs3 and it didn’t go well:

Recently I’ve upgraded my laptop to Fedora 37 which includes, along with kernel 6.x, the new ntfs3 driver. So I decided to modify the /etc/fstab entry for the Windows partition to use the new, fast ntfs3 instead of the old, slow ntfs-3g. Everything’s been fine up until today, when I noticed that a folder disappeared from the NTFS filesystem. The weird thing is that the directory wasn’t visible with ls or any GUI file manager, but I could still cd into it. Similarly, enclosed files weren’t visible with ls, but could still be accessed by programs, knowing the filenames. For example, my torrent client could still seed existing files from there. This behavior screamed only one thing: filesystem corruption. So I booted into Windows to schedule a chkdsk run, which indeed fixed the thing.

I also encountered the same behavior in the said folder structure! Unfortunately, only a small part of the files could be recovered!

Also on Reddit:

I’ve had problems with it like an entire folder disappearing and having to repair in Windows to get it back multiple times so I wouldn’t recommend using it.

Had filesystem corruption from the Paragon driver, too, unfortunately. It was fixable, and I recovered all of my data, but after that I’ve stayed with NTFS-3G. It’s a shame because Paragon tends to be very reliable on Mac and Windows in my experience, but not on Linux. It’s clear they put their support where the money is, and I can’t really blame them.

Well, paragon driver on Mac caused lots of unrecoverable file corruptions on my ntfs drive. They weren’t really critical files but it’s annoying.

Yup, I’ve had repeated lock-ups only solvable by reboots (leading to fs corruption) due to the ntfs3 driver. I shrugged, wrote it up to the driver still being somewhat beta-y, and switched back to ntfs-3g.

How was this shit included in the Linux kernel?

At some point in 2022, Paragon’s ntfs3 driver seemed orphaned:

Since the driver was finally mainlined last year in Linux 5.15, there haven’t been any major bug fixes to be sent in for the driver. The driver, which started out as a proprietary driver by Paragon Software, has seen a few fixes towards the last year within Paragon’s Git tree but never submitted to mainline. Attempts by other developers to reach the NTFS3 maintainer have been unsuccessful.

… Some have hypothesized the lack of NTFS3 driver maintenance may be fallout from the Russia-Ukraine war with Russian developers involved.

Linus Torvalds commented on the thread yesterday.

If you are willing to maintain it (and maybe find other like-minded
people to help you), I think that would certainly be a thing to try.

And if we can find nobody that ends up caring and maintaining, then
I guess we should remove it, rather than end up with two effectively
unmaintained copies of NTFS drivers.

Not that two unmaintained filesystems are much worse than one :-p

The Reg had it too: Problems for the Linux kernel NTFS driver as author goes silent.

But then… NTFS3 Kernel Driver Sees Fixes Sent In For Linux 5.19. And a bug that got fixed: Kernel Bug 214833 – NTFS3: junctions are not properly resolved.

However, this bug reported for Ubuntu is newer, from May 2023: ntfs3 kernel driver corrupts volume during file move operations. From November 2023: Apparently empty files when writing to NTFS partition.

Nice. This definitely gives you trust in using the ntfs3 driver included in the post-5.15 Linux kernels!

Also, take a look at the many commits to this driver, all by the user “aalexandrovich”, who is supposedly a guy called Konstantin Komarov, but who uses an e-mail address under a completely different name: almaz.alexandrovich@paragon-software.com.

Does anyone remember the xz thing? What if this is something similar?

My decision was to disable this fucking piece of shit. Here’s how to replace ntfs3 with ntfs-3g:

  • Edit /etc/modprobe.d/blacklist.conf
  • Add this line: blacklist ntfs3
  • Reboot or unload the driver: sudo modprobe -r ntfs3
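
Condensed, and made idempotent, the same steps look like this (a sketch of system configuration; blacklist.conf is the conventional file name, but any *.conf under /etc/modprobe.d/ works):

```shell
# Append the blacklist entry only if it isn't there already.
grep -qxF 'blacklist ntfs3' /etc/modprobe.d/blacklist.conf 2>/dev/null \
  || echo 'blacklist ntfs3' | sudo tee -a /etc/modprobe.d/blacklist.conf
# Unload the driver now instead of rebooting; harmless if it isn't loaded.
sudo modprobe -r ntfs3 2>/dev/null || true
```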

You should then make sure the ntfs-3g driver is in use by checking that NTFS partitions are mounted as fuseblk:

ludditus@chenille:~> mount |grep sdb
/dev/sdb1 on /run/media/ludditus/fast64ntfs type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)

ludditus@chenille:~> lsmod|grep ntfs
ludditus@chenille:~> 

I was in openSUSE Tumbleweed when I tested this, hence the stupid prompt.
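
If you want to script that check instead of eyeballing mount output, /proc/mounts carries the same information. Here’s a tiny helper (hypothetical; the function name is mine): the fstype field reads fuseblk when ntfs-3g is in charge, and ntfs3 for the in-kernel driver.

```shell
# Classify which NTFS driver backs a mount, given one /proc/mounts line.
# Fields: device mountpoint fstype options dump pass
ntfs_driver() {
  set -- $1                      # word-split the line into its fields
  case "$3" in
    fuseblk) echo ntfs-3g ;;     # FUSE block mount = ntfs-3g
    ntfs3)   echo ntfs3 ;;       # Paragon's in-kernel driver
    ntfs)    echo ntfs-legacy ;; # the old, mostly read-only kernel driver
    *)       echo "not NTFS ($3)" ;;
  esac
}
```

Usage: something like `ntfs_driver "$(grep /dev/sdb1 /proc/mounts)"`.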

But, of course, Modern NTFS Driver Sees Bug Fixes With Linux 6.10. As if I could trust it. Comment number four:

Does this driver still cause regular data corruption?

Whose bugs are these?

I hope this is a bug in Manjaro’s kernel, not upstream, but I just booted the Manjaro 24.0 KDE Live ISO, and its kernel 6.9.0-1-MANJARO did not identify my Wi-Fi! Thankfully, Kubuntu 24.04 did; its kernel 6.8.0-31-generic is generic enough to include the proper support for MT7663.

I also hope that the bug in ELRepo’s kernel-ml for EL9 (I tested 6.9.1), which is the same as Manjaro’s (the Wi-Fi cannot be seen), isn’t upstream’s either. ELRepo’s kernel-lt for EL9, currently 6.1.91, works perfectly, as their 6.1 line always did.

But let’s be frank: this must be an upstream bug. I can’t know whether the MT7663 support in the 6.9 kernel is broken, or whether the kernel’s defaults are broken, so whoever builds it has to fine-tune something. And I don’t care about such details. When two different distros (although ELRepo isn’t a distro) reach the same results, then it must be an upstream bug. Especially as ELRepo’s 6.8.9 kernel was just fine.

Not to mention that the support for MT7663 in the Linux kernel, no matter the version, is so poor that the device cannot be woken up after a hibernation. Nope. A full shutdown is required if one ever wants to have Wi-Fi again. And I tried all kinds of scripts. This reminds me of how, many years ago, Linux was unable to activate a Broadcom Wi-Fi on a dual-boot machine if Windows had hibernated it. The usual fanboys blamed Windows, but Linux was to blame: if you cannot fucking reset a device from whatever state it’s in, then you don’t have a full driver for it! Take this analogy: you pretend to be able to drive a car, but you cannot do it if the previous driver left it with the emergency brake on. You don’t know how to turn it off. Who’s to blame?

Could it be that the Linux kernel is now some sort of Boeing, quality-wise? You know, Linus Torvalds is a US citizen, who, in his greed for money, moved to the US in 1997. Greed leads to nasty things. Finnish modesty and moderation are not what defines him anymore. And this fucking kernel is a pile of dung. The only reason people are using it is that Windows is a complete fuckup.

Now let me show you some random kernel bugs, based on my browser’s history. On LWN.net, on November 25, 2021: Stable kernel 5.15.5: “The 5.15.5 stable kernel has been released. As usual, it contains lots of important fixes throughout the kernel tree. Users should upgrade.”

That’s standard blurb; however, one user (it wasn’t me, I promise!) had to say this:

Should we also eat faeces?

Bug 215137 – Since 5.15.5 asking for cache data fails on one disk

I’m so bloody tired of LWN editors posting this crap with each “stable” kernel update.

And don’t get me started on how numerous “stable” updates have contained serious regressions. QA/QC in the kernel has always been an effing joke. The only kernel you can rely on comes from RedHat.

Please, stop.

Another comment by the same guy:

Writing or mentioning “must update” or “should upgrade”. This is wrong, bad and dishonest. You’re relaying an opinion of someone who doesn’t give a flying …. whether this or that update has been properly tested and whether it’s regression-free as seen by a constant stream of “Revert(ing)” in “stable” kernel logs. Yes, most of such reverts are for previous “stable” patches.

Another guy this time:

Really, “users” shouldn’t be using upstream “stable” kernels at all. Fixes are pulled from master into “stable” at high rates with minimal testing. Regular severe regressions can only be expected.

E.g., Linus pulled the SA_IMMUTABLE changes that broke debugging into 5.16 master on Nov 10. Those changes pulled into 5.15.3 and released just 8 days later. How much testing or other scrutiny could there have been? It’s sheer luck that 5.16rc1 was released on Nov 14, Kyle ran the rr test suite against it a few days later and discovered everything was broken.

I suggest that the upstream “stable” kernels be relabeled “maintenance” and disclaimers attached to indicate that they are not suitable for production use. I think making quality downstream’s problem is a mistake, but since that’s how it is, better be honest about it.

Then how about Btrfs and kernel 5.16? Reported on Manjaro: Obviously high SSD I/O usage from btrfs-cleaner after upgrading to Kernel 5.16.

Finally, Bcachefs Multi-Device Users Should Avoid Linux 6.7: “A Really Horrific Bug”. Bcachefs was supposed to be “The COW filesystem for Linux that won’t eat your data”…

It’s a never-ending story. What do you want your kernel upgrade to break today? Yes, I know, we’re drowning in code, but this crazy complexity is exactly the reason why code shouldn’t be pushed into the kernel at this insane rate!

The Linux kernel team uses CVEs as a bug tracking system

From the Risky Biz News newsletter for June 5, 2024: The Linux CNA mess you didn’t know about:

The Linux Kernel project was made an official CVE Numbering Authority (CNA) with exclusive rights to issue CVE identifiers for the Linux kernel in February this year.

While initially this looked like good news, almost three months later, this has turned into a complete and utter disaster.

Over the past months, the Linux Kernel team has issued thousands of CVE identifiers, with the vast majority being for trivial bug fixes and not just security flaws.

Just in May alone, the Linux team issued over 1,100 CVEs, according to Cisco’s Jerry Gamblin—a number that easily beat out professional bug bounty programs/platforms run by the likes of Trend Micro ZDI, Wordfence, and Patchstack.

Ironically, this was a disaster waiting to happen, with the Linux Kernel team laying out some weird rules for issuing CVEs right after the moment it received its CNA status.

We say weird because they are quite unique among all CNAs. The Linux kernel team argues that because of the deep layer where the kernel runs, bugs are hard to understand, and there is always a possibility of them becoming a security issue later down the line. Direct quote below:

Note, due to the layer at which the Linux kernel is in a system, almost any bug might be exploitable to compromise the security of the kernel, but the possibility of exploitation is often not evident when the bug is fixed. Because of this, the CVE assignment team is overly cautious and assign CVE numbers to any bugfix that they identify. This explains the seemingly large number of CVEs that are issued by the Linux kernel team.

While this looks good on paper, the reality is that other projects also manage similarly sensitive projects, but they don’t issue CVEs for literally every bug fix. You don’t see Intel and AMD issuing hundreds of CVEs with each firmware update, even if their CPUs are where the Linux kernel runs.

These projects vet reports to confirm that bugs pose a security risk before issuing a CVE and triggering responses with their customers, such as inventory asset scans and emergency patch deployments.

Instead, the Linux Kernel team appears to have adopted a simpler approach where it puts a CVE on everything and lets the software and infosec community at large confirm that an issue is an authentic security flaw. If it’s not, it’s on the security and vulnerability management firms to file CVE revocation requests with the precise Linux Kernel team that runs the affected component.

The new Linux CNA rules also prohibit the issuance of CVEs for bugs in EOL Linux kernels, which is also another weird take on security. Just because you don’t maintain the code anymore, that doesn’t mean attackers won’t exploit it and that people wouldn’t want to track it.

The Linux team will also refuse to assign CVEs until a patch has been deployed, meaning there will be no CVEs for zero-days or vulnerabilities that may require a longer reporting and patching timeline.

The new rules also create a confusing process of validating, contesting, and rejecting CVEs. I’m not going to go into all of that since the venerable Brian Martin did a way better job back in February. Open Source Security’s Bradley Spengler shared a real-world example last week of why the entire process of analyzing, validating, and revoking Linux CVEs is now a giant clusterf**k of confusion and frustration. We quote him:

To say this is a complete disaster is an understatement. This is why CVEs should be for vulnerabilities, should involve actual analysis, and should provide that information in the CVE description, as any other responsible CNA would be doing.

While Linux maintainer Greg Kroah-Hartman tried to justify the team’s approach to its new CVE rules, as expected, this has not gone down well with the infosec community.

Criticism has been levied against the Linux Kernel team from everywhere, and there have been some calls for the Linux team to reconsider their approach to issuing CVEs.

The new rules were criticized right from the get-go. The likes of Katie Moussouris, Valentina Palmiotti, Ian Coldwater, Bradley Spengler (again and again), Adam Schaal, Tib3rius, the grsecurity team, the GrapheneOS team, and a whole bunch more, foresaw the disaster that is currently unfolding.

And if this isn’t bad enough, the Linux kernel team appears to be backfiling CVEs for fixes to last year’s code, generating even more noise for people who use CVEs for legitimate purposes.

Some described the Linux team’s approach as “malicious compliance” after the project was criticized for years for downplaying vulnerability reports and contesting CVEs assigned to its code by other CNAs. That may not be the case, as the new approach has some fans who see its merits, such as forcing more people to upgrade their kernels on a more regular basis.

The Linux CNA intentionally adopts an overly cautious approach and assigns a new CVE when in doubt. While this may surprise many, it is a perfectly legitimate and entirely honest strategy. In contrast, vendors of proprietary software often tend to take the opposite approach, minimizing the assignment of CVEs whenever possible. Effectively managing the substantial number of CVEs involves understanding your kernel configuration, having a clear threat model, and ensuring the ability to update the kernel as needed. I hope that other large projects will eventually adopt Linux’s approach.

Unfortunately, all of this CVE spam could not have come at a worse time. Just as the Linux Kernel team was getting its CNA status, NIST was slowing down its management of the NVD database—where all CVEs are compiled and enriched.

NIST cited a staff shortage and a sudden rise in the number of reported vulnerabilities—mainly from the IoT space. Having one in every five CVEs be a Linux non-security bug isn’t helping NIST at all right now.

Fucking retards. Could we change the name of Linux into, if not Clusterfuck, at least Fustercluck?

OpenSUSE: some thoughts

As hard as I tried, I couldn’t fall in love with any version of openSUSE, although I tried a few times to stick to Tumbleweed (one time via GeckoLinux). It’s by no means a bad distro, but it doesn’t appeal to me. Even without using 3rd-party repos (not even Packman) I experienced breakages, usually specific to a given repo, so the issue was in the repos syncing more than in the packages themselves. Thankfully, zypper is rather smart; however, I never liked YaST2, and I never will.

The things I don’t like in openSUSE include:

  • It comes with a non-configured sudo, which means it will ask for the root password (see here and here). They say it’s a security feature. I say they’re dumb.
  • You cannot launch a GUI app via sudo, because it doesn’t export DISPLAY=:0.0. One more time, they don’t care about the users.
  • Of course, even after configuring everything, you cannot launch Kate via sudo: “Running Kate with sudo can cause bugs and expose you to security vulnerabilities. Instead use Kate normally and you will be prompted for elevated privileges when saving documents if needed.” Yeah, we know that. I prefer Featherpad, which asks for my password on save (if sudo is configured). But why can’t I fucking do whatever I want? What is this, Windows?
  • Their overly restrictive defaults mean that you cannot install packages that replace existing ones, e.g. from Packman (ffmpeg gstreamer-plugins-{bad,ugly} libavcodec vlc-codecs), without adding the “--allow-vendor-change” flag (sudo zypper dist-upgrade --from "Packman Repository" --allow-vendor-change). This is ridiculous. No other distro has this many lock-ins by default!
  • I couldn’t find any proper documentation for the colon in the repository names (“it’s used to separate the alias from the subdirectory,” whatever that means). What’s the difference between Kernel:/ALP-current:/ and Kernel:/ALP-current/? How about the difference between M17N:/ and M17N/? They differ in contents, but why, and what’s the meaning of “:” in such contexts?
  • The set of available repos is a complete mess. For instance, XFCE comes from a separate repo. Security updates for XFCE, from one more repo. Should you need extra fonts such as IBM Plex, you need to add one of the M17N repos mentioned above. It’s great to be able to host dozens of repos, even from third parties (to offer what PPAs and COPRs offer to other distros), but the mess is complete. And then you dare to hope that you won’t run into a dependency hell? Hell, no!
  • Unless you want to stick to a host name like “p200300f63f171897c44b50a377165447” (that was a real one!), you must set it yourself at the CLI (sudo hostnamectl set-hostname name). It’s a complete shame for a major distro to leave things this way.
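
As for the vendor-change dance, the full Packman recipe looks roughly like this (a sketch; the gwdg mirror is one of several, and the priority of 90 is the commonly suggested value: in zypper, a lower number wins, and the default is 99):

```shell
# Add Packman with repo check (-c), autorefresh (-f) and priority 90 (-p),
# then move the already-installed packages over to it in one go.
sudo zypper addrepo -cfp 90 \
  'https://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/' packman
sudo zypper --gpg-auto-import-keys refresh
sudo zypper dist-upgrade --from packman --allow-vendor-change
```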

Even if I had so little to do with my life that I could waste my time understanding the chaos in openSUSE’s repositories, this distro might have no future, except as an immutable one!

The latest update on the future of openSUSE Leap confirms that there will be a release called Leap 16 at some point, alongside a version 6 of the existing Leap Micro … but version 16 will be based on SUSE’s containerized ALP distribution. This means that Leap 16 is destined to be an immutable distro with transactional updates, and thus significantly unlike the current openSUSE distribution.

If Leap goes this way, I suppose this means that openSUSE Tumbleweed will at some point stop being what it is. What a sad end for openSUSE. Of course, they’ll also have an RHEL clone, or fork, or … vaporware? They announced that fork almost a year ago, and I still don’t see it!

A last note. Not long ago, there was something unique to openSUSE Tumbleweed: it was the only distro to always have the latest version of Calibre, usually within 24 hours! Most distros didn’t have the latest PyQt6 bindings (which are a complete mess anyway). Meanwhile, Tumbleweed’s OSS repo lags behind by quite a lot, and the distro having the most recent Calibre is Fedora! (With the caveat that each Calibre update rests a bit too long in Updates Testing. But Manjaro is even slower to update Calibre…)

Debian: thoughts and rants

Debian was always a stable, albeit boring, distro. Unspectacular, but solid. It is probably still a good choice for some, but I personally believe that it’s somewhat moribund, which makes me worry about the future of Ubuntu.

First, let me say that it underwhelmed me when I last used it on a virtual server. I chose a Debian 11 image, only to have to find 3rd-party repos for a newer PHP, for newer JS libraries that required newer versions of some other packages, and in the end I got really pissed off. All those fucktards who require the latest version of every single piece of shit, otherwise their latest piece of vomit won’t build! Debian 12 was in the making, so Debian 11 was quite obsolete. Sigh.

Every single distro has its stupid bugs. While I was trying it on my newer laptop, I noticed that the “contrib” repo was not enabled. I don’t remember whether “non-free-firmware” was enabled or not, and I’m not sure that this bug is still valid: “The Debian 12 version of the Software & Updates GUI tool has a significant unhandled bug that prevents it from enabling the Debian Security updates repository.” Probably not.

The real problem is that on the said laptop, I ended up with the CPU governor set to “performance” instead of “powersave”; as a result, the fan was making the noise of a Tupolev: even though the CPU load was very low, the frequency was around 3.3 GHz instead of, say, 800 MHz. WTF?!
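
For the record, the governor can be checked and fixed by hand through sysfs (a sketch, assuming a kernel with cpufreq support; cpupower frequency-set -g powersave does the same thing if the cpupower package happens to be installed):

```shell
# What governor is each core using right now?
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c
# What governors does the active cpufreq driver offer at all?
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
# Switch every core to powersave (root needed; tee writes to all matched files).
echo powersave | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```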

Moreover, cpupower-gui, which existed in Debian 11, is not available in Debian 12, despite being still in sid! What the fucking fuck?!

This is not a new thing, unfortunately. With every new Debian stable version, there are packages that are discontinued. Left aside, outside the distro. And this usually affects Ubuntu, too. (I’m not sure that Mint fixes such missing packages for Ubuntu, or MX for Debian. Most of the time, they don’t.) This is the first sign of a distro that decays and goes towards a slow death.

Another example of Debian being a senile dinosaur: featherpad 1.4.1 was released on June 12, 2023, but Debian only got it in sid on Sept. 6, and in testing on Sept. 12. This led to the ridiculous situation that it was too late for Ubuntu 23.10, which shipped with version 1.3.5. The Lubuntu team made an effort (I wrote them an e-mail), so Ubuntu 24.04 includes featherpad 1.4.1, although by the normal process, they shouldn’t have, as the snapshot they took from Debian testing didn’t have that version. To see why and how this is ridiculous: EPEL9 got featherpad 1.4.1 on Aug. 4, and mxrepo, which can be used by Debian 12 users, has it since June 14.

When I noticed that Debian failed to notice the new version (meanwhile I had it on my AlmaLinux via EPEL), I went to the package’s page, and there it said:

  • maintainer: LXQt Packaging Team (archive) (DMD)
  • uploaders: ChangZhuo Chen (陳昌倬) [DMD] – Alf Gaida [DMD] [DM] – Andrew Lee (李健秋) [DMD]

I e-mailed ChangZhuo Chen (陳昌倬) czchen@debian.org:

mitropoulos.debian.org rejected your message to the following email addresses:
zchen@debian.org 
Remote server returned '550 5.0.350 Remote server returned an error -> 550 Unrouteable address'

Oops! Since Featherpad is the default text editor in LXQt, I went to their pkg-lxqt-devel mailing list archive (alioth-lists.debian.net/pipermail/pkg-lxqt-devel/). As you can see, it’s full of spam instead of real messages! How is this not a dying dinosaur?

As there is no Debian 13 to backport from, Debian 12 still has Featherpad in version 1.3.5.

Oh, I forgot: Debian 12 has Mousepad 0.5.10, whereas EPEL9 and Ubuntu 24.04 have version 0.6.1. One more effect of Debian stable never updating its packages…

Another revolting example: Debian 12 has Transmission 3.0, released in May 2020, despite Transmission 4.0 being released in February 2023. It could have included it, as Debian was released in June, but no, a dinosaur cannot be that agile.

I almost forgot another example. I have never used fswebcam, but I noticed that it exists in EPEL8, but not in EPEL9. Its latest version is fswebcam_20200725, and EPEL8 has it. Debian 12 has fswebcam_20140113. WHAT?!? Six years behind!!! That means Ubuntu 24.04 LTS (yes!) and Mint ship the same version from 2014, and so does MX. They’re all based on the moribund Debian.

BTW:

Q: Is there security support for packages from backports.debian.org?
A: Unfortunately not.

Oh. In this context of a distro that fails to keep up with the changes, guess who’s the new Debian Project Leader? Why, it’s Andreas Tille! Take a look at his platform as a candidate:

For me, among other things, freedom means not being available at all times. That’s why I decided against owning a smartphone, for instance. Therefore, it is important for you to know that as your potential DPL, there may be times when I am offline and cannot be reached. I value freedom deeply, and I am grateful for the privilege of making choices that are sound with my values.

Oh, fuck. If you value freedom that much, go hide under a rock instead of becoming the DPL!

Let’s now talk about KDE in Debian. No, I won’t switch to Debian testing, as I don’t want KDE6 to be forced upon me when it’s available. I also don’t like the way Debian testing is simply frozen when the team is busy preparing the next Debian release. And let’s not talk about the security patches. Testing is stable enough (it’s not sid, and it’s not experimental), but it’s not a distro.

There used to be a way to get newer KDE builds into Debian stable: the work of Norbert Preining. In January 2022 he announced he would stop packaging for Debian; he last packaged KDE 5.24 in February, and that was the end of it. Oh, wait, the real end came in November 2022, with the updates to KDE 5.24.7 (repo plasma524, 5.24 being an LTS version of KDE) and 5.25.5.

Why would he do such a thing? Because he was repeatedly abused by several people; I remember how, back in 2019, Martina Ferrari called him names because he wasn’t aware that Martina Ferrari, born Martin Ferrari (with XY chromosomes), is now a she/her. The Debian community is toxic. The transsexuals and the non-binary persons contributing to Debian are aggressive and nasty people. Norbert Preining could not offend the Japanese while living and working there for years, but somehow he wasn’t courteous enough towards a few mentally insane Debian contributors!

To put it bluntly: yes, I am transphobic and non-binary-phobic! There are hermaphrodites, or people born with sexual dysmorphia, who really need medical help to improve their lives. But such spoiled, pampered assholes who believe they’re entitled to whatever they want, such perfectly sane males who want to have a vagina, and such perfectly healthy females who want to have a dick—such persons deserve no respect. Similarly, considering oneself non-binary is a mental disorder. Simone de Beauvoir and Pierre Bourdieu talked about gender as a social construct; then Michel Foucault wrote a lot more about sex, gender and society, and entire generations of American retards, encouraged by Judith Butler, now believe that they can have another gender than one of the two biological sexes that exist! Such “privileged” people (with privileges granted by themselves!) require “safe spaces” so that whatever their current lunacy is (“my pronouns for today are…”), everyone else pampers them, and agrees with them. How about such “safe spaces” for the rest of the world, for those who don’t elevate the mental insanity to a legitimate new norm, you fucktards? We deserve respect in the first place! There is progress, and there is madness. Not every change is progress. Not every illusion deserves respect.

This episode really deters me from using Debian. Let it be for those who are gender-fluid.

Speaking of “gender is not sex” (indeed, gender is a grammatical category in many languages), all those fucktards who ask for “a third gender” (X, which is neither M nor F) in their passports or ID cards, and even the authorities who approve such a mentally insane demand, seem to have failed to notice that identity documents list a person’s sex, not gender! And sex is biology, meaning there are only two: male and female. Legally, if a person is born with sexual characteristics belonging to both sexes, then that person is a male because “they” have a penis. End of story.

As a side exercise, let’s compare GNOME’s and KDE’s codes of conduct. GNOME’s is called “Creating a Welcoming Community” and it includes, in about 1250 words (without the licensing and the end links), specific mentions of:

  • gender expression
  • gender identity
  • sexual identity
  • deliberately referring to someone by a gender that they do not identify with, and/or questioning the legitimacy of an individual’s gender identity.
  • “inappropriate touching, groping, or sexual advances.”
  • “Unwelcome physical contact. This includes touching a person without permission, including sensitive areas such as their hair, pregnant stomach, mobility device (wheelchair, scooter, etc) or tattoos. This also includes physically blocking or intimidating another person. Physical contact or simulated physical contact (such as emojis like “kiss”) without affirmative consent is not acceptable. This includes sharing or distribution of sexualized images or text.”
  • The GNOME community prioritizes marginalized people’s safety over privileged people’s comfort, for example in situations involving:
    • “Reverse”-isms, including “reverse racism,” “reverse sexism,” and “cisphobia”.
    • Reasonable communication of boundaries, such as “leave me alone,” “go away,” or “I’m not discussing this with you.”
    • Criticizing racist, sexist, cissexist, or otherwise oppressive behavior or assumptions.
    • Communicating boundaries or criticizing oppressive behavior in a “tone” you don’t find congenial.

Wow. Marginalized because they’re lunatics, because they pretend they’re neither male nor female, because “she” and “he” are not enough, so they invent a new grammar? Such people should be anti-marginalized, and their “safety” is a priority to GNOME! (So that’s why Files cannot have a compact list view, because the priorities are different!) As for the so-called safety, some snowflakes cannot tolerate words. Nobody wants to hurt them physically. But nobody cares about their pronouns.

Nice thing that they mentioned cisphobia. Everyone is cis. There is no other way. If you’re male, you should consider yourself a male. If you’re not cis, you consider yourself either female or non-binary. And this means you’re nuts.

In contrast, the KDE Community Code of Conduct, despite having about the same length, is a whole lot different:

  • Sex is only mentioned once: “We do not tolerate personal attacks, racism, sexism or any other form of discrimination.”
  • Gender isn’t mentioned.
  • No mentioning of sexism, cisgender, cisphobia, tattoos, drinking, emojis like “kiss” etc.

Let’s hope it stays this way.

MX Linux: nope

MX Linux isn’t bad, but it’s overrated. There’s no way it’s as popular as Distrowatch’s rankings suggest. This aside, I dislike it. Esthetically, it just doesn’t appeal to me, and I cannot explain why.

It has some good parts:

  • Extra repos to fix what Debian (stable) doesn’t have, especially updated packages.
  • Lots of goodies: MX Tools (mx-user, mx-service-manager, etc., and even mx-snapshot is not a bad idea), MX Tweaks, preinstalled Timeshift and luckyBackup, even an installer praised by some.

However,

  • Their anti-systemd stance is stupid. They default to non-systemd, which breaks a lot of software nowadays.
  • They also have their own bugs not present in Debian. They change too much, and customize too much.
  • Speaking of customizations, their KDE and XFCE ones are both ugly, and different from one another.
  • Also, while some of their tools are tremendously useful, the overall impression is that they try too hard. It’s overwhelming.
  • I had issues with MX Repo Manager. Selecting the fastest Debian repo and the fastest MX repo, despite being two different operations, sometimes broke the repo list and made one of the repos either slow or broken.
  • Despite the initial experience being rather good, after some updates (from Debian), I ended up with the CPU governor set to “performance” instead of “powersave”; that’s a battery-killing choice for a laptop. If at least they had cpupower-gui preinstalled…

This being said, there isn’t anything severely wrong with MX Linux. I just don’t like it, I don’t feel we’re compatible.

Ubuntu and its flavors: a few notes

● The “theological problem in Ubuntu” is that not only does it force snaps on you, but it has replaced Firefox and Thunderbird with their snap versions. But guess what?

“Mozilla Team” continues to maintain ppa:mozillateam/ppa, with firefox, firefox-esr and thunderbird even for Ubuntu 24.04 LTS.

I try to avoid both snaps and Flatpaks, and each of them is different: Flatpaks are better suited for GUI apps, but they’re a PITA for the developers; snaps are easier for the developers to create and maintain, and they’re better suited for non-GUI processes; both are different from AppImages, which are portable applications distributed as FUSE filesystem images. Either way, it’s not worth making such a fuss about snaps. I had my time hating them; now I’m much less resentful. My major gripe with snaps is their ridiculous auto-update mechanism that can’t even be disabled.
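
To be fair, snapd did eventually grow a hold mechanism (since snapd 2.58, if memory serves), although it’s a hold, not a true off switch; older versions could only postpone refreshes:

```shell
# Hold refreshes for all snaps, indefinitely (snapd 2.58 and newer):
sudo snap refresh --hold
# Or hold just one snap:
sudo snap refresh --hold firefox
# Older snapd could only postpone updates to a future date, e.g.:
sudo snap set system refresh.hold="$(date --iso-8601=seconds -d '+60 days')"
```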

● There is also a more recent problem: “the GDebi issue,” which is another way to force snaps on people. OMG Ubuntu: Fix ‘No App Installed for Debian Package’ in Ubuntu 23.10:

When you double click on a DEB package in Ubuntu 23.10 an error appears to say: “there is no app installed for ‘Debian package’ files”.

Thing is: the Ubuntu App Center can install deb packages from the Ubuntu archives but can not install local deb packages, at present at least. I imagine this feature will be added in a future update (there’s an open issue about it and it hasn’t been closed, which bodes well).

The fix is to install GDebi (gdebi) and to associate it with .deb files once it’s been used to install such a package. Obviously, dpkg can also be used to install a .deb at the CLI level.
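
For reference, the CLI equivalents (the package name is a placeholder):

```shell
# Preferred: apt resolves the .deb's dependencies (note the ./ prefix):
sudo apt install ./some-package.deb
# Old-school: dpkg installs it raw, then apt fixes missing dependencies:
sudo dpkg -i some-package.deb
sudo apt -f install
# Or bring back the GUI route:
sudo apt install gdebi
```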

People tend to be more hysterical than I am: I AM SO DISAPPOINTED WITH UBUNTU 24.04 😡

The all caps headline is deliberate. I am all enraged.

Disappointment is a sober word here. I am actually pissed at the casual arrogance of Ubuntu and its parent company, Canonical.

Please excuse the language, but I just cannot express my disgust any other way.

What’s the issue? Well, Ubuntu 24.04 LTS is here and it will stay for more than the usual 5 years of long-term support period.

For years, Ubuntu has been promoting its ‘universal packaging format’ Snaps over ‘Universal Operating System’ Debian’s deb packages.

But it has gone to the next level in the recent release of Ubuntu 24.04 LTS.

Ubuntu’s official app store won’t install deb files.

Actually, it started with the previous version Ubuntu 23.10.

If you download a .deb package of a software, you cannot install it using the official graphical software center on Ubuntu anymore.

When you double-click on the downloaded deb package, you’ll see this error, ‘there is no app installed for Debian package files’.

If you right-click and choose to open it with Software Center, you are in for another annoyance.

The software center will go into eternal loading. It may look as if it is doing something, but it will go on forever. I could even livestream the loading app store on YouTube, and it would continue for the 12 years of its long-term support period 😕

And this is not just a simple bug they forgot to fix. They deliberately ignored it.

Canonical has no intention of fixing it.

An issue was opened on GitHub in Sep’23 when Ubuntu 23.10 was in beta. Canonical did not fix it for Ubuntu 23.10 release.

And six months later, Ubuntu 24.04 LTS is released and Canonical has still not fixed this bug.

That’s because the so-called bug for us, the public, is most likely a feature crafted by them, Canonical folks.

Otherwise, an Ubuntu developer wouldn’t say that they “didn’t have capacity to work on it”.

This is a lame excuse. Who would believe that? Not me, for sure.

It’s a stalling tactic. The bug was noticed by users for months and yet, it was not fixed.

The entire point of these intermediate releases in-between the LTS ones is to test things out. Clearly, there was no intent to fix it, while the end users, especially the new users, kept struggling with this error.

Ubuntu is cleverly booting out deb in favor of Snap, one baby step at a time.

By the way, before Ubuntu 20.04, you could just double-click on the deb files, and it would be opened in the software center for installation. Starting with Ubuntu 20.04, this behavior was cunningly changed.

Double-clicking the deb file would open it with Archive manager.

… …

Let’s put things straight:

Canonical wants to force people to believe that snaps are the best thing since sliced bread. Still, .debs are not being discontinued, and nobody prevents anyone from installing GDebi and restoring the traditional behavior.

Now let’s get to specific flavors, as I tested them in version 24.04 from the Live ISOs.

Ubuntu

I cannot use the default Ubuntu, because I cannot use GNOME, for various reasons, including the retarded file manager that can’t have a Compact List View, and the GNOME Shell Activities Overview, which seems designed for tablets. Still, let’s report a few quirks.

There’s an inconsistency regarding the locations to download the desktop ISO images from. Why is the desktop image only here: https://releases.ubuntu.com/24.04/ and not also here: https://cdimage.ubuntu.com/ubuntu/releases/24.04/release/?

The other desktop flavors have locations following a pattern that doesn’t exist for the main one, i.e.:

and so on.

Then, the main Ubuntu flavor is only surpassed in size by Ubuntu Studio, which means that it’s bloated:

  • 6.7G for ubuntustudio-24.04-dvd-amd64.iso
  • 5.7G for ubuntu-24.04-desktop-amd64.iso
  • 5.0G for ubuntukylin-24.04-desktop-amd64.iso
  • 4.9G for ubuntucinnamon-24.04-desktop-amd64.iso
  • 3.9G for ubuntu-mate-24.04-desktop-amd64.iso
  • 3.8G for kubuntu-24.04-desktop-amd64.iso, xubuntu-24.04-desktop-amd64.iso, ubuntu-budgie-24.04-desktop-amd64.iso
  • 3.1G for lubuntu-24.04-desktop-amd64.iso

Practicalities:

Ubuntu MATE

  • Text files are not associated with Pluma. WTF?!
  • GDebi is preinstalled, which is a plus.
  • Evolution fails to start, just like in the main flavor.
  • webcamoid is stupid and doesn’t work, or I couldn’t find something to click on that makes it do something different from this shit:
  • Dark themes still screw Qt apps, just like in the previous versions of this distro flavor:
  • Also, notice how Dolphin has words instead of the toolbar icons for Icons, Compact and Details view modes. This happens in all Ubuntu flavors that are not Kubuntu. I suppose there’s an optional dependency that’s not pulled in when installing Dolphin.

Xubuntu

Possibly the ugliest XFCE ever. I don’t remember how ugly XFCE’s themes were 20 years ago, but nowadays, we’d expect better defaults. This is why I’d rather use XFCE in Mint or Manjaro, where XFCE is configured to a specific visual identity, or even in Fedora, where it’s easier to configure the layout thanks to the preinstalled xfce4-panel-profiles utility.

Also, the aforementioned CVE-2021-32563 case revealed that you can have someone who is the Xubuntu Technical Lead and an Xfce Core Developer, yet who was absolutely unaware of anything and everything:

It’s ugly and I cannot trust it. Still, some practical aspects:

  • It’s still the ugliest default XFCE theme of all, and the available themes aren’t satisfactory either. Years ago, I liked XFCE very much, but nowadays it looks like a dinosaur, even more of a dinosaur than MATE! Funny how it uses Engrampa and Atril from MATE, and how many of its components never reached version 1.0: Mousepad, Ristretto, etc. Pathetic.
  • GDebi is installed, which is a plus.
  • Thunderbird works.
  • Another relic that makes it annoying. I’ll quote from The Reg on resizing windows: “We’ve also heard from people who find the very skinny window borders of Xfce make windows hard to resize.”
  • Shutdown took too long with the Live ISO; actually, I guess it failed, as I had to keep the power button pressed until the machine actually powered off.

● Linux Lite—the contender

A completely different distro that only tracks the LTS releases, but a legitimate alternative to Xubuntu. In the past, I liked it at some point for its looks and convenience, but then I had a disagreement with Jerry Bezencon, Linux Lite’s leader. Either way, about Lite 7.0:

  • In the Live ISO, Chrome cannot be launched; some permissions are missing. I suppose it works in the installed system, but I wonder why nobody’s complaining that the live system cannot be used to browse the net unless Firefox is installed.
  • Either way, to default to Google’s spyware called Chrome is a bit of a strange choice.
  • And Linux Lite is anything but light in terms of speed and RAM usage.
  • Finally, I’m still unable to find the sources for their specific packages. For instance, Linux Lite comes with the excellent system-monitoring-center, whose last (and final) version is 2.26, but Jerry Bezencon is using a strange versioning that happens to have reached 6.8.1. The binary is here, but the source for this 6.8.1 thing seems to be well hidden. Bad juju.

● Kubuntu

  • Double-clicking on a .deb opens the QApt package installer (qapt-deb-installer).
  • KMail is not preinstalled, but Thunderbird seems to work fine.
  • They made an effort to include a KDE version that’s even newer than in Debian sid: Plasma 5.27.11, KF 5.115.0, Gear 23.08.
  • They also include the latest versions of apps that are older in Debian, such as (discussed above): featherpad is in version 1.4.1, transmission (-qt, -gtk) is in version 4.0.5.

Lubuntu

  • Double-clicking on a .deb opens the QApt package installer (qapt-deb-installer).
  • No mail client installed in the Live ISO.
  • As in most cases from the past, because of upstream releasing LXQt at the worst possible moments, the included version of LXQt is obsolete: 1.4.0 (2.0.0 was released on April 15). Note that even upstream, pcmanfm-qt is still in version 1.4.1, so maybe it’s not that tragic to stay behind as far as LXQt is concerned. Is LXQt 2.0.0 more like KDE6, i.e. unfinished?
  • Some Lubuntu-specific apps are in the latest version (featherpad 1.4.1).

Ubuntu Unity

  • It should be, out of the box, more usable than the GNOME-based Ubuntu, because it uses Nemo instead of Nautilus. However, it’s impossible to enable Nemo’s menu (in the global menu), unless you run gsettings set org.nemo.window-state start-with-menu-bar true, but then it will show up all the time. (Nemo’s menu is required to access its Preferences.)
  • Probably to avoid Ubuntu MATE’s botched theming of Qt-based apps, Ubuntu Unity doesn’t bother to theme Qt apps at all: they’re on a white background, even though Unity uses Yaru Dark.
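
The menu-bar workaround mentioned above can be flipped both ways once Preferences has been adjusted; a sketch, assuming Nemo’s org.nemo.window-state schema is present (it ships with Nemo):

```shell
# Show Nemo's menu bar (needed to reach Edit -> Preferences):
gsettings set org.nemo.window-state start-with-menu-bar true
# ...then hide it again afterwards, if you don't want it permanently:
gsettings set org.nemo.window-state start-with-menu-bar false
```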

Ubuntu Cinnamon

  • 24.04 must be a broken release, as it simply doesn’t detect any inserted USB drive (other than the already mounted Ventoy that it booted from), and it also lacks a Devices section in Nemo’s left panel. That’s strange, because Linux Mint 21.3 Cinnamon works just fine (Mint 22 has an unknown ETA).
  • Otherwise, and this is a Cinnamon “defect by design” (not Ubuntu’s): should I replace the icon-only Grouped window list with the more traditional Window list, I get that idiotic design of window titles displayed at a too-narrow, fixed width: the width is the same in a 3-window situation as if there were 8–9 windows. Clement Lefebvre, the genius. Not.
Why can’t I see more from the longer window titles on the taskbar?
From 8 windows upwards, that’s justifiable

Linux Mint: by no means

When Mint was in its early days, I liked it for a while. I tried to come back to it more than once while I was an XFCE type of person. As I explained here, I dislike Cinnamon for its poor design decisions (one of which you can see in the screenshots above). Now that I’m into KDE, I cannot use Linux Mint, simply because they don’t support KDE anymore!

Another thing I dislike about Mint is the fact that it’s only based on Ubuntu’s LTS editions. So you can have the latest Cinnamon, but on the base of obsolete apps, which means that when backports don’t help, Flatpaks or snaps should be used. So Mint is sort of “neon for Cinnamon” 🙂

Linux Mint has a significant number of good parts, though:

  • Uniform theming among desktop environments (although it’s bland and less attractive than in Manjaro, and unbelievably ugly even for a boomer).
  • Preinstalled Timeshift.
  • Mint Tools, something that many users appreciate.
  • X-Apps developed by Mint: Xed, Xviewer, Xreader, Xplayer, Pix.
  • Warpinator, a tool that can be used in other distros too.
  • Hypnotix, an IPTV player.
  • A significant user base, hence an important community that can help one another on the forums.
  • Being based on LTS has the advantage of stability (except for Cinnamon!).

And yet, Mint takes too long to switch to the next Ubuntu LTS, once it’s available. Mint is really a dinosaur.

A few words on KDE

The idea is that KDE is far from perfect, but it’s the most usable desktop nowadays. Remember when I explained how Qt (and therefore KDE) fixed a bug in the Linux kernel? The exFAT renaming bug. And PCManFM-Qt did not fix it, because it’s such a superficial port of PCManFM that the Qt part is only for the GUI; file operations still use GTK. Oh, boy, what a fucked-up thing LXQt is! Why did they all run away from LXDE? (And let me remind you that PCManFM-Qt’s developers have refused to add the Ctrl-1, Ctrl-2, Ctrl-3 and Ctrl-4 shortcuts that would make it behave like PCManFM, “because the view buttons are on the toolbar now.” Assholes.)

Here’s another thing where Dolphin is the only major file manager that works as expected, and all the others just suck: when canceling a file copy operation!

❓ What do you expect when you’re using a graphical file manager to copy a large file to a slow destination, and you change your mind, so you hit the Cancel button? Mind you, this is not a CLI utility; you don’t hit CTRL+C, you don’t kill it.

❗ The file manager should cancel the copy operation gracefully, by issuing an fclose() followed by a remove(), so that nothing remains in the destination. This is what Windows has been doing since, like, forever.

😮 Not so in the open-source land. Here, only Dolphin does that!
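
What that graceful cancel amounts to is just a few lines. Here’s a POSIX sh sketch of the idea; this is my illustration, not any file manager’s actual code:

```shell
# copy_clean: copy a file, but never leave a partial destination behind.
# On failure or interruption, the (possibly incomplete) target is removed,
# which is what Dolphin does and the GTK file managers don't.
copy_clean() {
    src=$1 dest=$2
    trap 'rm -f "$dest"; exit 130' INT TERM    # user hit Cancel / Ctrl-C
    if ! cp -- "$src" "$dest"; then
        rm -f "$dest"                          # partial copy: clean it up
        trap - INT TERM
        return 1
    fi
    trap - INT TERM
}
```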

I tested using a 4.4 GB (4.1 GiB) file to be written to a slow Flash drive, so I had the time to cancel the operation in the progress dialog box. This is what I got:

🟢 Dolphin canceled the copying right away, and it left nothing in the destination.

🔴 Nautilus/Files canceled the copy after a very long time, waiting for the OS to flush to disk, and it left a broken 4 GB file in the destination!

🔴 Nemo canceled the copy after a very long time, waiting for the OS to flush to disk, and it left a broken 4 GB file in the destination!

🔴 Caja canceled the copy after a long time, waiting for the OS to flush to disk, and it left a broken 2.1 GB file in the destination!

🔴 Thunar canceled the copy after a long time, waiting for the OS to flush to disk, and it left a broken 2 GB file in the destination!

🔴 PCManFM canceled the copy after a very long time, waiting for the OS to flush to disk, and it left a broken 4 GB file in the destination!

🔴 PCManFM-Qt canceled the copy after a long time, waiting for the OS to flush to disk, and it left a broken 2 GB file in the destination! Knowing that PCManFM-Qt is still using GTK for file operations, the differences from PCManFM came as a surprise. Maybe the message from the UI was processed faster.

Who were the shitheads who wrote such code? Who needs a broken, incomplete file in the destination folder?

The problem with such broken, incomplete files is that people would assume they’re complete, valid files! “Oh, so those files got copied, great.” Except that they weren’t!

These fucktards write such retarded code, yet they earn more money than I do. Fucking stupid software developers.

Caja
Nemo
Thunar
PCManFM

Speaking of Caja and Nemo’s “Duplicating” text, I do have a criticism for Dolphin, though. It suffers from a bit of poor code logic that I only discovered by chance.

Say you want to overwrite a large file on a filesystem that’s tight in space. The above example of writing large ISOs on a Ventoy flash drive is perfect. I accidentally tried to copy that 4.4 GB file over itself while the free space was low, and Dolphin said that there wasn’t enough space!

Again, who the fuck is writing code nowadays?! The correct implementation should have been:

  • First, check whether the destination includes files with the same name as some of those to be copied, and ask the user if they want to overwrite them or to write the files under new names.
  • Only after the user makes this decision, and only then, calculate the necessary space and possibly decide that the operation is impossible because of a lack of space!

Obviously, this would be very time-consuming if an entire folder hierarchy were to be copied, but when only one file is copied and such a message is displayed, this is 100% stupid and 100% a lie! Only if the user decides to rename the file would the space be insufficient!
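
The accounting I’m describing fits in a few lines of shell (GNU stat assumed; an illustration of the logic, not Dolphin’s code): when overwriting, the existing file’s bytes count as reclaimable space.

```shell
# fits_after_overwrite FILE DESTDIR
# Succeeds if FILE would fit in DESTDIR, counting an existing same-named
# file as reclaimable space (overwriting frees those bytes first).
fits_after_overwrite() {
    src=$1 destdir=$2
    need=$(stat -c %s "$src")
    # free space = available blocks x fundamental block size (GNU stat -f)
    avail=$(( $(stat -f -c %a "$destdir") * $(stat -f -c %S "$destdir") ))
    dest="$destdir/$(basename "$src")"
    if [ -f "$dest" ]; then
        avail=$(( avail + $(stat -c %s "$dest") ))  # overwrite reclaims this
    fi
    [ "$need" -le "$avail" ]
}
```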

This obviously assumes that when replacing (aka overwriting) a file, the preexisting file is first deleted. It would be completely retarded to write the new file with a temporary name, only to be renamed on success. This is not filesystem journaling! If someone wants a file replaced, then they don’t need the original file, right? So the above algorithm is the logical one.

Unless you’re writing yourself the entire OS and all the apps that you need, there’s no guarantee that you won’t be using software written by retards, incompetents, or stupid people.

There’s one more thing I like in Dolphin, though. It has to do with Natural Sorting. Remember the times when Windows Explorer sorted files alphabetically, say photo1, photo10, photo2, etc., instead of photo1, photo2, … photo10? Well, that was before natural sorting was invented! Now all file managers use natural sorting, although it can be disabled in Dolphin.

Still, there’s one more question when using such a sorting method that I would call Semantic Sorting: should a filename start with an underscore, where will it be listed: at the beginning of the list (alphabetically), or at the place dictated by the first alphanumeric character, which would be a “fully semantic” approach?

I prefer the first approach, because this is the trick I use to put some files first in a folder. This approach is only used by: Dolphin, PCManFM-Qt, and Windows Explorer.

The “fully semantic” approach is used by everyone else: Caja, Nautilus, Nemo, Thunar, PCManFM:

But how do I make a file show first in the list, other than prefixing it with “aaa” instead of “_”? I’m sticking to Dolphin. And KDE.
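
Both behaviors are easy to reproduce with coreutils sort, whose -V flag performs the same natural (“version”) comparison; byte-wise C collation gives the “underscore first” ordering:

```shell
# Plain byte-wise sort: alphabetical, so photo10 lands before photo2.
printf 'photo1\nphoto10\nphoto2\n' | LC_COLLATE=C sort
# Natural sort: runs of digits are compared as numbers.
printf 'photo1\nphoto10\nphoto2\n' | sort -V
# Byte-wise collation puts '_' (0x5F) before letters, i.e. the
# Dolphin / PCManFM-Qt / Windows Explorer behavior; locale-aware
# collation tends to ignore the punctuation instead.
printf 'notes.txt\n_pinned.txt\n' | LC_COLLATE=C sort
```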

And yet, I fear that KDE might not be headed in the right direction.

Switching to KDE Plasma 5 was a pragmatic decision for me. I might have written this before, or maybe not, but I wasn’t a fan of KDE Plasma 4. It crashed all the time. And I never needed those frigging plasmoids, which were probably inspired by the Windows Desktop Gadgets, something I never cared about. KDE3 wasn’t that bad, and I liked it a lot in the original version of Pardus that used PiSi and KDE3 and was not based on Debian! Otherwise, I was a fan of GNOME2 up to and including version 2.32. I was also using XFCE with the GNOME2-like 2-panel layout.

You see, the 2-panel layout was great in times when the display aspect ratios were 4:3 (800×600 and 1024×768) and 5:4 (1280×1024). Once everyone started to believe that computers are sort of CinemaScope and should have 16:9 screens (although CinemaScope had 2.35:1 and 2.39:1, not 16:9 ≈ 1.78:1, but DVDs have adopted the 16:9 aspect ratio), everything changed for the worse.

This is when the metaphor introduced by Win95 became the most sensible one: a single bottom panel. Somewhere around versions 5.3–5.4, KDE Plasma 5 became solid enough for my taste. And I also used XFCE with a Windows-like panel.

But people (sheeple!) started drooling after the macOS UI metaphor. This is how we got the “global menu bar” and the centered date and time in GNOME3/4x. What’s worse, people wanted and created several Dock implementations or lookalikes.

Unfortunately, the Dock concept of an icon-only representation of an app, be it just a launcher or a running app, has contaminated both Win7 and KDE5!

I detailed in this section why KDE is the best desktop environment at implementing the Win95-style of windows’ titles on the taskbar, better than e.g. Cinnamon. But it’s not the default, as precisely with version 5.4, KDE switched from the classic Task Manager to the Icons-only Task Manager, which was the most anti-ergonomic decision they ever took!

Unless you manually switch back to the full Task Manager, you get the same shit in KDE Plasma 6:

Unless you’re completely retarded (98% of the world’s population is retarded), why would anyone want to waste most of the taskbar’s space? And can you really determine at a glance which icons represent running apps and which ones are just launchers? Really? (Yeah, Discover is the only one that’s not running.)

Obviously, whoever still has a minimally working brain would prefer to get much more information without even moving a finger.

The floating panel in KDE6 is completely stupid. Also, if you don’t need window titles displayed in two rows (a concept introduced by Win98, and great for productivity), you can make the panel smaller in height and skip the stupid floating that only wastes space.

This is the most productive UI ever invented. If someone at Microsoft ever did something beneficial to the humankind, it was the team who created the Win95/Win98 visual metaphor. Too bad that everyone is now shitting on this great gift.

Why use the entire screen width if you, the complete retard, only want to see icons and no text at all? Make it simulate a dock:

But then, why are you using KDE? KDE was meant to implement the good parts of the Win95/Win98 UI concepts, although it gradually started to mimic the bad parts too, especially the icons-only task manager.

You could use a vertical panel, then. This is what Unity and the current GNOME customization in Ubuntu do. This is what MX Linux XFCE does. If the screen has become too wide (read: too short in height for all practical purposes but watching movies), one solution is to get rid of all horizontal panels and use vertical ones. But KDE doesn’t come this way by default. And a vertical panel is stupid, for it really can’t display window titles.

I can only hope that KDE6 will never degrade so much as to remove the features that enhance usability, ergonomics, and productivity. The worst offender is GNOME: the dock is for retards; the application grid with huge icons seems made for tablets or touchscreens (or is it for retards too?); and Files (Nautilus) lacking a Compact List view is the utmost accomplishment in reducing the productivity of intelligent people!

For the time being, my gripes with KDE are minimal. Say, this blurring of KDE’s logout/shutdown and lock screens. Minor issues, generally. Things that I can fix, unlike the abhorrent idiocies of Win10 and especially Win11.

Whitherwards?

All things being considered, “I am too old for this shit”—the shit being distro-hopping. I agree with everyone saying that Linux will never become the future of desktop computing, because the big companies that support Linux don’t care about desktop users. Most Linux distros are servers, containers, and occasionally desktops installed in a VM. The number of desktops installed on bare metal is negligible (those 4% that were reported for 2024) and will remain negligible (say, under 10%).

However, I won’t go back to Win10, which will reach its end of support on October 14, 2025, meaning everyone will have to upgrade to Win11, which is the acme of stupidity. And I can’t use macOS either, for many reasons that I won’t explain here to avoid offending too many people. So it has to be Linux—the *BSD flavors are way behind.

  • I will keep using AlmaLinux with KDE on the mini-PC and on the newest laptop, at least for some time. I don’t have time to waste on shit; otherwise I’d give CentOS Stream a try. But right now I have all the software I need on the mini-PC; only on the laptop might I need some more apps, and I’ll see if they’re installable (or buildable). AlmaLinux is as close to “enterprise” and to Red Hat as possible (in a sense, Rocky Linux is closer, but I dislike them).
  • For the older laptop, surprisingly, I’ll probably install Kubuntu 24.04 LTS, at least for some time. At first sight, it seems to work pretty well. It has the advantage of being synonymous with Linux for those cretinoids who think, “Oh, you need a Linux build; take this .deb for Ubuntu!”

KDE6 is already at 6.1. Its development has been too fast-paced for me to trust its quality. Maybe around KDE Plasma 6.4 I’d be interested in it, especially if lots of apps migrate to Qt6 and KDE6. Then, but only then, I’ll have to make another choice:

  • Will AlmaLinux 10 have KDE6 in EPEL? Will CentOS Stream 10 be a good choice?
  • Will a new Kubuntu, LTS or not, feature a solid KDE6? Will a backport be available to 24.04 LTS?

I forgot to mention that I find it stupid to have KDE Plasma 6 replace KDE Plasma 5 as soon as it becomes available in a distro, instead of letting the user choose which generation to install. They’re based on different Qt generations (Qt6 vs. Qt5). KDE5 is still going to be supported for a while, and I’m pretty sure LTS distros would bother to patch any severe security issue even after upstream stopped supporting it. Why force KDE6 on people? When KDE4 was released, if I’m not mistaken, Debian introduced an Epoch 4 to KDE’s versioning, so they still have versions like 4:5.27.11-1. An Epoch 6 should be used for KDE6, as it’s a completely different product. You know, like Python 3 over Python 2.7. People should be given a choice! Otherwise, it’s like “Win11 will replace Win10, take it or fuck off!”
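
Debian’s version comparison makes the epoch point easy to verify: an epoch outranks anything after the colon, so an epoch-6 KDE6 would always supersede the epoch-4 Plasma 5 packages. With dpkg at hand (version strings below are the real epoch-4 one and a hypothetical epoch-6 one):

```shell
# dpkg --compare-versions exits 0 when the relation holds:
dpkg --compare-versions '4:5.27.11-1' lt '6:6.1.0-1' && echo "epoch 6 sorts higher"
# Without an epoch, even 6.1.0 would lose to the epoch-4 version:
dpkg --compare-versions '6.1.0-1' lt '4:5.27.11-1' && echo "epoch beats upstream version"
```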

Meanwhile, maybe Vladimir Vladimirovich will help us reach the afterlife faster. 💣☢️💥 Or maybe Comrade Xi will rule over us all. 🐉 Nobody knows a fuck about the future, except that it’ll suck big time. That applies to operating systems too!

LATE EDIT: Clement Lefebvre, libAdwaita, and GNOME’s impact on GTK and XFCE

I just forgot about it. The last time I mentioned GNOME’s nefarious effect on the usability of GTK-based apps was, I guess, in 2021. Meanwhile, I cursed a lot each time some random GTK app made me click OK in the top-left corner. WTF, is CSD designed for right-to-left languages? Even if it were so, there is no bottom-to-top language that I know of!

This being said, I noticed, in Clem’s Linux Mint Monthly Newsletter for April 2024, some talk about their XApps and about libAdwaita being “for GNOME only”:

Many Xapps are available in other distributions (Debian, Ubuntu, Fedora, Arch, etc..) but very few distributions actually make use of them.

Take Xubuntu for instance. It used to ship with file-roller, gnome-calculator, evince. These applications moved to libAdwaita (more on that in the next paragraph) and now look completely out of place in Xfce so Xubuntu replaced them with engrampa, mate-calc, atril.

For GNOME-Scan, it couldn’t find alternatives though.

This is GNOME-Scan and Atril side by side in Xubuntu 24.04:

This isn’t ideal for Xubuntu. These applications are installed by default, this is how it looks out of the box.

So on the right you have Atril which looks like all the other apps in Xubuntu 24.04, and on the left you’ve got an app which has nothing to do here and which is designed to integrate specifically with GNOME Shell.

To add to the issue, although MATE apps such as mate-calc work everywhere, they were designed for MATE, so if you open up the application menu you don’t see “Calculator” in your Xubuntu desktop, but “MATE Calculator”.

They have the same problems as us, as MATE, as Budgie, as many other desktops… we made Xapps because we needed them in Mint, in Cinnamon. We didn’t want to make Cinnamon apps, so we made “Linux” apps which worked “everywhere”; we wrote it somewhere and we left it at that.

libAdwaita is for GNOME only

It would be completely unacceptable for us to ship with an application which used its own window controls and didn’t follow the system theme. Looking at it long-term, we also do not want our apps to be designed by people who have no consideration for what is important to us, and whose decisions are motivated by a desktop we don’t even use.

This is File Roller 3.42. This application has always been labeled as “for GNOME”, but it integrated well in any GTK desktop. With File Roller 44 this is no longer the case. It looks just like GNOME Scan in the previous screenshot. It’s not made for MATE, Cinnamon or Xfce and it really shows.

By moving to GTK4/libAdwaita this app really became a GNOME app, an app which looks specifically designed for GNOME and nothing else.

In Mint 22 GNOME Font Viewer was removed and the following applications were downgraded back to GTK3 versions:

  • Celluloid
  • GNOME Calculator
  • Simple Scan
  • Baobab
  • System Monitor
  • GNOME Calendar
  • File Roller
  • Zenity

libAdwaita is for GNOME and GNOME only. We can’t blame GNOME for this, they’ve been very clear about it from the start. It was made specifically for GNOME to have more freedom and build its own ecosystem without impacting GTK.

Adwaita no longer works outside of GNOME

Adwaita (the theme) will be removed from the list of available themes in Cinnamon 6.2.

As you can see the theme provides icons for some categories (Internet, Accessories, etc.) but not others. Many icons are missing, the desktop looks completely broken and it’s not a bug, it’s a feature. The direction Adwaita is taking is to only support GNOME and nothing else.

It would be OK if we could remove Adwaita or not ship with it, but we can’t. GTK depends on it.

Budgie didn’t wait for it to break and blacklisted Adwaita 2 years ago. We’re doing it now in Cinnamon. MATE and Xfce should probably do it since it looks just as bad on any non-GNOME desktop.

Then, in the newsletter for May 2024, in which Clem announced the wise decision to disable by default the unverified Flatpaks, there is this user comment:

Can we PLEASE do something about the horribly unintuitive GTK file chooser dialog? By my count there are 30+ forum posts that complained about it within the last year. A common issue is the line where we type the file name to save. This causes a dropdown list with folders which a lot of people hate and find confusing. It gets in the way. Choosing one of the folders in the dropdown list does not navigate into that folder, which a lot of people find confusing too. There are many other issues with it (no option to open in the same / a default folder, having to re-navigate over and over), it needs a complete overhaul. We should gather feedback from the community.

From the six replies to it, two agree, and the remaining four mention other bugs (which are by design) in the CSD-screwed GTK! Why on Earth did Clem ever decide to stop supporting KDE? Yes, I know, he got bored with MATE and started developing Cinnamon, which makes terrible use of the screen real estate. He could have “adopted” or forked LXQt and made something great out of it, but he didn’t. Well, now he’ll have to fight with GNOME, which means he has already lost the battle. Meanwhile, indeed, XFCE looks like shit with mixed-looking apps, whether they come from GNOME or from MATE!

“OK, boomer” or not, I find MATE inadequate for our times, which means the only sensible decision is to stay with KDE, even if this means KDE5 for now, which is actually wiser than jumping on the KDE6 bandwagon.

Another LATE EDIT: A bug (or is it a feature?) in KDE6

A tiny glitch bothered me, and I was stunned that Dedoimedo (Igor Ljubuncic), who finds faults in everything, including an unsatisfactory contrast here and a few misaligned pixels there, could have missed it. And how about the other KDE6 users?

I’ve read Dedoimedo’s first, second, and third quick reviews of KDE6, and I know he considers KDE5 the king of all desktop environments. Well, whatever.

I compared KDE6 in two versions: 6.0.90/6.4.0 in Neon Testing and 6.0.5/6.2.0 in Manjaro 24. In both cases, floating and non-floating.

Notice the position of the start button, and the colored dash above it, that matches its width:

Now, after changing from Application Launcher to the more Win95-like Application Menu, what do we notice?

The start button is fucking indented to the right, and the colored dash covers the entire, wider area, bar the padding! WTF?! Who made this change, and why, you retarded morons? It’s looking like shit!

Rest assured, I’m not a complete idiot. I know that even in KDE5, when Application Launcher is replaced with Application Menu, the start button is slightly indented to the right and centered in a slightly larger area because that area has to match the width of the vertical column of Favorites icons. But in KDE6, as anyone can see in the images above, the button width and the colored dash above it have absolutely nothing to do with the Favorites icons column!

Also, depending on the theme, the button width can happen to be unchanged between the two styles, launcher and menu (see the second menu above).

Breaking News: openSUSE Leap 15.6

In the past, I dismissed openSUSE Leap 15.x because I couldn’t find it superior to RHEL9’s clones. Its progression model seemed (and still seems) confusing to me. Released in 2018 with kernel 4.12 in version 15.0, it got kernel 5.3.18 for 15.2 and 15.3, and kernel 5.14.21 for 15.4 and 15.5. A kernel line seems to be supported for about 2.5 years. Maybe that’s why the freshly released 15.6 got a major kernel upgrade to 6.4.0, making it much more interesting than RHEL 9.4 and its clones, which still swear by the antiquated kernel 5.14!

That means openSUSE Leap 15.6 supports my MT7663-based Wi-Fi and BT out of the box (in AlmaLinux, I need to use kernel-lt from ELRepo). I only need to add sof-firmware for the audio chip (alsa-sof-firmware in EL9 and clones).
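Since the SOF firmware package goes by a different name depending on the distro family (sof-firmware on Leap, alsa-sof-firmware on EL9 and clones), here is a tiny sketch of that mapping; the helper name is mine, and the install commands are only indicative:

```shell
#!/bin/sh
# Map a distro family (as in the ID field of /etc/os-release) to the name
# of its SOF firmware package. Package names are from the text above;
# the helper function name is hypothetical.
sof_package() {
  case "$1" in
    opensuse*|sles)              echo "sof-firmware" ;;
    almalinux|rocky|centos|rhel) echo "alsa-sof-firmware" ;;
    *)                           echo "unknown" ;;
  esac
}

# Indicative usage (requires the matching distro, of course):
#   sudo zypper install "$(sof_package opensuse-leap)"   # Leap 15.6
#   sudo dnf install "$(sof_package almalinux)"          # EL9 clones
sof_package opensuse-leap
sof_package almalinux
```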

Other significant updates include KDE from 5.27.4 to 5.27.11 (EPEL9 got that earlier), LibreOffice from 7.4.3.2 to 24.2.1.2 (EL9 ships the antiquated 7.1.8.1, but 24.2.4 can be downloaded from upstream), and Python from python3-3.6.15 to python311-3.11.9 (python3-3.6.15 has been preserved, much as EL9 has python3 as 3.9, plus python3.11 and, since EL9.4, python3.12).
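A side note on those version strings: LibreOffice’s jump from 7.4.3.2 to 24.2.1.2 reflects its switch to calendar-based versioning, yet a version-aware sort still orders them correctly, which is handy when eyeballing such upgrades. A small sketch, assuming GNU sort’s -V option:

```shell
# Order the LibreOffice versions mentioned in the text (EL9's 7.1.8.1,
# the previous Leap's 7.4.3.2, Leap 15.6's 24.2.1.2) component by
# component rather than lexically.
printf '%s\n' 24.2.1.2 7.1.8.1 7.4.3.2 | sort -V
# Version-aware order: 7.1.8.1, 7.4.3.2, 24.2.1.2
# (a plain lexical sort would wrongly put 24.2.1.2 first)
```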

I’m not sure that I need to consider openSUSE Leap 15.6 and its subsequent updates for any of my systems, given that it has no future: Leap 16 is going to be an immutable distro! As I already said, I’m not a fan of certain of openSUSE’s defaults and idiosyncrasies, and its messy extra repos are not something I’ll ever fully understand.

I’ll have to give it some more thought.

Oh, wait! The mixed messages from openSUSE might have been misinterpreted. The Register wrote back in January:

The latest update on the future of openSUSE Leap confirms that there will be a release called Leap 16 at some point, alongside a version 6 of the existing Leap Micro … but version 16 will be based on SUSE’s containerized ALP distribution. This means that Leap 16 is destined to be an immutable distro with transactional updates, and thus significantly unlike the current openSUSE distribution. Although the project is promising a migration path, it seems very unlikely to be a simple in-place upgrade.

However, the cited page has a different message:

As many eagerly await the arrival of Leap 15.6 this year, a path for Leap 16 as a successor awaits. Based on SUSE’s new Adaptable Linux Platform (ALP) codebase, openSUSE Leap 16 will combine the benefits of an advanced enterprise server distribution and user-friendly maintenance and security that is a hallmark of the Leap series.

There are no plans to drop the classical (non-immutable) option for Leap; both non-immutable or immutable installation variants are available for Leap 15 and are planned for Leap 16. This is set to remain the preferred way for people to deploy Leap.

This dual model is also mentioned here:

2. Will Leap have immutable, non-immutable or both?
There are no plans to drop the classical (non-immutable) option for Leap; both non-immutable or immutable installation variants are available for Leap 15 and are planned for Leap 16.

Either way, Leap 15.7 is rather unlikely:

9. Is there a contingency plan in case of delays with Leap 16?
In case of Leap 16 delays, the release team may extend the life cycle of Leap 15.6 or, as a last resort, release Leap 15.7 to ensure sufficient overlap. Leap 16 will ensure there is no gap between the release and Leap 15’s End of Life cycle.

You cannot trust people to understand what they read. Idiocracy.

But there is still something that bothers me about openSUSE. As part of the unavoidable corporate stupidity, their official download page links to the huge installable DVD openSUSE-Leap-15.6-DVD-x86_64-Build709.1-Media.iso instead of to the directory of live images (something they call “appliances”), where one can find, among others, openSUSE-Leap-15.6-KDE-Live-x86_64-Media.iso, openSUSE-Leap-15.6-GNOME-Live-x86_64-Media.iso, and openSUSE-Leap-15.6-XFCE-Live-x86_64-Media.iso. They can’t even link to what most people would want: a way to live-test their preferred desktop! No wonder their repositories are a mess.

The official Repository for openSUSE Leap 15.6 doesn’t list that many packages (15,874), but there are also community packages. Unfortunately, the official search page is dumb: search with the defaults (“ALL Distributions”) and you’ll find, in a few clicks, community packages for openSUSE Leap 15.6, such as fsearch; restrict the search to openSUSE Leap 15.6 only, and you get “No packages found matching your search.” Repos, the OBS, and web interfaces: a huge mess.

On the more practical side, most community packages are still for Tumbleweed only. Examples: XnViewMP, XnConvert.

So, is this distro worth considering? Can it be trusted?