Is Linux kernel design outdated?
Linux has made great strides over the years, advancing far beyond where it was when it started. But one redditor recently wondered if Linux was suffering from outdated kernel design. He asked his question in the Linux subreddit and got some interesting answers.
Ronis_BR started the thread with these comments:
I have been a Linux user since 2004. I know a lot about how to use the system, but I do not know much about what is under the hood of the kernel. Actually, my knowledge stops at how to compile my own kernel.
However, I would like to ask the computer scientists here: how outdated is the Linux kernel with respect to its design? I mean, it was started in 1991 and some characteristics did not change. On the other hand, I guess the state of the art of OS kernel design (if such a thing exists...) should have advanced a lot.
Is it possible to state in what points the design of Linux kernel is more advanced compared to the design of Windows, macOS, FreeBSD kernels? (Notice I mean design, not which one is better. For example, HURD has a great design, but it is pretty straightforward to say that Linux is much more advanced today).
His fellow Linux redditors responded with their thoughts about kernel design:
ExoticMandibles: “"Outdated"? No. The design of the Linux kernel is well-informed regarding modern kernel design. It's just that there are choices to be made, and Linux went with the traditional one.
The tension in kernel design is between "security / stability" and "performance". Microkernels promote security at the cost of performance. If you have a teeny-tiny minimal microkernel, where the kernel facilitates talking to hardware, memory management, IPC, and little else, it will have a relatively small API surface, making it hard to attack. And if you have a buggy filesystem driver / graphics driver / etc., the driver can crash without taking down the kernel and can probably be restarted harmlessly. Superior stability! Superior security! All good things.
The downside to this approach is the eternal, inescapable overhead of all that IPC. If your program wants to load data from a file, it has to ask the filesystem driver, which means IPC to that process, a process context switch, and two ring transitions. Then the filesystem driver asks the kernel to talk to the hardware, which means two more ring transitions. Then the filesystem driver sends its reply, which means more IPC, two ring transitions, and another context switch. Total overhead: two context switches, two IPC calls, and six ring transitions. Very expensive!
A monolithic kernel folds all the device drivers into the kernel. So a buggy graphics driver can take down the kernel, or if it has a security hole it could possibly be exploited to compromise the system. But! If your program needs to load something from disk, it calls the kernel, which does a ring transition, talks to the hardware, computes the result, and returns the result, doing another ring transition. Total overhead: two ring transitions. Much cheaper! Much faster!
In a nutshell, the microkernel approach says "Let's give up performance for superior security and stability"; the monolithic kernel approach says "Let's keep the performance and just fix security and stability problems as they crop up." The world seems to accept, if not prefer, this approach.
p.s. Windows NT was never a pure microkernel, but it was microkernel-ish for a long time. NT 3.x had graphics drivers as a user process, and honestly NT 3.x was super stable. NT 4.0 moved graphics drivers into the kernel; it was less stable but much more performant. This was a generally popular move.”
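ExoticMandibles' cost accounting is easy to sanity-check. The toy C program below (a model only, not real kernel code; the step sequence is taken straight from the comment above, and all the names are invented for illustration) tallies the per-read() overhead of each design:

#include <stdio.h>

/* The three kinds of overhead named in the comment above. */
enum step { IPC_CALL, CONTEXT_SWITCH, RING_TRANSITION };

struct tally { int ipc, ctx, ring; };

static void count_steps(struct tally *t, const enum step *steps, int n)
{
    for (int i = 0; i < n; i++) {
        switch (steps[i]) {
        case IPC_CALL:        t->ipc++;  break;
        case CONTEXT_SWITCH:  t->ctx++;  break;
        case RING_TRANSITION: t->ring++; break;
        }
    }
}

int main(void)
{
    /* Microkernel read(): app asks FS driver (IPC, context switch, two ring
     * transitions), FS driver asks kernel to touch hardware (two ring
     * transitions), FS driver replies (IPC, two ring transitions, context
     * switch). */
    enum step micro[] = {
        IPC_CALL, CONTEXT_SWITCH, RING_TRANSITION, RING_TRANSITION,
        RING_TRANSITION, RING_TRANSITION,
        IPC_CALL, RING_TRANSITION, RING_TRANSITION, CONTEXT_SWITCH,
    };
    /* Monolithic read(): one syscall into the kernel and back. */
    enum step mono[] = { RING_TRANSITION, RING_TRANSITION };

    struct tally m = {0}, k = {0};
    count_steps(&m, micro, (int)(sizeof micro / sizeof micro[0]));
    count_steps(&k, mono,  (int)(sizeof mono / sizeof mono[0]));

    printf("microkernel: %d IPC calls, %d context switches, %d ring transitions\n",
           m.ipc, m.ctx, m.ring);
    printf("monolithic:  %d IPC calls, %d context switches, %d ring transitions\n",
           k.ipc, k.ctx, k.ring);
    return 0;
}

Compiled and run, it prints the totals from the comment: two IPC calls, two context switches, and six ring transitions per read on the microkernel design, versus two ring transitions on the monolithic one.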
F22Rapture: “A practical benefit of the monolithic kernel approach as it applies to Linux is that it pushes hardware vendors to get their drivers into the kernel, because few hardware vendors want to keep up with the kernel interface changes on their own. Since the majority of drivers are in-tree, the interfaces can be continually refactored without the need to support legacy APIs. The kernel only guarantees that it won't break userspace, not kernelspace (drivers), and there is a lot of churn when it comes to those driver interfaces, which pushes vendors to mainline their drivers. Nvidia is one of the few vendors I can think of that has the resources to maintain its own out-of-tree driver based entirely on proprietary components.
I suspect that if drivers were their own little islands separated by stable interfaces, we might not have as many companies willing to open up their code.”
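F22Rapture's point is easier to appreciate with a concrete picture. A minimal out-of-tree module skeleton looks roughly like this (a sketch; the module name is made up, and it builds against whatever kernel headers are installed, which is exactly the moving target he describes):

/* hello_mod.c - a minimal out-of-tree kernel module skeleton.
 * Build with a two-line Makefile containing "obj-m += hello_mod.o",
 * then from a shell run:
 *   make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
 * Load it with insmod and watch the kernel log with dmesg. */
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    pr_info("hello_mod: loaded\n");   /* runs at insmod time */
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello_mod: unloaded\n"); /* runs at rmmod time */
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal out-of-tree module skeleton");

Everything this module touches (pr_info, module_init, and whatever driver interfaces a real module would use) is kernelspace API with no stability guarantee, so an out-of-tree driver must track upstream changes release by release, which is the pressure toward mainlining described above.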
Mallardtheduck: “In this context, "monolithic" doesn't refer to having (almost) all kernel and driver code in a single source tree; it refers to the fact that the entire kernel and drivers run as a single "task" in a single address space.
This is distinct from a "microkernel" where the various kernel elements and drivers run as separate tasks with separate address spaces.
As mentioned, the Windows kernel is basically monolithic, but drivers are still developed separately. macOS uses a sort of hybrid kernel, which has a microkernel at its core but still runs almost everything in a single "task," despite having nearly all drivers developed/supplied by Apple.”
Slabity: “People have been arguing this since before 2004. The Tanenbaum-Torvalds debate in 1992 is a big example of the arguments between microkernel and monolithic kernel designs.
I'm personally part of the microkernel camp. They're cleaner, safer, and more portable. In this regard, the kernel's design was outdated the moment it was created.
…Linux has overcome a lot of the issues that come with monolithic kernel designs. It's become modular, its strict code policy has kept it relatively safe, and I don't think anyone would argue against how portable it is.”
TEchnicolourSocks: “There is only one correct way of kernel design and it is the way of TempleOS.
Written in HolyC, non-networked, ring-0 only. As God intended.”
Scandalousmambo: “The nature of developing a system as complex as the Linux kernel means it will always be "outdated" according to people who were in high chairs when it was first designed.
This operating system likely represents tens of millions of man hours of labor.
Can it be replaced? Sure. Will it? No.”
Grumbel: “In purely practical terms, it doesn't make much difference anymore. Back in the day, HURD was kind of cool with its userspace file systems and such. But Linux has since then gained most of that functionality. If you want to write a file system, USB driver, or input device in userspace, you can; no need to hack the kernel. You can now even patch the kernel at runtime if you really want to.
The Linux philosophy of just not writing buggy drivers that crash the kernel in the first place, instead of making it super robust against shitty drivers, also seems to work quite well in the real world. We probably have to thank USB for that, as having hardware that is self-descriptive removed the need to write a new driver for every new gadget you plug into the PC.
So the whole design debate is now even more academic than it used to be, as there just aren't a whole lot of features left that you would gain by design changes alone and that you couldn't implement in a monolithic kernel.”
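Grumbel's point about userspace file systems refers, on Linux, to FUSE. To give a sense of how little code it takes, here is a minimal read-only filesystem modeled on libfuse's classic "hello" example (a sketch assuming libfuse 3 is installed; the file and string names are illustrative):

/* hello_fs.c - a tiny userspace filesystem exposing one read-only file.
 * Build: gcc hello_fs.c -o hello_fs `pkg-config fuse3 --cflags --libs`
 * Run:   ./hello_fs /some/empty/mountpoint
 */
#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

static const char *hello_str  = "Hello from userspace!\n";
static const char *hello_path = "/hello";

/* Report file metadata: the root directory and one regular file. */
static int hello_getattr(const char *path, struct stat *st,
                         struct fuse_file_info *fi)
{
    (void) fi;
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode  = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, hello_path) == 0) {
        st->st_mode  = S_IFREG | 0444;   /* read-only regular file */
        st->st_nlink = 1;
        st->st_size  = strlen(hello_str);
    } else {
        return -ENOENT;
    }
    return 0;
}

/* List the root directory's contents. */
static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                         off_t offset, struct fuse_file_info *fi,
                         enum fuse_readdir_flags flags)
{
    (void) offset; (void) fi; (void) flags;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0, 0);
    filler(buf, "..", NULL, 0, 0);
    filler(buf, hello_path + 1, NULL, 0, 0);   /* "hello" */
    return 0;
}

/* Serve reads from the in-memory string. */
static int hello_read(const char *path, char *buf, size_t size, off_t offset,
                      struct fuse_file_info *fi)
{
    (void) fi;
    size_t len = strlen(hello_str);
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    if ((size_t) offset >= len)
        return 0;
    if (offset + size > len)
        size = len - offset;
    memcpy(buf, hello_str + offset, size);
    return (int) size;
}

static const struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    /* fuse_main mounts the filesystem and runs the request loop. */
    return fuse_main(argc, argv, &hello_ops, NULL);
}

Mounted on an empty directory, it serves a single file, /hello, entirely from an ordinary user process; if it crashes, it takes down only that mount, never the kernel.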
KugelKurt: “Although much of the discussion here is about microkernels vs. monolithic kernels, more recent research has gone into programming languages.
If you started a completely new kernel today, chances are it would not be written in C. Microsoft's Singularity and Midori projects explored the feasibility of C#/managed code kernels.
The most widely known non-research OS without a C kernel is probably Haiku which is written in C++.”
OmniaVincitVeritas: “It was outdated when it was first created and still is. But, as we know, technical progress almost never works in such a way that the technically/scientifically superior solution rises to the top in the short term; many other things influence success too.
If it did, we'd be running 100% safe microkernels written in Haskell. Security companies wouldn't exist. I'd have a unicorn/pony hybrid that runs on sunlight.”
Daemonpenguin: “There are some concepts which may, in theory, provide better kernel designs. There is a Rust kernel, for example, which might side-step a number of memory attack vectors. Microkernels have, in theory, some very good design choices which make them portable, reliable, and potentially self-correcting.
However, the issue is that those are more theory than practice. No matter how good a theory is, people will almost always take what is practical (i.e., working now) over a better design. The Linux kernel has so much hardware support and so many companies funding development that it is unlikely other kernels (regardless of their cool design choices) will catch up.
MINIX, for example, has a solid design and some awesome features, but has very little hardware support so almost nobody develops for the platform.”
DistroWatch reviews 4MLinux 21.0
Linux offers many different kinds of distributions. Some are bundled with more software, and some with less. 4MLinux is geared toward those who prefer a lightweight distribution. A writer at DistroWatch has a full review of 4MLinux 21.0.
Joshua Allen Holm reports for DistroWatch:
4MLinux is a lightweight Linux distribution designed to provide four key areas of functionality. With just the software available on the ISO, 4MLinux provides a wide variety of applications for performing system maintenance; playing many types of multimedia files; running a miniserver that provides a basic web server; and enjoying a decent selection of games, which the distribution places in a category it calls mystery. Those four functions provide the basis of the distribution's name: four things that start with "M," so 4MLinux.
Booting 4MLinux from a flash drive is a quick process. I was quickly and automatically logged in as root and could start working in the desktop environment. For the desktop, 4MLinux uses JWM combined with a Wbar launcher at the top of the screen that provides shortcuts to major programs. Plus there is IDesk to manage the desktop, and Conky to provide basic system status information. Wbar, IDesk, and Conky can all be switched off, but the system is already very light when they are in their default, enabled state.
Out of the box, 4MLinux comes with a decent selection of software. In the JWM application menu there are shortcuts for a terminal, Internet applications, maintenance, multimedia, miniserver, and mystery. The Internet sub-menu contains Links for web browsing, HexChat for IRC, Sylpheed for e-mail, Transmission for Bittorrent, uGet for downloading, a utility to share files via Bluetooth, GNOME PPP for dial-up Internet connections, and an option to toggle Tor on and off.
4MLinux provides a lot of software in a small package. For system maintenance, it is a good choice to have on hand. For multimedia, miniserver, and mystery, it provides a useful selection of software, but there are other distributions that focus on only one of those tasks and do it better by being more focused. That is not to say that 4MLinux is bad, but it tries to do too many different things at once. To be completely honest, I think 4MLinux would be a stronger offering if it were 3MLinux and dropped the mystery aspect entirely. Maybe it could include just solitaire or some other light game as a diversion while maintenance tasks run, and use the space freed up by removing the games to include some of the optional extension applications by default.
LinuxInsider reviews Ultimate Edition 5.4
Ultimate Edition, on the other hand, is at the opposite end of the spectrum from 4MLinux. UE is definitely a maximalist’s delight since it’s packed with software. A writer at LinuxInsider has a full review of Ultimate Edition 5.4.
Jack M. Germain reports for LinuxInsider:
I was not thrilled with my initial hands-on experiences in getting acquainted with Ultimate Edition 5.4. I found an annoying list of things wrong with it.
With many years of reviewing Linux distros under my belt, I have noticed a solid connection between first impressions of a distro's website and lasting impressions of a distro's performance. Let's just say that the website's disorganized condition, in this case, carries through in this distro's latest release.
One small example: I found no list anywhere of the minimum installation requirements for hardware. That proved to be frustrating. I wasted time trying to load Ultimate Linux on several aging computers. Some of the issues were memory- and storage space-related. Other issues involved graphics card inadequacies.
Ultimate Edition targets Linux newcomers, but those trying it might need a bit more familiarity with Linux to get around some of the problems in running this not-so-ultimate Linux OS.
Did you miss a roundup? Check the Eye On Open home page to get caught up with the latest news about open source and Linux.