HN Gopher Feed (2017-11-30) - page 1 of 10 ___________________________________________________________________
Writing a Simple Linux Kernel Module
193 points by daftpanda
https://blog.sourcerer.io/writing-a-simple-linux-kernel-module-d...
___________________________________________________________________
fmela - 3 hours ago
In the device_read function, the garbled "len ? ;" should be "len--;"
dpedu - 2 hours ago
Copy-paste trap!
megous - 3 hours ago
Probably a CMS thing. printk is probably also discouraged these days;
pr_warn/info/...() or dev_warn/info/err...() should be used.
Also, for anyone writing kernel code, this is indispensable:
http://elixir.free-electrons.com/linux/latest/source
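For context, a minimal sketch of the kind of device_read() loop being
discussed, with the decrement restored and pr_info() in place of a raw
printk(). msg_ptr and the surrounding names are illustrative stand-ins,
not the article's exact code:

    #include <linux/fs.h>
    #include <linux/printk.h>
    #include <linux/uaccess.h>

    static const char *msg_ptr;   /* assumed to point into a message buffer */

    static ssize_t device_read(struct file *file, char __user *buffer,
                               size_t len, loff_t *offset)
    {
            ssize_t bytes_read = 0;

            while (len && *msg_ptr) {
                    /* copy one byte at a time to userspace */
                    if (put_user(*(msg_ptr++), buffer++))
                            return -EFAULT;
                    len--;          /* the decrement the CMS mangled */
                    bytes_read++;
            }
            pr_info("device_read: %zd bytes\n", bytes_read);
            return bytes_read;
    }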
djohnston - 3 hours ago
Kernel newb w/ a question on this. When you register the device
with register_chrdev, do you also have to create the file in userspace
with mknod?
JoshTriplett - 3 hours ago
You never need to use mknod yourself anymore. Based on the
metadata you provide to the kernel, the kernel will create the
device itself in devtmpfs, and even set appropriate baseline
permissions (e.g. root-only, world-readable, world-writable).
udev will then handle any additional permissions you might need,
such as ACLs or groups.
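For reference, the devtmpfs path described above looks roughly like this
inside a module -- a minimal sketch against the 2017-era class_create()
signature, with "mydev" and the empty file_operations as placeholders:

    #include <linux/device.h>
    #include <linux/fs.h>
    #include <linux/module.h>

    static const struct file_operations fops = { .owner = THIS_MODULE };
    static int major;
    static struct class *cls;

    static int __init mydev_init(void)
    {
            major = register_chrdev(0, "mydev", &fops);
            if (major < 0)
                    return major;

            /* These two calls are what let devtmpfs (plus udev) create
             * /dev/mydev for you -- no manual mknod needed. */
            cls = class_create(THIS_MODULE, "mydev");
            device_create(cls, NULL, MKDEV(major, 0), NULL, "mydev");
            return 0;
    }

    static void __exit mydev_exit(void)
    {
            device_destroy(cls, MKDEV(major, 0));
            class_destroy(cls);
            unregister_chrdev(major, "mydev");
    }

    module_init(mydev_init);
    module_exit(mydev_exit);
    MODULE_LICENSE("GPL");

With that in place the node shows up under /dev as soon as the module
loads, and udev rules can adjust its ownership or add ACLs as described
above.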
spapas82 - 38 minutes ago
Replying after checking my notes from university (NTUA): about 14 years
ago (~2004), one of the exercises we had in the Operating Systems Lab
was to actually implement a char device driver for Linux, called ...
lunix, as a kernel module. The actual device was just a bunch of bytes
in memory.
Those were happy times - implementing a device driver felt like owning
the world for a student!
If anybody wants, I'd be happy to put the source code on github - mainly
for historical reasons, because I'm not sure the code would still work
today.
abstractbeliefs - 21 minutes ago
Please do. One of the issues with TFA is how little it actually
covers. While it quite fairly says module development is more
like writing an API than an application, it then proceeds to
entirely ignore any hooks other than init and exit!
spapas82 - 4 minutes ago
I'll try adding it tomorrow to github (it's a little late here
right now). In the meantime, I'd like to also recommend the
book "Linux Device Drivers" that fellow commenter magpi3
mentioned.
magpi3 - 26 minutes ago
There is a similar exercise in "Linux Device Drivers." The first
driver they ask you to write is a char device that runs in memory,
called "scull."
Here is a link to the free book (warning: it's a pdf):
http://free-electrons.com/doc/books/ldd3.pdf
And here is a link to a github project with all exercises updated
for the most recent kernel: https://github.com/martinezjavier/ldd3
spapas82 - 5 minutes ago
Ha ha "scull" ringed a bell and after taking a look at the
cover of the book you mention (it's not in the pdf you provide
but can be found if you search for images for Linux Device
Drivers oreilly) and see the horse ... I totally remembered the
book!It was one of the refereces I used while implementing the
module and it was a really good and comprehensive book -
totally agree with the recommendation :)
ramzyo - 4 hours ago
>> A Linux kernel module is a piece of compiled binary code that is
inserted directly into the Linux kernel, running at ring 0, the
lowest and least protected ring of execution in the x86-64 processor.
The author seems to be implying that rings are implemented at the
processor level for x86-64 processors. If I'm interpreting the wording
correctly, that's interesting! Coming from the ARM world I'd always
thought that rings were an OS construct.
topspin - 3 hours ago
The genesis of hardware protected modes in x86 devices goes back to
1982 and the 80286.
bravo22 - 3 hours ago
ARM has modes, which are somewhat analogous to x86 rings.
joosters - 3 hours ago
Most chips have different privilege levels... ARM processors have
'modes' like User, FIQ, IRQ and Supervisor (sort of the equivalent
of ring 0). Different modes can have different access controls,
e.g. varying access to memory.
Edit: See http://www.heyrick.co.uk/armwiki/Processor_modes
ramzyo - 3 hours ago
Right - out of curiosity do you know if the Linux kernel uses
the term "ring", or some other terminology to map to the
underlying hardware implementation, be they rings for x86 or
modes for ARM? Maybe it's just "privilege level" or something
similar?
JoshTriplett - 3 hours ago
The Linux kernel just talks about "kernel mode" and "user
mode" (or "userspace"). Those then map to ring 0 and ring 3
on x86, or other mechanisms on other platforms.
ramzyo - 3 hours ago
Cool, thanks!
qsdf38100 - 32 minutes ago
Kernel code does not operate at "incredible speed". This guy
doesn't know what he is talking about.
chatmasta - 3 hours ago
Does anyone have a story or two of a time you've created a kernel
module to solve a problem? I would be interested in hearing real
world use cases.
woodrowbarlow - 54 minutes ago
I was working on an embedded device with only RAM and a flash chip
for the software blob.
I wrote a kernel module that allowed us to keep logs in RAM even
across a soft reboot (i.e., if the device crashes or does a software
update). It basically reserved a chunk of 1M at the top of RAM (using
the same physical address each time). There was a checksum system
that allowed it to tell during boot whether the data was still valid
or if the bits had decayed.
I've also done a couple of i2c drivers for temperature sensors and
LED controllers. Kernel work is fun.
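A rough sketch of the persistent-log idea (not the commenter's code):
map a fixed physical region the kernel was told to leave alone (e.g.
via a mem=/memmap= boot parameter) and use a CRC to decide after a warm
reboot whether the old contents survived. The address, size and magic
value below are invented:

    #include <linux/crc32.h>
    #include <linux/io.h>
    #include <linux/module.h>

    #define PLOG_PHYS 0x3FF00000UL   /* made-up: last 1 MiB of 1 GiB RAM */
    #define PLOG_SIZE 0x00100000UL

    struct plog_hdr {
            u32 magic;
            u32 crc;                 /* crc32 of data[0..len) */
            u32 len;
            char data[];
    };

    static struct plog_hdr *plog;

    static int __init plog_init(void)
    {
            /* memremap() gives a cacheable kernel mapping of plain RAM. */
            plog = memremap(PLOG_PHYS, PLOG_SIZE, MEMREMAP_WB);
            if (!plog)
                    return -ENOMEM;

            if (plog->magic == 0x504c4f47 && plog->len < PLOG_SIZE &&
                plog->crc == crc32(0, plog->data, plog->len))
                    pr_info("persistent log survived reboot (%u bytes)\n",
                            plog->len);
            else
                    pr_info("persistent log invalid, starting fresh\n");
            return 0;
    }

    static void __exit plog_exit(void)
    {
            memunmap(plog);
    }

    module_init(plog_init);
    module_exit(plog_exit);
    MODULE_LICENSE("GPL");

Mainline's ramoops/pstore now covers much the same use case.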
[deleted]
alecdibble - 1 hours ago
A few years ago, I worked at a company that had hardware peripherals
connected to a master CPU card running Linux over RS-232. One of the
peripherals was a power supply that supplied power to another
peripheral. Since they were architected as slaves, the peripherals
couldn't directly talk to each other; they had to have their messages
relayed through the master CPU.
This whole communication process had to happen within a small time
frame, something like 50 ms. A kernel module handled the communication,
decoded the packets, and automatically relayed peripheral messages to
other peripherals. The kernel module made it easier to achieve stable,
real-time operation within the time constraint.
When writing the same function in userspace, there was absolutely no
guarantee that messages would be sent or received in time.
mbrumlow - 3 hours ago
I wrote one so I could keep my job... I work at a company that provides
backups for both Linux and Windows. The entire concept was built around
block-level backups. You could just open up the block device and copy
the data directly, but it would quickly become out of sync by the time
you finished copying it. We did not want to require LVM to be able to
utilize snapshots to solve the sync problem. On top of that, we had a
strong requirement of being able to delete data from the backup.
This resulted in me learning how to build a kernel module and then,
slowly over about 6 months, creating a kernel driver that allowed us to
take point-in-time snapshots of any mounted block device with any fs
sitting on top.
Other requirements also dictated that we keep track of every block that
changed on the target block device after the initial backup (or full
block scan, after reboot).
I wish I could release the source but my employers would not like that
:( So at least for me, learning how to write kernel modules and digging
into some of the lower-level stuff has kept me gainfully employed over
the years. It is still in use on about 250k to 300k servers today (it
fluctuates).
The hardest part was not writing the module, but getting others
interested in it enough so I don't have to be the sole maintainer. I
like working on all parts of the product and don't want to just be the
"kernel guy".
One other time I wrote a very poorly done network block device driver
in about 8 hours. You can find it here: https://github.com/mbrumlow/nbs
-- Note: I am not proud of this code, it was something I did really
quickly and wanted on hand to show to a prospective employer -- I did
not get the job, and I am fairly sure they did not even look at the
driver, so I don't think the crappy code there affected me.
throwaway613834 - 3 hours ago
So you implemented something like Windows's volume snapshots/shadow
copies? (Why couldn't you use the existing feature?)
EDIT: Thanks for both replies! :)
aseipp - 2 hours ago
Pretty much. But Windows snapshots aren't writeable after you take
them, which is a major downer and a necessary feature for us. (Having
this feature lets people do things like "take a snapshot and delete
everything except *.lic files under C:\Users, on every backup"; you
can just issue deletes to the shadow volume after it is consistently
created, and intercept the new deletions, like normal. Tracking
deletions like this lets you more easily know what you should/should
not back up -- only back up the allocated blocks in the shadow copy.)
So, there was a thing that added this feature to the Windows kernel in
the product to make this work, aside from the Linux stuff, which was
totally separate. But if you don't need the writing capability, Shadow
Copies are good enough, sure.
(Source: I used to work with the guy who made the above post)
mbrumlow - 2 hours ago
Yes and no. Windows Volume Shadow Copy has the advantage of being
integrated with the FS a bit more closely, so on Windows VSS can avoid
some overhead by skipping the 'copy' part and just allocating a new
block and updating the block list for the file.
For the Linux systems we had the requirement to work with all file
systems (including FAT), so we could not simply modify the file system
to do some fancy accounting when data in the snapshot was about to be
nuked. That resulted in me writing a module that sits between the FS
and the real block driver. From there I can delay a write request long
enough to ensure I can submit a read request for the same block (and
wait for it to be fulfilled) before allowing the write to pass through.
> (Why couldn't you use the existing feature?)
We did on Windows; VSS is used, with a bit of fancy stuff added on top.
For Linux there is no VSS equivalent (other than the one I wrote, and
maybe something somebody working on a similar product may have
written). And even if one did come about (or exists and I am just not
aware of it), it for sure was not available when I started this
project.
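A very rough sketch of that interception point, against a roughly
4.x-era block API of the time (pre-4.14 bio fields); real_bdev and
snap_copy_old_data() are invented stand-ins for the snapshot
bookkeeping, and the real driver's hooking strategy may well differ:

    #include <linux/bio.h>
    #include <linux/blkdev.h>

    static struct block_device *real_bdev;   /* device being snapshotted */

    /* Invented helper: read these sectors into the snapshot store and
     * wait for the read to complete before returning. */
    static void snap_copy_old_data(sector_t sector, unsigned int nsectors);

    /* Registered as the make_request_fn of the wrapping queue. */
    static blk_qc_t snap_make_request(struct request_queue *q,
                                      struct bio *bio)
    {
            if (bio_data_dir(bio) == WRITE) {
                    /* Copy-on-write: preserve the data this bio is about
                     * to overwrite before letting the write through. */
                    snap_copy_old_data(bio->bi_iter.bi_sector,
                                       bio_sectors(bio));
            }

            /* Forward the (possibly delayed) bio to the real driver. */
            bio->bi_bdev = real_bdev;
            return submit_bio(bio);
    }

dm-snapshot does something broadly similar on top of device-mapper,
which is roughly where the LVM comparison in the reply below comes
from.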
ploxiln - 2 hours ago
It sounds like you implemented something very similar to the LVM
snapshot feature. You didn't want to require LVM but ... is that
really worse than requiring your custom module, which is roughly
equivalent?
EDIT: ok, so it looks like the management of the snapshot space is a
bit different. Still, you could probably have wrapped LVM management
enough to make it palatable, in less than the time it took to write a
custom module.
mbrumlow - 23 minutes ago
The problem is that requiring LVM is a no-go if they did not already
have LVM. We could have standardized on LVM, but then all the people
who were not already using LVM at the time would not be able to use
the product. At the time many hosting providers -- who we sold to --
just were not using LVM. We to this day still have many fewer people
using LVM than raw block devices, or some sort of raid.
Also, at the time LVM snapshots were super slow. I don't have the
numbers, but even with the overhead my driver created I was able to
have less impact on system performance. I was able to do some fancy
stuff to optimize for some of the more popular file systems by making
inline look-ups to the allocation map (bitmap on ext3). This allowed
me to not COW blocks that were not allocated before the snapshot. That
was a huge saving, because most of the time on ext3 your writes will
be to newly allocated blocks.
Wrapping LVM would probably not work, and would still require a custom
module to do; the user space tools don't do much. LVM really is a
block management system that needs to manage the entire block device,
so existing file systems not sitting on top of LVM would get nuked if
you attempted to let LVM start managing those blocks, and you still
had the issue that reads and writes were coming in on a different
block device. Asking people to change mount points was not an option.
There were also some other requirements, like block change tracking,
that LVM has no concept of. This is for incremental snapshots. Without
this sort of tracking you would have to checksum every block after
every snapshot if you wished to only copy the changes. This module was
also responsible for reporting back to a user space daemon that kept a
map of which blocks changed. So when backup time arrived we could use
this list (and a few other lists) to create a master list of blocks
that we needed to send back. This significantly cuts down on
incremental backup time. Some companies call this "deduplication", but
I feel that is disingenuous -- to me, deduplication is on the storage
side and would span across all backups.
So yes, requiring a module is much easier than telling a customer they
can't trial or use this product until they take their production
system offline and reformat it with LVM. Many people hated LVM at the
time; it was considered slow and caused performance problems. This was
like 8 years ago... LVM has vastly changed and does not get these
kinds of complaints any more. But I can tell you people would still
scream bloody murder if we told them they had to redo their production
images and redeploy a fleet of 200+ servers just to switch to LVM so
they could get a decent backup solution.
Also, shout out to aseipp! Miss working with you. Have yet to find a
bug in the code you wrote :p
sly010 - 3 hours ago
I didn't end up writing the kernel module, so this might be an anti
use case: I was working on an embedded system and needed fast I2C
access (mostly I just wanted very small latency, because it's an
instrument). I2C from Linux userspace (using ioctls) adds a lot of
overhead. I started looking into kernel modules, but after a day of
research I found out that you can access hardware registers from
userspace using mmap("/dev/mem"), which is even faster than a kernel
module.
Edit: typos
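A hypothetical userspace sketch of the /dev/mem approach; the base
address is a made-up example (check your SoC manual), and note that
CONFIG_STRICT_DEVMEM can block this, as can a kernel driver that has
already claimed the region (see the reply below):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define BASE_ADDR 0x3F804000UL   /* example: an I2C controller block */
    #define MAP_SIZE  4096UL

    int main(void)
    {
            int fd = open("/dev/mem", O_RDWR | O_SYNC);
            if (fd < 0) { perror("open /dev/mem"); return 1; }

            volatile uint32_t *regs = mmap(NULL, MAP_SIZE,
                                           PROT_READ | PROT_WRITE,
                                           MAP_SHARED, fd, BASE_ADDR);
            if (regs == MAP_FAILED) { perror("mmap"); return 1; }

            /* Read a register directly -- no ioctl round-trip. */
            printf("status register: 0x%08x\n", regs[1]);

            munmap((void *)regs, MAP_SIZE);
            close(fd);
            return 0;
    }

This trades the ioctl round-trips for direct register access, at the
cost of bypassing whatever locking the kernel's own I2C driver would
have provided.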
zeusk - 1 hours ago
> mmap("/dev/mem")but that would only work if some kernel
module or the kernel itself hasn't mapped that IO space, right?
megous - 3 hours ago
Usually to control on-SoC peripherals/IP blocks or to add support for
other external devices (that's what the kernel's for, mostly, anyway)
- camera sensor, I2C-controlled regulator, thermal sensor, ... Story?
No. :D I just wanted to have a nice non-amd64 SoC based desktop, and
some things were missing at the time.
ryandrake - 3 hours ago
This was a super-long time ago, but a kernel driver was needed for
video capture back in the 90's, to control devices across I2C, etc.
At the time I had a Matrox video card which did analog capture,
decoding and TV-in, and so I wrote the first few iterations of a
kernel driver[1] to get it to work. It has since been adopted by a
great maintainer. Nowadays I'd guess nobody is using it because the
hardware is ancient.
1: http://www.cs.brandeis.edu/~eddie/mga4linux/
alxlaz - 3 hours ago
Most Linux device drivers are kernel modules, so I suppose most of the
stories I have would be of the form "I needed to write a device driver
for X" :-). Userspace drivers, while they certainly exist, are not
very popular in Linux land.
Other interesting stories would include:
- Instrumenting certain filesystem operations (all modules share the
same memory space; it's possible for one module to take a sneak peek
at another's internal structures). This was back before dtrace &
friends were a (useful) thing.
- Real-time processing of instrumentation samples. Doing it in the
kernel allowed us to avoid costly back-and-forth between user and
kernel memory -- but we only did relatively simple processing, such as
scaling, channel muxing/demuxing and the like. If you find yourself
thinking about doing kernelside programming because carrying samples
to and from userspace is too expensive, you should probably review
your design first.
zb3 - 1 hours ago
I wrote one so that I could set hardware breakpoints and use them to
extract decrypted data from an app that was heavily protected against
debugging. The thing was that the app (running as an unprivileged
user) had no way of knowing what I was doing in the kernel (uprobes
can be used for this too, provided the code doesn't checksum itself).
But I guess reverse engineering is not a common use case for writing
Linux kernel modules though :)
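A hypothetical sketch of the technique (not the commenter's module):
plant a hardware watchpoint on an address inside a running process
from kernel space, so no userspace-visible debugger is attached.
target_pid and watch_addr are made-up module parameters, and the
includes assume a 4.x-era kernel:

    #include <linux/err.h>
    #include <linux/hw_breakpoint.h>
    #include <linux/module.h>
    #include <linux/perf_event.h>
    #include <linux/pid.h>
    #include <linux/ptrace.h>
    #include <linux/sched.h>
    #include <linux/sched/task.h>

    static int target_pid;
    module_param(target_pid, int, 0);
    static unsigned long watch_addr;
    module_param(watch_addr, ulong, 0);

    static struct perf_event *bp;

    static void bp_handler(struct perf_event *event,
                           struct perf_sample_data *data,
                           struct pt_regs *regs)
    {
            /* Runs whenever the watched address is touched. */
            pr_info("watchpoint hit, ip=%lx\n", instruction_pointer(regs));
    }

    static int __init wp_init(void)
    {
            struct perf_event_attr attr;
            struct pid *pid = find_get_pid(target_pid);
            struct task_struct *tsk = pid ?
                    get_pid_task(pid, PIDTYPE_PID) : NULL;

            put_pid(pid);
            if (!tsk)
                    return -ESRCH;

            hw_breakpoint_init(&attr);
            attr.bp_addr = watch_addr;
            attr.bp_len  = HW_BREAKPOINT_LEN_4;
            attr.bp_type = HW_BREAKPOINT_R | HW_BREAKPOINT_W;

            bp = register_user_hw_breakpoint(&attr, bp_handler, NULL, tsk);
            put_task_struct(tsk);
            return IS_ERR(bp) ? PTR_ERR(bp) : 0;
    }

    static void __exit wp_exit(void)
    {
            if (!IS_ERR_OR_NULL(bp))
                    unregister_hw_breakpoint(bp);
    }

    module_init(wp_init);
    module_exit(wp_exit);
    MODULE_LICENSE("GPL");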
aseipp - 3 hours ago
Yes, I wrote one a long time ago that exposed basic counters from the
ARMv7 PMU to userspace -- where they are normally not accessible
unless used through perf_events. This was because perf wasn't
available (I think the shitty old vendor BSP I was using couldn't
support it for some reason) and it's relatively costly
(perf_event_open is a whole syscall, plus another syscall to `read`
off the event info) -- I just wanted some basic cycle timings for some
cryptography code I wrote, to see what worked and what didn't.
Even though it has some annoying gotchas (such as the fact that ARM
cores can sleep/frequency-scale on demand with no forewarning, meaning
cycles aren't always the most precise unit of measurement), and is
very simple -- this thing ended up being mildly popular. Even though I
wrote it years ago, someone from CERN recently emailed me to say they
happily used it for work, and someone from Samsung ported it to ARMv8
for me...
(I should dust off my boards one of these days and clean it up again,
perhaps! People still email me about it.)
invernomut0 - 2 hours ago
Some years ago I was working on an embedded platform with two
processors connected via ethernet: one handling data and most of the
application load, the second handling some automotive buses (CANbus
and ARCnet). I wrote a kernel driver to expose raw data coming from
the ARCnet to userspace applications in the most transparent way
possible. I labeled data coming from the second CPU with an unused
ethertype, then changed the ethernet driver to inspect the ethertype
of incoming data and send it up the network stack as if it had been
received from a certain fake ARCnet device. The fake ARCnet device was
a dummy module that allowed userspace to receive that data via normal
sockets. It could probably be done in a better way, but it was quite
simple and it worked well.
andrewcchen - 1 hours ago
I modified a kernel module to use the wifi enable/disable button on my
HP laptop as a normal button. For some reason the BIOS sends an ACPI
event when the button is pressed, instead of just exposing it as a
normal keyboard key, so there is a kernel driver to handle it.
samtho - 3 hours ago
I wrote one that decodes Wiegand[0] off of GPIO. I ended up not using
it, because epoll-ing the pins is easier from a userspace perspective
and fast enough to capture the transmission. I loosely based the
kernel module on one that was written for another microcontroller[1].
[0]: https://en.wikipedia.org/wiki/Wiegand_interface
[1]: https://github.com/rascalmicro/wiegand-linux
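A hypothetical sketch of the userspace epoll approach mentioned above,
for a single pin via the sysfs GPIO interface of the time; gpio17 is a
made-up pin, a Wiegand reader would watch two pins (D0 and D1), and
the pin's edge file must already be configured:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("/sys/class/gpio/gpio17/value", O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            int ep = epoll_create1(0);
            struct epoll_event ev = { .events = EPOLLPRI | EPOLLERR,
                                      .data.fd = fd };
            epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);

            char buf[2];
            read(fd, buf, sizeof(buf));   /* clear the initial state */

            for (;;) {
                    struct epoll_event out;
                    if (epoll_wait(ep, &out, 1, -1) < 1)
                            continue;
                    /* On an edge, re-read the value from offset 0. */
                    lseek(fd, 0, SEEK_SET);
                    if (read(fd, buf, sizeof(buf)) > 0)
                            printf("edge: bit = %c\n", buf[0]);
            }
    }

Each EPOLLPRI wakeup corresponds to one edge, which matches the
commenter's point that userspace polling is fast enough for Wiegand's
fairly slow bit rate.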
[deleted]
dottrap - 3 hours ago
"And finally, you?ll need to know at least some C. The C++ runtime
is far too large for the kernel, so writing bare metal C is
essential."That line reminded me that NetBSD added Lua for writing
kernel modules.
(https://news.ycombinator.com/item?id=6562611)Anybody have any
experiences to share from this?
chowyuncat - 56 minutes ago
At work we write plenty of C++ for kernel modules, but with the caveat
that our C++ can rarely include a Linux kernel header directly; we
have to write a small shim for every kernel function we consume.
No library runtime features, so no RTTI or exceptions, but we use
templates and dynamic dispatch.
feelin_googley - 50 minutes ago
"Anybody have any experiences to share from this?"http://lua-
users.org/wiki/MarcBalmer
albinofrenchy - 1 hours ago
The C++ runtime is too big for the kernel. It is also largely
unnecessary; you can easily compile C++ code without the STL or larger
library support. This still leaves some holes that the runtime
provides, but all of them are easy to provide yourself -- namely
things like 'new' and 'delete'.
sigjuice - 3 hours ago
If you look at the NetBSD kernel source
(https://github.com/NetBSD/src.git), it doesn't look like there are
any real and substantial Lua kernel modules.
    $ find sys -name '*.lua'
    sys/modules/examples/luahello/luahello.lua
    sys/modules/examples/luareadhappy/happy.lua
    sys/modules/lua/infinite.lua
    sys/modules/lua/test.lua
    sys/modules/luasystm/test.lua
alxlaz - 3 hours ago
There are plenty of operating systems that use C++ on the kernel side
of things. Some things, like exceptions, are frowned upon and rarely
used -- if at all -- but there is no shortage of C++ kernel code. Just
not in Linux.
When it comes to Linux, one could say that most of the reasons to
avoid it are historical, but that does not quite paint the awkward
truth -- namely that, for most of the kernel's lifetime (going back to
1991), C++ compilers simply did not have the level of maturity and
stability across the breadth of platforms that Linux required. Linus
Torvalds' stance on this matter is pretty well-known:
http://harmful.cat-v.org/software/c++/linus
Today, when x86-64 and ARM are the only two families that you need to
care about for the following ten years or so (maybe RISC-V, but I
rather doubt it), it probably makes sense to look at C++ for operating
systems work, but the runtime is certainly heavier than back when
Linus was writing about it, too. A modern C++ compiler has a lot of
baggage; C++ was huge back in 1998, now it's bloody massive. IMHO, all
the reasons why you would want to use C++ (templating support without
resorting to strange hacks, useful pointer semantics and so on) are
reasonably well-served by cleaner languages with less hefty runtimes,
like Rust. What these alternatives do lack is the amazing level of
commercial support that C++ has.
jcelerier - 3 hours ago
> C++ was huge back in 1998, now it's bloody massive.
I don't think any "run-time" feature was added since, though. It's all
either OS support (..., etc., that you wouldn't use in-kernel anyway)
or template stuff that has zero impact on runtime (and actually
sometimes helps decrease code size).
https://istarc.wordpress.com/2014/07/18/stm32f4-oop-with-emb...
https://www.embedded.com/design/programming-languages-and-to...
https://hackaday.com/2015/12/18/code-craft-embedding-c-templ...
If some guys are able to run C++ on 8kb microcontrollers, there's
hardly a non-political reason it couldn't be used in-kernel. See also
IncludeOS: http://www.includeos.org/
alxlaz - 2 hours ago
Additions have certainly been made since back in 1998 (things like
smart pointers are relatively new on this scale, as far as I know).
Many runtimes for resource-constrained embedded systems do not support
all of C++'s features; exceptions are the most common omission.
You can certainly strip things down to a subset that fits 128K of
flash and needs only 1 or 2K of RAM at runtime, but the question is
not only one of computational resources used for the library itself.
Additional code always means additional bugs, the semantics sometimes
"hide" memory copying or dynamic allocation in ways that many C++
programmers do not understand (and the ones who do are more expensive
to hire than the ones who do not), and so on. You can certainly avoid
these things and use C++, but you can also avoid them by using C.
I agree that mistrust and politics definitely play the dominant role
in this affair, though. I have seen good, solid, well-performing C++
code. I prefer C, but largely due to a vicious-circle effect -- C is
the more common choice, so I wrote more C code, so I know C better, so
unless I have a good reason to recommend or write C++ instead of C, I
will recommend or write C instead. I do think (possibly for the same
reason) that it is harder to write correct C++ code than it is to
write correct C code, but people have sent things to the Moon and back
using assembly language for very weird machines, so clearly there are
valid trade-offs that can be made and which include using languages
far quirkier than C++.
CyberDildonics - 2 hours ago
> Additional code always means additional bugs
What additional code?
> You can certainly avoid these things and use C++, but you can also
avoid them by using C.
Right, but what you can't get with C is destructors and move/ownership
semantics.
> I do think (possibly for the same reason) that it is harder to write
correct C++ code than it is to write correct C code
The ability to write typesafe data structures with move/ownership
semantics and specified interfaces, while being a superset of C, would
lead some to say that this is not true.
alxlaz - 1 hours ago
> What additional code?
All the code that you need in order to support smart pointers,
templates, move/copy semantics, exceptions and so on. To paraphrase
someone whose opinions you should take far more seriously than mine,
you can't just throw Stroustrup's "The C++ Programming Language" on
top of an x86 chip and hope that the hardware learns about unique_ptr
by osmosis :-). There used to be such a thing as the sad story about
get_temporary_buffer
(https://plus.google.com/+KristianK%C3%B6hntopp/posts/bTQByU1... --
not affiliated in any way, just one of the first Google results).
The same goes for all the code that is needed to take C++ code and
output machine language. In my whole career, I have run into bugs in a
C compiler only three or four times, and one of them was in an early
version of a GCC port. The last C++ codebase I worked on had at least
a dozen workarounds for at least half a dozen bugs in the compiler.
johncolanduoni - 34 minutes ago
If you're writing a kernel, you'll almost certainly be using
"-nostdlib" (or your compiler's equivalent), so unique_ptr, etc. won't
be there. You could however write your own unique_ptr that allocates
via whatever allocator you write for your kernel. See [1] for a decent
overview of what using C++ in a kernel entails.
[1]: http://wiki.osdev.org/C%2B%2B
jcelerier - 29 minutes ago
> All the code that you need in order to support smart pointers,
templates, move/copy semantics, exceptions and so on.
Smart pointers are their own classes, and exceptions would certainly
be disabled in kernel mode, sure, but for the rest? Which additional
code? There's no magic behind templates and move semantics, and no
run-time impact. It's purely a compile-time feature.
cmrdporcupine - 2 hours ago
I absolutely do not understand your point. Anybody doing OS
development in C++ is doing so with absolutely no C++ standard library
support, same as if you were using C++ to develop for your
microcontroller. If C++ binaries are compact enough for Arduino or
Parallax Propeller development (<32KB RAM), they are absolutely fine
for kernel development.
The real answer is historical, and cultural. On the latter, Unix is a
product of C (well, and BCPL) and C is a product of Unix; the two are
intertwined heavily. The former is, as was mentioned, a product of the
relative crappiness of early C++ compilers (and perhaps the
overzealous OO gung-ho nature of its early adopters as well...).
C++ without exceptions, RTTI, etc. has a lot to offer for OS
development. Working within the right constraints, it can definitely
make a lot of tasks easier and cleaner. It won't happen in Linux, tho.
jlg23 - 1 hours ago
I don't get how you make the connection from Torvalds' arguments to
what you say. His complaints are on a fundamentally different level:
~"Programmers over-abstracting is a timeless problem; the more complex
the language/standard library, the easier it is for programmers to get
lost in abstraction. Bad programmers just get lost, mediocre
programmers even defend what they do."
I despise his choice of words and general attitude towards programmers
"less capable than him", but I do think there is some truth in what he
says there: imagine we all wrote asm only - we'd spend much more time
at the drawing board and find much more elegant solutions to problems
that we today solve by mindlessly writing hundreds of lines of code
within a "framework".
Side note: I wrote C++ in 1998; it was more "bloody massive" back then
than it is today, in my humble experience.
alxlaz - 1 hours ago
I was referring only to the parts of that rant that are/were of some
objective value, e.g.:
> anybody who tells me that STL and especially Boost are stable and
portable is just so full of BS that it's not even funny
> the only way to do good, efficient, and system-level and portable
C++ ends up to limit yourself to all the things that are basically
available in C.
You can certainly write C++ code without overabstracting it. Maybe
Torvalds only ran into C++ programmers who liked overabstracting their
code -- as is usually the case with opinions of some people about
other people, that part of the rant is best discarded :-).
But as memory serves, back when "portability" meant x86, SPARC, Alpha,
ARM, MIPS, PPC, PA-RISC and m68k, the only way to do "good, efficient,
and system-level and portable C++" was to limit yourself to the
(relatively small) subset of C++ that was well-implemented by
widely-available compilers (particularly GCC) across all those
architectures -- not as a matter of discipline and code elegance, but
because writing non-trivial C++ code back then usually resulted in
very unpleasant trips down the standard library's source code and
poring over objdump output trying to figure out what the hell broke
this time.
vkjv - 1 hours ago
Has Linus stated any opinions on Rust? Some of his issues with C++
also apply to Rust.
> any compiler or language that likes to hide things like memory
steveklabnik - 1 hours ago
"That's not a new phenomenon at all. We've had the system people who
used Modula-2 or Ada, and I have to say Rust looks a lot better than
either of those two disasters."
https://www.infoworld.com/article/3109150/linux/linux-at-25-...
ahoka - 2 hours ago
> There are plenty of operating systems that use C++ on the kernel
side of things. Some things, like exceptions, are frowned upon and
rarely used -- if at all -- but there is no shortage of C++ kernel
code. Just not in Linux.
I used to work on one. It was developed in the 90s, around the same
time C++ became an ISO standard. I think most of you have used it
without knowing. :)
I think OOP is natural for writing kernels, and most mainstream
kernels implement some kind of object system because of this. I also
wrote my toy kernel in modern C++, despite being "fluent" in both C
and C++. So yeah, C++ kernels are here, just not in Linux land.
saagarjha - 2 hours ago
Are you referring to macOS?
Cyph0n - 2 hours ago
I think the kernel itself is called Mach. Not sure if it
was written in C++ though.
sigjuice - 1 hours ago
The kernel is called XNU.
https://opensource.apple.com/source/xnu/ It is mainly
written in C.
self_awareness - 1 hours ago
The BSD part of the kernel is written in C, but IOKit, the kext
framework, is written in (a subset of) C++:
http://xr.anadoxin.org/source/xref/macos-10.13-highsierra/xn...
nabla9 - 2 hours ago
> Today, when x86-64 and ARM are the only two families that you need
to care about
Wait what? What happened to MIPS, z/Architecture, Power Architecture,
QDSP6, TMS320?
alxlaz - 1 hours ago
I meant that strictly in the realm of general-purpose operating
systems (the article is, after all, about writing a Linux kernel
module). Two of those are DSPs. The Linux kernel and busybox account
for 99.99% of the general-purpose code running on them, and bare-metal
code for both is still very common. In fact, Linux runs on QDSP6 only
under a hypervisor, and gained C6x support only somewhat recently (~5
years, I think?). The last time I saw a MIPS workstation was a very
long time ago :-) and, barring a few niche efforts, the same goes for
PowerPC as well.
Back when Linus was ranting, one could make a convincing case for
supporting all of these architectures in an OS meant for
general-purpose workloads, from server to desktop and from thin client
(eh? feel old yet :-)?) to VCR. Now you can sell an OS that supports
nothing but x86-64 and a few ARMs, and you get most of the server
market and virtually all of the desktop and mobile market.
Obviously the architectures that you need for a particular project are
the ones that you want to "care for" -- but in broad terms, it is
perfectly possible today for someone to have worked as a programmer
for ten years without encountering a single machine that is not either
x86- or ARM-based. Someone who had 10 years of programming experience
in 2003 probably ran into a SPARC system whether they wanted to or
not.
johncolanduoni - 31 minutes ago
I think you've missed large segments of Linux's user base; you've
definitely missed Android devices using MIPS processors.
revelation - 2 hours ago
There is plenty of C++ in the Linux kernel. It just masquerades
as function pointers and initial fields in structures called
"base".
TFortunato - 1 hours ago
You're getting downvoted, but you are kind of correct. Even if it
isn't actually written in C++, there is a lot of "OO-style" code in
the kernel that does things like dynamic dispatch using vtables...
basically making explicit what C++ would do behind the scenes for you
anyway.
I can see the argument from Linus' POV about avoiding over-abstraction
and knowing exactly what your code is doing at a low level, but at the
same time it leads to a lot of reinventing the wheel and makes me
think there is a better balance to be found.
https://lwn.net/Articles/444910/
https://lwn.net/Articles/446317/
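For anyone unfamiliar with the pattern being described, here is a tiny
userspace-compilable sketch of the hand-rolled "vtable" style -- a
struct of function pointers, as in the kernel's file_operations. All
names here are invented:

    #include <stdio.h>

    struct block_ops {                 /* the "vtable" */
            int (*read)(void *self, long sector, char *buf);
            int (*write)(void *self, long sector, const char *buf);
    };

    struct ram_disk {                  /* an "object": ops pointer + state */
            const struct block_ops *ops;
            char storage[16][512];
    };

    static int ram_read(void *self, long sector, char *buf)
    {
            struct ram_disk *d = self;
            for (int i = 0; i < 512; i++)
                    buf[i] = d->storage[sector][i];
            return 0;
    }

    static int ram_write(void *self, long sector, const char *buf)
    {
            struct ram_disk *d = self;
            for (int i = 0; i < 512; i++)
                    d->storage[sector][i] = buf[i];
            return 0;
    }

    static const struct block_ops ram_ops = {
            .read  = ram_read,
            .write = ram_write,
    };

    int main(void)
    {
            struct ram_disk disk = { .ops = &ram_ops };
            char out[512] = "hello", in[512];

            /* "Dynamic dispatch": callers only go through the ops table. */
            disk.ops->write(&disk, 0, out);
            disk.ops->read(&disk, 0, in);
            printf("%s\n", in);
            return 0;
    }

This is essentially what struct file_operations, struct net_device_ops
and the kernel's many other *_ops tables do, just written out by hand
instead of generated by a C++ compiler.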