HN Gopher Feed (2017-11-16) - page 1 of 10
RISC-V port merged into Linux 4.15
219 points by rwmj  https://groups.google.com/a/groups.riscv.org/forum/#!topic/sw-de...
acoye - 5 hours ago
With Google's new war on Intel's ME this is not a coincidence.
_chris_ - 3 hours ago
Actually it is.
thisispete - 3 hours ago
RISC is good. https://www.youtube.com/watch?v=wPrUmViN_5c
ramshorns - 8 hours ago
phantom_oracle - 8 hours ago
Can someone explain what having RISC-V support in the kernel means
for Linux OSes? Also, how is this different to:
davidlt - 5 hours ago
Stable ABI. We have done ~Fedora 25 in late 2016 -
https://fedoraproject.org/wiki/Architectures/RISC-V and you can
even boot that in your browser (https://bellard.org/jslinux/).
ABIs have changed after we did this. We get the first long-term
stable ABI with the 4.15 kernel and glibc 2.27 (hopefully). At
that point Fedora, Debian, and others can reboot efforts.
phkahler - 4 hours ago
That was great work and it made RISC-V look that much more
viable to me. To think that thousands of programs are ready to
go whenever the hardware arrives is really amazing. It's
unfortunate that some of it has to be repeated after the ABI
changes, but that seems a small price to pay in the grand
scheme of things. But then I'm not the one who did it or has to
redo it ;-) Anyway, great job guys!
consp - 8 hours ago
Since the status log includes the patch request and news about
it, I think none. It seems to have originated from the combined
efforts of the RISC-V community at large and supported by the
dtech - 8 hours ago
It's been merged into the main kernel (i.e. the one maintained by
Linus). Distributions can apply their own patches against the
main kernel, which Debian did for RISC-V support. They don't have
to apply the patches anymore. Functionality-wise, what's been
merged is only a subset.
mmjaa - 8 hours ago
I'd really like to jump on the RISC-V bandwagon. Does anyone have
any recommendations for hardware that would be .. somewhat ..
future-proof for hacking/experimenting on this platform? I'd love
to have a board or two akin to the rPi-Zero in form-factor, if such
a thing were available ..
vhodges - 8 hours ago
Nothing available yet capable of running Linux. SiFive has a SOC
coming soon (https://www.sifive.com/products/risc-v-core-
ip/u54-mc/) that will be. Speculation is that dev boards will be
available 1st quarter 2018. My guess is that they'll announce
(likely on crowdsupply) around Christmas for delivery sometime
next year... but that's just a guess. The other option that is
coming is from lowrisc.org, but they are a small team, so it's
taking a long time, and I have no idea when they might be
delivering.
ajross - 7 hours ago
There are actually no hardware processors capable of running the
code that was merged. So far all the RISC-V parts in the market
are MMU-less microcontroller devices.
srcmap - 7 hours ago
Also, is there any (low cost?) FPGA board that is capable of
running RISC-V + Linux? That would be a fun thing to do.
kbeckmann - 7 hours ago
Found this blog post by someone who bought an FPGA devkit for
€260 and managed to boot Linux on it.
phkahler - 4 hours ago
That's been going on for some time. This is probably not the
best link, but it's one of them:
custom-u500/2... I assumed SiFive was waiting for the privileged
instructions to be finalized before making real chips. But
people have been running Linux on their stuff on FPGA for a
while.
kbeckmann - 8 hours ago
As far as I know, only the Freedom E310 chip is available for
purchase by normal consumers. It's a microcontroller and only has
16kB of RAM so it's probably not really what you are looking for.
But if you are still interested in it, you can pick up a HiFive1
devkit at crowdsupply and possibly elsewhere. There's also the
cheaper LoFive, which is a very bare breakout board for this
chip, but I'm not sure if it's possible to buy it anymore.
nickik - 9 hours ago
I love how once this exists in all the standard tools, anybody can
just make a new chip and practically instantly run huge amounts of
software on it and have the right base to add to it. A free ISA is
a great interfacing point: innovation can go on above and below it
mostly independently, with no licences or anything required. A good
example of permissionless innovation. The road is long for RISC-V
but I think the project is progressing about as fast as one can
hope for. Thanks to everybody who helps with this.
pjc50 - 8 hours ago
What about the boot sequence and driver enumeration? How is this
handled in RISC-V? This has traditionally been one of the major
obstacles to ARM - you need different kernels for two ARM devices
with nominally the same core because of initialisation
differences. Devicetree makes this better, but not 100% as simple
as the old "legacy" PC boot sequence was. Ironically, these days
PCs are quite hard to boot too, with UEFI.
ansible - 7 hours ago
Heh. Device Tree only deals with a fraction of the problem.

Here's what I think would help (warning: lots of work!):

1. A large database of free peripherals and foundational blocks.
Not just UARTs and SATA controllers, but also network
interconnects (crossbars), SDRAM controllers, and all the other
bits that go into a modern System-on-Chip (SoC). Power management
is a huge pain in general, and is widely different among chip
families even from the same vendor.

1.A. Obviously, the above implies a decent set of open-source
drivers for multiple operating systems for the peripherals. At
the very least, you need support for a bootloader, a real-time
OS, and Linux. Preferably with no binary blobs.

2. A standard for enumeration of these on-chip peripherals. There
should be a way to query the chip itself to see what is included
in the SoC, what device addresses the peripherals have, what
options are enabled (like how many interrupts), and how the
peripheral lines (if any) are connected (IO mapping).

3. Some widely-accepted standard for storing the board
configuration as well. This could just be an I2C EEPROM, but it
has to have a listing of everything connected to the SoC and how:
this I2C bus is mapped to these pins, and has this accelerometer
connected at this I2C address. And all the hardware manufacturers
(who produce the boards) have to be convinced it's a good idea to
have this extra cost (and space) built into their products.

Only after all that would you have a fighting chance to boot a
relatively generic kernel and have it actually run. And then a
fighting chance for the end-user to upgrade the software after
the manufacturer has ceased support.

The cross-platform situation with PCs is light-years ahead of
where we are in the embedded space.
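[Editor's sketch] Point 3 above can be made concrete with a toy board-description record, of the kind that might live in an I2C EEPROM. The record format, field names, and devices below are all invented for illustration; they are not any real standard:

```python
# Hypothetical board-description record a generic kernel could parse
# to learn what is wired to the SoC. All formats/devices are made up.
from dataclasses import dataclass

@dataclass
class Peripheral:
    bus: str         # e.g. "i2c0", "spi1"
    address: int     # device address on that bus
    compatible: str  # devicetree-style driver-matching string

# What the EEPROM might describe for one imaginary board.
BOARD = [
    Peripheral(bus="i2c0", address=0x1D, compatible="adi,adxl345"),    # accelerometer
    Peripheral(bus="i2c0", address=0x68, compatible="nxp,pcf8563"),    # RTC
    Peripheral(bus="spi1", address=0,    compatible="winbond,w25q64"), # SPI flash
]

def devices_on(bus: str):
    """List everything the board description says is on a given bus."""
    return [p for p in BOARD if p.bus == bus]

for p in devices_on("i2c0"):
    print(f"{p.bus}@{p.address:#04x}: {p.compatible}")
```

A generic kernel would walk such a record at boot and bind drivers by the `compatible` string, much as devicetree does today, without a per-board kernel build.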
cjbprime - 4 hours ago
Step 3 is often achieved by having the firmware append a
device tree blob to the kernel which contains all of the
board setup. I think it's pretty widely adopted?
ansible - 3 hours ago
Yes, it is headed in that direction. There's more work needed
in areas like uniform conventions for configuring peripheral
pins and such, in my opinion. And then there's clock
configuration, which is kind of a mess, because everything is
hooked up differently on the different SoCs that I've seen. It
would also be nice if vendors always shipped up-to-date device
tree source files for any board they ship.
throwaway613834 - 3 hours ago
For those of us who don't know what is involved in power
management, would you mind giving a quick explanation of what
it involves and why it's painful? What power is there even to
manage unless you want to put the computer to sleep or shut
it down? Or are those the sole things you are referring to?
cjbprime - 3 hours ago
Not the OP, but there are many (tens of) "power domains" on
the system, and if you care about power/battery life then
you want to power each of them down when they're unused,
automatically, even while the rest of the system stays
running. I.e., why pay the battery-life cost for powering a
USB hub when there are no USB devices plugged in, or for a
DSP when there's no sound being played right now? But the
calculation of when it's safe to power down any of the
power domains can be very complicated -- you can't turn off
a power supply unless everything that depends on it is
unused, so there's a subtle and board-dependent
graph/dependency problem to solve.
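[Editor's sketch] The dependency problem described above can be modeled as a small graph walk. The domain names and topology here are invented for illustration:

```python
# Toy model: a power rail may be switched off only when every
# consumer that (transitively) depends on it is idle.
SUPPLIES = {            # domain -> domains/devices it powers
    "vdd_main": ["vdd_usb", "vdd_dsp"],
    "vdd_usb":  ["usb_hub"],
    "vdd_dsp":  ["dsp"],
}
IN_USE = {"usb_hub"}    # leaf devices currently active

def can_power_down(domain: str) -> bool:
    """True if nothing at or below this domain in the supply graph is in use."""
    if domain in IN_USE:
        return False
    return all(can_power_down(child) for child in SUPPLIES.get(domain, []))

print(can_power_down("vdd_dsp"))   # the DSP is idle
print(can_power_down("vdd_main"))  # the USB hub still needs this rail
```

Real SoCs complicate this with voltage sequencing, shared regulators, and board-specific wiring, which is why the graph ends up board-dependent.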
throwaway613834 - 3 hours ago
Awesome, thank you!
klodolph - 25 minutes ago
Piggybacking on this - what can make it worse is that some
chips might not power back on reliably, in spite of the
manufacturer's claims.
ansible - 2 hours ago
In addition to what cjbprime mentioned... There's a lot going
on in a modern SoC (or even one from 10 years ago) with regards
to power management. Just the processor cores themselves can
run at different voltages and frequencies, and these are tied
to the software workload. And then you've got the big/little
stuff from ARM now, which is asymmetric multiprocessing, where
which cores you use depends on the workload. And how it works
and the interface to it change from vendor to vendor, sometimes
from chip to chip. So it usually isn't implemented all that
well, nor is it maintained for very long either. If (big if) we
had more standardized interfaces to this functionality, we
could use common drivers that can be upgraded independently.
Like I said, the desktop people may not realize how easy
they've got it, where most things are accessed via a
standardized bus interface (PCIe or USB). There's no virtuous
cycle of software and interface re-use like there is in the
desktop world.
lloeki - 5 hours ago
> as simple as the old "legacy" PC boot sequence was.

Some of the Hackintosh folks know how false this can be. As soon
as you step out of real mode / VESA graphics, all bets are off:
those ACPI tables can be a real PITA, and you may have to fix
stuff by extracting, editing, and recompiling your SSDT/DSDT, at
which point you generally wonder how the stock ones worked at
all in the first place: missing symbols, broken CPU state
tables... (Well, deep inside you know: Windows/Linux are doing
a hell of a job doing their best at working around all that.)
nickik - 8 hours ago
There are still many issues left and it will not be easy to
figure out all these things. It will have many of the same
problems as ARM but hopefully the community can work on these.
rwmj - 7 hours ago
For servers, they're looking at ARM SBSA as a model. There's
already some starting work on UEFI for RISC-V (done by HPE).
Zaak - 6 hours ago
> There's already some starting work on UEFI for RISC-V

That makes me deeply sad.
rwmj - 5 hours ago
Why? UEFI is open source. It's standardized, and provides
standard interfaces to boot-time device drivers and to
operating systems above. It provides a reasonable command
line (TBH much preferable to u-boot). It works well with
Linux and Windows. It's complicated, but it's solving a
complicated problem.
jabl - 3 hours ago
Parts of this presentation have some explanation: https://
(obviously you can ignore Intel x86-specific stuff like the
ME). OpenPOWER apparently does something similar, i.e. a
minimal firmware that loads a Linux kernel + simple userspace
from flash.
rwmj - 28 minutes ago
On RISC-V the machine layer (like ME) will be completely
open source, and so will UEFI. You're inventing a
problem that doesn't exist.
jws - 7 hours ago
> PCs are quite hard to boot too with UEFI

Let's hope RISC-V can avoid something stunningly complex like
ACPI while they are at it. You might be trying to write a secure
OS of well-reviewed code, then suddenly find that to reliably
shut down a PC you need ACPI, which drags in 100,000 lines of
code including an interpreter capable of accessing arbitrary
physical addresses. (Or you just try all the shutdown mechanisms
you've ever heard of and hope one of them nails the machine you
are running on.)

I wonder if hardware is easy enough now that a platform could
require that all optional hardware respond with a UUID during
an enumeration phase. That would have been a burden back in
early ISA days when things were as simple as a couple registers
on a card, but maybe now it isn't. Then the OS can handle the
ones it knows about and leave the rest alone.
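[Editor's sketch] The enumeration idea above - every optional device reports a UUID, and the OS binds drivers it recognizes while leaving the rest alone - might look like this. All UUIDs and driver names are invented:

```python
# Sketch: drivers keyed by hardware UUID; unknown devices are ignored.
import uuid

UART_ID = uuid.UUID("6ba7b810-9dad-11d1-80b4-00c04fd430c8")  # example value
NIC_ID  = uuid.UUID("6ba7b811-9dad-11d1-80b4-00c04fd430c8")  # example value

KNOWN_DRIVERS = {UART_ID: "uart16550", NIC_ID: "rtl_nic"}

def bind_drivers(enumerated):
    """Map each enumerated device UUID to a driver name, or None if unknown."""
    return {dev: KNOWN_DRIVERS.get(dev) for dev in enumerated}

mystery = uuid.uuid4()  # a device the OS has never heard of
bound = bind_drivers([UART_ID, mystery])
print(bound[UART_ID])  # known device gets its driver
```

The point of the thread's proposal is exactly this `get`-or-`None` behavior: unrecognized hardware is safely skipped rather than crashing the boot.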
tmzt - 5 hours ago
What about dtb, and standardizing something like coreboot's
libfirmware? Should it really be up to every hardware block to
support full introspection? Doesn't that start to turn into
PnP ISA? The EEPROM solution starts to sound a lot like how
DIMMs are identified and configured.
hinkley - 5 hours ago
Learning to write a better ACPI script for my LifeBook was a
big accomplishment for me, but it ultimately led me to a 13"
MacBook Pro. I suspect it's such a hidden part of the whole
process of using a computer that few repercussions are felt.
Since there's not enough negative PR value, it changes only
slowly. Though it sure does suck for portability.
tmzt - 4 hours ago
It would be pretty cool to have a DSDT-to-kernel-C compiler,
allowing us to port our laptops over to native power management
over time. We should be able to write GPIO chip drivers for the
actual hardware which the interpreter normally targets, and
eventually eliminate ACPI from our systems.
madez - 8 hours ago
While it is true that a lot of software is available immediately,
there is also software which is not. For example, PC games are
nearly always restricted to x86. Performance of common non-x86
hardware might not be enough to run the latest, demanding games,
but I don't see why games like Factorio, Stardew Valley, Lethal
League, Don't Starve, or Bastion couldn't run on less powerful
hardware. Great games do not require demanding hardware.

One problem is that games are usually closed source and are
rarely freed. Another problem is that there are no open graphics
drivers for common single-board computers. The latter problem is
being worked on by the amazing Mesa project. What can be done
about the former?

I worry that games will be abandoned and never be freed. The
copyright of movies eventually expires and they become a common
good. What about closed binaries compiled for obsolete hardware
and software?
ekianjo - 5 hours ago
I know it's not ideal, but x86 software can run on ARM via an
x86 processor emulator, and then WINE to convert Windows system
calls and DirectX calls to Linux and OpenGL. So nothing will
really be lost as long as WINE lives on as a project.
cyphar - 8 hours ago
There are quite a few communities around creating free engines
for games, such as OpenMW (https://openmw.org/en/), and there
are plenty more. With a free engine, all of those problems can
be resolved.

As for movie copyright, this is the true purpose of DRM -- to
try to make copyright eternal by locking away the content behind
strong-enough encryption and then licensing the keys. And of
course, breaking DRM (even if the content itself is no longer
protected by copyright) is a felony.
madez - 8 hours ago
Wow. So DRM is a technical hack of copyright law. Is there
any other way than adjusting the law to enforce release?
zanny - 7 hours ago
This is going to be one of the most embarrassing chapters of
world history in a few centuries.

"Oh yeah, all those old folk at the turn of the millennia had
this thing called copyright, and all their code was proprietary,
so we can no longer run or use 99% of software written between
1970 and 20XX."

"And there are these files where it was a felony to be able to
access them!"

Wow, those guys were sure dumb. Now all our information is open
and shared and makes the world a better place, and anyone can
use it or improve it!

All I can say is let's hope it's only until 20XX that we get
over the facade of IP being worth the cost.
squarefoot - 6 hours ago
"This is going to be one of the most embarrassing chapters of
world history in a few centuries."

And it won't be that far from archaeologists being prevented
from deciphering any ancient language. If today's copyright laws
had existed in historic times, we wouldn't have precious
documents copied by monks, nor the Rosetta Stone. It's just
dumb crazy how this works.
pkaye - 6 hours ago
Is having copyright a concern? Even Linux has a copyright. And
all GNU software. Do you want all forms of copyright not to
exist?
zanny - 5 hours ago
I am a general IP abolitionist, but the real solution to the
problem of source code lost forever is in the domain of right
to repair. Since the advent of software, the right to repair,
which used to be respected for almost anything you could buy,
has been completely abandoned. Requiring access to the sources
to modify code you buy (or, in a post-IP world, receive in
binary form) would go a long way to stopping the software
culture's death of source code never being released and rotting
on hard drives until it's irretrievable.

The problem is the current dominant culture is either completely
apathetic or actually hostile to right to repair for software.
It's why I say "I hope it's only 20XX", because I don't see the
path to fixing that mindset.

That being said, it is important to distinguish that while I
spoke about how future historians will see both draconian IP and
the lack of right to repair as barbaric and antiquated, they are
distinct but related problems.
cjbprime - 3 hours ago
GNU software's clearly using copyright as a hack
("copyleft") to achieve its actual goal of a non-copyright
world. So it shouldn't be surprising that GNU supporters
would prefer copyright not to exist.
phkahler - 2 hours ago
>> So it shouldn't be surprising that GNU supporters would
prefer copyright not to exist.

Not so sure. That would effectively make all open source code
fall under the equivalent of a BSD or MIT license, which the GPL
clearly is not. The GPL wants the essential freedoms to apply to
all derivative works as well as the original. Neither BSD, MIT,
nor a complete lack of copyright can make that happen.
gnulinux - 1 hours ago
The FSF is about making all software free, not all software
GPL'd. If copyright laws were dissolved and proprietary software
were outlawed, there would be no need for the GPL. The GPL is a
hack to prevent future proprietary software from using your
code. In other words, if no one can hide their source code,
there wouldn't be a difference between MIT and GPL.
phkahler - 1 hours ago
Abolition of copyright and banning proprietary software
seem like two very different things. The comment I
replied to just mentioned elimination of copyright. I
agree that combined with a ban on proprietary software
none of the licenses would really matter - depending on
the wording of the laws of course.
imtringued - 6 hours ago
I have switched to open-source alternatives primarily because
of the freedom to recompile them for different architectures.

> Another problem is that there are no open graphics drivers
for common single board computers. The latter problem is being
worked on by the amazing Mesa project. What can be done about
the former?

Can you tell me more about this?
madez - 4 hours ago
I phrased it incorrectly. The work is not necessarily done by
the Mesa project, but within its framework. The degree of
activity of each project varies.

Mali 400 (Allwinner A20): https://github.com/yuq/mesa-lima
Adreno 2xx, 3xx, 4xx
VC4 (Raspberry Pi): https://github.com/anholt/mesa/wiki/VC4

If the VC4 or Mali drivers mature, that'd be a huge step forward.
phkahler - 2 hours ago
I've said it before, but I'd like to see LLVMpipe get a RISC-V
back end, and, when the vector extensions are ready, support for
those. This way all the SoC vendors should be able to implement
basic framebuffer support with HDMI and use extra RISC-V cores
in place of a GPU. It wouldn't be fast, but it should be enough
for things like a composited desktop, and would provide minimal
OpenGL support. Also, some guys have done a 500-core RISC-V
chip, so maybe it doesn't have to be a poor performer. Intel got
decent performance from Larrabee, right?

Edit: the point is that many companies don't have any graphics
IP, so this would give them a compromise.
jjawssd - 7 hours ago
It is impressive to see a new architecture already have so much
support for existing software with a relatively simple hardware
support layer. It is already standing on the shoulders of
giants, with millions of man-hours invested in software
development to bring it to its current state.

Related question: Why is Google developing a new operating
system for their devices from scratch? Will this operating
system be incompatible with all existing software? Or will it be
compatible with software and hardware platforms like this one
through some sort of compatibility layer?
pedroaraujo - 7 hours ago
Fuchsia will be a capability-based operating system, a new
security model for computing. I guess Google wants to experiment
with a new software system for secure IoT devices.
snvzz - 2 hours ago
Linux, while very successful, is still essentially based on a
design (UNIX) from ~1970. Much has happened since then in OS