FYI: Fedora/RISC-V third and final bootstrap
by Richard W.M. Jones
About a month ago I posted about the state of the RISC-V architecture
for Fedora:
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.o...
Quoting from that email:
| First the basics: RISC-V is a free and open Instruction Set
| Architecture (ISA). You can read more about it on the RISC-V
| Foundation's website here:
|
| https://riscv.org/
|
| Fedora/RISC-V is a project to port Fedora to RISC-V. Actual, real
| 64-bit RISC-V hardware you can buy is going to be released in Q1 2018
| (it's already sampling to a few lucky developers), and I want Fedora
| to be the first choice to run on that hardware.
|
| The Fedora/RISC-V project web pages are here:
|
| https://fedoraproject.org/wiki/Architectures/RISC-V
As noted in the earlier email, the project had been on hiatus since the
end of 2016, waiting for the RISC-V Foundation to commit to a stable
Linux libc ABI for the architecture. Well, finally that has happened.
glibc 2.27, due to be released on Thursday, will contain a stable
RISC-V ABI, allowing us to sanely develop a Linux distro.
And thus the Fedora/RISC-V project is back in business. Next week
we'll be starting the third (and final) bootstrap. You can follow the
work at these links:
https://github.com/rwmjones/fedora-riscv-bootstrap
https://fedoraproject.org/wiki/Architectures/RISC-V
https://fedoraproject.org/wiki/Architectures/RISC-V/Bootstrapping
The approximate timelines (don't hold me to any of this) are:
* mid-February: Stage 3 disk image.
* end-March: Pristine, pure RPM-built stage 4 disk image,
autobuilder picking up Fedora packages and building them.
* Summer: Shadow-Koji instance, Fedora 28/Rawhide RPM hosting.
You can already try out the interim (and very minimal and hacky)
bbl-bootloader/kernel/stage 3 disk image:
http://oirase.annexia.org/riscv/
Install the riscv-qemu package from:
https://copr.fedorainfracloud.org/coprs/rjones/riscv/
and then run:
qemu-system-riscv64 \
    -nographic \
    -machine virt \
    -m 2G \
    -kernel bbl \
    -append "console=ttyS0 ro root=/dev/vda init=/init" \
    -device virtio-blk-device,drive=hd0 \
    -drive file=stage3-disk.img,format=raw,id=hd0 \
    -device virtio-net-device,netdev=usernet \
    -netdev user,id=usernet
I'll be at FOSDEM on Saturday if anyone is interested in Fedora and
RISC-V.
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW
GCC broken in rawhide?
by Ralf Corsepius
Hi,
ATM all rawhide builds are failing for me because autoconf's tests for
CC are failing.
Digging into details led me to this error:
...
cc1: error: fail to initialize plugin
/usr/lib/gcc/x86_64-redhat-linux/7/plugin/annobin.so
annobin: conftest.c: Error: plugin built for compiler version (7.3.1)
but run with compiler version (7.2.1)
...
AFAIS, GCC in rawhide was updated to GCC-7.3.1, but apparently annobin
wasn't rebuilt, resulting in rawhide now carrying an inconsistent
GCC toolchain.
Ralf
Re: Fedora27: NFS v4 terrible write performance, is async working
by Petr Pisar
On Tue, Jan 30, 2018 at 08:31:05AM +0100, Reindl Harald wrote:
> Am 30.01.2018 um 08:25 schrieb Petr Pisar:
> > On 2018-01-29, J. Bruce Fields <bfields(a)redhat.com> wrote:
> > > The file create isn't allowed to return until the server has created the
> > > file and the change has actually reached disk.
> > >
> > Why is there such a requirement? This is not true for local file
> > systems. This is why fsync() exists.
>
> pretty simply because on the NFS server side the whole VFS layer sits in
> the path again, and without "async" in the export you need a way to rely on
> "the stuff I wrote to the network ended up on the filesystem on the other end"
If I need reliability, I issue fsync from the client process: the client VFS
passes it to the NFS client, the NFS client translates it into an NFS COMMIT
message and sends it to the NFS server, the NFS server translates it back to
fsync and passes it to the server VFS, and from there to the local file
system driver.
I don't understand why NFS should be reliable by default.
-- Petr
Re: Fedora27: NFS v4 terrible write performance, is async working
by J. Bruce Fields
On Tue, Jan 30, 2018 at 10:00:44AM +0100, Reindl Harald wrote:
> Am 30.01.2018 um 09:49 schrieb Terry Barnaby:
> > Untar on server to its local disk: 13 seconds, effective data rate: 68
> > MBytes/s
> >
> > Untar on server over NFSv4.2 with async on server: 3 minutes, effective
> > data rate: 4.9 MBytes/sec
> >
> > Untar on server over NFSv4.2 without async on server: 2 hours 12
> > minutes, effective data rate: 115 kBytes/s !!
> >
> > Is it really expected for NFS to be this bad these days with a
> > reasonably typical operation and are there no other tuning parameters
> > that can help ?
>
> no, we are running a virtual backup appliance (VMware Data Protection aka
> EMC Avamar) on vSphere 5.5 on a HP microserver running CentOS7 with a RAID10
> built of 4x4 TB consumer desktop disks and the limiting factor currently is
> the Gigabit Ethernet
>
> 35 TB network IO per month, around 1 TB per day which happens between 1:00
> AM and 2:00 AM as well as garbage collection between 07:00 AM and 08:00 AM
Again, this is highly dependent on the workload.
Your backup appliance is probably mainly doing large sequential writes
to a small number of big files, and we aim for that sort of workload to
be limited only by available bandwidth, which is what you're seeing.
If you have a single-threaded process creating lots of small files,
you'll be limited by disk write latency long before you hit any
bandwidth limits.
--b.