For those not at the meeting tonight, I just wanted to give a quick report on some new-ish first-hand tech experience vis-à-vis Linux support.
- DisplayPort daisy-chaining works with Linux (the kernel must be fairly new) and with at least the NVIDIA binary driver (not sure about nouveau, though the Intel onboard-GPU FLOSS drivers are said to support it for sure). You'd also need a new-ish video card with a new enough DisplayPort spec; daisy-chaining is a multi-stream transport (MST) feature, so the source needs DisplayPort 1.2. Probably any decent mid-range card from the last 3 years will do. (Quick xrandr sanity check below.)
- The final monitor in the chain doesn't need a pass-through port and can be an older DisplayPort spec (probably 1.1+)
- Works great with 2 x 2560x1440 LCDs, but 2 x 4K is probably beyond the bandwidth of the widely-available DisplayPort 1.2 spec: HBR2 gives roughly 17.3 Gbit/s usable across four lanes, and a single 4K@60 stream already needs around 12.5-13 Gbit/s, so two of them won't fit.
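If you want to check that the chain is actually coming up as separate heads, something like this works. (Output names here are just examples and vary by driver; use whatever xrandr actually reports.)

    # list connected outputs; with MST each monitor in the chain shows
    # up as its own output (e.g. DP-1-1 and DP-1-2)
    xrandr --query | grep ' connected'

    # place the second monitor in the chain to the right of the first
    xrandr --output DP-1-1 --auto --output DP-1-2 --auto --right-of DP-1-1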
- Ryzen (1st and 2nd gen) supports ECC on (certain) normal desktop boards, like most ASRock ones; we used an ASRock AB350-PRO4. edac-utils says it works, dmidecode hints it works (though with some weird output, see below), memtest86 (latest version) says it works, and rasdaemon says it works. rasdaemon is the replacement for the now-abandonware edac-utils.
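For reference, the checks were along these lines (ras-mc-ctl only has something to report once rasdaemon has been running for a while):

    # dmidecode: look for "Multi-bit ECC" under the Physical Memory Array
    dmidecode --type memory | grep -i 'error correction'

    # edac-utils: should list the memory controller with zero error counts
    edac-util -v

    # rasdaemon: controller status plus any logged memory errors
    ras-mc-ctl --status
    ras-mc-ctl --errors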
So there you have it: an option to get a "workstation" with ECC and at least one PCIe x16 slot (full 16 lanes) at about half the price of a Xeon setup, which is the only way to get ECC plus an x16 slot with a real CPU in Intel land. There are some Ryzen teething issues and BIOS+Linux workarounds you have to do, though, so it's not as painless as a Xeon would be.
If anyone knows why dmidecode reports these RAM widths, let me know:

    Error Correction Type: Multi-bit ECC
    Data Width: 64 bits
    Total Width: 128 bits

(128 bits?! It should be 72: 64 data bits plus 8 ECC bits.)
Oh yeah, one more:
- An NVMe SSD in an M.2 slot (basically an SSD right on the PCIe bus) works great in Linux, with a couple of gotchas:
- If you want to boot from it or use it in RAID, you need to tell dracut to include the NVMe modules and load them very early in the boot process. Make an /etc/dracut.conf.d/nvme-before-raid.conf containing:

    force_drivers+="nvme-core nvme"
Without that, your RAID will assemble before NVMe is available, and the NVMe drive will get kicked from your RAID on every reboot.
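After dropping that file in, rebuild the initramfs so it takes effect (the lsinitrd line is just a sanity check):

    # regenerate the initramfs for the running kernel
    dracut -f

    # verify the nvme modules made it into the image
    lsinitrd | grep -i nvme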
- NVMe drives are not /dev/sdX; they are /dev/nvme*. That means tools (like scripts I've written, hehe) that expect names like sda will not work without modification. It looks like the kernel made these their own class of device rather than pretending to be SCSI like most disk interfaces out there. This has subtle implications for things like SMART, RAID, sdparm, etc.
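For example, the naming splits out controller / namespace / partition, and SMART data goes through the NVMe admin interface rather than the SCSI/ATA path. Roughly (device names below are the usual first-controller examples):

    # /dev/nvme0      - the controller (character device)
    # /dev/nvme0n1    - namespace 1 on that controller (the block device)
    # /dev/nvme0n1p1  - first partition on that namespace

    # smartmontools grew NVMe support in 6.5; point it at the controller
    # or namespace device, not an sdX-style name
    smartctl -a /dev/nvme0

    # or, with nvme-cli installed:
    nvme smart-log /dev/nvme0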