I ran Linux natively as my sole workstation OS for nearly 10 years, and spent a lot of that time tinkering with WINE, including developing and submitting some patches, but eventually I had to give up because advanced things like Photoshop were too spotty in WINE and too slow in VMs.

Before I got the hypervisor set up, I ran Windows on the hardware with Linux VMs hosted in VirtualBox. The biggest issue with this (aside from the general shame and guilt of using Windows on the hardware) was that Windows would decide it wanted to turn off for MS-enforced updates and bring everything down.

My solution was ultimately to set up an Arch-based KVM hypervisor with a Windows 10 VM running as the main "workstation", with USB + GPU PCI passthrough and paravirt. The hypervisor also runs Linux VMs, from which I do development work via VNC and/or SSH. This is the most convenient workflow situation for me, and allows the best of both worlds. It essentially makes Windows act like a desktop environment for a Linux box while maintaining practically-native overall performance for all workloads, including gaming and photo/video editing. It also grants the admin conveniences of virtualized environments, since I can use zvols to snapshot everything at once, place clean resource limitations on each environment, etc. Now, Windows is separate and it can crash, reboot, or hurt itself all it wants, and rarely causes any real loss. The one thing this setup wouldn't work for is Linux-based graphics development, but even then, you can get a second GPU and pass it through to another VM running on a separate display.

That said, it took me a couple of weeks to get all of the bugs worked out and things running reasonably smoothly. There are some guides, but it's very hardware-dependent and touch-and-go. I would not recommend it for the faint of heart, or for those without significant sysadmin experience.

On consumer hardware, it can be hard to find out if you even have IOMMU support, which is required for passthrough. It's not necessarily new, but a lot of hardware doesn't support it; unfortunately for me, my i7-3770K did not have it (though the non-K i7-3770 did). I did the hypervisor build on a new enterprise-class workstation with a Supermicro motherboard. I started to build a custom kernel to enable some extra features, since I had to compile dev branches anyway to troubleshoot periodic hard locks on kernels 4.12 and 4.13, but the setup should work with a stock kernel.

For me, the biggest hangups on the checklist were:

A) Ensure 100% UEFI everywhere. It's possible to do this with BIOS, but as far as I understand that path is not well tested anymore. It can also be hard because boards sort of straddle a middle ground between UEFI and BIOS: if you don't set everything to explicit UEFI in the setup, it may init the system with either, or may init the BIOS first for hardware compat, which will make things weird. If your video card is slightly older, from the time when UEFI was just getting supported by PC mobos, and does not have UEFI boot, you may be able to find a UEFI-compatible video BIOS online.

B) Kernel VGA options, particularly video=efifb:off. This ensures that the video device is available for vfio-pci to grab, and blocks out other drivers that may try to grab it later. In theory you don't need this, but it depends on your board, hardware, etc. I had to do it for my GTX 670 (the 1070 I use most of the time did not need it). The downside is no video out after the bootloader, so you can't watch the boot process, and that made things far harder than necessary. Get serial output working ASAP, or use a board with a built-in IPMI. I use a USB serial port to watch the boot now, but for a long time I just waited until SSH came up and, if it didn't, used the systemd emergency console to try to poke at it blind. I didn't realize it was possible to order the workstation I got without IPMI, or I would've made triple-sure I had it.
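To tie the kernel-option pieces together, here is a sketch of what the relevant boot options can look like on a GRUB-based system. The PCI IDs shown are illustrative examples for a GTX 1070 and its HDMI audio function, not values from this build; substitute your own GPU's vendor:device pairs from lspci -nn, and note that intel_iommu=on assumes an Intel platform (AMD uses amd_iommu).

```shell
# /etc/default/grub -- illustrative fragment, not a drop-in config.
# vfio-pci.ids binds the listed devices to vfio-pci at boot;
# video=efifb:off keeps the EFI framebuffer off the passthrough GPU.
# 10de:1b81 / 10de:10f0 are example IDs (GTX 1070 + its audio function);
# replace them with the output of:  lspci -nn | grep -i nvidia
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt vfio-pci.ids=10de:1b81,10de:10f0 video=efifb:off"
```

After editing, regenerate the config (grub-mkconfig -o /boot/grub/grub.cfg on Arch) and reboot; remember that with efifb off you lose console video after the bootloader, so have serial or SSH access ready first.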
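If you want to check for IOMMU support before committing to a passthrough build, the kernel exposes the groups under sysfs. A minimal sketch (the sysfs path is standard, but group layout and contents vary by board, CPU, and firmware settings):

```shell
# Check whether the kernel has IOMMU enabled by looking for populated
# IOMMU groups in sysfs. On a machine without IOMMU (or without
# intel_iommu=on / amd_iommu on the cmdline) the directory is absent
# or empty.
iommu_root=/sys/kernel/iommu_groups
if [ -d "$iommu_root" ] && [ -n "$(ls -A "$iommu_root" 2>/dev/null)" ]; then
    iommu_status="enabled"
    # Show which devices share each group; the GPU needs to be cleanly
    # separable from devices the host still uses.
    for g in "$iommu_root"/*; do
        echo "Group ${g##*/}:"
        ls "$g/devices"   # PCI addresses; pass to 'lspci -nns' for names
    done
else
    iommu_status="disabled"
fi
echo "IOMMU: $iommu_status"
```

If the groups are there but your GPU shares one with other devices, you may need the (less safe) ACS override patch or a different slot/board, so it is worth running this before buying hardware.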