So essentially it's running a single computer as if it were two separate workstations?
I could see an implementation that's similar to those running a VM with a dGPU for gaming. User A could log in against the primary GPU and OS. User B could run a VM with several cores allocated and the secondary GPU dedicated to the VM. If any shared file resources in the primary OS are needed, KVM has ways to do that as well.
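To sketch what that file sharing can look like (assuming libvirt/QEMU with a 9p export; the directory, share tag, and mount point below are just placeholders):

```
# Host side: add a 9p export to the guest's libvirt domain XML
# (placeholder path and tag):
#   <filesystem type='mount' accessmode='mapped'>
#     <source dir='/srv/shared'/>
#     <target dir='hostshare'/>
#   </filesystem>

# Guest side: mount the export using the same tag
sudo mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/shared
```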
Not entirely sure why this reply is being panned (was at -6 when I first saw it).
OP is in the process of upgrading their PC to a Ryzen 9. If we assume this Ryzen 9 is on the AM5 platform, the CPU comes equipped with an iGPU, meaning the RTX 3060s are no longer needed by the bare metal. So, installing a stable, minimal point-release OS as a base would minimize resource utilization on the hardware side. This could be something like Debian Bookworm or Proxmox VE with the no-subscription repo enabled. There's no need for the NVIDIA GPUs to be supported by the bare-metal OS.
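For reference, the no-subscription repo is just a one-line apt source. A minimal sketch, assuming Proxmox VE 8 on Bookworm:

```
# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

# Comment out the enterprise line in /etc/apt/sources.list.d/pve-enterprise.list
# if you don't have a subscription, then:
#   apt update && apt full-upgrade
```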
Once the base OS is installed, the VMs can be created, and the GPUs and peripherals can be passed through. This step effectively removes the devices from the host OS -- they don't show up in lsusb or lspci anymore -- and "gives" them to the VMs when they start. You get pretty close to native performance with setups of this nature, to the point that users have set up Windows 10/11 VMs in this way to play Cyberpunk 2077 on RTX 4090s with all the eye candy, including ray reconstruction.
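Roughly, the passthrough side on a Proxmox host looks like the sketch below. The PCI IDs, bus address, and VM ID are examples only; you'd substitute the output of `lspci -nn` for your own 3060s:

```
# 1. Enable IOMMU passthrough mode on the kernel command line (AMD board),
#    e.g. in /etc/default/grub, then run update-grub:
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"

# 2. Bind the GPU and its audio function to vfio-pci so the host never claims
#    them (example vendor:device IDs -- take yours from `lspci -nn`):
echo "options vfio-pci ids=10de:2503,10de:228e" > /etc/modprobe.d/vfio.conf
update-initramfs -u

# 3. Hand the whole card to VM 101 (example VM ID and bus address;
#    pcie=1 assumes the VM uses the q35 machine type):
qm set 101 -hostpci0 0000:01:00,pcie=1,x-vga=1
```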
Downsides:
Upsides:
It's not exactly what OP is looking for, but it's definitely a valid approach to solving the problem.
I came to the comment section to recommend Proxmox or another hypervisor as well. If it were a system with just one GPU, I wouldn't, since splitting a single GPU between two VMs can be difficult; but having two GPUs under one OS is often a bigger headache anyway. I think it's definitely the cleaner and easier way to go. One caveat I'll add is that resources are more strictly assigned to each seat, so memory and CPU can't be shifted as readily to whichever seat needs them more. Another positive is that it would be super simple to create a third VM with a small amount of resources for running a small self-hosted server of some kind on the same box.
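Something along these lines would cover that third VM on a Proxmox host (a rough sketch; the VM ID, name, sizes, and ISO filename are made up):

```
# Small headless VM for self-hosted services; assumes a Debian ISO has already
# been uploaded to the "local" storage (filename here is a placeholder).
qm create 110 --name homelab --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:16 \
  --ide2 local:iso/debian-12-netinst.iso,media=cdrom \
  --boot order='ide2;scsi0'
qm start 110
```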