
I'm starting to plan my next build, and one thing I want to accomplish is complete isolation between my Linux and Windows operating systems. I primarily work on Linux, but want to keep a Windows partition around for gaming. My current setup is a regular dual boot; nothing special. However, I've had Windows updates completely wreck my Linux partitions, so I want to achieve complete isolation between the two operating systems.

 

I know it's physically possible to do this: if I physically disconnect the other drive every time I want to switch OSes, I get what I'm after. That's obviously not very practical. Solutions exist for SATA-based setups, like this key-lock drive switch: https://www.amazon.com/Coolgear-Switch-3-5inch-Design-KeyLock/dp/B00R8IEXHI. That would be ideal, if I wanted to use SATA drives.

 

I want to use two large NVMe drives for my operating systems (maybe with a larger SSD/HDD for game storage; unsure yet). Are there any solutions that accomplish this for NVMe? Is there maybe a board whose BIOS supports this as a feature (something like "drive profiles" that I can easily swap between at boot)?


In Windows you can disable the Linux drive so Windows can't touch it. You can also boot Linux off USB. This is what I was doing until I bought a low-power Intel NUC for web surfing on Linux.
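If you'd rather do the Windows-side disabling from the command line than Device Manager, a minimal sketch with diskpart (the "disk 1" number is an assumption; check the "list disk" output to find your actual Linux drive):

```
:: From an elevated Command Prompt. "disk 1" is an assumption --
:: verify with "list disk" which disk is the Linux one.
diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> offline disk
```

The offline flag survives reboots, but just like the Device Manager toggle, anything running with admin rights can bring the disk back online.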

Back in the day I had a key switch that would switch SATA power between drives. It was much simpler than the product in your link. Something like that would require some engineering for PCIe.



From the Linux side, you have the option to disable a complete drive on the command line from the boot loader, but on the Windows side I'm not aware of a way to disable it in software that Windows can't undo (disabling a drive in Device Manager still allows you, or a program, to re-enable it). One ugly hack could be a DSDT override that completely hides a device from the OS (unless some kernel-mode driver tries to re-scan the complete address space). A sketch of the Linux-side option is below.
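As an illustration of that Linux-side option: you can keep the kernel's nvme driver away from the Windows drive by handing its PCI ID to a stub driver on the kernel command line. A minimal sketch, assuming pci-stub is built into your kernel and a Debian/Ubuntu-style GRUB setup; the 144d:a808 ID (a Samsung NVMe controller) is just an example, so substitute the ID of the drive you want hidden:

```
# Find the vendor:device ID of the NVMe controller to hide:
lspci -nn | grep -i 'non-volatile'
# e.g. "04:00.0 Non-Volatile memory controller [0108]: Samsung ... [144d:a808]"

# Have pci-stub claim that device before the nvme driver can bind to it.
# Add to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
#   pci-stub.ids=144d:a808
sudo update-grub && sudo reboot
```

Caveat: pci-stub.ids matches by vendor:device ID, so this only works cleanly if your two OS drives aren't the same controller model; identical drives would both get stubbed.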

 

If you are into hardware hacks, you could disconnect all the Vcc/Vdd lines on the DC-DC converters on the NVMe drives and add a physical switch to reconnect them. That is the only 'true' way to achieve this. The problem is that your firmware is still made to detect all devices (even those not needed to boot), because that's what legacy systems used to require, as they couldn't do it themselves. After detecting all devices it informs the OS, e.g. via ACPI or a device tree, and then the OS uses the devices. But a modern OS can simply start scanning/requesting things from the CPU/PCH etc. by itself and re-discover devices missing from ACPI.
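You can see that re-discovery behaviour first-hand on Linux through sysfs, which is exactly why soft-hiding isn't robust (the PCI address 0000:04:00.0 here is a placeholder; use your own NVMe controller's address from lspci):

```
# Soft-remove the NVMe controller from the PCI tree:
echo 1 | sudo tee /sys/bus/pci/devices/0000:04:00.0/remove

# The drive is now gone from lspci/lsblk... until anything
# triggers a bus rescan, which brings it straight back:
echo 1 | sudo tee /sys/bus/pci/rescan
```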

 

One way some security modules in firmware let you change this is by disabling a 'port' and then locking that port in the off state using SMM (System Management Mode), as that is the only way to block an OS from re-enabling the port or device. The downside is that you have to enter the UEFI/BIOS setup, enable/disable the ports/devices you want, save, reboot, and then boot your OS.

 

tl;dr: it's not really universally possible without a physical switch. If a half-baked solution is good enough, use Windows Device Manager and Linux boot loader settings to soft-disable the drives.

