
Building a $100,000 PC Pt. 2 - SO MANY PCIe CARDS

Hey Linus, plug those power supplies into two different circuits, because at peak each of those power supplies pulls 12.5 amps. Put them on the same one and have fun flipping breakers.
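For anyone wondering where the 12.5 A comes from, here are rough back-of-the-envelope numbers (assuming roughly 1500 W supplies on a standard 120 V North American circuit; the exact draw depends on the PSUs and load):

```latex
% Rough numbers only: assuming ~1500 W per supply on a 120 V circuit
I = \frac{P}{V} = \frac{1500\,\mathrm{W}}{120\,\mathrm{V}} = 12.5\,\mathrm{A} \text{ per supply},
\qquad 2 \times 12.5\,\mathrm{A} = 25\,\mathrm{A} \gg 15\,\mathrm{A} \text{ (typical breaker)}
```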


2 hours ago, NMS said:

This channel is becoming more like Fortnite.

Thank god GN exists!

Care to explain your reasoning for comparing LTT to Fortnite?


Not very redundant: if it breaks, none of your editors can do any work!


Odd questions, but do you know exactly how the cable is transferring the PCIe signal? Is it a pin-for-pin direct line, or a reduced number of conductors with some sort of shift register on the other end? Or is the PLX chip used on both ends, creating its own signal in its own protocol that is then transmitted?

Also, have you guys considered the Amfeltec PCIe expansion backplane for JUST storage? I work for a local hospital system, and we use that exact product for PCIe SSDs used as cache drives in our 23 PB datacenter.

Sorry if I keep rambling, but have you also considered some of the products by Trenton Systems? They have some really neat concepts, including a custom form factor SBC paired with a custom mainboard, Xeon Gold/Silver/etc., and 88 PCIe lanes. Their backplanes support up to 14 PCIe slots as well. These could make a neat VM machine, should you want something a bit smaller than the bona fide OBELISK you're in the process of building.


I just saw your video, especially the struggle with the IOMMU and the ACS patch failing to split up the groups. Out of curiosity, have you also tried pcie_acs_override=downstream? Did it fail as well, and what did the kernel log have to say about it?
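In case it helps with testing, here is a small sketch (plain Python reading sysfs, nothing UnRAID-specific, just my own illustration) that dumps the IOMMU groups after a reboot, so you can see at a glance whether the override actually split them:

```python
#!/usr/bin/env python3
"""List every IOMMU group and the PCI devices in it, to check whether
booting with pcie_acs_override=downstream actually split the groups."""
import os
from collections import defaultdict

GROUPS_DIR = "/sys/kernel/iommu_groups"

def iommu_groups():
    """Map group number -> list of PCI addresses (e.g. 0000:0b:00.0)."""
    groups = defaultdict(list)
    for group in sorted(os.listdir(GROUPS_DIR), key=int):
        devices_dir = os.path.join(GROUPS_DIR, group, "devices")
        for dev in sorted(os.listdir(devices_dir)):
            groups[int(group)].append(dev)
    return groups

if __name__ == "__main__":
    for group, devices in iommu_groups().items():
        print(f"IOMMU group {group}:")
        for dev in devices:
            print(f"  {dev}")
```

Anything that shares a group still has to be passed through together, so ideally each GPU and USB controller ends up alone in its own group.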

 


20 minutes ago, Anselm84 said:

I just saw your video, especially the struggle with the IOMMU and the ACS patch failing to split up the groups. Out of curiosity, have you also tried pcie_acs_override=downstream? Did it fail as well, and what did the kernel log have to say about it?

 

Stay tuned! 


13 hours ago, Kollective_MK said:

Odd questions, but do you know exactly how the cable is transferring the PCIe signal? Is it a pin-for-pin direct line, or a reduced number of conductors with some sort of shift register on the other end? Or is the PLX chip used on both ends, creating its own signal in its own protocol that is then transmitted?

Also, have you guys considered the Amfeltec PCIe expansion backplane for JUST storage? I work for a local hospital system, and we use that exact product for PCIe SSDs used as cache drives in our 23 PB datacenter.

Sorry if I keep rambling, but have you also considered some of the products by Trenton Systems? They have some really neat concepts, including a custom form factor SBC paired with a custom mainboard, Xeon Gold/Silver/etc., and 88 PCIe lanes. Their backplanes support up to 14 PCIe slots as well. These could make a neat VM machine, should you want something a bit smaller than the bona fide OBELISK you're in the process of building.

Looking into these! 


Please make the build fast!!! These parts are killing me with curiosity!!! Please Linus, have some mercy on my soul!!


@LinusTech Not sure if you are using the UnRAID array to host the Windows VMs... personally I find the array slow for virtual images, so I use another feature hidden in the UnRAID OS instead.
Using drives that are not part of the array and passing them through as standalone devices works. However, the other option within UnRAID is to use LVM2.

This lets you create images on a volume group (pool) of physical drives, which can be moved around and snapshotted live between drives in the pool without shutting down your Windows VMs. For example, if you have a pool of SSDs with six logical volumes and one SSD shows signs of failing, you can live-migrate the Windows logical volume from that SSD to another. You can also take a live snapshot of a complete running Windows image and back it up onto the array.

TL;DR - LVM2 is powerful and is included within UnRAID!
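To make the idea a bit more concrete, here is a rough sketch of the two operations driven from Python via subprocess (the volume group, LV, and device names are made up for illustration; adapt them to your own pool):

```python
#!/usr/bin/env python3
"""Rough sketch of LVM2 live migration and snapshotting, as described above.
Device, VG and LV names below are illustrative only."""
import subprocess

def live_migrate(failing_pv: str, healthy_pv: str) -> None:
    """Move all allocated extents off a failing SSD onto another physical
    volume in the same volume group, while the VM image stays online."""
    subprocess.run(["pvmove", failing_pv, healthy_pv], check=True)

def live_snapshot(vg: str, lv: str, size: str = "20G") -> None:
    """Take a copy-on-write snapshot of a running VM's logical volume,
    which can then be copied off to the array as a backup."""
    subprocess.run(
        ["lvcreate", "--snapshot", "--size", size,
         "--name", f"{lv}_snap", f"/dev/{vg}/{lv}"],
        check=True,
    )

if __name__ == "__main__":
    live_migrate("/dev/sdf1", "/dev/sdg1")    # hypothetical SSDs in the VG
    live_snapshot("vmpool", "editor1-win10")  # hypothetical VG/LV names
```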


On 16-8-2018 at 9:16 PM, nicklmg said:

Check out One Stop Systems' server gear: http://geni.us/ocuAnN
 

 

 
Linus, what was the command in the config file?

I would also like to split groups.

 

Greets Davy


Linus, have you seen AnywhereUSB by Digi?

It is only USB 2.0, but it could be a solution for keyboard, mouse and the occasional coffee mug heater that uses USB.

Might eliminate some really long USB extenders with this one.

 

You can also go full Linus and throw in some Thunderbolt docks, but using optical Thunderbolt cabling (I know you dig enterprise-y stuff), and maybe even eGPUs to add more workstations?



On 8/28/2018 at 9:30 AM, Death_Masta187 said:

I have not used UnRAID before, but I know VMware with NVIDIA GRID GPUs works amazingly well as a DaaS solution. VMware with vCenter/vMotion allows for easy expansion as well.

Same with UnRAID, but my thought was: why not use Hyper-V with RemoteFX and Discrete Device Assignment (DDA)? The only gotcha I know of with RemoteFX is that you can't mix different types of video cards; they all have to be the same. I have my doubts that he's going to run really long video, mouse, and keyboard cables to each desk, unless he plans to move all his editors into the conference room permanently and have them work there so the cable runs to the central tower stay short. But that would be silly (then again, this is Linus, and doing everything from silly to batshit crazy is what he does to keep his videos entertaining). The better approach would be to connect them all via RDP and redirect the USB devices on their local systems into their VMs.


Hi @LinusTech

 

I finally saw your troubleshooting video and realized that you're building out the expansion board to facilitate USB hotplugging, but there's a better piece of hardware for that.  The Highpoint RocketU 1144D is a PCIe USB 3.1 card with 4 ports, 4 controllers, and an ACS-enabled PLX chip built into a single package. It also draws power from the motherboard.

 

I built a budget multi-headed system around that card back in 2012, and just put USB hubs on each port. Link if you're interested: https://imgur.com/a/kWyH4

 

I thought I should drop you a line, as it might come in handy for this or future builds.

 

Best of luck with your build!

 

-Andrew


On 29-8-2018 at 4:21 AM, RobbinM said:

You can also go full Linus and throw in some Thunderbolt docks, but using optical Thunderbolt cabling (I know you dig enterprise-y stuff), and maybe even eGPUs to add more workstations?

 

I guess Deadmau5 beat me to it :P

Must have been fun being at his house.

No signature found



From what I understand, Linus is going to hook 8 editors up to this server, and each of them will have their own graphics card and USB ports. My question is: how does Linus plan to connect the editors' workstations to their VMs, from the server room to the edit room? What's the best method, especially if the edit room is far from the server case?

 

He could use HDMI cables with active extension, but that is so lame (and not Linus-like). I was thinking HDMI-over-Cat6 extenders might be the way to go, but I am very curious to know what method Linus used.


It might be worth looking into RTX cards with a USB-C port (VirtualLink). It is designed for VR, so one USB-C cable can carry video and data: basically USB-C with DisplayPort and USB 3.0.
VirtualLink cards should have a separate USB controller in them that you could pass through to guests.
If all the ports are properly in their own separate IOMMU groups, you could probably avoid the ACS patch altogether (assuming you have 8 PCIe slots). Another plus: if you use a USB-C monitor with a built-in hub and a 3.5 mm jack, all you would need for each workstation is one USB-C cable, and you could plug everything into the monitor.
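Something like this little sketch (just reading sysfs; treat it as an illustration, not something verified against real hardware) could show whether the card's USB controller appears as its own PCI function and which IOMMU group it lands in:

```python
#!/usr/bin/env python3
"""Find NVIDIA USB controllers (the VirtualLink USB-C function on RTX cards)
and report which IOMMU group each one is in."""
import glob
import os

NVIDIA_VENDOR = "0x10de"
USB_CONTROLLER_CLASS = "0x0c03"  # PCI class 0x0c (serial bus), subclass 0x03 (USB)

def read(path: str) -> str:
    with open(path) as f:
        return f.read().strip()

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    if read(os.path.join(dev, "vendor")) != NVIDIA_VENDOR:
        continue
    if not read(os.path.join(dev, "class")).startswith(USB_CONTROLLER_CLASS):
        continue
    group_link = os.path.join(dev, "iommu_group")
    group = os.path.basename(os.readlink(group_link)) if os.path.islink(group_link) else "none"
    print(f"{os.path.basename(dev)}  NVIDIA USB controller  IOMMU group {group}")
```

If that controller sits alone in its group, it should be a clean passthrough candidate alongside the GPU itself.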

I don't have an RTX card to test this, but it could be worth a try.
PS: If you try it and it actually works, don't forget to give me a shoutout!
 

