IT WORKS!!! - Six 8K Workstations, 1 CPU Finale

jakkuh_t

We finally finish and test the 6 8K editors, 1 CPU build....

 

 


PC: 13900K, 32GB Trident Z5, AORUS 7900 XTX, 2TB SN850X, 1TB MP600, Win 11

NAS: Xeon W-2195, 64GB ECC, 180TB Storage, 1660 Ti, TrueNAS Scale


I did not think they would actually finish this successfully, or that, if they did, it would have anywhere close to comparable performance to separate machines. Damn.

 

However, my ever-present question with crazy builds like this, and this one in particular: what are they going to do with it? Since they're not actually going to use it, are they just going to disassemble it? There's no way they can recoup the cost of this build from the videos about it alone.


This type of build is a bit different, to say the least.

Maybe not the most practical thing in the world, unless one were to add resource sharing between the VMs, so that resources one editor isn't using can be used by another, and so forth. Though at some point the question becomes whether it wouldn't just be easier to use remote desktop to access a rig in a server room.

 

But regardless, it's interesting to see what one can do, even if it isn't particularly practical.

 

As a side note, the link to the forum thread in the video description over on YouTube is currently broken.


I would still like to know why RDS on Windows Server 2019 wasn't considered for this project. It can do some pretty sweet utilization limits nowadays. Of course, it requires either Teslas or Quadros to enable hardware acceleration over remote desktop, but it works extremely well on my system.

Main Gaming PC - i9 10850k @ 5GHz - EVGA XC Ultra 2080ti with Heatkiller 4 - Asrock Z490 Taichi - Corsair H115i - 32GB GSkill Ripjaws V 3600 CL16 OC'd to 3733 - HX850i - Samsung NVME 256GB SSD - Samsung 3.2TB PCIe 8x Enterprise NVMe - Toshiba 3TB 7200RPM HD - Lian Li Air

 

Proxmox Server - i7 8700k @ 4.5Ghz - 32GB EVGA 3000 CL15 OC'd to 3200 - Asus Strix Z370-E Gaming - Oracle F80 800GB Enterprise SSD, LSI SAS running 3 4TB and 2 6TB (Both Raid Z0), Samsung 840Pro 120GB - Phanteks Enthoo Pro

 

Super Server - i9 7980Xe @ 4.5GHz - 64GB 3200MHz Cl16 - Asrock X299 Professional - Nvidia Tesla K20 - Sandisk 512GB Enterprise SATA SSD, 128GB Seagate SATA SSD, 1.5TB WD Green (Over 9 years of power on time) - Phanteks Enthoo Pro 2

 

Laptop - 2019 Macbook Pro 16" - i7 - 16GB - 512GB - 5500M 8GB - Thermal Pads and Graphite Tape modded

 

Smart Phones - iPhone X: 64GB, AT&T, iOS 13.3 | iPhone 6: 16GB, AT&T, iOS 12 | iPhone 4: 16GB, AT&T Go Phone, iOS 7.1.1 Jailbroken | iPhone 3G: 8GB, AT&T Go Phone, iOS 4.2.1 Jailbroken

 


I'm surprised they didn't use 1GB hugepages, given that those should be supported and would reduce the overhead even further compared to 2MB pages. Transparent hugepages have issues when memory starts getting fragmented, but shouldn't really add any overhead for a VM setup unless you're using memory ballooning (which is incompatible with PCI passthrough and therefore irrelevant for this setup).
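For anyone curious what their own host has available, here's a rough Python sketch that reads the standard Linux sysfs hugepage pools (helper name is just for illustration; reserving 1GB pages usually also means passing something like default_hugepagesz=1G hugepagesz=1G hugepages=N on the kernel command line):

import os
import re

def hugepage_pools():
    """List the hugepage sizes the kernel exposes and how many pages are reserved/free."""
    base = "/sys/kernel/mm/hugepages"
    pools = {}
    for entry in sorted(os.listdir(base)):  # e.g. "hugepages-2048kB", "hugepages-1048576kB"
        size_kb = int(re.search(r"(\d+)kB", entry).group(1))
        with open(os.path.join(base, entry, "nr_hugepages")) as f:
            total = int(f.read())
        with open(os.path.join(base, entry, "free_hugepages")) as f:
            free = int(f.read())
        pools[size_kb] = (total, free)
    return pools

if __name__ == "__main__":
    for size_kb, (total, free) in sorted(hugepage_pools().items()):
        print(f"{size_kb // 1024} MB pages: {total} reserved, {free} free")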

 

I'm still a little confused about the use of Unraid, given that it doesn't appear to provide much, if any, utility over existing libvirt/KVM management tools, but I guess their support might be cheaper than Red Hat's? If they had gone for the Quadros, the Xen hypervisor probably would have ended up having fewer issues as well.

 

1 hour ago, Nystemy said:

Maybe not the most practical thing in the world, unless one were to add resource sharing between the VMs, so that resources one editor isn't using can be used by another, and so forth.

As it's a multi-socket system, NUMA becomes an issue there, plus sharing real CPU cores between VMs generally creates pretty unpleasant latency spikes in my experience.
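If anyone wants to see how the cores actually split across sockets before deciding on pinning, here's a quick Python sketch against the standard Linux sysfs NUMA topology (nothing Unraid-specific; helper name is just for illustration):

import glob
import os

def numa_cpu_map():
    """Map each NUMA node to the CPU range string the kernel reports for it."""
    nodes = {}
    for path in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        node_id = int(os.path.basename(path).replace("node", ""))
        with open(os.path.join(path, "cpulist")) as f:
            nodes[node_id] = f.read().strip()  # e.g. "0-9,20-29"
    return nodes

if __name__ == "__main__":
    for node, cpus in sorted(numa_cpu_map().items()):
        print(f"node{node}: CPUs {cpus}")
    # Keeping each VM's vCPU pinning and its memory allocation inside one of these
    # ranges avoids cross-socket traffic and the latency spikes mentioned above.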


1 hour ago, microsoftenator said:

As it's a multi-socket system, NUMA becomes an issue there, plus sharing real CPU cores between VMs generally creates pretty unpleasant latency spikes in my experience.

Well, preferably we would limit resource sharing to resources on the same socket.

Though whether there is current software support for that is a totally different question. On paper it should be trivial; in practice, it is likely a nightmare without end in sight....

 

One can also take the latency sensitivity of a given task into consideration when sharing resources, but I haven't really seen that in the wild yet. Not even on paper.


I had always wondered why this type of setup wasn't more common, especially in a library-type setting where much power isn't needed. Now I see why. They didn't even mention that if the one computer fails, it takes down everyone, not just one person, so there is a much bigger risk.


DisplayPort is supposed to be fantastic, but I guess Linus is right: DVI is best.


12 hours ago, microsoftenator said:

I'm surprised they didn't use 1GB hugepages, given that those should be supported and would reduce the overhead even further compared to 2MB pages. Transparent hugepages have issues when memory starts getting fragmented, but shouldn't really add any overhead for a VM setup unless you're using memory ballooning (which is incompatible with PCI passthrough and therefore irrelevant for this setup).

 

I'm still a little confused about the use of Unraid, given that it doesn't appear to provide much, if any, utility over existing libvirt/KVM management tools, but I guess their support might be cheaper than Red Hat's? If they had gone for the Quadros, the Xen hypervisor probably would have ended up having fewer issues as well.

 

As it's a multi-socket system, NUMA becomes an issue there, plus sharing real CPU cores between VMs generally creates pretty unpleasant latency spikes in my experience.

Didn't they have a partnership with Unraid? Or at one stage Linus was contemplating buying a stake in them. I could be wildly inaccurate, but I presumed this was the reason, as there would otherwise be better options.

Gaming Machine: CPU: AMD 7950x cooled by a Custom Watercooling Loop| CASE: Lian Li Dynamic Evo | MOBO: X670E Asus Crosshair Extreme RAM: 64GB DDR5 G.Skill 6000mhz ram | GPU: AMD 7900 XTX PSU: Corsair RM1000x with cablemod cables SSD's: 2TB Seagate 530, 4TB Seagate 530, 1TB WD SN850 | Monitors: 38" Acer X38P Predator| Mouse: Logitech G903 and Powerplay matt | KEYBOARD: Steelseries Apex mini pro | HEADSET: Logitech G935 Wireless Headset
   

| Pics of my rig |

 

Linux Machine: CPU: AMD 5950x cooled by a Custom Watercooling Loop| CASE: Phanteks Evolv X | MOBO: X570 Asus Crosshair VIII Extreme RAM: 64GB DDR4 Crucial Ballistix 3600mhz ram | GPU: AMD 6900XT PSU: Corsair AX1200 with custom white sleeved Cables  SSD's: 1Tb Seagate 530 & 2Tb Seagate 530 & 2Tb KC3000 | Monitors: 38" Acer X38P Predator | Mouse: Logitech G903 and Powerplay matt | KEYBOARD: Steelseries Apex Pro| HEADSET: Logitech G935 Wireless Headset

 

| Pics of my rig |

 

 

Basement Machine: CPU: AMD 5950x cooled by a Custom Watercooling Loop| CASE: Thermaltake Core Pro 3 | MOBO: X570 Gigabyte Xtreme RAM: 64GB DDR4 G.Skill 3600mhz ram | GPU: Rtx 3080 Ti PSU: Corsair RM1000x  SSD's: 1Tb Crucial P3 Plus & 2Tb SN850 & 2Tb KC3000 | Monitors: 32" 1440p monitor | Mouse: Logitech G903 and Powerplay matt | KEYBOARD: Das Ultimate| HEADSET: Logitech G935 Wireless Headset

 


18 hours ago, jakkuh_t said:

We finally finish and test the 6 8K editors, 1 CPU build....

 

 

What exactly is the SKU of the motherboard? The Tyan S7100 has only 4 PCIe x16 slots, and you guys put 6 GPUs in there...
I have an S7105 in my lab, love it!


4 hours ago, bindydad123 said:

DisplayPort is supposed to be fantastic, but I guess Linus is right: DVI is best.

DisplayPort is fine as long as you buy cables that meet spec.

 

Cheap cables cause flickering. 

"And I'll be damned if I let myself trip from a lesser man's ledge"


You need to enable SMB Direct and RDMA on both your server and this system.

 

Upside: the ACTUAL data transfers will move a lot faster.

 

Downside: Windows Task Manager won't be able to see the data transfer rates BECAUSE it's RDMA/SMB Direct.
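Since Task Manager goes blind once SMB Direct kicks in, here's a rough way to sanity-check that RDMA is actually in play on each end. It's just a Python wrapper around the stock PowerShell cmdlets (the run_ps helper is just for illustration, and the property names may vary slightly between Windows versions):

import subprocess

def run_ps(command: str) -> str:
    """Run a PowerShell command and return its text output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Are the NICs RDMA-capable and is RDMA enabled on them?
    print(run_ps("Get-NetAdapterRdma | Format-Table Name, Enabled"))
    # Does the SMB client see an RDMA-capable interface?
    print(run_ps("Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable"))
    # Are the active SMB connections actually using multichannel (and thus SMB Direct)?
    print(run_ps("Get-SmbMultichannelConnection"))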

 

You guys should also really look into picking up a relatively inexpensive 18-port 100 GbE Mellanox switch, or, if that's too rich for your blood, at least run an AOC direct-attach cable so that you can run 100 Gbps from the server to this machine directly.

 

Also, DON'T use Intel Optane PCIe SSDs for this.

 

You WILL wear through the write endurance of the drives, I can guarantee that; at which point, the drives will go into a read-only state and will only write at 2 MB/s.

 

I know because I just got my Intel 750 Series PCIe AIC SSD back from RMA after wearing through the write endurance limit on the drive/card.

 

Use Micron 9300 Max U.2 NVMe drives instead.

IB >>> ETH


How come you're still using Unraid? It's such an awful solution compared to thin clients and Citrix XenServer or VMware ESXi with GPU passthrough.


Tyan didn't hesitate to boast: [screenshot of Tyan's post]

Specs: Motherboard: Asus X470-PLUS TUF gaming (Yes I know it's poor but I wasn't informed) RAM: Corsair VENGEANCE® LPX DDR4 3200Mhz CL16-18-18-36 2x8GB

            CPU: Ryzen 9 5900X          Case: Antec P8     PSU: Corsair RM850x                        Cooler: Antec K240 with two Noctura Industrial PPC 3000 PWM

            Drives: Samsung 970 EVO plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 GPU: EVGA RTX 2080 ti Black edition


6 hours ago, williamcll said:

Tyan didn't hesitate to boast: [screenshot of Tyan's post]

That's not a standard S7100 like the ones we see on Tyan's website... I will contact Tyan!


On 7/29/2019 at 6:22 AM, devuli said:

What exactly is the SKU of the motherboard? The Tyan S7100 has only 4 PCIe x16 slots, and you guys put 6 GPUs in there...
I have an S7105 in my lab, love it!

https://www.tyan.com/Motherboards_S7100-EX_S7100AGM2NR-EX



