
Fixing My $10,000 Mistake - Whonnock 3 Deployment

jakkuh_t

 


PC: 13900K, 32GB Trident Z5, AORUS 7900 XTX, 2TB SN850X, 1TB MP600, Win 11

NAS: Xeon W-2195, 64GB ECC, 180TB Storage, 1660 Ti, TrueNAS Scale


When you're not sure if the sparks were real or not 😛

Current Network Layout:

Current Build Log/PC:

Prior Build Log/PC:


The plucky music you used in the background of this video is incredibly distracting.

Please don't use it; I've heard it in a couple of videos now and I just can't watch them.


The 4 network ports on the main board seem to use an OCP add-in card; one can likely swap it for a 2x 100Gb/s one as well and get even more networking performance if one desires.

 

And the soldered-in wires for the riser card do seem a bit sketchy indeed. But I guess that is the price one pays when going to PCIe v4: signal integrity starts to matter a lot more, and connectors tend to be the main issue there. The PCIe connector itself plus the CPU socket is already eating into the signal budget; who knows, the extra connectors on either end of the cable may simply have been too much for it to handle, which is likely why we have this "beautiful" hot-snot bodge job to admire. (Or did they make it to also support PCIe v5? Though we will see if AMD sticks with the same socket for that; if they do, then it would be somewhat logical from a server standpoint to "future proof" the motherboard, since why buy a whole new system when one can just replace the CPU and the relevant add-in cards.)


Any details on the OS it's running? And for the RAID, is it RAID 5 across all 32 drives or raidz1?


Yes, I guess it's a Dell/EMC OEM system, so that's a handle to carry / pull the server back into the rack from behind. (As a DC tech, I like this 😄)


Do a video on the setup; this seems more of an unboxing and plug-in video. Not very interesting.


So even Wendell could not problem-solve the last Whonnock server...

MSI X399 SLI Plus | AMD Threadripper 2990WX all-core 3GHz lock | Thermaltake Floe Riing 360 | EVGA 2080, Zotac 2080 | G.Skill Ripjaws 128GB 3000MHz | Corsair RM1200i | 150TB | Asus TUF Gaming mid tower | 10Gb NIC


So how were the problems that the new Whonnock suffered resolved on this system? The timing out, the interrupts and polling?


Even if it's infrequent, swapping a dead drive is going to be a PITA. First you have to figure out which Honey Badger it's in, then you have to go dismantling and reassembling it. Think of the downtime. 😬

 

6 hours ago, liquidpower said:

Any details on the OS it's running? And for the RAID, is it RAID 5 across all 32 drives or raidz1?

Looks as though they just slapped Windows Server on it and likely used Windows Storage Spaces to join the drives together. Based on how they were talking about drive integrity, it does sound as though they did a 32-drive RAID5.

 

I would have opted for ZFS on Debian or FreeBSD and probably done raidz2, but given their frequent backups and the fact that the server is for active projects (IIRC), it's not a big concern. Parity performance in Windows Storage Spaces does leave a lot to be desired, though. They'd probably see a real performance improvement with Linux, but again, IIRC the software they use on Whonnock does not have a Linux version, so... 😕
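
(For anyone curious about the trade-off, here's a rough back-of-the-envelope sketch in Python. The 32x 8TB drive layout is my assumption from the video, and it only counts parity cost, ignoring ZFS metadata, slop space and TB-vs-TiB differences.)

# Rough usable-capacity comparison: one wide RAID5 vs. a few raidz2 layouts.
# Assumptions: 32 drives of 8 TB each; parity cost only, no filesystem overhead.

DRIVES = 32
DRIVE_TB = 8

def raid5(drives, size_tb):
    # Single parity across the whole set: lose one drive's worth of capacity.
    return (drives - 1) * size_tb

def raidz2_pool(vdev_width, drives=DRIVES, size_tb=DRIVE_TB):
    # Split the drives into equal raidz2 vdevs; each vdev loses two drives to parity.
    vdevs = drives // vdev_width
    return vdevs * (vdev_width - 2) * size_tb

print(f"RAID5, 32-wide:     {raid5(DRIVES, DRIVE_TB)} TB usable, 1 drive of redundancy")
print(f"raidz2, 4x 8-wide:  {raidz2_pool(8)} TB usable, 2 drives of redundancy per vdev")
print(f"raidz2, 2x 16-wide: {raidz2_pool(16)} TB usable, 2 drives of redundancy per vdev")

Splitting into more, narrower raidz2 vdevs costs more capacity, but it survives two failures per vdev, and within ZFS more vdevs generally means more random IOPS, since each raidz vdev performs roughly like a single drive for random I/O.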


I'm curious - any reason that NTFS was used for the Storage Space, instead of using ReFS?


So how does the system fix the NVMe problem in the new Whonnock?

From the description, it is suggested that the drives had a design flaw. Is that not the case for these new drives?

And I suppose memory bandwidth would still be a limiting factor?


I must have missed the earlier video on this saga - what exactly was the problem with the earlier box and why was it not fixable?


1 hour ago, Luscious said:

I must have missed the earlier video on this saga - what exactly was the problem with the earlier box and why was it not fixable?

This video explains it in pretty good detail:

Basically, the drives were fast enough that the CPU couldn't keep up and the drives would time out while waiting for the CPU, causing intermittently slow speeds and crashes in Premiere Pro.

 

 


15 hours ago, Nystemy said:

The 4 network ports on the main board seem to use an OCP add-in card; one can likely swap it for a 2x 100Gb/s one as well and get even more networking performance if one desires.

 

And the soldered-in wires for the riser card do seem a bit sketchy indeed. But I guess that is the price one pays when going to PCIe v4: signal integrity starts to matter a lot more, and connectors tend to be the main issue there. The PCIe connector itself plus the CPU socket is already eating into the signal budget; who knows, the extra connectors on either end of the cable may simply have been too much for it to handle, which is likely why we have this "beautiful" hot-snot bodge job to admire. (Or did they make it to also support PCIe v5? Though we will see if AMD sticks with the same socket for that; if they do, then it would be somewhat logical from a server standpoint to "future proof" the motherboard, since why buy a whole new system when one can just replace the CPU and the relevant add-in cards.)

They could have at least covered the joints with something, though. A piece of metal (or even plastic at a push) on the front and back, screwed together and putting even pressure on both sides, would really help stop the joints from wiggling around when inserting cards.

 

I guess they assumed most people would only ever insert a card once for the entirety of the server's life, but Linus isn't most people.

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600MHz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144Hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


1 hour ago, Tumleren said:

This video explains it in pretty good detail:

Basically, the drives were fast enough that the CPU couldn't keep up and the drives would time out while waiting for the CPU, causing intermittently slow speeds and crashes in Premiere Pro.

 

 

Cool. That's a pretty big jump going from 32 cores to 128, but I can see how 3-4 users hitting a single socket would slow things down. The extra PCIe lanes are certainly handy for that 2x100Gbit networking.

 

I wonder if they will throw a 5th storage card in there and max the thing out. 5x8x8TB would net you 320TB raw, roughly 312TB in RAID5. Configure a second box as a mirror and the read speeds would go nuts.


1 hour ago, Master Disaster said:

They could have at least covered the joints with something, though. A piece of metal (or even plastic at a push) on the front and back, screwed together and putting even pressure on both sides, would really help stop the joints from wiggling around when inserting cards.

 

I guess they assumed most people would only ever insert a card once for the entirety of the server's life, but Linus isn't most people.

I would suspect that they designed the boards and then later realized that it wouldn't be good enough for their intentions.

I for one wouldn't be surprised if this server also comes with a version with connectors, but maybe with only PCIe v3 support. (Potentially only v4 if the wires are for the v5 spec.)

Adding in a plate would require redesigning and re-manufacturing the board, as screw holes would be needed. They might just not see a high enough volume for this SKU for it to be worth it compared to the added cost. Also, a plastic plate would be preferred from a signal integrity standpoint, since it would be nearly invisible as far as the signals are concerned. (Or the server Linus has on his hands is an early prototype, but I suspect that is not the case, since the system doesn't seem all that "new".)

 

But at least they added glue, so it should be a lot more reliable than just soldered wires. It might not look pretty, but that is low-volume production in a nutshell. (Not that bare soldered wires are super unreliable, though they do tend to break at the joint to the board. Or just rip off the pads...)


If you can run Debian on one of those suckers, I'm getting like 4 and slapping MooseFS (MFS) on it - set the goal (copy count) to 3 so that there are 3 copies of every file across 4 servers (in case one dies).

 

Then you can basically run those suckers in parallel.

 

It works kinda like BitTorrent, where the chunk servers are the seeders. If one server is busy you get the data from another. Add more chunk servers until you saturate the entire network, and then upgrade the network.
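
(Just to illustrate the read-path idea - this is a toy Python sketch, not MooseFS's actual code or API; the server names and load numbers are made up.)

# Toy illustration of replica-aware reads: every chunk exists on 3 of the 4
# chunk servers (goal = 3), and a client reads each chunk from whichever
# replica currently reports the lowest load.

import random

CHUNK_SERVERS = ["cs1", "cs2", "cs3", "cs4"]
GOAL = 3  # copies of every chunk

def place_chunk(chunk_id):
    # Pick 3 of the 4 servers to hold this chunk (seeded for repeatability).
    random.seed(chunk_id)
    return random.sample(CHUNK_SERVERS, GOAL)

def pick_replica(replicas, load):
    # Read from the least-busy server that holds a copy.
    return min(replicas, key=lambda server: load[server])

# Pretend current load (e.g. outstanding requests) reported by each server.
current_load = {"cs1": 12, "cs2": 3, "cs3": 7, "cs4": 1}

for chunk_id in range(4):
    replicas = place_chunk(chunk_id)
    chosen = pick_replica(replicas, current_load)
    print(f"chunk {chunk_id}: copies on {replicas}, reading from {chosen}")

With goal = 3 across 4 boxes, losing any single server still leaves two copies of every chunk, and a busy server simply drops out of the least-loaded pick.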


20 hours ago, bit said:

title tells me nothing! love it!

 

Titles like this are proven to increase clicks; it's the way YouTube (a perfectly balanced game with no exploits) is played, and Linus has a lot of people's salaries to pay. There's somebody complaining like this in just about every video thread now, but you're just going to have to get over it.

Corps aren't your friends. "Bottleneck calculators" are BS. Only suckers buy based on brand. It's your PC, do what makes you happy.  If your build meets your needs, you don't need anyone else to "rate" it for you. And talking about being part of a "master race" is cringe. Watch this space for further truths people need to hear.

 

Ryzen 7 5800X3D | ASRock X570 PG Velocita | PowerColor Red Devil RX 6900 XT | 4x8GB Crucial Ballistix 3600mt/s CL16


3 hours ago, plofkat said:

It works kinda like BitTorrent, where the chunk servers are the seeders. If one server is busy you get the data from another. Add more chunk servers until you saturate the entire network, and then upgrade the network.

Good thing you're not in charge of any such expensive setup, then; in a sane setup, the syncing traffic between the servers would be on a separate network, so as not to saturate the one used by people!

Hand, n. A singular instrument worn at the end of the human arm and commonly thrust into somebody’s pocket.


I was all 'yaaaaaayyyy' until Linus formatted it NTFS, then I was all 'C'mon man!'

 


Am I the only one a bit bothered by the format of these videos?
From the start it seems very interesting, with a very high-end server and a storage setup that's uncommon for most of us, but it turns out to be mostly an unboxing video with some comments from the users. Not many details are given, which leaves me waiting for more... I know this is not Techquickie, Level1Techs, etc., but still.

 

Makes sense for the main channel, but is there any way such deployments could also be covered more in depth on another LMG channel?

