
This is the FASTEST Server I've Ever Touched... HOLY $H!T

jakkuh_t

Liqid said they had something crazy to send us. I wasn't expecting this.

 

 

Thanks to Pulseway for sponsoring this video! Remotely monitor and manage your PC with Pulseway, for free today at https://lmg.gg/cH4TX


PC: 13900K, 32GB Trident Z5, AORUS 7900 XTX, 2TB SN850X, 1TB MP600, Win 11

NAS: Xeon W-2195, 64GB ECC, 180TB Storage, 1660 Ti, TrueNAS Scale


Can't wait until we discuss "100Gb to anything in the Vancouver exchange, 50Gb to everywhere else."

Chicago Bears fan, Bear Down

 


I used to work at Intel on SSDs. Linus mentioned that the Intel drives were piles of garbage and they wouldn't RMA. What went wrong? What did they do?


1 hour ago, csm10495 said:

I used to work at Intel on SSDs. Linus mentioned that the Intel drives were piles of garbage and they wouldn't RMA. What went wrong? What did they do?

It's LTT we're talking about. They probably fucked something up and ruined the drives themselves.


11 minutes ago, Results45 said:

If you guys do another video on the upgraded Liqid server with extra storage, can you try running Crysis on all 128 cores/256 threads using SwiftShader?

*Cyberpunk

please quote me or tag me @wall03 so i can see your response

motherboard buying guide      psu buying guide      pc building guide     privacy guide

ltt meme thread

folding at home stats

 

pc:

 

RAM: 16GB DDR4-3200 CL-16

CPU: AMD Ryzen 5 3600 @ 3.6GHz

SSD: 256GB SP

GPU: Radeon RX 570 8GB OC

OS: Windows 10

Status: Main PC

Cinebench R23 score: 9097 (multi) 1236 (single)

 

don't some things look better when they are lowercase?

-wall03

 

hello dark mode users

goodbye light mode users


If Intel won't RMA those drives for you I wouldn't mind buying something like 4 of them from you guys. 😜

 

My server needs an upgraded SLOG/ZIL device and I could use the others for an iSCSI project. :3


Wait, what was the problem your Intel drives were having?

They were all used, weren't they?

I edit my posts a lot, Twitter is @LordStreetguru just don't ask PC questions there mostly...
 

Spoiler

 

What is your budget/country for your new PC?

 

what monitor resolution/refresh rate?

 

What games or other software do you need to run?

 

 


36 minutes ago, Streetguru said:

Wait, what was the problem your Intel drives were having?

They were all used, weren't they?

The issue was drive interrupts. Stand-alone, the drives are (or should be) fine, but when put into a RAID array in the setup they had, the drives would randomly reset and drop out of the 24-drive pool, causing massive random performance drops.

 

Customized kernels and firmware specifically designed to manage arrays with speeds approaching the bandwidth of 8-channel memory might have made it work, but as it was, with off-the-shelf software, neither Windows nor Linux could manage the speeds of the pool without hitting the drive reset bug.
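For anyone who wants to check whether their own pool is hitting the same kind of reset behaviour, here's a minimal sketch (mine, not LTT's or Liqid's) that tallies NVMe controller-reset messages per device from a saved Linux kernel log. It assumes you've dumped the log with something like "dmesg > kernel.log" and that reset events contain the phrase "resetting controller"; adjust the pattern for your kernel.

# count_nvme_resets.py - tally NVMe controller resets per device from a kernel log
import re
import sys
from collections import Counter

RESET_RE = re.compile(r"(nvme\d+): .*resetting controller", re.IGNORECASE)

def count_resets(log_path):
    counts = Counter()
    with open(log_path, "r", errors="replace") as log:
        for line in log:
            match = RESET_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    # usage: python count_nvme_resets.py kernel.log
    for device, resets in count_resets(sys.argv[1]).most_common():
        print(f"{device}: {resets} reset event(s)")

A drive that keeps showing up in that output while the rest stay quiet is exactly the kind of thing that would explain random drop-outs from a pool like that.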

 

So they had to scrap the project, essentially.

 

They also made a full video about it.


 

 


And here I was, sitting and waiting for a 100GB/s networking solution, not some pesky old internal transfer rate....
Also, 12 volt "only" has been a thing in the server world for the last decade or three; it really isn't new. (Some servers even use higher voltages for lower conductive losses.)

 

Though it is still impressive performance in the server, and it isn't all that huge, to say the least.
But I do wonder: with that many PCIe add-in cards for storage, is there any room left for comparable networking? It would be a bit of a waste of performance otherwise, unless one does a lot of compression/data mining....


Sabrent has those 16TB M.2 sticks in the works. 40 of those would give you 640TB of storage - for a 2U chassis that's some high density, although still not quite there with a 1U "ruler" storage box or a 4U 90-bay 3.5" drive server that isn't just a dumb JBOD enclosure.

 

What IS impressive is that at those transfer speeds you could dump/fill the entire server in just a few short hours, as opposed to leaving it running overnight or longer. Even at 50GB/s that's 20 seconds per terabyte, or a little over 3.5 hours to move 640TB.
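To put numbers on that (just a back-of-envelope sketch, not a benchmark), here's the fill-time arithmetic in Python:

# fill_time.py - time to fill a given capacity at a given sequential rate
def fill_time_hours(capacity_tb, rate_gb_per_s):
    seconds = capacity_tb * 1000 / rate_gb_per_s  # treating 1 TB as 1000 GB
    return seconds / 3600

if __name__ == "__main__":
    for rate in (50, 100):  # GB/s
        per_tb = 1000 / rate
        print(f"640 TB at {rate} GB/s: {fill_time_hours(640, rate):.2f} h ({per_tb:.0f} s per TB)")

That prints roughly 3.6 hours at 50GB/s and 1.8 hours at 100GB/s, which lines up with the 20 seconds per terabyte above.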

 

What might be worth exploring is putting those single slot PCIe cards in a 4U box that can fit 20 of them.


4 hours ago, Results45 said:

can you try running Crysis

Crysis will run terribly for the rest of time. Crytek have come out and admitted they goofed. The game came out at a critical juncture in PC development, when the choice was either to go wildly multi-core or to go faster on fewer cores, and the expectation up to that point was that clock speeds would clear 8GHz+ with no issue.
 

Quote

But Crysis also hails from an era where the future of CPU technology was heading in a very different direction than Crytek may have originally envisaged. It is multi-core aware to a certain extent - gaming workloads can be seen across four threads - but the expectation for PC computing, especially from Intel with its Netburst architecture, was that the real increase in speed in computing would happen from massive increases in clock speed, with the expectations of anything up to 8GHz Pentiums in the future. It never happened, of course, and that's the key reason why it is impossible to run Crysis at 60fps, even on a Core i7 8700K overclocked to 5GHz. At its nadir in the Ascension stage (sensibly removed from the console versions), the fastest gaming CPU money can buy struggles to move beyond the mid-30s.


-- Eurogamer, 2018

 

Spoiler

CPU: Intel i7 6850K

GPU: nVidia GTX 1080Ti (ZoTaC AMP! Extreme)

Motherboard: Gigabyte X99-UltraGaming

RAM: 16GB (2x 8GB) 3000MHz EVGA SuperSC DDR4

Case: RaidMax Delta I

PSU: ThermalTake DPS-G 750W 80+ Gold

Monitor: Samsung 32" UJ590 UHD

Keyboard: Corsair K70

Mouse: Corsair Scimitar

Audio: Logitech Z200 (desktop); Roland RH-300 (headphones)

 


On 12/29/2020 at 9:16 AM, LAwLz said:

It's LTT we're talking about. They probably fucked something up and ruined the drives themselves.

Nah, they were the Intel 750 NVMe SSDs, which I had actually told Linus he shouldn't use because they weren't any good... nothing like finding out for yourself, though. They just weren't designed for server usage like that and perform really badly under long-term performance stress; they are firmly a workstation SSD design for burst and medium-term usage (very good at that too, I might add, for the time).

Samsung enterprise SATA SSDs can outperform those Intel 750 NVMe SSDs, though only in the more extreme heavy-write situations, which is exactly what LTT has when video editing and copying data to the server etc.

 

23 hours ago, Windows7ge said:

If Intel won't RMA those drives for you I wouldn't mind buying something like 4 of them from you guys. 😜

 

My server needs an upgraded SLOG/ZIL device and I could use the others for an iSCSI project. :3

Don't, get something else.


3 minutes ago, leadeater said:

Don't, get something else.

...I could just replace the entire primary storage array with 1.92TB Intel D3-S4510 drives... then I could omit a SLOG/ZIL altogether.


On 12/29/2020 at 8:13 AM, csm10495 said:

I used to work at Intel on SSDs. Linus mentioned that the Intel drives were piles of garbage and they wouldn't RMA. What went wrong? What did they do?

Put them in a server, on a public YouTube video.

 

Why that matters:

[Screenshot of Intel's client SSD limited warranty terms]

https://www.intel.com/content/dam/support/us/en/documents/memory-and-storage/intel-client-ssd-module-warranty.pdf

 


1 hour ago, leadeater said:

Nah, they were the Intel 750 NVMe SSDs, which I had actually told Linus he shouldn't use because they weren't any good... nothing like finding out for yourself, though. They just weren't designed for server usage like that and perform really badly under long-term performance stress; they are firmly a workstation SSD design for burst and medium-term usage (very good at that too, I might add, for the time).

Ah, I see. So they were using the wrong hardware for the wrong workload and then blamed Intel.


27 minutes ago, LAwLz said:

Ah, I see. So they were using the wrong hardware for the wrong workload and then blamed Intel.

Well, I see it as a little 50/50. It's fair criticism of an NVMe SSD that lists such high throughput and IOPS on the spec sheet but can really only deliver them under lighter loads, and that also has an endurance rating of a meager 70GB of writes per day across every capacity offered. So it's also easy to void your warranty by exceeding the SMART wear counter, which is likewise listed in the warranty doc linked above.
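To illustrate how tight a 70GB/day rating is, here's a rough sketch; the warranty period and the write workload below are assumptions for the example, not LMG's actual figures:

# endurance_check.py - how long a per-day write rating lasts under a heavier workload
def days_until_rating_exceeded(rated_gb_per_day, warranty_years, actual_gb_per_day):
    total_rated_writes_gb = rated_gb_per_day * 365 * warranty_years
    return total_rated_writes_gb / actual_gb_per_day

if __name__ == "__main__":
    # assume a 5-year warranty window and ~1TB of ingest/scratch writes per day per drive
    days = days_until_rating_exceeded(rated_gb_per_day=70, warranty_years=5, actual_gb_per_day=1000)
    print(f"Rated write budget used up after ~{days:.0f} days")

Under those assumptions you'd blow through the rated writes in roughly four months, long before the warranty period is over.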

 

And the reasoning behind using them wasn't that bad either, when you take into consideration what LMG/LTT is. Sure, they aren't server-rated SSDs, but not every consumer SSD has such terrible sustained write performance (the Samsung 840/850 Pro, for example), and because of the cost of server SSDs you can buy three to five times as many good consumer SSDs, self-warranty against failures, and still come out ahead on cost. Server/enterprise SSDs in the total capacity LMG needs are just too costly (at least in a situation where you don't know if it's a solid solution), and while these "not optimal" builds aren't great, they are at least educational to the viewer and entertaining.
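As a quick sketch of that cost argument (made-up prices, purely illustrative):

# cost_compare.py - consumer drives plus self-stocked spares vs enterprise drives
import math

def cost_with_spares(price_per_drive, drives_needed, spare_ratio):
    # buy extra drives up front to cover failures yourself instead of relying on a vendor warranty
    total_drives = math.ceil(drives_needed * (1 + spare_ratio))
    return total_drives * price_per_drive

if __name__ == "__main__":
    consumer = cost_with_spares(price_per_drive=200, drives_needed=24, spare_ratio=0.25)
    enterprise = cost_with_spares(price_per_drive=800, drives_needed=24, spare_ratio=0.0)
    print(f"Consumer drives + 25% spares: ${consumer:,.0f}")
    print(f"Enterprise drives, no spares: ${enterprise:,.0f}")

With enterprise drives at several times the price per unit, even a generous spare pile leaves the consumer option well ahead on cost, which is the trade-off being described.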

 

There is a right way and a wrong way to do things on the cheap, and for the most part Linus has fallen on the right side, but learning often involves missteps and mistakes along the way, and those can be more enlightening than successes.

 

I have no problem with any YouTube channel exploring these kinds of things and making mistakes or using not-optimal/unsuitable hardware, so long as that is either explained or, if only discovered later, revisited with the discovery presented. Even in a professional setting it's not possible to always deliver the best-fit solution; sometimes the unexpected happens, or requirements rapidly change, or they weren't fully understood or gathered correctly. I honestly don't expect better from a self-confessed non-IT professional.

 

Personally I still maintain that cheaper mixed-use SATA server SSDs are all that LMG/LTT needs, with external SAS JBOD expansion as required to grow capacity. But for an actual set-and-forget solution, just buy an enterprise storage system like a NetApp FAS. A FAS 500f would be a set-and-forget solution for 5-8 years with more performance than they would ever need.


44k in R23 is awful for that system; it's so low I'm thinking they set it up wrong or had SMT disabled or something.


Do the NVMe SSDs have power-loss protection of any sort? In the past, before using PCIe SSDs, Linus opted for enterprise SATA SSDs with power-loss protection capacitors. Do NVMe drives not suffer from power-loss issues with corrupted in-flight data writes?

 

Thanks, because I'm preparing to use NVMe SSDs in my VMware ESXi host, which I use for my home office.

 

*Edit: I just read some of the above posts and learned the Sabrent drives are not enterprise, so does that also mean no power-loss protection? Maybe I don't need that either, but I think I do, considering I'll use them as DAS (not as a NAS) with multiple guest VMs on each SSD.
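For what it's worth, the usual application-level mitigation on drives without power-loss-protection capacitors is to explicitly flush and fsync anything you can't afford to lose, so it actually reaches stable media instead of sitting in a volatile cache. A minimal sketch (illustration only; a hypervisor like ESXi generally issues its own cache flushes for VM disks):

# durable_write.py - force data to stable media before relying on it
import os

def durable_write(path, data):
    with open(path, "wb") as f:
        f.write(data)
        f.flush()             # push Python's userspace buffer to the OS
        os.fsync(f.fileno())  # ask the OS (and drive) to commit it to stable media

if __name__ == "__main__":
    durable_write("example.bin", b"important in-flight data")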


  • 2 weeks later...

Hi,

 

Technically speaking, is there a difference between the Liqid Honey Badger and the HighPoint SSD7540?

 

thanks

