Our fastest server EVER - Serviceable Whonnock

James

Buy a SABRENT Rocket 4 Plus 8TB NVMe M.2 SSD: https://lmg.gg/qgDC3
Buy an ICY DOCK EZCONVERT MB705M2P-B NVMe SSD to 2.5" U.2 SSD Converter: https://lmg.gg/fjTFz
Buy a StarTech U2M2E125 NVMe SSD to 2.5" U.2 SSD Converter: https://lmg.gg/76FvY
Buy a GIGABYTE R282-Z9G 2U Rackmount Server: https://lmg.gg/CxlsQ
Buy an AMD EPYC Milan 75F3 CPU: https://lmg.gg/DzxJF
Buy Micron 3200MHz CL22 DDR4 2x4GB ECC Memory: https://lmg.gg/k1myU
Buy an Ableconn E1s-DT157 NVMe M.2 SSD to NVMe EDSFF E1.S SSD Adapter: https://lmg.gg/PC1kk

Purchases made through some store links may provide some compensation to Linus Media Group.

 

We made a big mistake when setting up our main editing server, Whonnock - it wasn’t serviceable. Today we fix that with a brand new machine and brand new Sabrent NVMe drives!

 

 

 


4 minutes ago, James said:

EEC Memory

*ECC

Main System (Byarlant): Ryzen 7 5800X | Asus B550-Creator ProArt | EK 240mm Basic AIO | 16GB G.Skill DDR4 3200MT/s CAS-14 | XFX Speedster SWFT 210 RX 6600 | Samsung 990 PRO 2TB / Samsung 960 PRO 512GB / 4× Crucial MX500 2TB (RAID-0) | Corsair RM750X | Mellanox ConnectX-3 10G NIC | Inateck USB 3.0 Card | Hyte Y60 Case | Dell U3415W Monitor | Keychron K4 Brown (white backlight)

 

Laptop (Narrative): Lenovo Flex 5 81X20005US | Ryzen 5 4500U | 16GB RAM (soldered) | Vega 6 Graphics | SKHynix P31 1TB NVMe SSD | Intel AX200 Wifi (all-around awesome machine)

 

Proxmox Server (Veda): Ryzen 7 3800XT | AsRock Rack X470D4U | Corsair H80i v2 | 64GB Micron DDR4 ECC 3200MT/s | 4x 10TB WD Whites / 4x 14TB Seagate Exos / 2× Samsung PM963a 960GB SSD | Seasonic Prime Fanless 500W | Intel X540-T2 10G NIC | LSI 9207-8i HBA | Fractal Design Node 804 Case (side panels swapped to show off drives) | VMs: TrueNAS Scale; Ubuntu Server (PiHole/PiVPN/NGINX?); Windows 10 Pro; Ubuntu Server (Apache/MySQL)


Media Center/Video Capture (Jesta Cannon): Ryzen 5 1600X | ASRock B450M Pro4 R2.0 | Noctua NH-L12S | 16GB Crucial DDR4 3200MT/s CAS-22 | EVGA GTX750Ti SC | UMIS NVMe SSD 256GB / Seagate 1.5TB HDD | Corsair CX450M | Viewcast Osprey 260e Video Capture | Mellanox ConnectX-2 10G NIC | LG UH12NS30 BD-ROM | Silverstone Sugo SG-11 Case | Sony XR65A80K

 

Camera: Sony ɑ7II w/ Meike Grip | Sony SEL24240 | Samyang 35mm ƒ/2.8 | Sony SEL50F18F | Sony SEL2870 (kit lens) | PNY Elite Performance 512GB SDXC card

 

Network:

Spoiler
                           ┌─────────────── Office/Rack ────────────────────────────────────────────────────────────────────────────┐
Google Fiber Webpass ────── UniFi Security Gateway ─── UniFi Switch 8-60W ─┬─ UniFi Switch Flex XG ═╦═ Veda (Proxmox Virtual Switch)
(500Mbps↑/500Mbps↓)                             UniFi CloudKey Gen2 (PoE) ─┴─ Veda (IPMI)           ╠═ Veda-NAS (HW Passthrough NIC)
╔═══════════════════════════════════════════════════════════════════════════════════════════════════╩═ Narrative (Asus USB 2.5G NIC)
║ ┌────── Closet ──────┐   ┌─────────────── Bedroom ──────────────────────────────────────────────────────┐
╚═ UniFi Switch Flex XG ═╤═ UniFi Switch Flex XG ═╦═ Byarlant
   (PoE)                 │                        ╠═ Narrative (Cable Matters USB-PD 2.5G Ethernet Dongle)
                         │                        ╚═ Jesta Cannon*
                         │ ┌─────────────── Media Center ──────────────────────────────────┐
Notes:                   └─ UniFi Switch 8 ─────────┬─ UniFi Access Point nanoHD (PoE)
═══ is Multi-Gigabit                                ├─ Sony Playstation 4 
─── is Gigabit                                      ├─ Pioneer VSX-S520
* = cable passed to Bedroom from Media Center       ├─ Sony XR65A80K (Google TV)
** = cable passed from Media Center to Bedroom      └─ Work Laptop** (Startech USB-PD Dock)

 

Retired/Other:

Spoiler

Laptop (Rozen-Zulu): Sony VAIO VPCF13WFX | Core i7-740QM | 8GB Patriot DDR3 | GT 425M | Samsung 850EVO 250GB SSD | Blu-ray Drive | Intel 7260 Wifi (lived a good life, retired with honor)

Testbed/Old Desktop (Kshatriya): Xeon X5470 @ 4.0GHz | ZALMAN CNPS9500 | Gigabyte EP45-UD3L | 8GB Nanya DDR2 400MHz | XFX HD6870 DD | OCZ Vertex 3 Max-IOPS 120GB | Corsair CX430M | HooToo USB 3.0 PCIe Card | Osprey 230 Video Capture | NZXT H230 Case

TrueNAS Server (La Vie en Rose): Xeon E3-1241v3 | Supermicro X10SLL-F | Corsair H60 | 32GB Micron DDR3L ECC 1600MHz | 1x Kingston 16GB SSD / Crucial MX500 500GB


I am thinking of building an all-SSD home server with one (or two) of those ASUS 4x NVMe PCIe cards and consumer SSDs. One common complaint I hear is that all the SSDs could fail at the same time, since they have write maximums. That did not happen in this video; I am inferring that you ran the server for months until you replaced it with the new one, so they did not all fail at once. Is there any truth to it in practice?


5 hours ago, sageofredondo said:

I am thinking of building an all-SSD home server with one (or two) of those ASUS 4x NVMe PCIe cards and consumer SSDs. One common complaint I hear is that all the SSDs could fail at the same time, since they have write maximums. That did not happen in this video; I am inferring that you ran the server for months until you replaced it with the new one, so they did not all fail at once. Is there any truth to it in practice?

We can help you with this down in the server/NAS board

 

However, an all-NVMe NAS is completely pointless for a home application. (Especially if all your networking gear is just regular Gigabit Ethernet, where all your transfers will be limited to about 100 MB/sec.) By all means have an SSD for your Plex/Jellyfin/Subsonic library metadata, but hard drives still make more sense for bulk storage in that application.
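The Gigabit ceiling is easy to work out yourself; here's a quick back-of-the-envelope sketch in Python (frame sizes assume a standard 1500-byte MTU with no jumbo frames - round numbers, not a benchmark):

```python
# Rough ceiling for file transfers over Gigabit Ethernet.
LINK_BPS = 1_000_000_000  # 1 Gbps line rate

# Per 1500-byte frame: 1460 B of TCP payload vs. 1538 B on the wire
# (preamble 8 + Ethernet header 14 + IP 20 + TCP 20 + FCS 4 + interframe gap 12).
payload_bytes = 1460
wire_bytes = 1538

goodput_mbs = LINK_BPS / 8 * (payload_bytes / wire_bytes) / 1e6
print(f"~{goodput_mbs:.0f} MB/s usable")  # ≈ 119 MB/s before disk/protocol overhead
```

In practice SMB/NFS overhead and disk latency shave that down further, which is where the "about 100 MB/sec" rule of thumb comes from.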

I sold my soul for ProSupport.


2 hours ago, sageofredondo said:

I am thinking of building an all-SSD home server with one (or two) of those ASUS 4x NVMe PCIe cards and consumer SSDs. One common complaint I hear is that all the SSDs could fail at the same time, since they have write maximums. That did not happen in this video; I am inferring that you ran the server for months until you replaced it with the new one, so they did not all fail at once. Is there any truth to it in practice?

The failure mode of SSDs is a bit complex in practice.

 

Effectively speaking, flash is inherently forgetful. The data is just stored as a bit of charge, which slowly leaks over time until the flash chip eventually no longer knows what it originally held. Manufacturing tolerances add the fact that different chips forget at different rates. Beyond that, each write/erase cycle makes a cell leak faster. Eventually it forgets its contents too quickly to be practical as data storage for a given application; i.e., data retention is quite limited compared to magnetic storage. (Scratch-disk applications can usually keep using a worn-out SSD just fine for quite some time.)


Some people will pull out Microchip's data retention specs for their industrial flash chips and microcontrollers, stating that the "data retention of flash is 20 years at 85 °C after 20K writes." But that is like saying any car can drive at 700+ km/h just because a land speed racer did. (Industrial-grade flash isn't remotely comparable to enterprise-grade flash. Industrial flash is built for applications requiring long-term reliability, where it is allowed to cost more than a dollar per MB to achieve that reliability in harsh environments, while enterprise-grade flash is built for the lowest possible $/GB for mass storage in fairly well-maintained ambient environments.)

But in general, flash doesn't just give up the ghost after X hours, unlike the typical hard drive, which is often more predictable.
However, there are still single points of failure in these devices: a cracked MLCC capacitor shorting out a power rail will often turn the drive into a brick, and the same goes for a few cracked solder joints, an excessively bent PCB developing tears/cracks in its internal copper traces, or just good old ESD, tin whiskers, conductive gunk/corrosion, or a slew of other issues. (At least SSDs don't also have the mechanical wear of HDDs, which makes the latter's list of single points of failure far longer...)

When an SSD will die or stop working is fairly random, but it can accumulate corrupted data long before it dies if one doesn't use it often.
HDDs are a bit more predictable due to their mechanical nature, but even there a given drive can fail more or less whenever: some last a few weeks, others go 10x beyond the Mean Time Between Failures. Across thousands of drives, though, the average will be quite close to the MTBF.
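That averaging effect is easy to see in a toy simulation. This sketch assumes a constant failure rate (which real drives only roughly follow, and the MTBF figure is a made-up round number):

```python
import random

MTBF_HOURS = 1_200_000  # hypothetical spec-sheet MTBF
random.seed(0)          # reproducible toy run

# A constant failure rate gives exponentially distributed lifetimes:
# any single drive can die early or last far beyond the MTBF,
# but the mean over a large fleet converges on the spec figure.
lifetimes = [random.expovariate(1 / MTBF_HOURS) for _ in range(10_000)]
fleet_mean = sum(lifetimes) / len(lifetimes)

print(f"shortest: {min(lifetimes):,.0f} h | "
      f"longest: {max(lifetimes):,.0f} h | "
      f"fleet mean: {fleet_mean:,.0f} h")
```

Individual results are all over the place, but the fleet mean lands within a couple of percent of the spec - which is exactly why MTBF tells you little about when *your* drive dies.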

In the end, if one cares about data retention on flash media, it is quite wise to check the data at a somewhat regular interval and correct any errors that are found. (I would say yearly, or monthly if the drives are getting on the old side. This is also wise for HDDs; they can develop bit errors over time too, even if it is rarer. Most storage software/controllers offer this as a feature, though one can do as LMG did one time and forget to turn it on, and I wouldn't be surprised if some SSD controllers do it in the background.)
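For anyone whose filesystem doesn't scrub for them (ZFS/Btrfs users can just schedule a scrub), the periodic check can be as simple as keeping a checksum manifest and re-verifying it. A minimal Python sketch - the manifest path and layout here are my own invention, not any standard tool:

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("checksums.json")  # hypothetical manifest location

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scrub(root: Path) -> list[str]:
    """Compare every file under `root` against the stored manifest;
    return paths whose contents no longer match (possible bit rot),
    then record the current checksums for the next run."""
    known = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    flagged = []
    for p in sorted(root.rglob("*")):
        if p.is_file():
            digest = sha256(p)
            key = str(p)
            if key in known and known[key] != digest:
                flagged.append(key)
            known[key] = digest
    MANIFEST.write_text(json.dumps(known, indent=2))
    return flagged
```

Run it from cron monthly and alert on a non-empty result; restoring the flagged files from backup is left to the operator, since a plain checksum can detect corruption but not repair it.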


1 hour ago, Somerandomtechyboi said:

Why 1Rx16 RAM?

seems kinda dumb to populate all the RAM channels but only have 1Rx16 sticks

Because RAM capacity and maximum possible throughput don't really matter in this situation and configuration. Simply having all the RAM channels populated is quite satisfactory.


5 hours ago, Needfuldoer said:

We can help you with this down in the server/NAS board

 

However, an all-NVMe NAS is completely pointless for a home application. (Especially if all your networking gear is just regular Gigabit Ethernet, where all your transfers will be limited to about 100 MB/sec.) By all means have an SSD for your Plex/Jellyfin/Subsonic library metadata, but hard drives still make more sense for bulk storage in that application.

 

I am limited in space right now and I plan to move again. I also have sensory issues, so the only moving parts being Noctua fans is a big plus for me. Not having the noise, not dealing with a NAS card, and more resistance to damage during a move are all important to me.


13 minutes ago, sageofredondo said:

 

I am limited in space right now and I plan to move again. I also have sensory issues, so the only moving parts being Noctua fans is a big plus for me. Not having the noise, not dealing with a NAS card, and more resistance to damage during a move are all important to me.

I find 200 mm fans at slower overall speed but the same air volume much better than tiny fans.

Also, I have just the front case fans and the GPU LED to tell the PC is on!

It's that quiet.

MSI X399 SLI Plus | AMD Threadripper 2990WX (all-core 3 GHz lock) | Thermaltake Floe Riing 360 | EVGA 2080, Zotac 2080 | G.Skill Ripjaws 128GB 3000 MHz | Corsair RM1200i | 150TB | ASUS TUF Gaming mid tower | 10Gb NIC


57 minutes ago, dogwitch said:

I find 200 mm fans at slower overall speed but the same air volume much better than tiny fans.

Also, I have just the front case fans and the GPU LED to tell the PC is on!

It's that quiet.

Your avatar, a smallish Rainbow Dash, is adorable @dogwitch


4 hours ago, sageofredondo said:

Your avatar, a smallish Rainbow Dash, is adorable @dogwitch

Thank you.

I met the voice actress who voices her.

MSI X399 SLI Plus | AMD Threadripper 2990WX (all-core 3 GHz lock) | Thermaltake Floe Riing 360 | EVGA 2080, Zotac 2080 | G.Skill Ripjaws 128GB 3000 MHz | Corsair RM1200i | 150TB | ASUS TUF Gaming mid tower | 10Gb NIC


Sorry, a storage/server noob here, but don't these converters from NVMe M.2 to the server sled format create more points of failure? Are these converters also enterprise-level (low failure rate and such)?

 

Also, this solves the issue of drive replacement, but as for checking which drive has failed - is that something their server software can tell them, or do they have to keep checking manually? Probably the indicator LEDs on top of each drive bay?


4 hours ago, NemesisPrime_691 said:

Sorry, a storage/server noob here, but don't these converters from NVMe M.2 to the server sled format create more points of failure? Are these converters also enterprise-level (low failure rate and such)?

There isn't any logic on them; they're just copper traces and connectors/physical interfaces. There are no electronics to fail, so the added risk is very low. Of course this isn't really something you "should" do, but you can, and it isn't a massive risk in general.


  • 1 month later...

When will you guys deploy the WekaFS server, and are you using the same server?

