
The + is Dead - Long Live The 3! - Zen 3 information suggests big IPC and cache improvements

5x5

Zen 3, RTX 3000, and DDR5. Maybe 2020 isn't that bad after all. Gonna upgrade my rig in 2021 with these.


11 minutes ago, 5x5 said:

I mean most CPUs and GPUs these days come with boosting and auto-OCing which makes manual overclocking almost obsolete in most cases. A 2060 will boost so high that manual tuning would only net you about 100-200MHz at most

This. If I fiddle with my GTX 1080 Ti I might gain 50MHz. And even that requires raised power limits and voltages, and it still drops the clock back to basically stock level over time as it warms up. I'd either have to run the fans at stupid speeds even on the massive AORUS card, or go to water. And even then, the extra money or noise wouldn't warrant the pathetic uplift in clocks. It basically clocks itself almost as high as it can go on its own.

 

It's the same with Ryzen. There are a few tweaks that let it draw a bit more power and follow a more aggressive clock curve, but for general use it's not worth it, since clocking up all cores at once sacrifices gaming and single-thread performance. Which is a shame; if AMD gave users per-core control, we could set it up so that one overclocking rule lets just one or two cores run even higher under light load, and another rule applies when all cores are loaded. That way you'd get higher single-thread clocks and slightly higher multi-thread clocks. Currently you can only set an all-core overclock, which applies to every core and therefore affects single-core behaviour too, and you can never get the all-core clock higher than the single-core boost anyway.

 

And if that's too fiddly, why not just give us the option to raise the TDP limits or make the boost clock curve more aggressive? That way you could let it draw more power and hopefully keep clocks higher under heavier loads, for longer, assuming you cool it well. Currently you can't even do that; at least I didn't see any such control for the Ryzen 5 2600X.
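
Just to put rough numbers on that single-core vs. all-core trade-off, here's a minimal sketch; the clock figures are assumptions for a 2600X-class chip, not measurements:

```python
# Rough model of stock boost vs a fixed all-core overclock on a 6-core Ryzen.
# All clock figures are assumptions for illustration, not measured values.

stock_single_ghz = 4.25    # assumed peak 1-2 core boost clock
stock_allcore_ghz = 4.00   # assumed sustained all-core boost clock
manual_allcore_ghz = 4.15  # assumed achievable fixed all-core overclock

def pct_change(new, old):
    """Relative change of `new` vs `old`, in percent."""
    return (new / old - 1.0) * 100.0

# Lightly threaded / gaming work follows the best single-core clock.
print(f"Single-thread: {pct_change(manual_allcore_ghz, stock_single_ghz):+.1f}% vs stock")
# Fully threaded work follows the all-core clock.
print(f"All-core:      {pct_change(manual_allcore_ghz, stock_allcore_ghz):+.1f}% vs stock")
```

With those assumed clocks, the fixed all-core overclock gains a few percent in fully threaded work but loses a couple of percent in lightly threaded and gaming work compared to stock boost, which is exactly why per-core or load-dependent rules would be nice.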


49 minutes ago, Zando Bob said:

Have seen people jumping from older HEDT to Ryzen, it's a solid platform if you don't mind losing some HEDT features and manual OCing ability. I actually went in reverse lol, hopped from a 2700X back to X58, then to X99. 
 

Indeed, these still pack a punch. Shame about the clocks though, what voltage were you capping at? My 5820K ended up at 4.5/1.35v, but I usually ran it at 4.2/1.2v because it was much cooler, and since I only play at 75Hz I don't notice the performance difference. The 5960X I have now will do 4.7 at 1.3v (it caps at 4.8; even with 1.35v/2.1v vCore/Input Voltage it's shaky). I have it at 4.5/around 1.2v right now because, again, I don't notice the 200MHz difference at my refresh rate, and it runs cooler. Uncore (RING in the BIOS most of the time) on both capped at 3.7GHz, as is usual for boards without an OC socket. 

 

I couldn't get a stable 4.3GHz even at 1.3V, and the bugger ran so hot at that. Surprisingly, it runs okay at 4.0GHz with 1.1V, and doesn't exceed 60 degrees under AIDA64 load or folding.

I game at 3440x1440 100Hz, and it's been decent with that bump over 2560x1440. If I were at 1080p the CPU would likely bottleneck my GPU much harder.

 

Since my job has been kind enough to supply us with workstations, even at home for the past 3 years, I haven't needed the X99 features much anymore (nor the Titan V; it's just for folding and gaming now :/ ). So Ryzen would be great; otherwise I'd likely go Threadripper.

 

32 minutes ago, Bombastinator said:

Zen2 is also a horrible overclocker.  Looking at the difference between zen+ and zen2 I am not hopeful that there will be more OC room.

I never cared much for overclocking; the only reason I did it was that it was so easy at the time and saved me hundreds compared to buying a better X99 CPU.

Ryzen's PBO/Boost is more than good enough for me.

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


1 minute ago, Valentyn said:

I couldn't get a stable 4.3GHz even at 1.3V, and the bugger ran so hot at that. Surprisingly, it runs okay at 4.0GHz with 1.1V, and doesn't exceed 60 degrees under AIDA64 load or folding.

Ah yeah that's an oof on the bin then. 

1 minute ago, Valentyn said:

Ryzen's PBO/Boost is more than good enough for me.

Too good, in fact. The reason OCing sucks on Ryzen is that a manual overclock (usually) can't beat stock PBO in actual real-world use. You'll get better Cinebench numbers, but for most everything else, just slapping PBO on and making sure it doesn't over-pull voltage is best for performance. Sucks for people like me who want to OC, but since you don't, it'll be fuckin' excellent 👌

Intel HEDT and Server platform enthusiasts: Intel HEDT Xeon/i7 Megathread 

 

Main PC 

CPU: i9 7980XE @4.5GHz/1.22v/-2 AVX offset 

Cooler: EKWB Supremacy Block - custom loop w/360mm +280mm rads 

Motherboard: EVGA X299 Dark 

RAM:4x8GB HyperX Predator DDR4 @3200Mhz CL16 

GPU: Nvidia FE 2060 Super/Corsair HydroX 2070 FE block 

Storage:  1TB MP34 + 1TB 970 Evo + 500GB Atom30 + 250GB 960 Evo 

Optical Drives: LG WH14NS40 

PSU: EVGA 1600W T2 

Case & Fans: Corsair 750D Airflow - 3x Noctua iPPC NF-F12 + 4x Noctua iPPC NF-A14 PWM 

OS: Windows 11

 

Display: LG 27UK650-W (4K 60Hz IPS panel)

Mouse: EVGA X17

Keyboard: Corsair K55 RGB

 

Mobile/Work Devices: 2020 M1 MacBook Air (work computer) - iPhone 13 Pro Max - Apple Watch S3

 

Other Misc Devices: iPod Video (Gen 5.5E, 128GB SD card swap, running Rockbox), Nintendo Switch


1 hour ago, Zando Bob said:

Ah yeah that's an oof on the bin then. 

Too good, in fact. The reason OCing sucks on Ryzen is that a manual overclock (usually) can't beat stock PBO in actual real-world use. You'll get better Cinebench numbers, but for most everything else, just slapping PBO on and making sure it doesn't over-pull voltage is best for performance. Sucks for people like me who want to OC, but since you don't, it'll be fuckin' excellent 👌

Exactly, I just want a fast stable system!

Plus it means I don't have to bother with custom-loop watercooling and all the rest. I used to love building that stuff, and had a Phenom II 940 BE reaching 4GHz at times; but in the end I think I just grew out of that hobby.

I want a good system that's stable and gets the job done. I think needing it as a workstation did that to me :P

Plus I always had "accidents" with water cooling lol. Had a PSU go pop once, which took out the pump in my Phenom system and killed my 4870X2.
Had a pump die in my Corsair H110 as well, and even before that, with Noctua fans, it was loud and annoying.

So if I can get a decent Zen 3 chip with a good boost, I'll stick with the trusty Noctua NH-D15S. Cooler, quieter, and cheaper in the end.


 


37 minutes ago, Dash Lambda said:

Hang on.

 

One company is running away with IPC while the other keeps refreshing the same architecture and pushing clocks? That sound familiar to anyone?

It doesn’t to me but I missed a lot of PC history. Kinda curious on the theory.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


4 minutes ago, Bombastinator said:

It doesn’t to me but I missed a lot of PC history. Kinda curious on the theory.

Intel is in the position AMD was in back in 2011 with the FX CPUs: high clock speeds, high heat output, and lower IPC than the competition. Basically, the mighty have fallen.


3 hours ago, Bombastinator said:

Zen2 is also a horrible overclocker.  Looking at the difference between zen+ and zen2 I am not hopeful that there will be more OC room.

I don't care if Zen 3 is as bad an overclocker as my 5820K. My 5820K is running stock; the IPC gains and huge bump in clock speeds will give me a great performance boost in the tasks I use daily.


7 minutes ago, Bombastinator said:

It doesn’t to me but I missed a lot of PC history. Kinda curious on the theory.

Okay, I'm gonna go back a bit farther than might be necessary, but anyway: originally AMD was actually a second source for Intel chips; they made their business reverse-engineering and copying them for early personal computers. But Intel started to get finicky about licensing and patents, so AMD had to do something else. Intel owns the x86 instruction set, so AMD couldn't use it without an agreement with Intel. In 2000, AMD announced the AMD64 (x86-64) extension to the x86 instruction set, which added a 64-bit mode. Intel was effectively forced into a cross-licensing agreement so AMD could keep using x86 and Intel could use AMD64.

 

From that point, AMD had a few great years of competition. They were the first to push dual cores, they bought ATI and started doing GPUs, and they generally beat Intel on all fronts. But then, for reasons that still aren't clear, they just kind of stopped working on their CPUs. Around 2008 they introduced K10, which was pretty good, and tweaked that until 2011, when they launched Bulldozer, which was underwhelming from the start; over the next 6 years they just kept bumping the clocks on Bulldozer. They didn't even do it with process optimizations or anything, they just kept raising the TDP. They were actually the first to make a 5GHz CPU, and it was hot, power-hungry, expensive, and rather slow. They even stuck with PCIe 2.0 and never bothered with 3.0 until Zen. Meanwhile Intel kept improving features, efficiency, and IPC, and eventually gained such a lead that they stopped innovating too, keeping the desktop at 4 cores and rationing out token 5% performance boosts every now and then while quietly raising prices.

 

Now, after Ryzen, AMD's absolutely crushing it with IPC and efficiency, while Intel's stuck in the mud revising Skylake over and over again trying to compensate with clock speed. Hell, Intel's even still on PCIe 3.0, aren't they?

 

I just hope mid-2020s AMD doesn't turn into early/mid-2010s Intel.

"Do as I say, not as I do."

-Because you actually care if it makes sense.


3 minutes ago, Dash Lambda said:

Okay, I'm gonna go back a bit farther than might be necessary, but anyway: originally AMD was actually a second source for Intel chips; they made their business reverse-engineering and copying them for early personal computers. But Intel started to get finicky about licensing and patents, so AMD had to do something else. Intel owns the x86 instruction set, so AMD couldn't use it without an agreement with Intel. In 2000, AMD announced the AMD64 (x86-64) extension to the x86 instruction set, which added a 64-bit mode. Intel was effectively forced into a cross-licensing agreement so AMD could keep using x86 and Intel could use AMD64.

 

From that point, AMD had a few great years of competition. They were the first to push dual cores, they bought ATI and started doing GPUs, and they generally beat Intel on all fronts. But then, for reasons that still aren't clear, they just kind of stopped working on their CPUs. Around 2008 they introduced K10, which was pretty good, and tweaked that until 2011, when they launched Bulldozer, which was underwhelming from the start; over the next 6 years they just kept bumping the clocks on Bulldozer. They didn't even do it with process optimizations or anything, they just kept raising the TDP. They were actually the first to make a 5GHz CPU, and it was hot, power-hungry, expensive, and rather slow. They even stuck with PCIe 2.0 and never bothered with 3.0 until Zen. Meanwhile Intel kept improving features, efficiency, and IPC, and eventually gained such a lead that they stopped innovating too, keeping the desktop at 4 cores and rationing out token 5% performance boosts every now and then while quietly raising prices.

 

Now, after Ryzen, AMD's absolutely crushing it with IPC and efficiency, while Intel's stuck in the mud revising Skylake over and over again trying to compensate with clock speed. Hell, Intel's even still on PCIe 3.0, aren't they?

 

I just hope mid-2020s AMD doesn't turn into early/mid-2010s Intel.

Just going to add - the reasons were lengthy and expensive legal battles that bled AMD dry of money as they fought Intel in the courts over Intel's anti-competitive business practices. After court documents were released, we learned that in 2003-2004 Intel was paying OEMs like Dell (in Dell's case 1 million per month) to use Pentium 4 Prescott CPUs over AMD's superior Athlon 64 CPUs in their prebuilts. Thus Intel retained market share at great cost, and then AMD tried to litigate, which took years. In the meantime, the design decisions behind K10 evolved into the CMT module design of Bulldozer, while Intel used its mobile development team to create the Core 2 CPUs and then expand into the Core i models, moving the north bridge onto the CPU and generally adding improvements all round. AMD, on the other hand, bet on multi-core adoption and focused their efforts on 4/6/8-core consumer models, but the software industry was much too stagnant and stayed on dual cores until around 2013, when quad-cores and hex-cores finally started being utilized properly.


57 minutes ago, 5x5 said:

Intel is in the position AMD was in back in 2011 with the FX CPUs

Not even close. While Intel has worse performance per dollar than most of AMD's offerings, their unlocked K-series can still reach higher framerates than AMD chips.

 

FX in 2011 was just utterly awful. The 8150 was so slow that a dual core i3 was beating it in some games and an i5 was miles ahead.

 

Intel isn't in a favourable position right now but people seem to forget just how awful Bulldozer was.


5 minutes ago, Medicate said:

Not even close. While Intel has worse performance per dollar than most of AMD's offerings, their unlocked K-series can still reach higher framerates than AMD chips.

 

FX in 2011 was just utterly awful. The 8150 was so slow that a dual core i3 was beating it in some games and an i5 was miles ahead.

 

Intel isn't in a favourable position right now but people seem to forget just how awful Bulldozer was.

“Not even close” might be overstating it. They’re not the same, though. Intel doesn’t have as big a deficit, plus they’re diversified and incredibly cash-rich. They can still do to AMD what they did last time AMD had a better product: use money to lie and cheat. Anything is legal if you’re rich enough and reasonably careful in this country. Trump killed one of his tenants in one of his high rises fairly recently. Legally. It took a long time, but it happened. AMD is going to have to keep a serious weather eye on Intel.



4 minutes ago, Medicate said:

Not even close. While Intel has worse performance per dollar than most of AMD's offerings, their unlocked K-series can still reach higher framerates than AMD chips.

 

FX in 2011 was just utterly awful. The 8150 was so slow that a dual core i3 was beating it in some games and an i5 was miles ahead.

 

Intel isn't in a favourable position right now but people seem to forget just how awful Bulldozer was.

Well, I for one never said it was that bad right now. I just said it was starting to look familiar.

"Do as I say, not as I do."

-Because you actually care if it makes sense.


3 minutes ago, Medicate said:

Not even close. While Intel has worse performance per dollar than most of AMD's offerings, their unlocked K-series can still reach higher framerates than AMD chips.

 

FX in 2011 was just utterly awful. The 8150 was so slow that a dual core i3 was beating it in some games and an i5 was miles ahead.

 

Intel isn't in a favourable position right now but people seem to forget just how awful Bulldozer was.

More like people are horribly overestimating Bulldozer's performance deficit. It wasn't as fast, but it was only slightly slower in games, much like Intel is now. Like Intel's IPC deficit today, AMD was about 15-20% behind Intel in gaming.
Gaming Performance - The Bulldozer Review: AMD FX-8150 Tested
Bulldozer Arrives: AMD FX-8150 Review > Encoding Performance ...
Product Qualification: AMD FX-8150 CPU


A unified L3 across the 8-core CCX is really dope, basically making 8 cores the baseline, which is smart right now.

 

Excited to see what shows up. Hopefully this isn't a Broadwell-esque situation where clock speeds tank as a result (but I doubt it).

 

Curious to see what the AVX-512 throughput will be, whether half rate or full. Either way, great stuff.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


7 minutes ago, 5x5 said:

More like people are horribly overestimating Bulldozer's performance deficit. It wasn't as fast, but it was only slightly slower in games, much like Intel is now. Like Intel's IPC deficit today, AMD was about 15-20% behind Intel in gaming.

This isn't true. It had an absolutely terrible performance deficit. When it first launched, in the absolute best-case situation it was maybe 10% ahead of the X6 at the same clocks; at worst it was as much as 25% behind. And that period saw rapid, continuous improvement from Intel that made the chips irrelevant to all but the most diehard or budget-bound users.

 

x264 was about as good as it got for Bulldozer, and even there it was barely ahead of a last-gen chip with half the "cores" at 75% of the clock speed.

 

People are rarely harsh ENOUGH on Bulldozer.



5 hours ago, Fasauceome said:

Intel is pushing really aggressively in the clock speed direction, but if AMD just randomly crapped out a super high clocked dual core or something as the "world's first 6GHz CPU," it really would be the best way to dunk on clock-chasing competition.

Pushing clock speed is very hard because you are tightening the timing margins on your transistors. With less time per cycle for them to `accumulate` the needed charge, you have to raise the voltage, but that also exposes flaws in your timing and the error margins get much smaller (i.e. you end up with much lower yields and thus much more costly chips).

It's much better to do what AMD is doing and push instructions per clock; that way you don't push the timing of the hardware (so you don't put pressure on yields). You end up with just as many (if not more) instructions per second, and that's what matters, not the big 5.5GHz on the label.
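
To make that trade-off concrete, here's a minimal first-order sketch: performance scales roughly with IPC × frequency, while dynamic power scales roughly with C × V² × f, and higher clocks generally need higher voltage to hold timing. Every number below is an illustrative assumption, not real silicon data:

```python
# First-order CPU scaling sketch: perf ~ IPC * f, dynamic power ~ C * V^2 * f.
# Every number below is an illustrative assumption, not real silicon data.

def perf(ipc, freq_ghz):
    return ipc * freq_ghz                    # arbitrary throughput units

def dyn_power(volts, freq_ghz, c_eff=1.0):
    return c_eff * volts ** 2 * freq_ghz     # arbitrary power units

baseline = {"ipc": 1.00, "f": 4.5, "v": 1.25}    # assumed reference part
clock_push = {"ipc": 1.00, "f": 5.5, "v": 1.40}  # chase clocks: +22% f, more voltage
ipc_push = {"ipc": 1.19, "f": 4.5, "v": 1.25}    # chase IPC: +19% IPC, same f and V

for name, cfg in [("baseline", baseline), ("clock push", clock_push), ("IPC push", ipc_push)]:
    rel_perf = perf(cfg["ipc"], cfg["f"]) / perf(baseline["ipc"], baseline["f"])
    rel_power = dyn_power(cfg["v"], cfg["f"]) / dyn_power(baseline["v"], baseline["f"])
    print(f"{name:10s}  perf x{rel_perf:.2f}  dynamic power x{rel_power:.2f}")
```

Under those assumptions, a ~20% clock push costs roughly 50% more dynamic power because of the extra voltage, while a ~20% IPC gain at the same clock and voltage is essentially free in power terms.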


29 minutes ago, hishnash said:

Pushing clock speed is very hard because you are tightening the timing margins on your transistors. With less time per cycle for them to `accumulate` the needed charge, you have to raise the voltage, but that also exposes flaws in your timing and the error margins get much smaller (i.e. you end up with much lower yields and thus much more costly chips).

It's much better to do what AMD is doing and push instructions per clock; that way you don't push the timing of the hardware (so you don't put pressure on yields). You end up with just as many (if not more) instructions per second, and that's what matters, not the big 5.5GHz on the label.

I'm not saying that AMD should change their business model, just that they should toss out a one-off processor that takes the clock speed crown, purely to have it. Most end users have no idea what IPC is, so to them more GHz = more better. It could serve as something of a halo product: "well, AMD makes the fastest processor, so their other processors must be good."

I WILL find your ITX build thread, and I WILL recommend the Silverstone Sugo SG13B

 

Primary PC:

i7 8086k - EVGA Z370 Classified K - G.Skill Trident Z RGB - WD SN750 - Jedi Order Titan Xp - Hyper 212 Black (with RGB Riing flair) - EVGA G3 650W - dual booting Windows 10 and Linux - Black and green theme, Razer brainwashed me.

Draws 400 watts under max load, for reference.

 

How many watts do I need | ATX 3.0 & PCIe 5.0 spec, PSU misconceptions, protections explained | group reg is bad


13 minutes ago, Fasauceome said:

I'm not saying that AMD should change their business model, just that they should toss out a one-off processor that takes the clock speed crown, purely to have it. Most end users have no idea what IPC is, so to them more GHz = more better. It could serve as something of a halo product: "well, AMD makes the fastest processor, so their other processors must be good."

A halo tier dual-core.

How delightfully 2005.

"Do as I say, not as I do."

-Because you actually care if it makes sense.


6 hours ago, Valentyn said:

Finally a worthy potential upgrade for my 5820K!

That's exactly where I'm at with my 5820K. I was going to look at a 3950X or one of the Threadrippers, but I might wait now lol.

CPU: Intel i7 - 5820k @ 4.5GHz, Cooler: Corsair H80i, Motherboard: MSI X99S Gaming 7, RAM: Corsair Vengeance LPX 32GB DDR4 2666MHz CL16,

GPU: ASUS GTX 980 Strix, Case: Corsair 900D, PSU: Corsair AX860i 860W, Keyboard: Logitech G19, Mouse: Corsair M95, Storage: Intel 730 Series 480GB SSD, WD 1.5TB Black

Display: BenQ XL2730Z 2560x1440 144Hz


7 hours ago, RejZoR said:

My 5820K runs at 4.5GHz at only 1.185V. It's tempting to move on, but I'm worried I'm not gonna see that much of a difference. Was thinking of getting some higher end Core i7 5000 or 6000 series, but they are hard to find and clocks are questionable. More cores wouldn't mean much if I had problems hitting at least 4.5GHz as well.

I went from a 5820K to a 6850K (at the time the 40-lane 5000-series parts were hard to find). Other than slightly improved RAM speeds, overclocking on this thing is a nightmare - I can't even maintain 4.3GHz at 1.3V with a 360mm CLC. I probably would have seen better performance if I had stuck with Haswell. 

Spoiler

Muh lanes, though. 

8 hours ago, VegetableStu said:

man i wonder how long would TRX40 last ,_,

I would assume we'd get one more generation, and then when Zen 4 makes the jump to DDR5 (and possibly PCIe 5.0) we'd see another new chipset. The long-term upgradability is great for consumer-grade parts, but for workstation stuff I'd rather have boards that are more custom-built for the current generation (or two). 


8 hours ago, schwellmo92 said:

This post references Adored’s video but they forgot to mention he also said cache latency would be slightly higher.

Larger caches always have slightly higher access times, but the lower latency comes from the average latency a core sees going down, since requests no longer have to cross CCX boundaries.
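
A quick back-of-the-envelope version of that averaging effect; the cycle counts and the cross-CCX fraction below are assumptions for illustration, not Zen measurements:

```python
# Weighted-average L3 hit latency: two split CCXs vs one unified CCX.
# All cycle counts and the cross-CCX fraction are assumed, illustrative values.

local_l3_cycles = 40     # assumed L3 hit latency within a core's own CCX
cross_ccx_cycles = 110   # assumed latency when the line lives in the other CCX
cross_fraction = 0.30    # assumed share of shared-data accesses that cross CCXs

# Zen 2 style: two 4-core CCXs, each with its own L3 slice.
split_avg = (1 - cross_fraction) * local_l3_cycles + cross_fraction * cross_ccx_cycles

# Zen 3 style: one 8-core CCX with a single larger (slightly slower) L3.
unified_l3_cycles = 46   # assumed: bigger cache, a few cycles more per hit

print(f"split CCXs, average:  {split_avg:.0f} cycles")
print(f"unified CCX, average: {unified_l3_cycles:.0f} cycles")
```

Even though the unified L3 is assumed to be a few cycles slower per hit, the average comes out well ahead once you stop paying the cross-CCX penalty on shared data.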


16 minutes ago, leadeater said:

Larger caches always have slightly higher access times, but the lower latency comes from the average latency a core sees going down, since requests no longer have to cross CCX boundaries.

Always is the wrong word, but given a lack of other mitigating factors/refinements, yes.

 

But absolutely, unifying the module's cache should bring some nice benefits in consumer products, since software won't have to rely on scheduler/compiler optimizations as much (particularly relevant for legacy software).


