
AMD Ryzen HAS BEEN ANNOUNCED!!!

DocSwag
3 hours ago, arnavvr said:

Funny part is that Darren can't watch 4K Netflix anyways due to having a Z170 board.

 

3 hours ago, MageTank said:

That was the joke, lol. 

 

3 hours ago, djdwosk97 said:

Wait, not only do you need a Kaby Lake CPU to watch 4K Netflix, you also need a Kaby Lake motherboard? 

 

Of course you do.

The sad thing is, you don't need a Kaby Lake board; you need a special board with an add-on chip that makes the DisplayPort signal HDCP 2.2 compliant. It's ridiculous.

Stuff:  i7 7700k @ (dat nibba succ) | ASRock Z170M OC Formula | G.Skill TridentZ 3600 c16 | EKWB 1080 @ 2100 mhz  |  Acer X34 Predator | R4 | EVGA 1000 P2 | 1080mm Radiator Custom Loop | HD800 + Audio-GD NFB-11 | 850 Evo 1TB | 840 Pro 256GB | 3TB WD Blue | 2TB Barracuda

Hwbot: http://hwbot.org/user/lays/ 

FireStrike 980 ti @ 1800 Mhz http://hwbot.org/submission/3183338 http://www.3dmark.com/3dm/11574089


8 hours ago, flipped_bit said:

Apples to oranges.  The 1700 (non-X) turbos at 3.7GHz.  The 7700K turbos at 4.5GHz.  

Are you saying that the benchmarks are apples vs oranges? Because that's like saying you shouldn't compare the 7700K vs the 1700 in Cinebench multithreading because "it's 4 cores vs 8".

Ryzen's 8 core chip does not clock as high as Intel's quad core, and on top of that it's behind in IPC. You can't just say the benchmark is invalid just because it measures the type of scenario where Ryzen appears to be behind, just like you can't say a benchmark is invalid when it measures the type of scenario where Ryzen is ahead.

 

The benchmark is not apples to oranges, and I am getting really sick and tired of people saying it is.

 

7 hours ago, Sprawlie said:

Yes, so far it's looking like the i7-7700K is still the best gaming CPU. But what about gaming and Twitch streaming? What are your in-game FPS going to look like on the i7-7700K vs. the new Ryzen chips? What about if you're trying to game while your computer is running a Flash video stream on one display, websites and other functionality on a second display, and you're trying to play GTA at 2K on Ultra?

Depending on what GPU you have, the i7 might still be better for streaming.

Reason: QuickSync

 

Streaming with an i7 does not add any extra strain on the CPU, thanks to Quick Sync. Streaming with a Ryzen chip will, unless your discrete GPU can do it for you (like with an Nvidia card), and at that point you don't really need a beefy CPU.


17 hours ago, Kloaked said:

-snip-

He's had me on ignore for months now. :P

Pixelbook Go i5 Pixel 4 XL


Do Ryzen CPUs come with a soldered heat spreader like the previous AM3 processors?

Or will delidding be necessary?


15 minutes ago, Gdourado said:

Do Ryzen CPUs come with a soldered heat spreader like the previous AM3 processors?

Or will delidding be necessary?

Soldered, yes. The die is still quite large, so all of them should be soldered.



6 hours ago, LAwLz said:

Are you saying that the benchmarks are apples vs oranges? Because that's like saying you shouldn't compare the 7700K vs the 1700 in Cinebench multithreading because "it's 4 cores vs 8".

Ryzen's 8 core chip does not clock as high as Intel's quad core, and on top of that it's behind in IPC. You can't just say the benchmark is invalid just because it measures the type of scenario where Ryzen appears to be behind, just like you can't say a benchmark is invalid when it measures the type of scenario where Ryzen is ahead.

 

The benchmark is not apples to oranges, and I am getting really sick and tired of people saying it is.

 

 

Depending on what GPU you have, the i7 might still be better for streaming.

Reason: QuickSync

 

Streaming with an i7 does not add any extra strain on the CPU, thanks to Quick Sync. Streaming with a Ryzen chip will, unless your discrete GPU can do it for you (like with an Nvidia card), and at that point you don't really need a beefy CPU.

 

You're right on the streaming. I didn't realize how much better the new Nvidia GPUs are for this. Last time I was streaming I was using a 290X, which did a horrible job, so I was software encoding.

 

Just fired everything up and set it to Nvidia NVENC in OBS, and CPU usage was < 5%.

 

However, pulling up my NHL.com hockey stream used 15% of my CPU and caused GTA to dip below 30 FPS in some areas. But as I said earlier, I know I have a CPU bottleneck right now with the 4670. I'm sure the 7700K will solve that issue for me. But Ryzen, with its extra cores for multi-tasking while gaming, is promising.

 

Still, I won't buy anything until I see real-world tests. Benchmarks don't always tell you how well these things do in real-world scenarios like mine.

 


"Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so." - Douglas Adams

System: R9-5950x, ASUS X570-Pro, Nvidia Geforce RTX 2070s. 32GB DDR4 @ 3200mhz.


7 hours ago, LAwLz said:

Are you saying that the benchmarks are apples vs oranges? Because that's like saying you shouldn't compare the 7700K vs the 1700 in Cinebench multithreading because "it's 4 cores vs 8".

Ryzen's 8 core chip does not clock as high as Intel's quad core, and on top of that it's behind in IPC. You can't just say the benchmark is invalid just because it measures the type of scenario where Ryzen appears to be behind, just like you can't say a benchmark is invalid when it measures the type of scenario where Ryzen is ahead.

 

The benchmark is not apples to oranges, and I am getting really sick and tired of people saying it is.

 

 

Depending on what GPU you have, the i7 might still be better for streaming.

Reason: QuickSync

 

Streaming with an i7 does not add any extra strain on the CPU, thanks to Quick Sync. Streaming with a Ryzen chip will, unless your discrete GPU can do it for you (like with an Nvidia card), and at that point you don't really need a beefy CPU.

Using x264 over Quick Sync or NVENC produces better quality, and some settings don't work on those two, no?

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


2 hours ago, Lays said:

Soldered, yes. The die is still quite large, so all of them should be soldered.

Maybe on the 8-core SKUs, but I doubt the smaller ones will be soldered. We will know in a week or two, though. I plan on buying one just to test the memory controller, so I'll likely delid it just to check. Time to prepare the heat gun, so I don't rip it in half like that poor dude did with his 5960X on OCN, lol. 

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


14 minutes ago, MageTank said:

Maybe on the 8-core SKUs, but I doubt the smaller ones will be soldered. We will know in a week or two, though. I plan on buying one just to test the memory controller, so I'll likely delid it just to check. Time to prepare the heat gun, so I don't rip it in half like that poor dude did with his 5960X on OCN, lol. 

So far all AMD CPUs, high-end or low-end, have been soldered, making de-lidding impossible. There's no reason for AMD to solder one SKU of Ryzen and then use a TIM on another one.

Ye ole' train


2 hours ago, Sprawlie said:

However, pulling up my NHL.com hockey stream used 15% of my CPU and caused GTA to dip below 30 FPS in some areas. But as I said earlier, I know I have a CPU bottleneck right now with the 4670. I'm sure the 7700K will solve that issue for me. But Ryzen, with its extra cores for multi-tasking while gaming, is promising.

That doesn't sound right. A 4670 should be more than enough for GTA 5. Hell, even my 2500K is.

 

25 minutes ago, Doobeedoo said:

Using x264 over Quick Sync or NVENC produces better quality, and some settings don't work on those two, no?

Depends on the settings. x264 has a greater range of speed:quality settings. The default is the "veryfast" preset (if I recall correctly) and that should be pretty comparable to NVENC or Quick Sync. Once you start recording at 60 FPS and/or HEVC though, I wouldn't be surprised if GPU encoders become better (if you need to constantly encode 60 frames per second).

 

For archival purposes when you can afford to spend like 4 hours encoding a single video, x264 and (probably) x265 will beat the crap out of GPU encoders in terms of size and quality.
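
To make that speed:quality trade-off concrete, here is a minimal sketch driving ffmpeg from Python. It assumes an ffmpeg build with libx264 and h264_nvenc support; the file names and bitrate values are placeholders, not settings from anyone's actual setup.

    import subprocess

    def encode(codec_args, outfile):
        # "gameplay.mkv" is a placeholder input; -y overwrites the output
        subprocess.run(["ffmpeg", "-y", "-i", "gameplay.mkv",
                        *codec_args, outfile], check=True)

    # Live-streaming case: x264 on the "veryfast" preset keeps CPU cost low
    encode(["-c:v", "libx264", "-preset", "veryfast", "-b:v", "3500k"],
           "stream_x264.mp4")

    # Archival case: "veryslow" spends hours of CPU time for better quality per bit
    encode(["-c:v", "libx264", "-preset", "veryslow", "-crf", "18"],
           "archive_x264.mp4")

    # GPU offload: NVENC frees the CPU almost entirely, at some quality cost
    encode(["-c:v", "h264_nvenc", "-b:v", "3500k"], "stream_nvenc.mp4")

Same codec in the first two calls; the preset is the knob that trades encode time against quality per bit.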


Just now, lots of unexplainable lag said:

So far all AMD CPUs, high-end or low-end, have been soldered, making de-lidding impossible. There's no reason for AMD to solder one SKU of Ryzen and then use a TIM on another one.

There are plenty of reasons for them to do so. You are referring to their 28nm+ SKUs; they are at 14nm now. The 8-core SKUs should, in theory, have twice the surface area to work with, so they will be fine. The smaller ones, on the other hand, will risk cracking under the soldering process. Read up on it here: http://overclocking.guide/the-truth-about-cpu-soldering/

 

You cannot use 28nm and above being soldered as a reason to assume 14nm and below will be soldered. Look at Intel: their 6-core SKUs and above are soldered, but 4-core and below are not. There is a reason for this. 



5 minutes ago, lots of unexplainable lag said:

There's no reason for AMD to solder one SKU of Ryzen and then use a TIM on another one.

There is, for the same reason that Intel HEDT chips are soldered while their consumer chips are not. Size. Soldering consumer Intel chips would severely damage the chips (due to thermal expansion or something like that -- small die = problems for soldering). Presumably, the R3 and possibly R5 dies are small enough that soldering isn't feasible. 

PSU Tier List | CoC

Gaming Build | FreeNAS Server

Spoiler

i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core

Spoiler

FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.


1 minute ago, MageTank said:

There is a reason for this. 

 

Yeah, so der8auer, Rockit and Coollaboratory can get some love.  :D

 

Seriously though, great explanation.


7 minutes ago, LAwLz said:

Depends on the settings. x264 has a greater range of speed:quality settings. The default is the "veryfast" preset (if I recall correctly) and that should be pretty comparable to NVENC or Quick Sync. Once you start recording at 60 FPS and/or HEVC though, I wouldn't be surprised if GPU encoders become better (if you need to constantly encode 60 frames per second).

 

For archival purposes when you can afford to spend like 4 hours encoding a single video, x264 and (probably) x265 will beat the crap out of GPU encoders in terms of size and quality.

My understanding is that CPU encoding simply outrivals NVENC, Quick Sync, and VCE when it comes to streaming. This is because Twitch limits all streamers (aside from special cases) to a bitrate of 3,500 Kbps, which GPU encoders handle very poorly. YouTube streaming does allow a 10,000+ Kbps bitrate, but you'd need an upload plan to match.
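
A quick bits-per-pixel calculation shows why that 3,500 Kbps cap is so punishing. This is a rough sketch; the ~0.1 bpp "comfortable" zone for H.264 is a common rule of thumb, not a figure from this thread.

    def bits_per_pixel(bitrate_kbps, width, height, fps):
        # bits available per pixel per frame at a fixed bitrate
        return bitrate_kbps * 1000 / (width * height * fps)

    for w, h, fps in [(1280, 720, 30), (1280, 720, 60), (1920, 1080, 60)]:
        print(f"{w}x{h}@{fps}: {bits_per_pixel(3500, w, h, fps):.3f} bpp")

    # 720p30 gets ~0.127 bpp, which is workable; 1080p60 gets ~0.028 bpp,
    # which is where the less efficient GPU encoders visibly fall apart.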

Intel i5 6600k~Asus Maximus VIII Hero~G.Skill Ripjaws 4 Series 8GB DDR4-3200 CL-16~Sapphire Radeon R9 Fury Tri-X~Phanteks Enthoo Pro M~Sandisk Extreme Pro 480GB~SeaSonic Snow Silent 750~BenQ XL2730Z QHD 144Hz FreeSync~Cooler Master Seidon 240M~Varmilo VA87M (Cherry MX Brown)~Corsair Vengeance M95~Oppo PM-3~Windows 10 Pro~http://pcpartpicker.com/p/ynmBnQ


17 minutes ago, LAwLz said:

That doesn't sound right. A 4670 should be more than enough for GTA 5. Hell, even my 2500K is.

 

Depends on the settings. x264 has a greater range of speed:quality settings. The default is the "veryfast" preset (if I recall correctly) and that should be pretty comparable to NVENC or Quick Sync. Once you start recording at 60 FPS and/or HEVC though, I wouldn't be surprised if GPU encoders become better (if you need to constantly encode 60 frames per second).

 

For archival purposes when you can afford to spend like 4 hours encoding a single video, x264 and (probably) x265 will beat the crap out of GPU encoders in terms of size and quality.

The 4670, when dedicated 100% to the game, keeps up fine, with FPS ranging from 35-70 depending on the area of the map and what's on screen.

 

It's as I said earlier: I tend not to just game when I'm playing, though. I tend to have GTA 5, an NHL.com Flash-based stream going in Chrome, Discord, Slack, and a few other random tabs in Chrome open. I'm running a 2560x1080 display for gaming and two 1080p displays for other activities. 

 

So it's all that "other" stuff that hurts my game performance, as I'm sharing 4 cores amongst all of it.

 



19 hours ago, LAwLz said:

Thanks to a certain reptile, I have the benchmarks here.

Making a name for myself, eh? 


 

RyzenAir : AMD R5 3600 | AsRock AB350M Pro4 | 32gb Aegis DDR4 3000 | GTX 1070 FE | Fractal Design Node 804
RyzenITX : Ryzen 7 1700 | GA-AB350N-Gaming WIFI | 16gb DDR4 2666 | GTX 1060 | Cougar QBX 

 

PSU Tier list

 


I know this question can't really be answered until we actually have the results of how Ryzen performs, but I want to ask what you guys would do.

I had a first-generation AMD Phenom until two weeks ago, when my mobo died. I also had an old Intel socket 775 mobo and a dual-core processor in the house, so that's what I'm using now. Initially, I was planning on getting a new video card next month and replacing the rest later this year or even at the beginning of 2018.

Now, my budget is limited, so if I were to go for Ryzen, I would go for the R3 line-up and upgrade the processor later on if I need to.

Thing is, the R3 from what I understood comes out in 2H, meaning the second half of this year. So that could be in November.

However, if I go for Intel, I would most likely go for the Pentium G4560, the one that's very close to the i3, which where I live is three times the price of the Pentium (or maybe that's the price everywhere). The Pentium is a dual-core with hyper-threading. I would get it next month, and at the end of the year I'd get a video card.

 

So the question is: wait for Ryzen and get a proper quad-core (though once again, we don't really know how it's going to perform), or go for Intel next month and pray that Ryzen doesn't kick its arse?

What would you guys do?


1 minute ago, Darth Revan said:

So the question is: wait for Ryzen and get a proper quad-core (though once again, we don't really know how it's going to perform), or go for Intel next month and pray that Ryzen doesn't kick its arse?

What would you guys do?

I personally am buying the i3-7100 (the G4560 equivalent, ish) ONLY because I have a deadline in May and can't wait. Otherwise I would wait, because 4 cores are better than 2 cores with hyper-threading (and Ryzen can also OC).

"Put as much effort into your question as you'd expect someone to give in an answer"- @Princess Luna

Make sure to Quote posts or tag the person with @[username] so they know you responded to them!

 RGB Build Post 2019 --- Rainbow 🦆 2020 --- Velka 5 V2.0 Build 2021

Purple Build Post ---  Blue Build Post --- Blue Build Post 2018 --- Project ITNOS

CPU i7-4790k    Motherboard Gigabyte Z97N-WIFI    RAM G.Skill Sniper DDR3 1866mhz    GPU EVGA GTX1080Ti FTW3    Case Corsair 380T   

Storage Samsung EVO 250GB, Samsung EVO 1TB, WD Black 3TB, WD Black 5TB    PSU Corsair CX750M    Cooling Cryorig H7 with NF-A12x25


6 minutes ago, Darth Revan said:

I know this question can't really be answered until we actually have the results of how Ryzen performs, but I want to ask what you guys would do.

I had a first-generation AMD Phenom until two weeks ago, when my mobo died. I also had an old Intel socket 775 mobo and a dual-core processor in the house, so that's what I'm using now. Initially, I was planning on getting a new video card next month and replacing the rest later this year or even at the beginning of 2018.

Now, my budget is limited, so if I were to go for Ryzen, I would go for the R3 line-up and upgrade the processor later on if I need to.

Thing is, the R3 from what I understood comes out in 2H, meaning the second half of this year. So that could be in November.

However, if I go for Intel, I would most likely go for the Pentium G4560, the one that's very close to the i3, which where I live is three times the price of the Pentium (or maybe that's the price everywhere). The Pentium is a dual-core with hyper-threading. I would get it next month, and at the end of the year I'd get a video card.

So the question is: wait for Ryzen and get a proper quad-core (though once again, we don't really know how it's going to perform), or go for Intel next month and pray that Ryzen doesn't kick its arse?

What would you guys do?

I say go with the Pentium as a stopgap. Use it until you can afford a real Intel quad-core, and upgrade later on. Sure, AMD's quad-core might be cheaper, but it's going to be several months before we even see it on the market. In that amount of time, you should be able to save up enough cash for a real Intel quad-core. Let's be real, it's not like AMD's quad-core is going to outperform Intel's, so you will still end up with better performance; it will just come at a higher cost.



I know this is a known fact, but why exactly do larger dies make solder more feasible? I can't wrap my mind around the physics of it; do any of you mind explaining?

Make sure to quote me or tag me when responding to me, or I might not know you replied! Examples:

 

Do this:

Quote

And make sure you do it by hitting the quote button at the bottom left of my post, and not the one inside the editor!

Or this:

@DocSwag

 

Buy whatever product is best for you, not what product is "best" for the market.

 

Interested in computer architecture? Still in middle or high school? P.M. me!

 

I love computer hardware and feel free to ask me anything about that (or phones). I especially like SSDs. But please do not ask me anything about Networking, programming, command line stuff, or any relatively hard software stuff. I know next to nothing about that.

 

Compooters:

Spoiler

Desktop:

Spoiler

CPU: i7 6700k, CPU Cooler: be quiet! Dark Rock Pro 3, Motherboard: MSI Z170a KRAIT GAMING, RAM: G.Skill Ripjaws 4 Series 4x4gb DDR4-2666 MHz, Storage: SanDisk SSD Plus 240gb + OCZ Vertex 180 480 GB + Western Digital Caviar Blue 1 TB 7200 RPM, Video Card: EVGA GTX 970 SSC, Case: Fractal Design Define S, Power Supply: Seasonic Focus+ Gold 650w Yay, Keyboard: Logitech G710+, Mouse: Logitech G502 Proteus Spectrum, Headphones: B&O H9i, Monitor: LG 29um67 (2560x1080 75hz freesync)

Home Server:

Spoiler

CPU: Pentium G4400, CPU Cooler: Stock, Motherboard: MSI h110l Pro Mini AC, RAM: Hyper X Fury DDR4 1x8gb 2133 MHz, Storage: PNY CS1311 120gb SSD + two Segate 4tb HDDs in RAID 1, Video Card: Does Intel Integrated Graphics count?, Case: Fractal Design Node 304, Power Supply: Seasonic 360w 80+ Gold, Keyboard+Mouse+Monitor: Does it matter?

Laptop (I use it for school):

Spoiler

Surface book 2 13" with an i7 8650u, 8gb RAM, 256 GB storage, and a GTX 1050

And if you're curious (or a stalker) I have a Just Black Pixel 2 XL 64gb

 


Just now, DocSwag said:

I know this is a known fact, but why exactly do larger dies make solder more feasible? I can't wrap my mind around the physics of it; do any of you mind explaining?

Thermal cycling damages solder via voids forming within the solder itself. As these voids form, thermal conductivity suffers and thermal resistance increases. Eventually, this will cause cracks to form in the solder (normally starting from the corners of the die), which will eventually lead to catastrophic failure for the CPU. The reason smaller dies cannot handle the soldering process is that a smaller surface heats faster without spreading the heat out enough, resulting in cracks. We are talking about thin glass, after all. With a bigger surface area (specifically on the IHS side of the joint), the solder will cool faster. Not only that, but a bigger chip means more silicon, which means fewer hotspots. The end result is less cracking.
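
As a back-of-the-envelope sketch of the void effect, using the 1-D conduction formula R = t / (k * A) with assumed numbers (indium solder at roughly 80 W/m·K, a 0.1 mm joint, and the ~122 mm² die size cited elsewhere in the thread):

    def joint_resistance(thickness_m, k_w_per_mk, area_m2):
        # 1-D conduction: thermal resistance in kelvin per watt
        return thickness_m / (k_w_per_mk * area_m2)

    # assumed joint thickness, solder conductivity, and die area
    t, k, area = 0.1e-3, 80.0, 122e-6
    r_intact = joint_resistance(t, k, area)
    r_voided = joint_resistance(t, k, area * 0.7)  # 30% of contact area lost to voids

    print(f"intact joint: {r_intact:.4f} K/W")  # ~0.0102 K/W
    print(f"30% voided:   {r_voided:.4f} K/W")  # ~0.0146 K/W

    # Less effective contact area means higher thermal resistance, a hotter die,
    # and more thermal stress, which accelerates further cracking.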



6 minutes ago, MageTank said:

Thermal cycling damages solder via voids forming within the solder itself. As these voids form, thermal conductivity suffers and thermal resistance increases. Eventually, this will cause cracks to form in the solder (normally starting from the corners of the die), which will eventually lead to catastrophic failure for the CPU. The reason smaller dies cannot handle the soldering process is that a smaller surface heats faster without spreading the heat out enough, resulting in cracks. We are talking about thin glass, after all. With a bigger surface area (specifically on the IHS side of the joint), the solder will cool faster. Not only that, but a bigger chip means more silicon, which means fewer hotspots. The end result is less cracking.

But why was a soldered IHS used until Sandy Bridge?

\\ QUIET AUDIO WORKSTATION //

5960X 3.7GHz @ 0.983V / ASUS X99-A USB3.1      

32 GB G.Skill Ripjaws 4 & 2667MHz @ 1.2V

AMD R9 Fury X

256GB SM961 + 1TB Samsung 850 Evo  

Cooler Master Silencio 652S (soon Calyos NSG S0 ^^)              

Noctua NH-D15 / 3x NF-S12A                 

Seasonic PRIME Titanium 750W        

Logitech G810 Orion Spectrum / Logitech G900

2x Samsung S24E650BW 16:10  / Adam A7X / Fractal Axe Fx 2 Mark I

Windows 7 Ultimate

 

4K GAMING/EMULATION RIG

Xeon X5670 4.2Ghz (200BCLK) @ ~1.38V / Asus P6X58D Premium

12GB Corsair Vengeance 1600Mhz

Gainward GTX 1080 Golden Sample

Intel 535 Series 240 GB + San Disk SSD Plus 512GB

Corsair Crystal 570X

Noctua NH-S12 

Be Quiet Dark Rock 11 650W

Logitech K830

Xbox One Wireless Controller

Logitech Z623 Speakers/Subwoofer

Windows 10 Pro


5 minutes ago, Vode said:

But why was a soldered IHS used until Sandy Bridge?

Sandy Bridge quad core dies are 216mm^2

Skylake quad core dies are 122mm^2



4 minutes ago, Vode said:

But why was a soldered IHS used until Sandy Bridge?

The die size was physically larger back then. Nehalem was 45nm. We are currently at 14nm. Quite a drastic difference in size if you ask me.



30 minutes ago, MageTank said:

The die size was physically larger back then. Nehalem was 45nm. We are currently at 14nm. Quite a drastic difference in size if you ask me.

Yeah, but wasn't Penryn around 110mm2? 

 

I'm aware of the difference in nodes; die size is what matters.


