
On Linus' 'future proofing' video: has anyone ever done the maths on long-cycle high-end vs medium-cycle mid-range?

gonvres

Discussing this clip. Let's say your PC budget is $500 per year.

 

What are the actual performance numbers when you compare, say, a $1500 system every 3 years versus a $3000 system every 6 years?

 

I have a feeling the 3-year $1500 cycle would keep you near top-end performance for longer than the 6-year $3000 cycle, and it would also somewhat protect you against component failures, since you're not stretching the life of your parts. Not that a $1500 PC is anywhere near out of date after 3 years, generally.
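For what it's worth, here's a rough way to put numbers on the question (a sketch; every figure is an assumption, not data from the video). It assumes performance per dollar doubles every 2.5 years, and that a system's performance scales as price^alpha at purchase time, where alpha < 1 models the diminishing returns of high-end parts discussed further down the thread:

```python
# Rough simulation of $1500-every-3-years vs $3000-every-6-years.
# All numbers are assumptions for illustration: perf/$ is assumed to double
# every DOUBLING_YEARS, and performance is assumed to scale as price**alpha
# at purchase time (alpha < 1 models high-end diminishing returns).

DOUBLING_YEARS = 2.5
HORIZON = 12  # years simulated; both plans average $500/year

def avg_perf(price, cycle_years, alpha):
    total = 0.0
    for year in range(HORIZON):
        bought = (year // cycle_years) * cycle_years  # year of last purchase
        total += (price ** alpha) * 2 ** (bought / DOUBLING_YEARS)
    return total / HORIZON

for alpha in (1.0, 0.7, 0.5):
    a = avg_perf(1500, 3, alpha)
    b = avg_perf(3000, 6, alpha)
    winner = "3-year cycle" if a > b else "6-year cycle"
    print(f"alpha={alpha}: {winner} wins ({a:,.0f} vs {b:,.0f})")
```

Under linear price-to-performance (alpha = 1) the $3000 build averages higher; around alpha = 0.7 the two are roughly tied, and with stronger diminishing returns the 3-year cycle pulls ahead.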


GPUs will cycle heavily depending on investment, but a strong CPU will last some time.

I'm still on X58, which I originally purchased in 2009, with only the ancillaries upgraded (GPU, HDD to SSD).


Generally my upgrade strategy is to upgrade CPU, motherboard and RAM every 4-5 years and GPUs every 2 years, obviously depending on what's available at the time or in the coming months. It might not be the most cost-effective way of doing it, but I work hard, have kids and a house to pay for, and it's what I like to spend my disposable income on, rather than wasting it down the pub. At least that's how I justify it to myself and my wife. I also tell her things cost half the amount they do :)


2 hours ago, Rammix said:

it's what I like to spend my disposable income on, rather than wasting it down the pub. At least that's how I justify it to myself and my wife. I also tell her things cost half the amount they do :)

I thought I was the only one! I got busted when she recognized the new GTX 980 box as expensive, and it was my second one lol

Edit: I've got the most expensive $800 computer I've ever seen hahaha



All components have a price/performance curve; typically you get the best performance per dollar somewhere in the mid-range. Low-end cards and CPUs tend to be poor value for money, and the high-end stuff has poor returns as well. As a result, you can often get 60-80% of the performance for half the price; that last 20% is often really expensive.

 

As a result, buying mid-range now and buying mid-range again in two years to get twice the performance will net a better average, and better performance in two years' time. The high-end card costs more than 2x the price for 2x the performance, so two mid-range cards bought two years apart get you that high-end performance in two years for less money.
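A toy example of that argument (all figures assumed for illustration, not actual card prices):

```python
# Toy numbers (assumed): mid-range = 100 perf units for $300 today,
# high-end = 200 units for $750 (more than 2x the price for 2x the perf),
# and mid-range perf per dollar is assumed to double over the 2-year gap.

mid_price, mid_perf_now, mid_perf_later = 300, 100, 200
high_price, high_perf = 750, 200

mid_route_cost = mid_price * 2  # buy at year 0 and again at year 2
print(f"High-end once:   ${high_price}, {high_perf} units from day one")
print(f"Mid-range twice: ${mid_route_cost}, {mid_perf_now} units now, "
      f"{mid_perf_later} from year 2")
```

Same performance from year two onward, for $150 less; the mid-range route is only behind during the first two years.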

 

The other key thing to always remember with components is that the speed of progress varies greatly. CPUs, memory and storage tend to move quite slowly; GPUs are still moving relatively quickly; monitor technology is advancing quite rapidly after a long time being stagnant. A case bought 15 years ago will still work; ATX hasn't changed! So you can upgrade only the parts that need it rather than the entire PC every time, and buying parts that offer the best performance/$ more often is a better route to average performance over the long term. You can't future-proof, because the future hasn't arrived yet, and in the PC industry that means 2x the transistors for the same price (Moore's law).


Even if it were cheaper to go mid-range with more frequent purchases, I'd still rather go with high-end stuff; it's all about that ultra eye candy :)


I think it gets more interesting when you take into account buying new at launch, buying new current-gen, and buying previous-gen refurbished/clearance/on sale. For a shorter-term (e.g. 3-year) upgrade-and-replacement path, I currently think you're better off going for value in previous-gen Intel (Sandy/Ivy/Haswell) with DDR3/PCIe 3/SATA 3, preferably with an NVMe option, plus a current-gen mid-range GPU (or something like a Titan from the previous Nvidia generation). Longer term, you're probably better off going for X99 or Z170 now, or waiting till the end of the year and seeing how the new products stack up.

Having said all that, I don't think you'd be able to come up with a definitive solution or a neat graph as easily for desktops as you can with laptops, where April is usually the best time to buy in the States year-on-year...

 


Just look at the basic scaling for GPUs in a game like The Division on High quality at 1080p (gamegpu.com FPS numbers, Newegg prices):

http://gamegpu.com/mmorpg-/-onlayn-igry/tom-clancy-s-the-division-test-gpu.html

 

GeForce 950     $        42 fps
GeForce 960     $190     50 fps
GeForce 970     $300     70 fps
GeForce 980 Ti  $480     104 fps

Radeon 370      $130     47 fps
Radeon 380X     $205     64 fps
Fury X          $600     111 fps

 

What does that come out to in terms of $ per FPS?

  

GeForce 960     190 / 50  = 3.8 $/fps
GeForce 970     300 / 70  = 4.3 $/fps
GeForce 980 Ti  480 / 104 = 4.6 $/fps

Radeon 370      130 / 47  = 2.8 $/fps
Radeon 380X     205 / 64  = 3.2 $/fps
Fury X          600 / 111 = 5.4 $/fps
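The same arithmetic in script form, if you want to plug in your own prices (numbers copied from the list above):

```python
# Dollars-per-frame for the cards above, sorted from best to worst value.
cards = {
    "GeForce 960":    (190, 50),
    "GeForce 970":    (300, 70),
    "GeForce 980 Ti": (480, 104),
    "Radeon 370":     (130, 47),
    "Radeon 380X":    (205, 64),
    "Fury X":         (600, 111),
}

for name, (price, fps) in sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:14} ${price:>3} / {fps:>3} fps = {price / fps:.1f} $/fps")
```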

 

How does that play out in practice? Say you were buying back in the Radeon 7000 / GeForce 600 series days and you bought a 670. It would have been about $300, and today it gives you an unplayable 37 fps. You could instead have bought a 690 for $1000, which today would be worth 70 fps. Future proof, right? Not really, because $300 today gets you a GeForce 970, which also gives you 70 fps. So for the same performance today you spent $600 total versus $1000 a few years ago. Admittedly, over that period you had better performance (when SLI scaled), but you also spent considerably more, and the gap is bigger than the raw numbers suggest: net present value makes today's $300 outlay on a 970 effectively less than $300 spent a few years ago.
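To put a rough number on that net-present-value point (the 5% discount rate and 4-year gap are assumptions for illustration):

```python
# Present-value comparison of the two routes, viewed from the original
# purchase date. The discount rate and the gap between purchases are assumed.
rate, years = 0.05, 4

route_690 = 1000.0                      # GTX 690 up front
pv_970 = 300 / (1 + rate) ** years      # 970 bought ~4 years later, ~$247
route_670_970 = 300 + pv_970            # 670 up front plus discounted 970

print(f"690 route:       ${route_690:.0f}")
print(f"670 + 970 route: ${route_670_970:.0f}")  # ~$547 for the same fps today
```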

 

Ergo, you can play games on high/very high and even ultra settings for a lot less by riding the mid-range cards instead of "future proofing" with a high-end card.


Honestly, these days, a good CPU from 5 years ago is still perfectly fine today, considering games generally use the GPU more anyway.
As for the GPU, as long as you're fine lowering the graphics over time, a high-end GPU will certainly last you longer than a mid-range one. Heck, I only replaced my HD 5870 late last year (end of 2015); I originally bought it in 2009. It was still perfectly capable of playing "most" of the latest games at low/medium settings while retaining over 30fps. Some games, like Shadow of Mordor, or badly optimized ones like Dead Rising 3, were essentially unplayable no matter what, though.

 

But if you want proper stats on this, you will need a couple of older systems from 3-5 years ago that haven't been upgraded, and a few newer systems to compare against.



This is a big reason Intel is losing so much on the PC side. You buy a CPU and it will last you 5 years or more. We are getting so good at technology that there is no reason to upgrade, which makes companies lose money. Pretty soon we will settle into a pattern of upgrading only every so often, and then companies will have to depend on people choosing to upgrade.



35 minutes ago, SamStrecker said:

This is a big reason Intel is losing so much on the PC side. You buy a CPU and it will last you 5 years or more. We are getting so good at technology that there is no reason to upgrade, which makes companies lose money. Pretty soon we will settle into a pattern of upgrading only every so often, and then companies will have to depend on people choosing to upgrade.

It's the other way around. Software cannot depend on hardware performance that isn't there; hardware drives the market for the software that uses it, which drives sales. Games aren't written against some nebulous ideal and then scaled down to fit the hardware of the day; they are written against components the developers know a reasonable amount about, with a bit of a guess as to the actual performance level.

 

Intel literally competes with itself for sales, so it has to create a reason to upgrade. The problem is that CPU performance is now growing so slowly that developers aren't being handed more performance to build on (hence all the cloud software today, where developers can use multiple CPUs), so the uses for software aren't growing either. Don't ever fall into thinking that today's stuff is good enough; it's extremely limited relative to what we might want to do, and a vast range of calculations and software ideas aren't viable until we have 10,000x what we have today and beyond. We're stuck due to physics, not because no one can think of anything to do with more.


Given a decent budget, a "high end" PC will be the more economical option.

 

RAM only needs to be upgraded during generational changes, so 3-5 years between those.

Same for the mobo, as long as you get two generations of CPUs out of it.

If your CPU has many cores, it will generally last longer. So even a 6-core/12-thread Westmere/Nehalem CPU will last a decent while.

 

Now, depending on price, today in 2016 you can get a mid-range system that will last a while. But going back 5 years, the CPUs were much less future-proof.

 

I am willing to bet that an i7 4790K / FX 8350 + 2x R9 290X/390X will last you for YEARS before they must be upgraded.

But an Athlon X4 860K/i3/G3258 will be shit before next year.


Now that stuff is stagnating more, the $1500 build will last a relatively long time and will probably do better than $3000 every 6 years.


13 hours ago, Prysin said:

I am willing to bet that an i7 4790K / FX 8350 + 2x R9 290X/390X will last you for YEARS before they must be upgraded.

But an Athlon X4 860K/i3/G3258 will be shit before next year.

I especially agree with this, which is why I always hate to see these so-called 'balanced' rigs with a G3258 and something like an R9 380X. Realistically you're putting more money into the part that will be outdated sooner. A minimum of an i5 should be in any system IMO, just due to how long CPUs are lasting.


3 minutes ago, gonvres said:

I especially agree with this, which is why I always hate to see these so-called 'balanced' rigs with a G3258 and something like an R9 380X. Realistically you're putting more money into the part that will be outdated sooner. A minimum of an i5 should be in any system IMO, just due to how long CPUs are lasting.

You may not need an i5.

An i5 is only effective in the mid-range; it can never do better than mid-range unless you OC the shit out of it. More cores/threads will beat out MHz and even IPC in the long run. If you want shit to last, get an i7. Even an i7 from years ago will beat or at least match an i5 from today, whilst an i5 from years ago is noticeably slower than an i5 or, hell, even an i3 today.

 


2 hours ago, Prysin said:

You may not need an i5.

An i5 is only effective in the mid-range; it can never do better than mid-range unless you OC the shit out of it. More cores/threads will beat out MHz and even IPC in the long run. If you want shit to last, get an i7. Even an i7 from years ago will beat or at least match an i5 from today, whilst an i5 from years ago is noticeably slower than an i5 or, hell, even an i3 today.

That is not true at all.

 

1) More cores/threads will not beat out MHz and IPC "in the long run". If history has taught us anything then it is that IPC is king for most applications. There are some edge cases but it's still uncommon. Now, that might change in the future, but we have been hearing things about how games and other applications will use more cores in the future for ages now. I heard people talking about it a decade ago, and next to no progress has been made. There might be 10 more years until something significant happens, and at that point you will probably have replaced even a high end i7 you bought today.

 

2) A modern i5 and an i7 from the previous generation will trade blows; which chip wins will depend heavily on the application. It is worth noting that even an i5 from years ago will stack up fairly well against an i5 from today (especially in things like games). Processors haven't really improved that much over the last couple of generations.

 

3) An i5 from years ago is not slower than today's i3s. I don't even understand how you can say that when you said in the previous statement that threads and cores are more important than IPC. You can't have it both ways.


12 minutes ago, LAwLz said:

snip

I assume you want proof, so I will supply some. This applies only to games, as productivity software almost always prefers threads + IPC over sheer MHz + IPC.

[Embedded Digital Foundry benchmark videos]

Now, we know that a modern i5 will match an old i7, but how badly is the i7 lagging behind?

anandtech.com/bench/product/1544?vs=287

 

I also said a modern i3 can match an old i5. So how would an i3-6100TE overclocked to match an i5-2500K's clocks stack up?

anandtech.com/bench/product/1645?vs=288

 

As we can see, it stacks up. If that had been a full-fledged i3-6100 running at 3.7GHz, the i5 would get a damn good run for its money.

 

 

While it may sound like an oxymoron to claim that more threads = better for the long term, then also state that fewer threads will match up to more threads in the long run, it is simply an observation of how things tend to end up. In fact, a CPU with more threads will tend to last longer. The single best proof of this is the FX CPUs... They are fucking 4 years old and still keeping up (just barely). People are still running Sandy and Ivy i7s with great performance, while on this very forum we see people with their Sandy i5s wondering if they should upgrade to a 6600K or a 4790K because their current CPU isn't up to snuff.

 

Just observe these forums for a few weeks and you will realize that these threads pop up every now and then. However, I have yet to notice a single thread about people upgrading from a 2600K or a 3770K because they NEED to. I've seen quite a few threads about people doing it because they WANT to. But a need to upgrade is not the same as a desire to upgrade. I think we can agree on that.

 

As for the threads vs IPC argument:

How do you think an i7-3960X compares to a 6600K? Skylake has at least a 20-25% single-core advantage over Ivy Bridge, which is reflected in the Cinebench single-thread benches done by AT. However, even with a clock speed advantage and an IPC advantage, the 6600K doesn't win out nearly as much as one would imagine. Now, I suspect the 3960X isn't able to use all its threads in all the tests, but that is irrelevant for this comparison.

 

Further on, we can compare the FX 8350 to the 6600K. Sure, it has 4 more threads than the i5, but the i5 has over 75% faster single-core performance by any measurable means.

Yet, despite the FX having only ONE relevant statistical advantage over the i5, aka thread count, it is keeping up. Despite having worse single-thread performance, older instructions, a slower chipset, a horrendous IMC, terrible core scaling and, worst of all, much much much much slower cache (nearly 50% slower than a Haswell CPU in raw speed and response).

 

anandtech.com/bench/product/1544?vs=697

 

Despite all this, it is clinging on. Despite every sane statistic saying it shouldn't keep up at all, it keeps up.

 

And you are right: because processor tech hasn't really increased in performance that much over the last 5-6 years, programmers have only had one solution before them: use more threads. And there is nothing hinting at single-core performance increasing enough in the coming years for this trend to change anytime soon. I will go as far as to bet that by 2025 the i3 will have 4 full cores, the i5 4 cores + HT and the i7 6 cores + HT. Because as consumers demand more and more effects, and software requires more and more processing power, we will need more parallelism.

GPGPU is already being used in GPUs for gaming. If we went 10 years back and said we would need GPU acceleration for gaming, you would be a laughing stock. Nobody would ever believe you. They wouldn't even imagine that games COULD demand that much power by then. I mean, back then even HPC wasn't fully dependent on parallelized accelerators.

Times move on, and by simply observing how things are evolving, the statement that threads > raw power will hold true even more so in the future.

 

DX12 CPU benchmarks will be the testament to whether my predictions are right or wrong.

 

 


2 hours ago, Prysin said:

I assume you want proof, so I will supply some. This applies only to games, as productivity software almost always prefers threads + IPC over sheer MHz + IPC.

Yes please. I want proof from reputable sites as well. So not something you did yourself or from sites like TrustedRussianBenchmarkGuy.ru.

I don't want some benchmark specifically designed to remove GPU bottlenecks either (so for example pairing it with a Titan X). The reason I don't want that is because that is not your average system. Nobody is buying a Titan X and going "hmm... Should I get this i3 or this i5 to go along with the Titan X?". The person you replied to mentioned the 380X so something in that range would be good.

 

Edit: Didn't notice the spoiler.

 

Digital Foundry videos:

Spoiler

The first video is not a good example of the i3-6100 matching the 2500K, because both are being locked to 60 FPS by V-sync. How about turning V-sync off and seeing what numbers you get then? Surely you must see the issue with that test as well.

By the same logic, an i3 and an i7 are just as good as each other for gaming because both get 30 FPS in Minesweeper (Minesweeper is locked at 30 FPS).

 

 

The second video is, quite frankly, stupid. It shows an overclocked i3 (4.4GHz) vs stock i5s. The i3 also got much faster RAM than the i5 parts.

You can get pretty much any CPU or GPU to outperform a higher-tier CPU or GPU by overclocking the lower-end chip. An overclocked 380X beating a stock 390X doesn't mean the 380X is better, right? Overclock all the chips as far as they will go, give them the same RAM, and you will probably get a different result. The third video even says that changing out the RAM gave a big performance increase to the 2500K (in the fourth video they say changing the RAM and overclocking the 2500K gave it a 42% performance boost). That video was also flawed because they did exactly what I said earlier in the post: they paired an i3 with a Titan X. That is not a realistic setup.

 

The third video once again uses a Titan X. They even say the test was specifically designed to test CPUs in an unrealistic scenario; to quote the video, "by doing our best to make these games CPU limited". They also say that using their testing methodology the gaming experience will not be pleasant (because it stutters a lot) and should not be taken as setup recommendations.

That's why, as some of the comments on the video point out, DigitalFoundry gets completely different results from most other benchmarks: they are not doing real-world performance tests. They are deliberately trying to create big differences between the chips.

 

 

I think we can completely disregard the first three videos because they were so stupid and silly. I think even you can see that they are terribly misleading and don't tell the whole story, right? The fourth video is a lot better (and invalidates the second video you posted), though. The interesting benchmarks in that video are the ones showing the 2500K vs the 6500 when both got a GTX 970 (1, 2, 3, 4). I will totally agree that there is a difference between the 2500K and the 6500 in the tests. The thing is, though, that the difference (when using a GTX 970, not some Titan X) is, for example, 60 FPS to 70 FPS (in the racing game it shows 60 FPS to 80 FPS). That's when you give the 6500 the big advantage in RAM speed as well (1600MHz vs 3200MHz). I strongly disagree that a ~7 FPS difference (which it might end up being when giving them equally fast RAM) is "noticeably slower"... And that's old i5 vs new i5, not old i5 vs new i3 like you were talking about.

 

Rest:

Spoiler

 

2 hours ago, Prysin said:

Now, we know that a modern i5 will match an old i7, but how badly is the i7 lagging behind?

anandtech.com/bench/product/1544?vs=287

So the old i7 matches the new i5 in gaming. What about the old i5 vs the modern i5? It's a shame that Anandtech didn't test that, because I would not be surprised if the 2600K and 2500K were very, very similar in those tests (on Anandtech's setup).

 

2 hours ago, Prysin said:

I also said a modern i3 can match an old i5. So how would an i3-6100TE overclocked to match an i5-2500K's clocks stack up?

anandtech.com/bench/product/1645?vs=288

Eh... Are we looking at the same benchmarks? The 2500K is either tied or wins by a significant margin. The only benchmark it loses in is single-threaded Cinebench. Switch to multithreaded Cinebench and the i5 wins. Did you miss that some benchmarks say "higher is better" and some say "lower is better"?

Also, the 2500K gains a lot more from being overclocked (since you are overclocking 4 cores instead of 2). The only thing that test shows is that the i5-2500K can trade blows with the i3-6100 if you keep the i5 at stock speeds and give the i3 the advantage of faster RAM (in this particular test, 2133MHz vs 1333MHz or 1600MHz). But when something can use more than 2 cores, the i5 pushes ahead (as in HandBrake, Cinebench, WinRAR, 7-zip and POV-Ray).

 

2 hours ago, Prysin said:

While it may sound like an oxymoron to claim that more threads = better for the long term, then also state that fewer threads will match up to more threads in the long run, it is simply an observation of how things tend to end up. In fact, a CPU with more threads will tend to last longer. The single best proof of this is the FX CPUs... They are fucking 4 years old and still keeping up (just barely). People are still running Sandy and Ivy i7s with great performance, while on this very forum we see people with their Sandy i5s wondering if they should upgrade to a 6600K or a 4790K because their current CPU isn't up to snuff.

You are not making any sense right now. Either more threads is better in the long run, or higher IPC is better in the long run. Which one are you saying is best? You can't say both, because both can't be "the best" simultaneously. Right now it seems like you are swapping sides as you see fit. See a benchmark that favors IPC? Then you claim that is the way to go. See a benchmark that favors threads? Then you say that's the way to go. You have to pick a side if you want to deal in absolutes.

Also, I strongly disagree if you are trying to say the FX chips hold up but the 2500K doesn't. Even in the videos you linked they were neck and neck.

 

2 hours ago, Prysin said:

As for the threads vs IPC argument:

How do you think an i7-3960X compares to a 6600K? Skylake has at least a 20-25% single-core advantage over Ivy Bridge, which is reflected in the Cinebench single-thread benches done by AT. However, even with a clock speed advantage and an IPC advantage, the 6600K doesn't win out nearly as much as one would imagine. Now, I suspect the 3960X isn't able to use all its threads in all the tests, but that is irrelevant for this comparison.

Maybe I am just tired but I really don't understand what you are trying to say here. I don't understand which side you are on or what your point is. It seems like a sentence is missing or something.

 

2 hours ago, Prysin said:

Further on, we can compare the FX 8350 to the 6600K. Sure, it has 4 more threads than the i5, but the i5 has over 75% faster single-core performance by any measurable means.

Yet, despite the FX having only ONE relevant statistical advantage over the i5, aka thread count, it is keeping up. Despite having worse single-thread performance, older instructions, a slower chipset, a horrendous IMC, terrible core scaling and, worst of all, much much much much slower cache (nearly 50% slower than a Haswell CPU in raw speed and response).

Keeps up in what? The video you linked showed it being about the same as the 2500K in games. In other tests, like HandBrake, x264, SYSmark, Octane, Linux-Bench, Kraken and many others, it gets completely destroyed by the i5. Again, did you forget that some benchmarks are "higher is better" and some are "lower is better"? Because that's the only way I can see someone saying the 8350 and the 6600K are neck and neck in those benchmarks.

 

2 hours ago, Prysin said:

GPGPU is already being used in GPUs for gaming. If we went 10 years back and said we would need GPU acceleration for gaming, you would be a laughing stock. Nobody would ever believe you. They wouldn't even imagine that games COULD demand that much power by then. I mean, back then even HPC wasn't fully dependent on parallelized accelerators.

Ehh... Do you realize that 2006 was 10 years ago? We have been using GPU acceleration for games since the '70s. Nobody would have laughed at you for saying we needed GPU acceleration for games in 2006. They would have said "yeah, we know. We have been doing it for 30 years already".

 

If you are talking about GPGPU (which is not the same as just GPU acceleration) then they wouldn't have been surprised by that statement either. Again, 2006 was 10 years ago. CUDA was almost ready (released in 2007) and Khronos and Microsoft were probably working on DirectCompute and OpenCL at the time (both released in 2009). Anyone laughing at you for suggesting GPGPU in games would have been an idiot because major companies were already working on their own APIs for it.

In fact, Ageia was shipping their PhysX PPU physics card in 2006, and that card was specifically meant to offload the CPU and do the types of things we today do on the GPU (because nobody wanted to have a separate physics card). For all intents and purposes, GPGPU was already being developed, planned and used back in 2006 (for gaming).

 

 

2 hours ago, Prysin said:

Times move on, and by simply observing how things are evolving, the statement that threads > raw power will hold true even more so in the future.

Define "raw power".

2 hours ago, Prysin said:

DX12 CPU benchmarks will be the testament to whether my predictions are right or wrong.

Of course you will see differences in CPU benchmarks; CPU benchmarks are designed to show the differences. The more relevant question is: will you see a difference in regular gaming benchmarks that represent your average gaming system? Besides, even if you are right, it's just a prediction right now, not actual fact. Again, I have been hearing people talk about multi-core scaling "soon being fixed" since late 2006/early 2007 (with the Q6600 being released)... and now, 10 years later, we have finally reached the point they were talking about happening "soon". Will it take another 10 years to get to the point you are talking about? We don't know.

 

 

I do agree that more threads is a good thing. Personally I will be getting a 6- or 8-core next time I upgrade, but I am not entirely sure I agree with you that a person buying a computer today should get an i7 over an i5 because it will be better in the long run, and I definitely do not agree that a modern i3 outperforms an older i5.

 

 

Side note:

Have I mentioned how much I hate the new forum? With big posts like this it is complete shit. It is extremely slow and unresponsive. I wanted to move text between different spoilers but because some of the text is in quotes it fucks everything up. I can't even delete that line break at the top of the second spoiler. Delete doesn't delete it, and backspace doesn't delete it either. The cursor just jumps and selects the quote below it. I really really really REALLY miss being able to switch to HTML view... It made everything so much easier and faster.


@LAwLz

I can see you are still writing, so I'll just put this in here. But at the moment I am off on vacation, so I only have my Samsung tablet with me. Thus it's such a chore to dig up more sources, so I couldn't be bothered.

 

The AnandTech benches have comparisons with R9 285s and GTX 770s, both in single and SLI mode. So those should give you what you want.

Digital Foundry is showing pure CPU difference, not system difference. But while it is debatable whether their methods are right or wrong, they are doing something right.

They use the same GPU with every CPU, which is something you do not always see when looking at other benchmarks comparing the latest Skylake with Sandy.

 

Now, there is an elephant in the room which can affect things, and that is GPU drivers. Some benchmarkers do not actually update their CPU benches to reflect how the CPU would do once the GPU drivers and/or the game have been updated. So honestly, I feel more comfortable looking at a pure CPU-vs-CPU comparison to prove my point, rather than a comparison of the most balanced setup.

Which is the core benchmarking issue: when looking at CPUs, are you looking at CPU performance or total system performance?

You can have a Sandy i3 and a Pascal GPU; however, if you were to put a Skylake i3 with a Fermi GPU, the Skylake CPU would be completely destroyed in games, because the Pascal GPU would be an order of magnitude faster.

So the same GPU for all products is the best comparison, and a Titan X will show you the limits of the CPU rather than the GPU.


1 hour ago, Prysin said:

snip

The problem with that methodology is that it does not show the results users will actually see. As the Digital Foundry video itself says, gaming on their setup will be very unpleasant because it is stuttery. If you deliberately design a test to show differences between CPUs, then it's no surprise that it shows differences between the CPUs. Their videos do not represent real-world performance differences. They might as well be running the game at 800x600.

 

I also don't agree when you say Digital Foundry shows "pure CPU difference, not system difference" because DigitalFoundry uses very different memory setups for each system. Like I pointed out in the post above, they frequently gave the 2500K much slower RAM despite saying in one video that faster RAM boosted performance of the 2500K by about 15%.

 

The areas where I disagree with you are:

1) You say that these are gaming benchmarks. I don't think they are gaming benchmarks, because they do not use setups any sane person would use, and not even DF themselves recommend anyone using their setup for gaming. For all intents and purposes they are synthetic benchmarks showing CPU performance of game engines. They are not benchmarks showing how well CPUs handle playing games.

 

2) You say that a modern i3 is as good as an old i5. Even if we ignore that you didn't specify gaming, I still have to disagree. The videos you linked were laughable at best and had a ton of issues (see point 1). The other links did not compare a modern i3 vs an old i5 for gaming. Some links did compare i3 vs i5, and after looking at those I did not come to the same conclusion as you (again, are you sure you didn't miss that some benchmarks are "lower is better" and some are "higher is better"?). They were trading blows when not all 4 cores could be utilized, and when they could, the i5 won. Since you are trying to argue that threads/cores are more important than IPC, I really don't see how you can recommend a new i3 over an old i5. Do you not see that you are contradicting yourself here?


1 minute ago, LAwLz said:

snip

First up, replying to your side note: I hate the new forums too, at least when writing on tablets/mobile. If you put spoilers around videos, all text after the last spoiler tag gets dragged inside the spoiler. Also, the forums update slower, so they feel a LOT less alive.

 

Replying to your last post:

1 - I agree and disagree with DF's testing methodology. It shows you, clearly, what the capacities of the CPUs and their respective chipsets can produce. Exactly because you CAN'T use DDR4 with Sandy, these tests will never be truly equal unless someone does a DDR3 Skylake test. Even then, the IMC in Skylake is much more efficient and the cache is faster, so it would still not be a pure architecture comparison.

The videos are great CPU performance benchmarks, but they are NOT perfect gaming benchmarks. But that goes for GPU benchmarks done by pretty much every fucking benchmarking site too. Everyone and their mother has a 6700K or 5960X at 4.5GHz that they use to compare 750 Tis... Nobody gets a 300-1000 USD CPU to play games with a 100-200 USD GPU. So going by YOUR logic, every GPU benchmark from EVERY FUCKING BENCHMARKER is worthless, because it is an unrealistic test by default.

So what do you fucking want? An i3-6100 + R9 380X vs an i5-2500K + GTX 560? It wouldn't be a competition at all.

You have to draw a line in the sand somewhere and decide what you want to test. Your argument against DF's test is as stupid as it gets.

OF COURSE it isn't a "realistic test", but I can say that of almost every benchmark there is: "it isn't realistic because X, Y, Z"...

Take BF4 benches: not a single fucking one is done online on a 64-player server, which is the most demanding part of the game, and the part most people are actually going to play. Yet we say BF4 benches are all fine, despite them not even remotely depicting the most demanding part of the game by ANY standard.

What about MMOs? Why isn't a multiplayer raid tested? That is what people use their PCs for, even their 2000 USD gaming PCs...

One can argue for and against these benchmarks all day; I do so myself when people use synthetics to prove FX CPUs are shit, when both the poster and I know the FX CPU can pull its own weight in games (which is what most threads on this forum are about).

But does that invalidate the synthetics? No, of course not. Because in those types of workloads, the FX will suck balls.

 

The point of using a Titan X is to show the CPU's potential. Of course it is unrealistic to use a Titan X with an i3, but it is equally unrealistic to use an i7 with an R9 380 in a gaming computer. If you want to test a GPU, you eliminate the CPU bottleneck by using a 5960X. If you want to test a CPU, you eliminate the GPU bottleneck by using one or multiple Titan Xs, because that is the fastest single-GPU card there is at the moment.

 

2 - 

I would argue that an i3-6100 is better than a Sandy i5-2500K for gaming. Yes I would. But why?

Because even used, the 2500K will cost the same or more (most likely more, going by how well Intel K SKUs retain value), all while having an outdated chipset with no real upgrade path, older instructions, being less efficient, and, well, the list keeps going on.

Would I recommend an i3-6100 over an i5-4690K? NO, because the generational improvements from Haswell to Skylake aren't enough to warrant the performance loss. The generational gap between Sandy and Skylake, however, is great enough for an i3 to actually compete with a stock i5-2500K.

 

I'll be frank: like every single thing in this world, there is an exception.

You know that I have argued that an FX 6300 is better value for money than an i3 4170. In gaming this holds mostly true, with some exceptions. People argue "oh, but look at Arma 3". Well, that is one game. Even DX9 games can use up to 4 cores, although DX9 runs mostly on a single core for most of the stuff. Since mid-2006 we have had game engines using 4 cores effectively (Source Engine). The only real benefit of getting a Haswell system today is for an i5 or up, but that is a notable investment over a lower-cost FX setup, or a lower-end Skylake setup with a much better upgrade path.

But would I argue that an FX 6300 is better than an i3 6100? No. Because said i3 is so much faster core for core, the chipset is so much better, and the upgrade path, including the upcoming Kaby Lake CPUs, makes it a great deal more valuable.

Would I get a 2600K over a 6100? I am not sure. The 2600K will have overwhelmingly better performance, but its used value and the limitations of the chipset and architecture in terms of overclocking mean there isn't that much more performance to drag out of it, so in cases where single-core performance is needed, the i3 would smoke the i7, and lose by a landslide in some other cases.

 

However, I do recognize that if I had bought a 2600K when new, I would be better off than buying a locked i5 every 2-3 years to keep the CPU up to date.

 

In terms of GPUs, it is really hard to argue for one particular way of going about things.

If you go Nvidia, you are most likely going to need a new GPU every 2 years no matter what. If you go AMD, you will probably get 3-4 years out of each GPU, purely based on how these two companies support their products long-term through drivers. That being said, if you aren't an idiot and realize that CrossFire/SLI isn't what it used to be, getting a dual-GPU setup is one way to keep performance levels high.

If I were to rebuild my computer today, I would get an X99-based CPU alongside a single 980 Ti/Fury X, then just wait until 980 Tis/Fury Xs drop in price and grab a second one.


Also you can overclock the i5 if it starts to lag behind. Can't do that with the i3.

15 minutes ago, Prysin said:

The generational gap between Sandy and Skylake, however, is great enough for an i3 to actually compete with a stock i5-2500K.

 

 

Would I get a 2600K over a 6100? I am not sure. The 2600K will have overwhelmingly better performance, but its used value and the limitations of the chipset and architecture in terms of overclocking mean there isn't that much more performance to drag out of it, so in

Yes, a stock 2500K. But who buys a 2500K to run it at stock speeds? Nobody. You plug in 1.35V, set the turbo to 4.5GHz and it'll beat that Skylake i3. My experience using my 2500K in CPU-limited games like WoW raiding was that my frame rate was directly proportional to the clock speed (as tends to be the case in CPU-limited scenarios). A 30% overclock to 4.5GHz gave me approximately 30% more FPS in raids.

 

The IPC improvements from SB through Haswell to Skylake have been what, 25-30%? So that 2500K or 2600K at 4.5 - 4.6 GHz would be about on the money to be equal-ish to the Skylake i3 in single-threaded tasks (assuming they don't make excessive use of the more modern instruction sets).

 

That said, would I buy a 2600K over an i3 6100? No. I'd take the ~320 euro for an LGA 11xx i7, take a packed lunch to work for a couple of weeks and buy a 5820K instead (and overclock that to 4.5GHz). Because that way I've hedged my bets: I have pretty decent IPC in single-threaded workloads, and very decent performance in heavily multi-threaded workloads. Not to mention the upgrade path to that 10c/20t 6950X if multi-core REALLY takes off, as well as the higher-speed DDR4 RAM that will come over the next 3-4 years.
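A back-of-envelope check of that (the IPC figures are rough assumptions based on the 25-30% estimate above, not measurements):

```python
# Effective single-thread speed modeled as IPC x clock, ignoring RAM speed
# and newer instruction sets. IPC is normalized to Sandy Bridge = 1.00; the
# 1.28 Skylake figure is an assumption taken from the ~25-30% estimate above.
sandy_ipc, skylake_ipc = 1.00, 1.28

perf_2500k_oc   = sandy_ipc * 4.5     # i5-2500K overclocked to 4.5 GHz
perf_6100_stock = skylake_ipc * 3.7   # i3-6100 at its stock 3.7 GHz

print(f"2500K @ 4.5 GHz  : {perf_2500k_oc:.2f}")    # ~4.50
print(f"i3-6100 @ 3.7 GHz: {perf_6100_stock:.2f}")  # ~4.74
```

Under this crude model the overclocked Sandy i5 lands within about 5% of the Skylake i3 in single-threaded work, while keeping four physical cores.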



1 minute ago, Fetzie said:

That said, would I buy a 2600K over an i3 6100? No. I'd take the ~320 euro for an LGA 11xx i7, take a packed lunch to work for a couple of weeks and buy a 5820K instead (and overclock that to 4.5GHz).

Pro tip -> buy half-baked baguettes and put them in the oven in the morning. Fill with the topping of your choice, put in a plastic bag alongside a 2L bottle of water, and go to work.

 

I used to buy lunch, and that cost me roughly 350 USD a month; buying baguettes and stuff to fill them with costs like 25-30 bucks a month.


19 hours ago, Prysin said:

1 - I agree and disagree with DF's testing methodology. It shows you, clearly, what the capacities of the CPUs and their respective chipsets can produce. Exactly because you CAN'T use DDR4 with Sandy, these tests will never be truly equal unless someone does a DDR3 Skylake test. Even then, the IMC in Skylake is much more efficient and the cache is faster, so it would still not be a pure architecture comparison.

The thing is though, they were pairing the second-slowest DDR3 with the fastest DDR4. You can get DDR3 and DDR4 at the same speeds, but they didn't even try that. You don't have to be fair down to the cache level (because that's not important for consumers), but if you establish that using faster RAM gives a 15% performance increase, then maybe you shouldn't give that benefit to the new chips and not to the old ones. I could understand it more if they gave both platforms the slowest DDR3 and DDR4 kits they could get, or both platforms the fastest DDR3 and DDR4 they could get, but that's not what they did. They gave the old chip mediocre/budget RAM, and the new chip the best they could get.

 

19 hours ago, Prysin said:

The videos are great CPU performance benchmarks, but they are NOT perfect gaming benchmarks. But that goes for GPU benchmarks done by pretty much every fucking benchmarking site too. Everyone and their mother has a 6700K or 5960X at 4.5GHz that they use to compare 750 Tis... Nobody gets a 300-1000 USD CPU to play games with a 100-200 USD GPU. So going by YOUR logic, every GPU benchmark from EVERY FUCKING BENCHMARKER is worthless, because it is an unrealistic test by default.

No, not all benchmarks are worthless because they use a 1000 dollar CPU.

1) Because most people who buy a half-decent CPU will be GPU-bottlenecked. Changing the CPU will have quite a small impact on performance.

2) A high end CPU and a low end GPU will still give pretty consistent performance. Like DigitalFoundry themselves said, with their setup (which they do not recommend) the games are really stuttery and not pleasant to play on.

3) Most games don't take advantage of the extra cores in those 1000 dollar chips, so they are mostly just sitting there doing nothing.

 

Do you honestly not understand the difference and how one is a far more accurate representation of what consumers buying your average system can expect?

 

 

19 hours ago, Prysin said:

So what do you fucking want? An i3-6100 + R9 380X vs an i5-2500K + GTX 560? It wouldn't be a competition at all.

Are you serious right now?

How about i3 6100 + 380X vs 2500K + 380X?

Seems fair, right? A lot more accurate representation of what a consumer can expect than i3-6100 + Titan X vs 2500K + Titan X.

 

 

19 hours ago, Prysin said:

You have to draw a line in the sand somewhere and decide what you want to test. Your argument against DF's test is as stupid as it gets.

Are you trying to be dishonest? You said that you were talking about gaming, and then you posted sources which you admit do not represent actual gaming. Your sources are not evidence supporting the claims you made, because the DF videos are not about gaming. They are about an unrealistic setup which DF themselves do not recommend for gaming, and even their own videos did not prove that a new i3 outperforms an old i5.

 

 

19 hours ago, Prysin said:

The point of using a Titan X is to show the CPU's potential. Of course it is unrealistic to use a Titan X with an i3, but it is equally unrealistic to use an i7 with an R9 380 in a gaming computer. If you want to test a GPU, you eliminate the CPU bottleneck by using a 5960X. If you want to test a CPU, you eliminate the GPU bottleneck by using one or multiple Titan Xs, because that is the fastest single-GPU card there is at the moment.

No it's not unrealistic to put an i7 and a 380 in a gaming computer. I've seen a ton of setups like that.

 

I completely agree that if you want to test a CPU then you eliminate the GPU bottleneck. However, CPU tests and gaming tests are two completely different things. Just because part A performs X% better in a CPU benchmark doesn't mean it will give you X% higher FPS in a game. You are posting a synthetic CPU benchmark and trying to say those scores will translate accurately to gaming. They don't. I can show you a 22-core Xeon destroying an LGA 1151 i7 in an x264 benchmark, but that does not mean the Xeon will give you 5 times higher FPS in Battlefield. Right?

 

The problem here is that you said you were talking about gaming, but then you didn't post gaming benchmarks. You posted someone taking a game and deliberately setting it up in an unrealistic, not-recommended way to show the biggest difference possible between the chips.

 

 

19 hours ago, Prysin said:

I would argue that an i3-6100 is better than a Sandy i5-2500K for gaming. Yes I would. But why?

Because even used, the 2500K will cost the same or more (most likely more, going by how well Intel K SKUs retain value), all while having an outdated chipset with no real upgrade path, older instructions, being less efficient, and, well, the list keeps going on.

Would I recommend an i3-6100 over an i5-4690K? NO, because the generational improvements from Haswell to Skylake aren't enough to warrant the performance loss. The generational gap between Sandy and Skylake, however, is great enough for an i3 to actually compete with a stock i5-2500K.

Those are very different reasons from what you stated earlier. You said that the i3 6100 will perform noticeably better than the 2500K. That's the part I disagreed with (especially when you take overclocking into consideration). Do you admit that you were wrong in saying that? Or will you argue that someone with an average GPU (again, not talking about someone pairing an i3 with a Titan X here) will see a noticeable jump in performance going from a 2500K to an i3 6100?

 

 

19 hours ago, Prysin said:

However, I do recognize that if I had bought a 2600K when new, I would be better off than buying a locked i5 every 2-3 years to keep the CPU up to date.

But earlier you said that people should buy i7s and not i5s... Make up your mind already.

 

 

Also, I really don't understand why you keep shoehorning in stuff about AMD here. It sounds like half your post is completely unrelated lip service to AMD, and then you throw in some Nvidia bashing for no reason. We were not talking about AMD or Nvidia here. We were strictly talking about Intel and how the modern i3s compare to older i5s and i7s.

