As far as Far Cry 4 is concerned, it should work on dual cores with HT. Proof:

(screenshot: CPU_01.png)

 

So apparently it's either some bug you've encountered, or mistakes were made during testing. Also, I would like to point out again, like many people did before me, that average FPS is not the only statistic for performance. Average FPS does not tell the full story, not even half of the story in fact. Proper testing with FCAT quite easily reveals that, for example, an i7 has a massive advantage over an i5 in terms of frame time delta in a lot of the latest AAA games (but that is not the case with many of the older titles), which means smoother gaming and less stuttering. At the very least, providing full FPS statistics - MIN/MAX/AVERAGE - would be appropriate. 60 FPS average with drops to 20 FPS is not nearly the same as 60 FPS average with drops to 40. Other than that, nice video, quite helpful, especially when I need to link some arguments against the "overclocked G3258 ultimate gaming CPU better than i7" crowd.
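To illustrate what I mean, here is a rough Python sketch of that kind of summary. The input format (one frame time in milliseconds per entry) and the 1% low cut-off are my own assumptions for illustration, not FCAT's exact methodology:

```python
# Rough sketch: summarise a frame-time capture (one frame time in ms per entry).
# The format is assumed; real FCAT/FRAPS exports differ.
def summarise(frame_times_ms):
    times = [ft for ft in frame_times_ms if ft > 0]
    fps = sorted(1000.0 / ft for ft in times)        # per-frame FPS, slowest first
    n = len(fps)
    return {
        "avg_fps": 1000.0 * n / sum(times),
        "min_fps": fps[0],
        "max_fps": fps[-1],
        "1%_low_fps": fps[max(0, n // 100 - 1)],     # rough slowest-1% boundary
    }

if __name__ == "__main__":
    smooth   = [16.7] * 99 + [25.0]   # ~60 FPS average, worst drop to ~40 FPS
    stuttery = [16.7] * 99 + [50.0]   # ~60 FPS average, worst drop to 20 FPS
    print(summarise(smooth))
    print(summarise(stuttery))
```

Both of those runs report roughly the same average, but the minimum and 1% low numbers show which one actually stutters, which is exactly why the average alone is not enough.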


*sigh* It seems I'll be getting an i7 4790K. Well, that's one less SSD for my rig.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Proper testing with FCAT quite easily reveals that, for example, an i7 has a massive advantage over an i5 in terms of frame time delta, which means smoother gaming and less stuttering.

Never have I seen any proof of that on techreport, pcper, or guru3d. But maybe I've missed something and you can point me in the direction of some test that proves your statement?

Never have I seen any proof of that on techreport, pcper, or guru3d. But maybe I've missed something and you can point me in the direction of some test that proves your statement?

By all means. http://www.eurogamer.net/articles/digitalfoundry-2014-core-i5-4690k-core-i7-4790k-review

 

And the video:

 

 

Notice how much less consistent the i5 graphs are, with far more spikes into the 10+ ms range and a very jagged line overall. The i7 also dips occasionally, but not nearly as often, and for the most part its graph is relatively smooth.
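If it helps, here is a quick Python sketch of the kind of check I'm describing. The 16.7 ms budget and the 10 ms delta threshold are arbitrary assumptions for illustration, not Digital Foundry's methodology:

```python
def spike_report(frame_times_ms, budget_ms=16.7, delta_ms=10.0):
    """Sketch: count frames over a 60 FPS budget and big frame-to-frame swings."""
    over_budget = sum(1 for ft in frame_times_ms if ft > budget_ms)
    big_deltas = sum(
        1 for prev, cur in zip(frame_times_ms, frame_times_ms[1:])
        if abs(cur - prev) > delta_ms
    )
    n = len(frame_times_ms)
    return {
        "frames": n,
        "pct_over_budget": 100.0 * over_budget / n,
        "pct_big_deltas": 100.0 * big_deltas / max(1, n - 1),
    }
```

Run that over both captures and the i5's percentages should come out noticeably worse, even when its average FPS is close to the i7's.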

 

EDIT: Worth noting - this is of course the benchmark scenario, with reduced resolution to make sure the GPU is not the bottleneck here. In a real-life situation the difference is less pronounced. However, it's there, and you can expect it to matter really soon. Heck, there is already a difference today, but it only presents itself in more "CPU heavy" games, like Arma. Arma's inability to run well on anything that is not an i7, or at the very least a heavily overclocked i5, is well known. A lot of that is due to optimisation, but the fact that the game itself is a very complex open-world simulation plays a huge part as well.

 

EDIT 2: Also worth noting - in a situation where a very high-performance multi-GPU solution is used, like GTX 980 SLI or even Titan SLI - in other words, where the GPU is less of a bottleneck even at 1080p/4K and high graphical settings - the difference is also there.


Awesome! 

Current system - ThinkPad Yoga 460

ExSystems


Laptop - ASUS FX503VD

|| Case: NZXT H440 ❤️|| MB: Gigabyte GA-Z170XP-SLI || CPU: Skylake Chip || Graphics card : GTX 970 Strix || RAM: Crucial Ballistix 16GB || Storage:1TB WD+500GB WD + 120Gb HyperX savage|| Monitor: Dell U2412M+LG 24MP55HQ+Philips TV ||  PSU CX600M || 

 


Every video with Luke, it's like he's constantly catching invisible volleyballs. I can't help it.  :mellow:


Let me tell you, there had to be another approach.
Here is a test with my dual Xeon system, and as you can see, almost all cores are in use. How much of each core actually gets used is the real problem - developers need to address this issue better. I just want to see if there are different results with DX12.

 
https://www.youtube.com/watch?v=JhM_-QWgKyY
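If anyone wants to do this kind of per-core check themselves, here is a minimal Python sketch using the third-party psutil package (just a quick-and-dirty way to log it, not what the video used); the 25% "busy" threshold is an arbitrary assumption:

```python
import time
import psutil  # third-party: pip install psutil

def log_per_core(duration_s=30, interval_s=1.0, busy_threshold=25.0):
    """Print per-core CPU usage while a game runs, to see how many cores are really working."""
    end = time.time() + duration_s
    while time.time() < end:
        per_core = psutil.cpu_percent(interval=interval_s, percpu=True)
        busy = sum(1 for p in per_core if p > busy_threshold)
        print(f"{busy}/{len(per_core)} cores above {busy_threshold:.0f}% | "
              + " ".join(f"{p:5.1f}" for p in per_core))

if __name__ == "__main__":
    log_per_core()
```

Leave it running while the game is in a busy scene and you can see not just how many cores light up, but how far from fully loaded they actually are.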


I wish you had done this video benchmarking games that use the CPU more than the GPU - Arma 3, for example.

CPU: i7 4770K 4.2Ghz, Mobo: Asrock Fatal1ty Z97x Killer, GFX: EVGA GTX 780 Ti SC ACX (x2 SLI), RAM: G.Skill 1600Mhz CAS 9 16GB, DSK Intel 530 120GB OS, Crucial M500 120GB, WD 1TB Blue, WD 1TB Green, PSU: Corsair AX1200i, Case: Obsidian 750D. 

SERVER HP ProLiant Microserver N54L, FreeNAS: ZFS, 8TB (4x 2TB WD Red), RAID Z2, 16GB ECC RAM, 1Gb/s Link Aggregated:  Running as NAS, Plex, & ownCloud


I wish you had done this video benchmarking games that use the CPU more than the GPU - Arma 3, for example.

 

I don't have Arma 3 =( 


I wonder why the i3 3220 is faster than the FX-4320. Maybe it has faster per-core speed.


Odd that Luke thought the i3 dual core with HT wouldn't launch FC4 - maybe a slight oversight, expecting that since the G3258 didn't work, it wouldn't work on any dual core?


Interesting results, but I wonder how the "quad core" AMD CPUs stack up in all of this, considering they'd only have two (slower than Intel) FPUs to work with. Does the dual core Pentium G3258 have the advantage in the budget arena, or would the AMD cores manage to outperform it with better multithreaded integer performance? In terms of raw compute performance, the G3258 should in theory have the advantage, but it would be interesting to see a real-world comparison. Budget-minded builders are looking at a big jump in price from the dual core Intel offerings to the quad cores, so with this video, the AMD quads sound much more attractive from that perspective.

 

 

I wonder why the i3 3220 is faster than the FX-4320. Maybe it has faster per-core speed.

 

 

It's faster per thread, and also because the FX-4320 (and all Bulldozer/Piledriver/Steamroller-derived designs) advertises its core count as the number of integer units it has; for every two integer units there is one floating point unit, an arrangement called a module. The FX-4320 therefore has two FPUs (two modules), and floating point performance, I find, matters a lot in gaming. Anandtech has an article about Kaveri APUs where they found that a single Haswell core tends to outperform two AMD modules (four cores) in terms of floating point performance. The i3 3220 is Ivy Bridge rather than Haswell, but Haswell's per-core gain over Ivy Bridge is well under 100%, so it stands to reason the i3 3220 would still outperform the FX-4320.


Yeah, it runs it fine.

I got a BIG FPS boost with the i5 on that game .

Should have done benchmarks :(

It was the only game so far where I noticed a difference between the i3 and i5.

 

The only reason you should consider getting an i5 over an i3 paired with that/those cards is so you don't have to upgrade later.

I'm actually thinking about getting a Xeon 1231v3 for when I start doing more 3D renders and hardware emulation projects :D 

So listen to zap @tsk lol :P


I'm rocking a dual-core HT CPU. I find it enough, even though I'm a tab whore, but it only just barely handles it (and maybe RAM speeds play into it too).
I consider quad core a necessity nowadays. The average Joe will have a long-lasting PC with it.

But I'm getting into 3D modelling, and I tend to stream and game, so I think I could make use of 6 cores. I think 6 is THE sweet spot. Maybe a 6-core with HT? Anyone?


I saw it on Vessel. With people trying to shoot holes in everything said in the video, and people suggesting games be benchmarked a certain way or that other games be used, I don't even know what to think of the video anymore. All I know is I am happy with what I bought for my PC and my son's PC, and it works fine for both of us. They can keep bringing out CPUs with more cores and higher-resolution monitors, but I won't be moving from 1080p for at least three years, so I don't have to think about all this crap being thrown out there. I am thankful for that.

Too many ****ing games!  Back log 4 life! :S


The only reason you should consider getting an i5 over an i3 paired with that/those cards is so you don't have to upgrade later.

I'm actually thinking about getting a Xeon 1231v3 for when I start doing more 3D renders and hardware emulation projects :D

So listen to zap @tsk lol :P

 

I got it to upgrade to a GTX 970 or equivalent later.

Yes, listen to me, I know what I'm talking about :D


I got it to upgrade to a GTX 970 or equivalent later.

Yes, listen to me, I know what I'm talking about :D

 

Good idea baha, I'm curious to see how DX12 will change how the GPU and CPU talk to each other, and what we'll need from a CPU.

Probably wait till then to upgrade from my cute little G3258

It's as if people don't see post counts sometimes, isn't it  :P


In Cities: Skylines, FPS alone isn't enough to measure. CPU usage goes up if you increase the game speed, and in-game time will run slower if your CPU can't keep up.
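To show why that happens, here is a toy Python sketch of a fixed-timestep loop (not Cities: Skylines' actual code; the tick cost, tick length, and frame budget are made-up numbers): the speed multiplier just asks for more simulation ticks per frame, and once they no longer fit the frame budget, in-game time advances slower than the multiplier requested.

```python
import time

FRAME_BUDGET = 1.0 / 30.0   # real seconds available per frame at 30 FPS (assumed)
TICK_COST = 0.02            # pretend CPU cost of one simulation tick, in seconds (assumed)
TICK_DT = 1.0 / 30.0        # in-game seconds advanced by one tick (assumed)

def run(game_speed, frames=150):
    """Toy fixed-timestep loop: the speed multiplier just asks for more ticks per frame."""
    sim_time = 0.0
    start = time.time()
    for _ in range(frames):
        frame_start = time.time()
        for _ in range(game_speed):      # higher game speed = more CPU work per frame
            time.sleep(TICK_COST)        # stand-in for the real per-tick simulation cost
            sim_time += TICK_DT
        leftover = FRAME_BUDGET - (time.time() - frame_start)
        if leftover > 0:
            time.sleep(leftover)         # CPU finished early: idle for the rest of the frame
    wall = time.time() - start
    print(f"speed x{game_speed}: {sim_time:5.1f} in-game s in {wall:5.1f} real s "
          f"(effective x{sim_time / wall:.2f})")

if __name__ == "__main__":
    run(1)   # 0.02 s of work fits the 0.033 s budget, so in-game time tracks real time
    run(3)   # 0.06 s of work blows the budget, so in-game time runs slower than the x3 asked for
```

So the FPS counter can sit comfortably while the simulation itself is CPU-bound, which is why the in-game clock is the better benchmark for that game.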


Good idea baha, I'm curious to see how DX12 will change how the GPU and CPU talk to each other, and what we'll need from a CPU.

Probably wait till then to upgrade from my cute little G3258

It's as if people don't see post counts sometimes, isn't it  :P

 

It seems to me like DX12 will probably impact performance most on the low-end CPU SKUs in particular, mainly thanks to draw call bundles. It should give longer legs to APUs and value-oriented Intel chips like the G3258, which I think is something AMD is banking on (I think Mantle was motivated primarily by the performance disparity versus Intel, to add value to the APU lineup).

 

At least, until developers begin to take advantage of the extra free CPU horsepower it provides; but if I've learned anything about the adoption of new technologies over the years, by the time that actually becomes a consideration, your G3258 will be retiring if it isn't already.


Which graphics card did they use?

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs


It seems to me like DX12 will probably impact performance most on the low-end CPU SKUs in particular, mainly thanks to draw call bundles. It should give longer legs to APUs and value-oriented Intel chips like the G3258, which I think is something AMD is banking on (I think Mantle was motivated primarily by the performance disparity versus Intel, to add value to the APU lineup).

 

At least, until developers begin to take advantage of the extra free CPU horsepower it provides; but if I've learned anything about the adoption of new technologies over the years, by the time that actually becomes a consideration, your G3258 will be retiring if it isn't already.

 

Low-end and high-end CPUs' draw calls will see massive boosts depending on how many cores you have (and don't have :P ), if the rumors about DX12 are correct.

The thing with the G3258 is that everything is moving towards multicore optimization (which is a very, very good thing), which on its own knocks it down a few levels.

Once everything is running on quad cores and upward, it will make the G3258 irrelevant :P 

Great little chip, but not gonna stick around, especially after DX12 hopefully hehe :P


I am surprised there was not a test with the glitch monster: Assassin's Creed Unity!

The other thing is... are all the cores being used? There was no data shown for that, just FPS.
I run multiple screens but only game on one and use the other(s) to keep my hardware monitors ticking away.

So, starting with AC Unity as an example, it used all 8 cores on mine at about 50% tops (so not even 2 GHz per core), but other (frankly better) games only use 2-3 or 4 cores, with the others sitting idle unless I am recording or have some other program working in the background.

Another problem: what were the chips running the cores at? For the aforementioned to be truly credible, it needs a cap on the GHz each core puts out, say 4 GHz.

This whole report was lacking for me. It needed a lot more taken into consideration.

 

It's not my fault I am grumpy; you try having a porcelain todger that's always hard! 


Yes please, continue doing tests like this in the future. Games have now finally discovered they can use more than 1 or 2 cores, as multiplatform engines move out of the 7th gen.

 

And as was already written here, DX12 and Mantle/Vulkan can change this space a bit. 


Low-end and high-end CPUs' draw calls will see massive boosts depending on how many cores you have (and don't have :P ), if the rumors about DX12 are correct.

The thing with the G3258 is that everything is moving towards multicore optimization (which is a very, very good thing), which on its own knocks it down a few levels.

Once everything is running on quad cores and upward, it will make the G3258 irrelevant :P

Great little chip, but not gonna stick around, especially after DX12 hopefully hehe :P

 

It's my understanding that DX12 isn't just using more cores to initiate draw calls, but is also using draw call bundles to reduce their number significantly, which is the main performance benefit of DX12 (and the reason why DX12 isn't going to significantly improve performance on the Xbox One: it already implements draw call bundles in DX11.X). To me that means lower-end CPUs that would normally choke on a high draw call load are going to see the greatest benefit from DX12 adoption.
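As a rough way to picture why that helps weaker CPUs the most, here is a toy Python cost model (the per-submission overhead, call count, and bundle sizes are invented for illustration, not measured DX11/DX12 numbers):

```python
def submit_cost_ms(draw_calls, per_submission_us, bundle_size=1):
    """Toy model: each API submission carries a fixed CPU overhead, so replaying
    pre-recorded bundles instead of individual calls shrinks the per-frame total."""
    submissions = -(-draw_calls // bundle_size)      # ceil division
    return submissions * per_submission_us / 1000.0

if __name__ == "__main__":
    calls = 10_000
    frame_budget_ms = 1000.0 / 60.0                  # time available per frame at 60 FPS
    for size in (1, 50, 500):                        # bundle size 1 = no bundling
        ms = submit_cost_ms(calls, per_submission_us=5, bundle_size=size)
        print(f"bundle size {size:>3}: {ms:6.2f} ms of submission CPU "
              f"({100.0 * ms / frame_budget_ms:5.1f}% of a 60 FPS frame)")
```

A slow CPU that blows its whole frame budget on submissions at bundle size 1 suddenly has headroom once the calls are grouped, which is why the low-end chips stand to gain the most.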

 

Anandtech did an article about DX12 performance scaling vs. number of cores, and the numbers seem to agree with me, at least for Star Swarm. If you compare those numbers to a later article detailing performance scaling across lower-end CPUs, you can see that under DX12 the low-end CPUs jump to essentially the same performance point as the first article's emulated Core i5 4670K (labelled 4 cores), assuming the GTX 680 roughly equates to a GTX 770. While a straight-up dual core like the G3258 isn't going to match a 4670K, the i3 definitely appears to do so quite handily, and if the first article is any indication, even a dual core keeps up pretty well with a massive performance benefit, though 4 threads is still the sweet spot.


We don't have to wonder that much what DX12 will do; Mantle has already shown us, and the bottom line is that the type of CPU you have will matter less in gaming.

(benchmark charts: thief-d3d.png, thief-mantle.png)

