
Why didn't Intel enable Hyper-Threading? They would have been pretty competitive with AMD

9 minutes ago, Underi said:

I don't think the modern i5 processors need hyper-threading.

Pretty sure there are videos (GN, HUB, etc.) showing that the lack of HT makes the i5 worse for gaming than similar Ryzen and Intel CPUs with HT, due to bad 0.1%/1% lows in some modern games, even when it reaches higher average FPS.


1 hour ago, Mira Yurizaki said:

Many apps that are running are likely using more than a dozen threads. Not every thread is running at the same time. If you have an application that's compute bound, then sure, you should limit the number of worker threads to the number of threads available in the system.

Yep.  And what systems are games designed to run on? Consoles.  How many thread spaces do current consoles have for games? 6.  How many will they have when PS5/Scarlett come out? At least 12.  Possibly more.



2 minutes ago, Bombastinator said:

Yep.  And what systems are games designed to run on? Consoles.  How many thread spaces do current consoles have for games? 6.  How many will they have when PS5/Scarlett come out? At least 12.  Possibly more.

I don't see how this relates to an app having more threads than the number of threads a CPU can run being a problem.  If an app has more threads ready to run than the CPU can process at once, at worst all that means is the CPU can't keep up with the processing demands of the application. There's no additional overhead for having more threads available to run than the number of threads the CPU can run at once. Threads either run for their time slice or they yield and stop early. If threads are constantly going on the CPU and yielding shortly after, that  might be a problem, but that's more of an issue with how the software was designed/implemented.
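A minimal C++ sketch of that behavior (a toy, not a benchmark; the only real API used is std::thread::hardware_concurrency()): deliberately launch four times as many compute-bound workers as the CPU has hardware threads, and everything still completes; the scheduler just time-slices them.

```cpp
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    // Number of threads the CPU can actually run at once (includes SMT).
    // May report 0 if unknown, so fall back to 4 for the sketch.
    unsigned hw = std::thread::hardware_concurrency();
    if (hw == 0) hw = 4;
    std::cout << "Hardware threads: " << hw << "\n";

    // Deliberately oversubscribe: 4x more compute-bound workers than the
    // CPU can run simultaneously. Nothing breaks; work just queues up.
    std::atomic<long> total{0};
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < hw * 4; ++i)
        workers.emplace_back([&total] {
            long local = 0;
            for (int j = 0; j < 10'000'000; ++j) local += j % 7;
            total.fetch_add(local, std::memory_order_relaxed);
        });
    for (auto& t : workers) t.join();
    std::cout << "All " << hw * 4 << " workers finished, total = "
              << total.load() << "\n";
}
```

Sizing a worker pool to hardware_concurrency(), as suggested above, mainly avoids context-switch overhead for compute-bound work; it isn't a correctness requirement.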


10 minutes ago, Mira Yurizaki said:

I don't see how this relates to an app having more threads than the number of threads a CPU can run being a problem.  If an app has more threads ready to run than the CPU can process at once, at worst all that means is the CPU can't keep up with the processing demands of the application. There's no additional overhead for having more threads available to run than the number of threads the CPU can run at once. Threads either run for their time slice or they yield and stop early. If threads are constantly going on the CPU and yielding shortly after, that  might be a problem, but that's more of an issue with how the software was designed/implemented.

I’m not sure I agree about the no overhead for waiting threads thing, but let’s assume it’s true.
 

Game performance is measured in FPS. Waiting threads drop FPS. Also, bad implementation is something of a hallmark of game design. This could get into the old multithreading vs. multitasking argument. I consider the argument moot because, regardless of what advantages or disadvantages multitasking might have, for AMD64 and games it basically no longer exists, for better or worse. Every thread has a whole OS stuffed into it now.



No HT/SMT in 2019 is a no-go.

 

Intel segmented HT, which was historically an i7 premium feature, into a whole new tier, the i9.

 

That's bullshit.



52 minutes ago, Bombastinator said:

Game performance is measured in FPS. Waiting threads drop FPS. Also, bad implementation is something of a hallmark of game design.

The flaw with this is not accounting for processor performance or what the thread is really doing. Consider a quad core, 2.0GHz processor and a single-core 8.0 GHz processor. Same specifications otherwise. And there's a 4-thread application that runs on them. Unless the threads all have the exact same amount of work, the single-core processor is going to have higher performance overall minus the context switching overhead.
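The arithmetic behind this hypothetical can be sanity-checked with a toy makespan model (hypothetical work units and clock rates, context-switch cost ignored): with perfectly balanced threads the two machines tie, and with uneven threads the single fast core wins.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Toy model: wall-clock time to finish a set of thread workloads (in
// arbitrary "work units") on N cores, each retiring 'ghz' units per second.
// Ignores context-switch and migration costs. Hypothetical numbers only.
double wallTime(std::vector<double> work, int cores, double ghz) {
    std::sort(work.rbegin(), work.rend());    // longest job first
    std::vector<double> perCore(cores, 0.0);  // work assigned to each core
    for (double w : work)                     // greedy load balancing,
        *std::min_element(perCore.begin(), perCore.end()) += w;  // like an OS would
    return *std::max_element(perCore.begin(), perCore.end()) / ghz;
}

int main() {
    std::vector<double> even{4, 4, 4, 4};     // perfectly balanced threads
    std::vector<double> uneven{10, 2, 2, 2};  // one heavy thread
    std::cout << "even:   4x2GHz = " << wallTime(even, 4, 2.0)
              << "s, 1x8GHz = " << wallTime(even, 1, 8.0) << "s\n";    // 2s vs 2s: tie
    std::cout << "uneven: 4x2GHz = " << wallTime(uneven, 4, 2.0)
              << "s, 1x8GHz = " << wallTime(uneven, 1, 8.0) << "s\n";  // 5s vs 2s: fast core wins
}
```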

 

Do you ever wonder why despite the PS4 and XB1 having 8-core processors, 8-cores has never been a requirement for games? It's because those 8-core processors have much lower performance compared to say a Core i5 of the time. The sheer per-core performance of a desktop class processor was more than enough to make up for the lack of threads they could run at the same time. And do you wonder why Ashes of the Singularity, a game that is known for scaling well with multiple cores, doesn't have a console port? It's because Ashes actually requires that level of performance that the consoles can't deliver.

 

This is a general question, and I hate to ask it, but do people even know exactly how SMT works?


1 hour ago, Mira Yurizaki said:

The flaw with this is not accounting for processor performance or what the thread is really doing. Consider a quad core, 2.0GHz processor and a single-core 8.0 GHz processor. Same specifications otherwise. And there's a 4-thread application that runs on them. Unless the threads all have the exact same amount of work, the single-core processor is going to have higher performance overall minus the context switching overhead.

 

Do you ever wonder why despite the PS4 and XB1 having 8-core processors, 8-cores has never been a requirement for games? It's because those 8-core processors have much lower performance compared to say a Core i5 of the time. The sheer per-core performance of a desktop class processor was more than enough to make up for the lack of threads they could run at the same time. And do you wonder why Ashes of the Singularity, a game that is known for scaling well with multiple cores, doesn't have a console port? It's because Ashes actually requires that level of performance that the consoles can't deliver.

 

This is a general question, and I hate to ask it, but do people even know exactly how SMT works?

An 8 GHz processor? There aren't any. Silicon has a theoretical hard limit of around 5 GHz.

 
Jaguar reserves 2 cores for the OS.  There are only 6 cores to write for.  They’re slow too.  


GHz does help. A lot. One of the things I wonder about with the upcoming consoles is that the SMT is being done on a really slow chip: 2.1 GHz. They're going to cut that up, but there's not that much there to start with.

 

Let me provide a couple of counterexamples. Ones that exist.


1. The Core 2 Duo vs. the single-core Pentium. A 2.1 GHz Core 2 Duo could sometimes wipe the floor with a 5 GHz single-core Pentium even though it had only 4.2 GHz total. Not all the time, though. The Pentium could beat it if it was running a single-thread game. But only a single-thread game. Most of the games of the era were single-threaded, which made a 5 GHz Pentium a fairly rocking gaming chip. GHz does beat cores IF there is space for the threads.

 

 

2. My current i7-4770K. It's 4/8. There are a lot of i5-4570Ks out there. My i7 is a lead chip, not a golden one. It will only OC to 4.0 GHz, making it more or less a 4790. So a 4/4 chip should go like a 4/8 chip with similar total GHz in games. It should even be faster, because hyperthreading eats cycles too and it isn't very efficient. Except it doesn't. At least not all the time. Sometimes it does. Run one of those single-thread games and it will keep right up. Maybe even beat it a little.

It's an old chip, though. Games have moved on and grown more and more threads. I ran into a guy here the other day who had a pretty similar CPU to mine, except his was 4/4. He even had a pretty similar GPU (I forget which; I've got a GTX 970). Anyway, his GPU was bound by his CPU, pretty badly, whereas my CPU is bound by my GPU. Also pretty badly. They're the same chip, though. The only difference is my hyperthreading is turned on and his isn't. At one time his chip was faster. Just not anymore.

 

This isn't even about the present so much as the projected future:

Right now, 8-core CPUs like the 9700K kick butt. They've got 8 cores at 5 GHz, as fast as 8 cores will ever be. As long as those 8 cores don't get filled up, it's fine. It can even take a few more than 8 threads, because there will be some extra space in those 5 GHz cores. Slower 8-cores like the 3700X can keep up, and even beat it in some scenarios, though: scenarios where the 3700X's 16 thread spaces wind up being more useful.
 

I think the 9700K does well right now because games are currently being written for 6-thread CPUs, leaving it 2 left over for things like the OS and housekeeping. What is going to happen, though, when that dog-slow 2.1 GHz SMT 8-core the PS5/Scarlett is going to wield shows up? It's basically the same chip as the old Jaguar, just with SMT available. So 6 thread spaces for games turn into 12 (or maybe 14?) on more or less the same CPU. It depends on what the game programmers do. They might abandon the SMT and write for a 6-core chip (it would arguably be smarter). They also might not, though. If they don't, machines with fewer cores than the number they write for could start to have big problems. The 4-thread-space 4xxx chips did.

I personally think what they should probably do is take one thread and spend it on a custom multitasking OS (unless the Sony or Microsoft OS of the console in question already does multitasking well; I don't know), and write a multitasking game to completely fill the rest of the processor cycles of the CPU, saving themselves two years' worth of thread-crash-hunting headaches, which multitasking simply doesn't have a problem with.

I don't think they will, though. They don't know how to write for multitasking. All they can do is thread. So threads we will get.



37 minutes ago, Bombastinator said:

An 8 GHz processor? There aren't any. Silicon has a theoretical hard limit of around 5 GHz.

Did I have to provide a "this is a hypothetical" disclaimer? Because you know, it was hypothetical.

 

Quote

1. The Core 2 Duo vs. the single-core Pentium. A 2.1 GHz Core 2 Duo could sometimes wipe the floor with a 5 GHz single-core Pentium even though it had only 4.2 GHz total. Not all the time, though. The Pentium could beat it if it was running a single-thread game. But only a single-thread game. Most of the games of the era were single-threaded, which made a 5 GHz Pentium a fairly rocking gaming chip. GHz does beat cores IF there is space for the threads.

Then how come a 1.86 GHz Core 2 Duo achieves better performance than a 3.73 GHz Pentium XE 965?

 

[benchmark charts: 1.86 GHz Core 2 Duo vs. 3.73 GHz Pentium XE 965]

 

EDIT: One more game, because this one was definitely developed at a time when single-core systems were basically the only systems:

[benchmark chart: Unreal]

 

That's pretty bad.

 

Quote

2. My current i7 4770k.  It’s 4/8.  There are a lot of i5 4570ks out there.  My i7 is a lead chip not a golden one.  It will only oc to 4.0, making it more or less a 4790.  So a 4/4 chip should go like a 4/8 chip with similar total ghz in games.  It should be faster even because hyperthreading eats cycles too and it isn’t very efficient.  Except it doesn’t.  At least not all the time.  Sometimes it does.  Run one of those single thread games and it will keep right up.  Maybe even beat it a little.

Technically speaking, a single-thread game doesn't exist. At least not in Windows. A game that runs all of its tasks synchronously, sure, but not a single thread. In any case, I don't see what this has to do with anything.
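This is easy to check on Windows with the Toolhelp snapshot API (a real Win32 API; the sketch below is just an illustration): count the threads belonging to the current process, and even a trivial program will often report more than one once the loader and runtime are involved.

```cpp
#include <windows.h>
#include <tlhelp32.h>
#include <iostream>

int main() {
    DWORD pid = GetCurrentProcessId();
    // Snapshot of every thread on the system; we filter to our own process.
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE) return 1;

    THREADENTRY32 te{};
    te.dwSize = sizeof(te);
    int count = 0;
    if (Thread32First(snap, &te)) {
        do {
            if (te.th32OwnerProcessID == pid) ++count;
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);
    std::cout << "Threads in this supposedly single-threaded process: "
              << count << "\n";
}
```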

Quote

It's an old chip, though. Games have moved on and grown more and more threads. I ran into a guy here the other day who had a pretty similar CPU to mine, except his was 4/4. He even had a pretty similar GPU (I forget which; I've got a GTX 970). Anyway, his GPU was bound by his CPU, pretty badly, whereas my CPU is bound by my GPU. Also pretty badly. They're the same chip, though. The only difference is my hyperthreading is turned on and his isn't. At one time his chip was faster. Just not anymore.

Having more threads doesn't mean anything; it depends on how the software was designed and implemented. Modern games may have as many threads as earlier ones, but because modern ones have better software design backing them, things are simply more efficient. If anything, what's really been the bottleneck for PC gaming is getting graphics up to speed. DX11 and OpenGL are primarily single-task-based APIs: there's only one GPU task-list generator, and that's where most of the graphics bottleneck is in a lot of games. This is practically the whole point of DX12 and Vulkan. But even then, switching to them simply doesn't give you better performance if the bottleneck wasn't the graphics subsystem to begin with.
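The DX12/Vulkan idea can be sketched in plain C++ with strings standing in for GPU commands (all names here are made up for illustration): each thread records its own command list with no shared state, and only the final submission is serial.

```cpp
#include <iostream>
#include <string>
#include <thread>
#include <vector>

int main() {
    const int workers = 4;
    // One command list per worker; no locking needed because each thread
    // writes only to its own list (the DX12/Vulkan recording model).
    std::vector<std::vector<std::string>> commandLists(workers);

    std::vector<std::thread> pool;
    for (int w = 0; w < workers; ++w)
        pool.emplace_back([w, &commandLists] {
            for (int i = 0; i < 3; ++i)
                commandLists[w].push_back("draw(object " +
                                          std::to_string(w * 3 + i) + ")");
        });
    for (auto& t : pool) t.join();

    // Serial submission: the only single-threaded step left, and it's cheap.
    for (const auto& list : commandLists)
        for (const auto& cmd : list)
            std::cout << "submit: " << cmd << "\n";
}
```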

 

Quote

I think the 9700k does well right now because right now games are being written for 6 thread CPUs, leaving it 2 left over for things like OS and housekeeping.  What is going to happen though when that dog slow 2.1ghz smt 8 core the PS5/Scarlett is going to wield shows up though?  It’s basically the same chip as the old jaguar, just with smt available.  So 6 thread spaces for games turn into 12 (or maybe 14?) on more or less the same cpu.  Depends on what the game programmers do.  They might abandon the smt and write for a 6 core chip.  (It would arguably be smarter) They also might not though.  If they don’t, machines with less than the number of cores they write for could start to have big problems.  The 4 thread space 4xxx chips did.

If SMT is not going to be a thing for the next-generation consoles, it's because SMT ruins deterministic behavior. Games are soft real-time applications, and the basic requirement for a stable real-time application is having the system be as deterministic as possible. When you're on a platform where performance can be limited and you need to pull out every trick in the book, knowing how the system behaves within a high degree of certainty is a huge benefit.

 

Quote

i personally think what they should probably do is take one thread and waste it on writing a custom multitasking OS (unless the Sony or Microsoft OS of the console in question already does multitasking well. I don’t know) and write a multitasking game to completely fill the rest of the processor cycles of the CPU and save themselves two years worth of hunting thread crash headaches which multitasking simply doesn’t have a problem with.

That's... what they're doing now. That's what they've been doing. At least Sony has since the PS3.

 

If there's any reason why PC games don't seem to utilize the CPU much, it's because the tasks are too easy for the CPU, and Windows isn't going to fire up more cores if one or two can handle it. It's the same reason why, if you play an old 3D game on a modern GPU, the GPU isn't going to crank everything up to max speed: there's no point in generating a million FPS* other than for craps and laughs.

 

*exaggeration

 

Quote

I don’t think they will though.  They don’t know how to write for multitask.  All they can do is thread.  So threads we will get.

Okay, do you? Maybe you should do it if you know better than people who've been in the industry for decades.


19 minutes ago, Mira Yurizaki said:


Re: the dual core/Pentium thing.
Actually, that would be you agreeing with me, not the reverse. In your example the dual-core and the Pentium have almost exactly the same total GHz. That's not refutation, that's support.

 

Re: a single-thread game doesn't exist, at least not in Windows.

The vague one there is "Windows." Which Windows? 95? Or Win10? They're totally different.

Threading was invented to get around Microsoft refusing to write a decent multitasker for their OS for years and years. They finally did do it, though.

 

Re: SMT

No opinion. We're talking about different things.

 

Re: that's what they're doing now.

That's possible. Games are still written thread-style anyway, though. At least so I am told.

 

Re: why don't you write games instead, then?

Not an argument. It's just a standard appeal-to-authority fallacy.

Why don't I write games? I suck at programming. That doesn't change anything, though.

 

You want to prove me wrong on the threading vs. multitasking thing? It could happen. I could be wrong. For me this is all armchair theory, and that stuff is known to have problems. What's needed is an example written in a good multitasker, for multitasking, actually written well, Pepsi-challenged against a similarly well-thought-out threaded app; one that doesn't cheat in favor of threading by doing crap multitasking. There's been one of those already.

 

Your best argument so far was the deterministic-behavior thing. That could have merit. Maybe enough merit, maybe not. I still think the speed of the CPUs might do it alone; they might not be able to run a thread fast enough in the space allotted. Maybe it won't. SMT is better than hyperthreading at that; hyperthreading doesn't do the "S" part well. Time will tell.

 



4 hours ago, Mira Yurizaki said:

I'm going to need a source that says Intel is intentionally crippling i7s or whatever to turn them into i5s.

 

How about Intel itself?

https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html

The difference between an i5 and an i7 is literally a bit switch. The i5 (old ones anyway; it may have changed) actually has (had?) all the hyperthreading hardware in it already. It was just a question of whether the hyperthreading was turned on or off. You can still turn off hyperthreading in a Z97 BIOS and effectively turn an i7 into an i5. I could actually try that and run some tests. It's very likely already been done, though.
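For anyone who wants to verify what that BIOS toggle actually did, here's a quick sketch, assuming Linux and a /proc/cpuinfo that reports physical id / core id lines: compare the logical CPU count with the number of unique physical cores.

```cpp
#include <fstream>
#include <iostream>
#include <set>
#include <sstream>
#include <string>
#include <thread>
#include <utility>

int main() {
    std::ifstream cpuinfo("/proc/cpuinfo");
    std::string line;
    int physId = -1;
    std::set<std::pair<int, int>> cores;  // unique (package, core) pairs
    while (std::getline(cpuinfo, line)) {
        std::istringstream value(line.substr(line.find(':') + 1));
        if (line.rfind("physical id", 0) == 0) {
            value >> physId;
        } else if (line.rfind("core id", 0) == 0) {
            int coreId = -1;
            value >> coreId;
            cores.insert({physId, coreId});
        }
    }
    unsigned logical = std::thread::hardware_concurrency();
    std::cout << "Logical CPUs:   " << logical << "\n"
              << "Physical cores: " << cores.size() << "\n";
    if (cores.empty())
        std::cout << "Couldn't read core topology on this system.\n";
    else
        std::cout << (logical > cores.size() ? "HT/SMT appears ON\n"
                                             : "HT/SMT appears OFF\n");
}
```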



16 minutes ago, Bombastinator said:

How about intel itself?

https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html

The difference between an i5 and an i7 is literally a bit switch. The i5 (old ones anyway; it may have changed) actually has (had?) all the hyperthreading hardware in it already. It was just a question of whether the hyperthreading was turned on or off. You can still turn off hyperthreading in a Z97 BIOS and effectively turn an i7 into an i5. I could actually try that and run some tests. It's very likely already been done, though.

It has already been done. HUGE performance hit in some applications.

 

i7-7700K HT on

145 avg, 107 1% low, 101 0.1% low

 

vs

 

i7-7700K HT off

122 avg, 71 1% low, 24 0.1% low

 

[CPU benchmark chart]



43 minutes ago, Plutosaurus said:


That's the 7xxx, though. There may have been a change between 4xxx and 7xxx. The 4790K is on there, but not with HT off. It would be interesting to compare it to a 4590 running at the same MHz. There is a 4690K, but it's got a different base clock, and different cache; it might be a better comparison. Exactly what is under the hood of these things is something folks have been playing games with for a long time.



1 minute ago, Bombastinator said:

That's the 7xxx, though. There may have been a change between 4xxx and 7xxx. The 4790K is on there, but not with HT off. It would be interesting to compare it to a 4590 running at the same MHz. There is a 4690K, but it's got a different base clock, and different cache; it might be a better comparison. Exactly what is under the hood of these things is something folks have been playing games with for a long time.

The point is the technology: it's the same chip at the same frequency, and having the feature disabled vs. enabled gives very drastic results.

 

Take note that the 2600K, which is much older and has HT, also performs "better" than the simulated 7600K, and also better than the 2500K and the 3570K.



Sorry if someone already posted this; I was skimming through. Sometimes the BIOS has a setting for hyper-threading. Not sure if you checked there.


2 minutes ago, Plutosaurus said:

The point is the technology: it's the same chip at the same frequency, and having the feature disabled vs. enabled gives very drastic results.

True. Has there been a change in what is attached to the feature, though? There might or might not have been. You could be dead right. I just think it would be interesting to check anyway.



2 minutes ago, Bombastinator said:

True. Has there been a change in what is attached to the feature, though? There might or might not have been. You could be dead right. I just think it would be interesting to check anyway.

My biggest complaint is Intel segmenting HT.

 

It used to be an i7 "high-end feature" that suddenly became an i9 feature. It's artificially creating a higher segment you have to buy into to get actual gains, much like Nvidia's BS with Turing.



17 minutes ago, Plutosaurus said:

My biggest complaint is Intel segmenting HT.

 

It used to be an i7 "high-end feature" that suddenly became an i9 feature. It's artificially creating a higher segment you have to buy into to get actual gains, much like Nvidia's BS with Turing.

Yep. Shell game.

I don't know anything about the Turing stuff. It wouldn't surprise me. It's the problem with duopolies: the market dominator can pull all kinds of stuff.




10 minutes ago, Bombastinator said:

Yep. Shell game.

I don't know anything about the Turing stuff.

1070ti == 2060 performance, same price ($349)

1080 == 2070 performance, same price ($499)

1080 ti == 2080 performance, same price ($699)

 

2080 ti, much more expensive, gains in every way, forcing you into higher segment to actually get gains. $999

 

You had to go up a level in GPU segmentation to actually get performance gains.

 

Meanwhile,

 

i7-4790k < i7-6700k, same price range, gains in every way.

i7-6700k < i7-7700k, same price range, gains in every way.

i7-7700k < i7-8700k, same price range, gains in every way.

 

then you get 

 

i7-8700k <= i7-9700k, same price range....uh, better sometimes, worse other times? What gives?

i7-8700k < i9-9900k, much more expensive, gains in every way, forcing you into higher segment to actually get gains.

 

So my point is they fucked over the consumer and made a new segment to push what should have just been an i7. 



Just ran into a really old video of Linus explaining hyperthreading. It's on Lifehacker, which is generally a bad sign.

 

Not sure what the rules are on posting this kind of stuff. It's a Techquickie video embedded in a Lifehacker article.

 

I'll post a link to the Lifehacker article and report myself. Let the authorities figure it out.

 

https://lifehacker.com/how-hyper-threading-really-works-and-when-its-actuall-1394216262



2 hours ago, Bombastinator said:

Why don't I write games? I suck at programming. That doesn't change anything, though.

It does on my end. It just means you have little practical knowledge of the subject matter at hand, and are working with outright false or misleading information, at times spitting it out yourself.

 

If you want to continue being an armchair expert then I'll just sit this one out.


I think if a program doesn't fully support HT it's a bad program, or at least outdated as hell. 

 

I have a gaming PC for one reason (to play games on it) and I'll definitely try not to support outdated software in the future. Luckily my fav game already uses HT very well. Exceptionally well, actually.

 

And yeah, Intel got really fucked by this Spectre/Meltdown thing lol.

 

 



18 minutes ago, Mark Kaine said:

I think if a program doesn't fully support HT it's a bad program, or at least outdated as hell. 

 

I have a gaming PC for one reason (to play games on it) and I'll definitely try not to support outdated software in the future. Luckily my fav game already uses HT very well. Exceptionally well, actually.

 

And yeah, Intel got really fucked by this Spectre/Meltdown thing lol.

 

 

So did AMD. I've got friends who abandoned AMD64 entirely over it. They went ARM.



This turned into a pissing contest. Not sure why it had to turn to personal attacks.

 

IMO, Intel did nobody any favors with the 9th generation as a whole. It would have been better for them to leave the 8th generation as is and keep producing it, with a special edition i9-8900k (9900k) to act as the halo product. 

 

OR skip 9th generation completely and launch what will be the rumored 10th gen as 9th gen. (i3 4/8, i5 6/12, i7 8/16, i9 10/20)

 

Intel was incredibly disappointing for 9th generation. Specifically, the 9700k was a disgrace.

 

Steve at Gamers Nexus said it better than I can:

 

https://www.gamersnexus.net/hwreviews/3421-intel-i7-9700k-review-benchmark-vs-8700k-and-more

 

Quote

 

Conclusion

The Intel i7-9700K received ample criticism at unveil for being the first “gaming,” S-class i7 in recent history to drop hyperthreading. The move was accompanied by an increase in physical core count to 8C, but followed the previous move from 4C/8T to 6C/12T, and thus felt like an odd middle-step that had forgotten the lessons learned by the 8700K. The decision left enthusiasts feeling ripped-off; rather than a clear improvement in the product category, Intel had made a sort of lateral step.

 

Now that we’ve tested it, we can see that benchmarking positions the 9700K oft superior in gaming tasks, largely a result of frequency, to the preceding 8700K. This doesn’t remain true in every case, like in Blender workloads where the additional threads of the 8700K prove advantageous. The price increase of the 9700K over the 8700K also feels off-putting, and so the gains the 9700K makes in gaming are lost when considering the price increase. At the same price, it’d be more tenable, but an increase to $400 to $430 is unpalatable in the face of Intel’s similarly performing i7-8700K at cheaper prices.

The move did not feel productive for Intel. The 9700K is fine. It’s not a bad product, it does well in testing (overall), and it both wins and loses some tests, as any product would do. The oddity is just that it’s losing tests against its predecessor, even when those are simple tests of value, not necessarily performance. This was true for the likes of the RTX 2080 as well, for instance, where performance was fine, but value was a clear regression from the previous generation. We feel similarly about the 9700K. We need to see price come down to around where the 8700K is – around $350 – to really feel comfortable with the 9700K. Even then, it feels like an odd, lateral move from the 8700K before it.

 

 


