
[Rumor] Skylake can virtualise the entire CPU to act as a single thread, 'inverse Hyper-Threading'!? 2.4 times faster...

TrigrH

Ok Mr. Computer Science Major. At least enlighten us and explain how it would work, or do you only stop by, complain about "missing research", and then move along?

What would this software layer be? A background running program? Does it have to be application specific? Would it help with gaming? Which applications would it help with?

You know, not everybody can be as godly smart as you, solving the problems of humanity in your thesis.

It's based on the same premise as MorphCore, which you can find a basic diagram of in the article. Read Intel's white paper on it, but the short of it is something akin to a virtualization or emulation layer just above the OS that does in-line rearrangement of instructions to extract thread-level parallelism. It's a coarser version of VISC, which Nvidia employed with the Denver architecture and originally aimed at x86 rather than ARM. The reason MorphCore is more likely to be successful is that VISC requires a VLIW design philosophy to truly shine, and no one has come up with a workable, extensible design for that despite it being a 25-year-old concept. Basically, in MorphCore the software layer picks apart the instruction stream into related sets and deploys them to individual threads it keeps running in the background, like a virtual machine, but lighter weight and native.

In general it will help any application built on one thread that could have been parallelized at the thread level, including DX 9/11 games. Yes, it would run in the background. It's very much application-agnostic. There are some problems in computer science which cannot be parallelized at all, but they're niche or small pieces of bigger programs. The benefit of MorphCore will never be as big as programming for multiple threads in the first place (unless the programmer is completely hopeless at parallelism), but it will give a big boost to older applications no one is ever going to recode.
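To make the idea above concrete, here is a toy sketch of the principle: split a stream of "instructions" into batches whose inputs are already available, then dispatch each independent batch across worker threads. The instruction names, the dependency format, and the batching heuristic are all invented for illustration; this is not Intel's actual mechanism, which (per the white paper) lives far below this level with microcode support.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "instruction stream": (name, inputs, function).
# All names and the dependency encoding are hypothetical.
stream = [
    ("a", [],         lambda env: 2),
    ("b", [],         lambda env: 3),
    ("c", ["a"],      lambda env: env["a"] * 10),   # depends on a
    ("d", ["b"],      lambda env: env["b"] + 7),    # depends on b
    ("e", ["c", "d"], lambda env: env["c"] + env["d"]),
]

def run_parallel(stream):
    env, done = {}, set()
    remaining = list(stream)
    with ThreadPoolExecutor(max_workers=4) as pool:
        while remaining:
            # "Pick apart" the stream: every op whose inputs are ready
            # forms one independent batch, run concurrently on threads.
            batch = [op for op in remaining if set(op[1]) <= done]
            futures = {name: pool.submit(fn, dict(env)) for name, _, fn in batch}
            for name, fut in futures.items():
                env[name] = fut.result()
                done.add(name)
            remaining = [op for op in remaining if op[0] not in done]
    return env

print(run_parallel(stream)["e"])  # (2*10) + (3+7) = 30
```

Here (a, b) run in parallel, then (c, d), then e — three batches instead of five serial steps. The real difficulty, and the reason this has been researched for decades, is discovering those dependencies cheaply at runtime rather than having them declared up front.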

And fair warning, don't mock people just because they seem to be less than experts on the surface. Everyone has something to teach you, even if it's a lesson in how not to act. You will lose many opportunities in life if you mock people with wisdom and life lessons to share.

Second fair warning, I have a reputation on this forum as one who commands the facts and doesn't let BS slide either in provision of fact or expansion of logic. If you step into the ring, 99 times out of 100, I will bury you. Ask around. I'm not as cocky as I sound. I'm just right when I am, and otherwise I don't type anything, or I ask questions.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


And why would that be?

A 5820K plus a "low-end" X99 board (which is high-end compared to consumer boards) costs about the same as a 4c/8t consumer chip with a high-end motherboard anyway.

 

Also, Star Citizen devs promised 6 cores utilized.

The X99 platform is actually still a bit more expensive to be honest :P

The 5820K might be just a little more expensive, but from what I've seen, a half-decent X99 motherboard costs nearly twice as much as a good Z97 mobo :)

Also, Star Citizen is not released yet and won't be this year either.

We could go completely off topic and talk about 15 years from now, where all our games utilize 16 physical cores and all games are played in VR :)

Not the point though; the main point of my post was that 1 super-fast logical core would have limited use (especially now that you're saying Star Citizen will use 6)


The X99 platform is actually still a bit more expensive to be honest :P

The 5820K might be just a little more expensive, but from what I've seen, a half-decent X99 motherboard costs nearly twice as much as a good Z97 mobo :)

Also, Star Citizen is not released yet and won't be this year either.

We could go completely off topic and talk about 15 years from now, where all our games utilize 16 physical cores and all games are played in VR :)

Not the point though; the main point of my post was that 1 super-fast logical core would have limited use (especially now that you're saying Star Citizen will use 6)

And what about all the DX 9 and 11 games from the past 10 years we all still love to play?

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


And what about all the DX 9 and 11 games from the past 10 years we all still love to play?

Well yeah, but realistically, any game that stutters on an Intel Pentium G3258 will see the same issue with 1 super-fast core.

And again, anything running DX9/11 will probably see little to no benefit from a single faster core if a £50 Pentium G3258 can max them out :P

I personally have no hope for this in gaming applications. As I previously mentioned, maybe in some professional uses


There's a needed software layer, a global front end. Jeez, people, do your research before you comment...

If there really is a required software layer, Skylake wouldn't be the only CPU architecture that could utilize it. Sandy Bridge, Ivy Bridge, Haswell, and Broadwell could all utilize it - assuming it's true in the first place.

"It pays to keep an open mind, but not so open your brain falls out." - Carl Sagan.

"I can explain it to you, but I can't understand it for you" - Edward I. Koch


oh man that sounds good.

If you want to join a really cool Discord chatroom with some great guys here from LTT and outside this community then PM me!


Well yeah, but realistically, any game that stutters on an Intel Pentium G3258 will see the same issue with 1 super-fast core.

And again, anything running DX9/11 will probably see little to no benefit from a single faster core if a £50 Pentium G3258 can max them out :P

I personally have no hope for this in gaming applications. As I previously mentioned, maybe in some professional uses

There are some badly programmed MMOs that would benefit highly from 1-2 supercores instead of 4, because they are in fact CPU-bottlenecked.

The worst example of this would be "Tera" with the Unreal 3 engine. Man, that game runs like shit.

 

I really don't get what you are saying about the G3258... It's not like it has bigger cores than a 4770K, just 2 of them instead of 4 (and less cache, no HT, and no iGPU).

Like I said, there are games that are CPU bottlenecked, even on a 5960X. Because they simply don't use more than 1-2 cores no matter how many you throw at them.

Usually because of shitty programming, and usually in online games with many people on the screen.


There are some badly programmed MMOs that would benefit highly from 1-2 supercores instead of 4, because they are in fact CPU-bottlenecked.

The worst example of this would be "Tera" with the Unreal 3 engine. Man, that game runs like shit.

 

I really don't get what you are saying about the G3258... It's not like it has bigger cores than a 4770K, just 2 of them instead of 4 (and less cache, no HT, and no iGPU).

Like I said, there are games that are CPU bottlenecked, even on a 5960X. Because they simply don't use more than 1-2 cores no matter how many you throw at them.

Usually because of shitty programming, and usually in online games with many people on the screen.

Hahah thought you were talking about WoW for a moment.

The reason I mentioned the G3258 is simply that that thing overclocks like a beast, so it has some sick single/dual-core performance.

And as for the poorly programmed games, I don't think that will be fixed until we see DX12 implemented and CPU bottlenecking removed/reduced.

I agree with what you said though. Even a 5960X can bottleneck some things (not many, but some).

We'll see when this becomes a thing


I highly doubt this because that's the opposite of what Intel seeks to do.

 

Is it really? It would achieve the same goal as Turbo Boost—increasing single-core performance by leveraging inactive cores. It's just a much, much more aggressive approach. Intel's been doing that since the first generation of Core processors.

 

I have to wonder if this is actually active and usable in retail Skylake chips, or if maybe it's buried/hidden in the BIOS somewhere. They've been benchmarked and reviewed heavily, and I'm sure at least some of those have included Cinebench R15's single-core test. My 6600K sure doesn't seem to score much better there than I'd expect it to. 5–6% over a friend's 4690K, I think it was.


If there really is a required software layer, Skylake wouldn't be the only CPU architecture that could utilize it. Sandy Bridge, Ivy Bridge, Haswell, and Broadwell could all utilize it - assuming it's true in the first place.

I'm afraid that's not true, because that software layer is far more tightly coupled than an OS. There would be specific microcode built in for it, with special opcodes, since it would be Intel's proprietary solution. This is why I said it's best to read the 58-page white paper; covering all these details well enough to give you an idea would still take a lot of space to post.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Awesome news, but it's not something that will make me run to the shops and buy Skylake. When my current CPU (a 4670K) can no longer pull its weight, then I will upgrade.



Is it really? It would achieve the same goal as Turbo Boost—increasing single-core performance by leveraging inactive cores. It's just a much, much more aggressive approach. Intel's been doing that since the first generation of Core processors.

I have to wonder if this is actually active and usable in retail Skylake chips, or if maybe it's buried/hidden in the BIOS somewhere. They've been benchmarked and reviewed heavily, and I'm sure at least some of those have included Cinebench R15's single-core test. My 6600K sure doesn't seem to score much better there than I'd expect it to. 5–6% over a friend's 4690K, I think it was.

I feel like Turbo Boost is becoming more and more useless for Intel. At the very start with Lynnfield it was a >20% increase in single-core performance, versus roughly 10% with Devil's Canyon. I don't even enable it anymore because it's a negligible difference.


I feel like Turbo Boost is becoming more and more useless for Intel. At the very start with Lynnfield it was a >20% increase in single-core performance, versus roughly 10% with Devil's Canyon. I don't even enable it anymore because it's a negligible difference.

Turbo boosting is much more effective in thermally or power-constrained processors that are already running at low clock speeds.

PC HEDT processors don't usually have these limitations, hence why they run closer to their 'max' clock speed.

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


According to Ryan Shrout from PCPer, who's currently at IDF, it's sadly not a thing. Sorry guys....

I say wait for the end of the events. Also, it is a terrible term for this.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I say wait for the end of the events. Also, it is a terrible term for this.

It is quite an old term, or at least it is very similar to previous rumors going back almost 10 years.

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO



You can use many threads in Java -_-

Let me correct myself: Minecraft. (Yes, that seriously bothers me :/)

My Rig "Jenova" Ryzen 7 3900X with EK Supremacy Elite, RTX3090 with EK Fullcover Acetal + Nickel & EK Backplate, Corsair AX1200i (sleeved), ASUS X570-E, 4x 8gb Corsair Vengeance Pro RGB 3800MHz 16CL, 500gb Samsung 980 Pro, Raijintek Paean


Let me correct myself: Minecraft. (Yes, that seriously bothers me :/)

I don't really know about Minecraft in particular. I thought it was just the server that was single-threaded?

MacBook Pro 15' 2018 (Pretty much the only system I use)


Wait, maybe I misunderstood something, but wasn't this what AMD tried to do with their whole modules idea, having 2 cores operate like 1 core to improve performance? Obviously it didn't work well for them, but that's how I understood AMD's modules idea... again, I could be wrong... I was thinking of trying out Skylake, then I remembered I'm an AMD fanboy and should keep spending my money on AMD until they make a comeback (which might not happen v.v)....


Wait, maybe I misunderstood something, but wasn't this what AMD tried to do with their whole modules idea, having 2 cores operate like 1 core to improve performance? Obviously it didn't work well for them, but that's how I understood AMD's modules idea... again, I could be wrong... I was thinking of trying out Skylake, then I remembered I'm an AMD fanboy and should keep spending my money on AMD until they make a comeback (which might not happen v.v)....

 

What AMD did was have 2 cores share resources. This gave worse single-threaded performance, but much better multi-threaded performance... for the few apps that actually used all 8 of those cores.

The biggest  BURNOUT  fanboy on this forum.

 

And probably the world.



Let me correct myself: Minecraft. (Yes, that seriously bothers me :/)

iirc Minecraft has been a multithreaded game for a few major updates/patches

My system is the Dell Inspiron 15 5559 Microsoft Signature Edition

The Australian king of LTT said that I'm awesome and a funny guy. The greatest PSU list known to man. DDR3 RAM guide.

I got 477 posts in my first 30 days on LinusTechTips.com

 


iirc Minecraft has been a multithreaded game for a few major updates/patches

The only thing that is multithreaded is the chat, yippee...

 

I don't really know about Minecraft in particular. I thought it was just the server that was single-threaded?

Nope, a single instance of Minecraft always runs on a single thread, client- and server-wise.

My Rig "Jenova" Ryzen 7 3900X with EK Supremacy Elite, RTX3090 with EK Fullcover Acetal + Nickel & EK Backplate, Corsair AX1200i (sleeved), ASUS X570-E, 4x 8gb Corsair Vengeance Pro RGB 3800MHz 16CL, 500gb Samsung 980 Pro, Raijintek Paean

