
GTX 1180 leaks and information from WCCFTech

Guest savagepain
Just now, Coaxialgamer said:

Well, my point isn't that Volta isn't a significant improvement over Pascal: it clearly is, as you've just stated. My point is that the only current implementation of Volta is GV100, with all its bells and whistles. What I'm saying is that what we consumers actually get will likely be considerably different, to the point that the consumer arch and the GV100 arch will be similar in name only.

oh yeah the consumer GPU is going to be way different lol


22 minutes ago, Coaxialgamer said:

 

Nvidia can afford to be complacent, and I think that's what they'll do for the foreseeable future. I mean, the GTX 1080 is nearly 2 years old at this point. If that's not a sign Nvidia is holding back, I don't know what is...

 

I may sound senile repeating this in every thread, but I don't think they're being complacent at all. I think they're genuinely trying to do better, but they can't. That's why their own CEO said that Pascal is unbeatable. He said it's unbeatable for AMD, but it might be unbeatable for them as well. That's why they don't deliver much real progress beyond process refinement and tensor cores: architecture-wise they're hitting a wall and don't know how to do better (or, more probably, they do know how, but it would mean pushing a very different way to compute images, which could help AMD catch up). After all, AMD is quite competitive provided the engine works in their favour (see Doom, for instance, where they compete quite well, and much better than in other games).


4 minutes ago, laminutederire said:

I may sound senile repeating this in every thread, but I don't think they're being complacent at all. I think they're genuinely trying to do better, but they can't. That's why their own CEO said that Pascal is unbeatable. He said it's unbeatable for AMD, but it might be unbeatable for them as well. That's why they don't deliver much real progress beyond process refinement and tensor cores: architecture-wise they're hitting a wall and don't know how to do better (or, more probably, they do know how, but it would mean pushing a very different way to compute images, which could help AMD catch up). After all, AMD is quite competitive provided the engine works in their favour (see Doom, for instance, where they compete quite well, and much better than in other games).

Volta was being designed prior to Pascal; Pascal was introduced later when nV changed their road maps. I think when they got Maxwell back and it did so well, they knew they could wait on Volta and make another generation of modified Maxwell.


Just now, Razor01 said:

Volta was being designed prior to Pascal; Pascal was introduced later when nV changed their road maps. I think when they got Maxwell back and it did so well, they knew they could wait on Volta and make another generation of modified Maxwell.

Well, the current Volta is Maxwell with more CUDA cores and tensor cores... so they might have changed that as well.


9 minutes ago, laminutederire said:

Well, the current Volta is Maxwell with more CUDA cores and tensor cores... so they might have changed that as well.

 

You mean Pascal?

 

Yeah, it's a few modifications, nothing major.

 

Volta might have changed too, you are correct.


1 minute ago, Razor01 said:

 

You mean Pascal?

 

Yeah, it's a few modifications, nothing major.

 

Volta might have changed too, you are correct.

Well, Pascal is Maxwell with better clocks, so... either is fine for this argument :P

But I think we're at the point where we might need better software to make things better and more beautiful, though.


How could they put over 3500 CUDA cores in a 400mm² die? The GP102 needs around 472mm² for the same number of cores. The 12nm process does not offer that kind of density improvement when using high-performance-oriented libraries; at best it offers around 10% better density.


Pretty curious to see how they'll perform relative to the current cards, and especially how they'll be priced.

If they turn out to be great little mining cards, history will repeat itself; let's just pray to GabeN/Linus & the other gaming/tech gods that it won't happen

gameN



2 minutes ago, Lawliet93 said:

How could they put over 3500 CUDA cores in a 400mm² die? The GP102 needs around 472mm² for the same number of cores. The 12nm process does not offer that kind of density improvement when using high-performance-oriented libraries; at best it offers around 10% better density.

It's probably closer to 420 mm², and they might have stripped out a chunk of the logic that handled compute workloads. In theory, that's part of the payoff of branching their designs, but we'll have to see. The main thing that comes to mind, though, is that yields must be pretty good, or the 1170 is going to be rather short on SMs.


5 minutes ago, Sfekke said:

Pretty curious to see how they'll perform relative to the current cards, and especially how they'll be priced.

If they turn out to be great little mining cards, history will repeat itself; let's just pray to GabeN/Linus & the other gaming/tech gods that it won't happen

gameN

Probably expect $749 and $849 for the 1180 and the FE; the 1170 will probably come in at $549 or something. They'll effectively be discounting each tier to its current pricing during the shortage, then drop MSRP when AMD launches their next generation.


4 minutes ago, Taf the Ghost said:

Probably expect $749 and $849 for the 1180 and the FE; the 1170 will probably come in at $549 or something. They'll effectively be discounting each tier to its current pricing during the shortage, then drop MSRP when AMD launches their next generation.

Maybe they're selling those at their Pascal-equivalent prices because yields are the same...


8 minutes ago, laminutederire said:

Maybe they're selling those at their Pascal-equivalent prices because yields are the same...

Yields should be fine, given it's just an improved node. Nvidia & AMD are both selling out of their entire wafer production, though neither has increased production. Memory is still the limiting factor, so they'll just enjoy the really cash-flush days in the market. 


1 hour ago, laminutederire said:

I may sound senile repeating this in every thread, but I don't think they're being complacent at all. I think they're genuinely trying to do better, but they can't. That's why their own CEO said that Pascal is unbeatable. He said it's unbeatable for AMD, but it might be unbeatable for them as well. That's why they don't deliver much real progress beyond process refinement and tensor cores: architecture-wise they're hitting a wall and don't know how to do better (or, more probably, they do know how, but it would mean pushing a very different way to compute images, which could help AMD catch up). After all, AMD is quite competitive provided the engine works in their favour (see Doom, for instance, where they compete quite well, and much better than in other games).

They are not being complacent at all. The idea that companies are complacent or sit on their hands is an internet narrative perpetuated by people who know nothing about business. No business is ever intentionally complacent, because a complacent business is a dead business.

 

I find it alarming that people look at Intel and Nvidia and cry complacency, while AMD brings nothing to the table for 5 years and people consider that to be something different. They are all working very hard to bring the next biggest and best product; none of them want to become 3dfx or Cyrix.

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


23 minutes ago, mr moose said:

They are not being complacent at all. The idea that companies are complacent or sit on their hands is an internet narrative perpetuated by people who know nothing about business. No business is ever intentionally complacent, because a complacent business is a dead business.

 

I find it alarming that people look at Intel and Nvidia and cry complacency, while AMD brings nothing to the table for 5 years and people consider that to be something different. They are all working very hard to bring the next biggest and best product; none of them want to become 3dfx or Cyrix.

 

What I find more alarming is that people believe that if you spend time and money, you'll necessarily produce something better. That's the philosophical equivalent of thinking the human race is godlike.

In practice the GPU market is mature enough that it's not trivial to find something new that works well. Previously you'd tweak the same arch and put it on a smaller node; you'd get better clocks and more cores, which led to better perf. But now everything is complex and there is no obvious solution. What Nvidia seems to do is go bigger on a smaller node, which is about the easiest way to squeeze out a bit of perf. Everything else will take more than time or money; it'll take someone smart being lucky enough to find a good idea.


I wouldn't say it's luck lol, it's understanding what the current limitations are and working around those limitations, and that does cost money and time lol. But without the ideas, money and time don't matter.


2 minutes ago, laminutederire said:

What I find more alarming is that people believe that if you spend time and money, you'll necessarily produce something better. That's the philosophical equivalent of thinking the human race is godlike.

In practice the GPU market is mature enough that it's not trivial to find something new that works well. Previously you'd tweak the same arch and put it on a smaller node; you'd get better clocks and more cores, which led to better perf. But now everything is complex and there is no obvious solution. What Nvidia seems to do is go bigger on a smaller node, which is about the easiest way to squeeze out a bit of perf. Everything else will take more than time or money; it'll take someone smart being lucky enough to find a good idea.

And the same goes for CPUs. It's a lot of hard work to improve when even the atoms are starting to hamper your efforts.



3 hours ago, BasicallyAMod said:

because then they can sell out of both

They cannot manufacture both without sacrificing the number of units they can produce of either. They don't get more fabs out of nowhere.


54 minutes ago, Razor01 said:

I wouldn't say it's luck lol, it's understanding what the current limitations are and working around those limitations, and that does cost money and time lol. But without the ideas, money and time don't matter.

There is an element of luck. You can be smart and still not solve the problem just because you tried 99 things and didn't see the 100th.

Limitations at this point are more about physics barriers and mathematical lower bounds. And the closer you get to those, the more important it becomes to see the right needle in the complex haystack. Being smart is what lets you find it more easily, and eventually find it at all. Luck is what makes you find it before you die, because you explored the unknown in the right order.


6 hours ago, Taf the Ghost said:

The important bit is that Volta & Turing mean Nvidia has fully split their design branches. Until this generation, they really were building the big GPU, then cutting down the designs.

It was mostly split a while ago anyway. The big dies actually have FP64 cores in the SMs; every lesser die does not. The FP64 cores aren't just disabled either, they genuinely are not present, which is how they cut the die size down and get the cost down.


19 minutes ago, laminutederire said:

There is an element of luck. You can be smart and still not solve the problem just because you tried 99 things and didn't see the 100th.

Limitations at this point are more about physics barriers and mathematical lower bounds. And the closer you get to those, the more important it becomes to see the right needle in the complex haystack. Being smart is what lets you find it more easily, and eventually find it at all. Luck is what makes you find it before you die, because you explored the unknown in the right order.

 

Architecture is about mathematical barriers: what they have done before, and what they need to change to get where they want to go. Physics barriers aren't going to change lol, would be nice but yeah lol; that's up to the fabs to figure out. Back to architecture: this is why, to get real performance boosts, new architectures need to be made rather than just tweaking existing designs, since you can only get so much out of them; bottlenecks will not be removed by tweaking.

 

The sum is greater than the parts, but in this case, if a part is holding the sum back, that is all it will get lol.

 

Luck is involved to some degree, but it's just like with Maxwell: the luck there was that they got higher-than-expected clock speeds.

 


2 hours ago, Lawliet93 said:

How could they put over 3500 CUDA cores in a 400mm² die? The GP102 needs around 472mm² for the same number of cores. The 12nm process does not offer that kind of density improvement when using high-performance-oriented libraries; at best it offers around 10% better density.

It's actually fairly easy to do:

For starters, there are the density improvements of the new node. That's probably 10-15% right there.

Second, GP102 actually has 3840 cores, not 3584. That's nearly 10% less area too.

Third, the memory controllers: by using a 256-bit controller vs. 384-bit on GP102, you'll also save a lot of area.

All in all, I'd be surprised if the chip isn't under 400mm², actually.
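That back-of-the-envelope estimate can be sketched as a quick script. The GP102 figures are the ones quoted in the thread; the exact density gain and the memory-controller area savings are placeholder assumptions, not published numbers:

```python
# Rough die-area estimate for a hypothetical ~3584-core chip on the 12nm
# node, scaled down from GP102 (~472 mm^2, 3840 CUDA cores, 384-bit bus).
# Assumes area scales with core count, which ignores uncore logic.

GP102_AREA_MM2 = 472.0
GP102_CORES = 3840
NEW_CORES = 3584

density_gain = 0.12      # assumed ~12% density gain from 12nm (10-15% range)
core_scaling = NEW_CORES / GP102_CORES      # ~0.93: about 7% fewer cores
mc_savings_mm2 = 25.0    # assumed area saved dropping 384-bit -> 256-bit

estimate = GP102_AREA_MM2 * core_scaling / (1 + density_gain) - mc_savings_mm2
print(f"estimated die area: {estimate:.0f} mm^2")  # comes out under 400 mm^2
```

Even with these rough inputs the estimate lands well below 400 mm², so the numbers at least hang together.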



1 hour ago, leadeater said:

Normally I'd agree but this is actually realistic,

No it's not, I have it on good authority (heard it on the interwebz) that the next Nvidia card is going to be a 2080, not an 1180. So it must be wrong.



3 hours ago, Taf the Ghost said:

Probably expect $749 and $849 for the 1180 and the FE. 1170 will probably come in at $549 or something.

Nvidia's behaviour with the 1070 Ti would seem to indicate that the FE will be the same price as MSRP. Nvidia appears to have learned its lesson about attempting to price its own cards above the OEMs'.



58 minutes ago, mr moose said:

No it's not, I have it on good authority (heard it on the interwebz) that the next Nvidia card is going to be a 2080, not an 1180. So it must be wrong.

Nah, I heard they are completely changing the naming scheme to align everything with GPP. They're all named GeForce Gaming Performance Accelerated Mathematics Devices (GPP AMD).

