GTX 1180 leaks and information from WCCFTech

Guest savagepain
34 minutes ago, cj09beira said:

I think they need to do both anyway, because even if they go MCM they will end up at the 64 ROP/CU limit in no time flat (good chance it could happen in the second gen). Which one will come first I have no clue, but Vega already having IF gluing everything together could point to an MCM design in the works.

The IF in Vega isn't used in the core CU structure of the GPU though; it's only used to interconnect things like the memory controller. They need to do what Nvidia has done with Volta: reduce the CUs per Compute Engine and deploy more clusters of them (SMs in Nvidia speak). If they did that they would need to improve the Hardware Schedulers and the ACEs to handle the extra Graphics Pipelines, but it would result in fewer CUs having their resources wasted. This can be done without going MCM or IF; it's just a logical structure change (well, physical too).

 

There are likely other knock-on effects, but adding more CUs per Compute Engine isn't scaling out actual performance.
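To put the "smaller clusters, more of them" idea in rough numbers, here's a back-of-the-envelope sketch using public spec-sheet figures; the comparison is illustrative, not a microarchitectural analysis:

```python
# Shader units per scheduling cluster, from public spec sheets:
# Vega 10 = 4 Shader Engines x 16 CUs x 64 SPs,
# GP104 (Pascal) = 20 SMs x 128 cores, GV100 (Volta) = 80 SMs x 64 cores.
designs = {
    "Vega 10 (4 SE x 16 CU x 64 SP)": (4, 16 * 64),
    "GP104 Pascal (20 SM x 128)": (20, 128),
    "GV100 Volta (80 SM x 64)": (80, 64),
}

for name, (clusters, per_cluster) in designs.items():
    total = clusters * per_cluster
    print(f"{name}: {total} shaders, {per_cluster} per cluster")
```

Volta's SMs got narrower but far more numerous, while GCN still funnels 1024 shaders through each Shader Engine's front end.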

23 minutes ago, cj09beira said:

Having 2 dies that small could be good or bad. It makes even the 580 tier need more than one die, but it could allow for having 64 ROPs and 32 CUs, which, if they can do 4 dies, could be really powerful. Hmm, there might be a reason to only have 32 CUs after all.

Unless they've gotten through the ROP issue, I don't think we're getting 4-way GPUs, as I imagine a 2-way is a lot easier to deal with. Also, with the HBCC that came in with Vega, memory management is actually controlled at the Driver level, so the issues with having to replicate the memory across both GPU dies should be manageable. 

 

I still would assume we're likely to see a 48 CU design, and that's what they mean by "scalable". While the GPUs will run with CUs deactivated, since they've switched to GCN you've gotten your choice of 8 CU, 16 CU, 32 CU or 64 CU designs.

 

Actually, the Ryzen Mobile part is an 11 CU design, which is the first unbalanced design we've seen from GCN. Does that mean they can produce a Navi part with however many CUs they want? Rolling out 30 CU and 48 CU designs would probably make a lot more sense on 7nm than another 16 & 32 CU for the mainstream parts.
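For scale, GCN's theoretical FP32 throughput falls straight out of the CU count, since every GCN CU has 64 stream processors doing 2 FLOPs per clock (FMA). A quick sketch, where the 1.5 GHz clock is purely an assumption, not a leaked spec:

```python
# Theoretical GCN FP32 throughput: CUs * 64 SPs * 2 FLOPs/clock * clock.
# The 1.5 GHz clock is an assumption for illustration only.
def gcn_tflops(cus: int, clock_ghz: float = 1.5) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

for cus in (16, 30, 32, 48, 64):
    print(f"{cus:2d} CUs -> {gcn_tflops(cus):5.2f} TFLOPS @ 1.5 GHz")
```

A hypothetical 48 CU part at that clock lands around 9.2 TFLOPS, roughly Vega 56 territory.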

2 minutes ago, Taf the Ghost said:

Unless they've gotten through the ROP issue, I don't think we're getting 4-way GPUs, as I imagine a 2-way is a lot easier to deal with. Also, with the HBCC that came in with Vega, memory management is actually controlled at the Driver level, so the issues with having to replicate the memory across both GPU dies should be manageable. 

 

I still would assume we're likely to see a 48 CU design, and that's what they mean by "scalable". While the GPUs will run with CUs deactivated, since they've switched to GCN you've gotten your choice of 8 CU, 16 CU, 32 CU or 64 CU designs.

 

Actually, the Ryzen Mobile part is an 11 CU design, which is the first unbalanced design we've seen from GCN. Does that mean they can produce a Navi part with however many CUs they want? Rolling out 30 CU and 48 CU designs would probably make a lot more sense on 7nm than another 16 & 32 CU for the mainstream parts.

We already got 44 CUs with the 290X, which in my mind is a pretty damn good balance between CUs and the rest (when at 64 ROPs). But you are probably right that having more than 2 dies would be quite difficult if memory is handled by the driver, which for me means they need bigger dies.

The way I think of it is that if they go MCM they really need to push it as far as they can to take as much advantage of it as possible, because Nvidia is also going to do it sooner or later, and it would be very helpful for AMD to have more market share by that time. Plus AMD really needs the mind share:

I was talking just yesterday with a colleague at my uni who, while knowing about Intel and AMD CPUs and having a Ryzen CPU himself, didn't know AMD also made GPUs.

1 hour ago, Taf the Ghost said:

My guess is Nvidia is going to shift their entire stack up about $50 in MSRP, or more, given the persistent GPU prices (cryptos sell off in the beginning of the year then rise at the end of the year, for reasons I don't know). By launching into their big selling season at relatively good prices, they should sell out their expected sales stock. AMD's Navi won't affect them until it actually lands.

 

Given that the 14nm -> 7nm generation is about 1.5 nodes' worth of shrink, we're really unsure what AMD could be dropping on 7nm with Navi. We can assume it's going to be a fairly sizable generational improvement (node shrinks do that), but that's all we're looking at right now. (I do hope it's a 48 CU Navi and then cut-down versions of that die, but we'll see.)

Yeah, Navi won't impact sales then; that's probably why they're releasing it then. Because if the rumours are true, that'd mean significant price cuts on their part, which hurts their bottom line.

If anything we could hope AMD pulls a Pascal on Vega+Polaris and has the efficiency sweet spot land at a significantly higher range, ending up with a significant improvement in either power consumption or performance depending on the product. That should help them dissociate themselves from the whole hot-card image a bit. Ultimately 7nm may be more about price than the performance crown. A $250 Navi vs a $600 or $700 1170 for the same perf? That'd be quite a huge gap.
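As a sanity check on the "1.5 nodes' worth" framing quoted above, here's the textbook idealization of 0.5x area per full node step; real 14nm -> 7nm foundry scaling differs, so treat it as a toy model:

```python
# Idealized node scaling: each full node step halves die area.
# Real 14nm -> 7nm density gains vary by foundry; toy model only.
node_steps = 1.5
area_factor = 0.5 ** node_steps
print(f"~{area_factor:.2f}x area, ~{1 / area_factor:.1f}x density")  # ~0.35x, ~2.8x
```

That kind of headroom is what would make a big perf-per-dollar swing, like the hypothetical $250 Navi vs. $600-700 1170 above, at least arithmetically plausible.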

1 hour ago, Taf the Ghost said:

Yup. AMD has clearly aligned their focus to get on the node as fast as possible, while Nvidia could very easily do a 1 year generation on 12FF then move to TSMC's 7nm process by late 2019, if Navi comes out really good.

 

7nm "generation" is also where almost everyone catches up to Intel, which is pretty much the first time that's happened since about 1999 or something like that.

That's probably what Nvidia is up to. They don't seem to want to radically change their arch for the foreseeable future, so a node shrink would be the way to go for them. Doing it incrementally makes sense because that gives them a little bit more time to change the arch itself.

 

1 hour ago, cj09beira said:

7nm should be a great equalizer for the whole industry; it will be very fun.

Yeah, unfortunately that might get destabilized again with post-silicon technologies.

1 minute ago, cj09beira said:

We already got 44 CUs with the 290X, which in my mind is a pretty damn good balance between CUs and the rest (when at 64 ROPs). But you are probably right that having more than 2 dies would be quite difficult if memory is handled by the driver, which for me means they need bigger dies.

The way I think of it is that if they go MCM they really need to push it as far as they can to take as much advantage of it as possible, because Nvidia is also going to do it sooner or later, and it would be very helpful for AMD to have more market share by that time. Plus AMD really needs the mind share:

I was talking just yesterday with a colleague at my uni who, while knowing about Intel and AMD CPUs and having a Ryzen CPU himself, didn't know AMD also made GPUs.

MCM might also end up being an ML/DL/AI-only type of product, which is also the space that might profit most from it, since the GPU is just acting as a "many, many cores" processing card at that point.

 

Per wiki, they can put up to 11 CUs in a Shader Engine, so the Vega in Raven Ridge isn't unbalanced. So maybe there is some internal reorganization of the CU & SE structure with Navi, allowing for a more balanced design. The semi-custom piece for Intel is a 24 CU design, for instance. So maybe it will be something like a 48 CU design as the larger Navi part. We'll find out in about a year.
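As a toy illustration of why GCN CU counts cluster where they do, take the two constraints floated in this thread, at most 4 Shader Engines and (per that wiki figure) at most 11 CUs per Shader Engine, and enumerate the reachable totals. This sketches the posters' reasoning, not an actual AMD design rule:

```python
# Total CU counts reachable with <= 4 Shader Engines and <= 11 CUs per SE.
# The 11 CUs/SE figure is the wiki claim cited above, taken as an assumption.
MAX_SE, MAX_CU_PER_SE = 4, 11

totals = sorted({se * cu
                 for se in range(1, MAX_SE + 1)
                 for cu in range(1, MAX_CU_PER_SE + 1)})
print(totals)  # tops out at 44 = 4 x 11
```

Notably the ceiling is 44 CUs (4 x 11), exactly the 290X count mentioned earlier, while Fiji's and Vega 10's 64 CUs imply 16 per SE, which is the tension a later reply picks up on.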

6 minutes ago, laminutederire said:

Yeah, Navi won't impact sales then; that's probably why they're releasing it then. Because if the rumours are true, that'd mean significant price cuts on their part, which hurts their bottom line.

If anything we could hope AMD pulls a Pascal on Vega+Polaris and has the efficiency sweet spot land at a significantly higher range, ending up with a significant improvement in either power consumption or performance depending on the product. That should help them dissociate themselves from the whole hot-card image a bit.

That's probably what Nvidia is up to. They don't seem to want to radically change their arch for the foreseeable future, so a node shrink would be the way to go for them. Doing it incrementally makes sense because that gives them a little bit more time to change the arch itself.

 

Yeah, unfortunately that might get destabilized again with post-silicon technologies.

Launching Q3 2018 would let Nvidia get the new generation + Christmas sales, which is their big period; then they could launch a version that's ported to 7nm the next year to eat up those holiday sales.

 

The speed at which they release is probably dictated by AMD, as Nvidia dropped the Titan Xp & lowered the 1080's price right before they expected AMD to launch RX Vega in April 2017. From that, we can get a pretty good idea of Navi's results from what Nvidia does in 2019.

Just now, Taf the Ghost said:

Launching Q3 2018 would let Nvidia get the new generation + Christmas sales, which is their big period; then they could launch a version that's ported to 7nm the next year to eat up those holiday sales.

 

The speed at which they release is probably dictated by AMD, as Nvidia dropped the Titan Xp & lowered the 1080's price right before they expected AMD to launch RX Vega in April 2017. From that, we can get a pretty good idea of Navi's results from what Nvidia does in 2019.

Yeah, of course. Right now Navi could be exceptional or a flop. It could be released this summer, or in 2019; nobody knows. It wouldn't be impossible that Nvidia expects Navi this summer and plans to release something at the same time to try and overshadow AMD's release. That'd indicate that Navi isn't that good (otherwise... price cuts), or that Nvidia doesn't know the level of the cards, contrary to Vega, since not much has leaked on Navi compared to Vega (maybe because the marketing head has changed?).

3 minutes ago, laminutederire said:

Yeah, of course. Right now Navi could be exceptional or a flop. It could be released this summer, or in 2019; nobody knows. It wouldn't be impossible that Nvidia expects Navi this summer and plans to release something at the same time to try and overshadow AMD's release. That'd indicate that Navi isn't that good (otherwise... price cuts), or that Nvidia doesn't know the level of the cards, contrary to Vega, since not much has leaked on Navi compared to Vega (maybe because the marketing head has changed?).

I wouldn't expect Navi until at least January. Vega 7nm might see a summer release, but I don't expect Navi to do the same, as for that they need much more volume, and then Vega wouldn't help much as a pipe cleaner.

23 minutes ago, Taf the Ghost said:

Per wiki, they can put up to 11 CUs in a Shader Engine, so the Vega in Raven Ridge isn't unbalanced.

Is that just for Raven Ridge, the 11 CUs per Shader Engine? Vega has 16 CUs per Shader Engine, and so did Fiji (Fury).

2 minutes ago, cj09beira said:

I wouldn't expect Navi until at least January. Vega 7nm might see a summer release, but I don't expect Navi to do the same, as for that they need much more volume, and then Vega wouldn't help much as a pipe cleaner.

I personally don't care that much since I don't have any money for those :P Haven't they started producing prototypes for Navi though?

That, and it is rumoured the new PlayStation based on Navi could arrive for Christmas. It would make sense for Navi to be ready before that and released for PC as an incentive for developers to get to know its arch in practice before the PS5 is released. It would also make sense that they'd produce the PS5 parts first and delay the PC parts accordingly.

10 minutes ago, cj09beira said:

I wouldn't expect Navi until at least January. Vega 7nm might see a summer release, but I don't expect Navi to do the same, as for that they need much more volume, and then Vega wouldn't help much as a pipe cleaner.

There's been persistent rumors of something in Q3, but that's probably Vega on 7nm for ML, and it's probably going to be a showing of the very few pieces of working silicon. I honestly wouldn't expect Navi until well into Q2 2019. AMD will want to get Zen 2 and Epyc 2 out the door first. There's also the little issue that AMD will still be selling every GPU they've allocated to make by that point.

1 minute ago, Taf the Ghost said:

There's been persistent rumors of something in Q3, but that's probably Vega on 7nm for ML, and it's probably going to be a showing of the very few pieces of working silicon. I honestly wouldn't expect Navi until well into Q2 2019. AMD will want to get Zen 2 and Epyc 2 out the door first. There's also the little issue that AMD will still be selling every GPU they've allocated to make by that point.

Agreed with the first part, but by then there is a good chance that mining won't last that long; profitability is so low right now :(

10 minutes ago, laminutederire said:

I personally don't care that much since I don't have any money for those :P Haven't they started producing prototypes for Navi though?

That, and it is rumoured the new PlayStation based on Navi could arrive for Christmas. It would make sense for Navi to be ready before that and released for PC as an incentive for developers to get to know its arch in practice before the PS5 is released. It would also make sense that they'd produce the PS5 parts first and delay the PC parts accordingly.

Navi's design, and Nvidia's 7nm design as well, should be well completed already. Testing & validation take a while, and they're waiting for full production on the 7nm nodes.

 

Though TSMC still seems to be on track for around July for full production on their 7nm node, so we'll see. We could see 7nm in the PS5 before any other 7nm product makes it to market.

2 minutes ago, cj09beira said:

Agreed with the first part, but by then there is a good chance that mining won't last that long; profitability is so low right now :(

There's always a new coin. :(

4 minutes ago, Taf the Ghost said:

Navi's design, and Nvidia's 7nm design as well, should be well completed already. Testing & validation take a while, and they're waiting for full production on the 7nm nodes.

 

Though TSMC still seems to be on track for around July for full production on their 7nm node, so we'll see. We could see 7nm in the PS5 before any other 7nm product makes it to market.

Nvidia might need less validation time. OK, if they go 12nm then 7nm, that'd be the fourth node for the same arch, so validation becomes more of a habit than something where you could really run into problems from radical changes.

Neat, pretty much the same as it's been the last couple of generations (the 80 non-Ti as good as or slightly better than last gen's 80 Ti). I don't personally care about mid-range GPUs though; I can't wait to see specs of the 1180 Ti or 1150 Ti. Low end and high end are where the best bang for the buck is, at least when it comes to Nvidia GPUs.

On 4/18/2018 at 10:55 AM, AluminiumTech said:

We could see Tensor Cores in Nvidia's next gen. We'll just have to wait and see.

I don't see why they would. As far as I can tell there's no real use for them in gaming (at least not yet); mostly just machine learning applications, and this way they can force the professional market onto commercial GPUs over gaming GPUs. That would be good news for gamers (less non-gamer market competition) but bad news for prosumers and budget academic labs.

 

At some point I could see mixed precision used in gaming for physics and real-time AI, but the development of those into games is likely a few generations away anyway, and would be very limited due to the lack of experienced developers and the inherent programming challenges of implementing such complex code.
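For reference, the tensor-core primitive is just a small matrix FMA, D = A x B + C, with FP16 inputs and FP32 accumulation. A minimal numpy sketch of those numerics (it emulates the precision contract, not the hardware):

```python
import numpy as np

# Tensor-core-style mixed precision: FP16 inputs, FP32 accumulate.
# Emulates the numeric contract D = A @ B + C, not the hardware itself.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float16)
B = rng.standard_normal((4, 4)).astype(np.float16)
C = rng.standard_normal((4, 4)).astype(np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.dtype, D.shape)  # float32 (4, 4)
```

Physics or in-game AI that can tolerate FP16 inputs is exactly the kind of workload that could tap that throughput, which is where the developer-experience hurdle above comes in.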


Just now, pyrojoe34 said:

I don’t see why they would. As far as I can tell there’s no real use for them in gaming (at least not yet).

We don't know because game developers haven't tried to use them in gaming yet.

Using tensor cores for machine learning in games at runtime doesn't make sense to me. It makes more sense during development to fine-tune things.

 

But I don't want the AI (the enemy, anyway) getting progressively better as the game ages, or the game needing time to mature or something, with me having to run it.

Shall we take bets on whether Linus drops one?

5 hours ago, M.Yurizaki said:

Using tensor cores for machine learning in games at runtime doesn't make sense to me. It makes more sense during development to fine-tune things.

 

But I don't want the AI (the enemy, anyway) getting progressively better as the game ages, or the game needing time to mature or something, with me having to run it.

I believe it was GN that said that most of the new real-time ray tracing Nvidia has been showing off utilizes the tensor cores, which is why they always advertise it as a Volta-centric feature. If ray tracing is going to be the next big "thing", I could see them pushing out consumer cards with at least some form of tensor core on board, just to get an install base up so devs can start working with them. I'm not sure on a technical level whether they can add fewer/nerfed tensor cores compared to what's on the compute cards, but it would make some amount of sense if they were able to.

23 minutes ago, Waffles13 said:

I believe it was GN that said that most of the new real-time ray tracing Nvidia has been showing off utilizes the tensor cores, which is why they always advertise it as a Volta-centric feature. If ray tracing is going to be the next big "thing", I could see them pushing out consumer cards with at least some form of tensor core on board, just to get an install base up so devs can start working with them. I'm not sure on a technical level whether they can add fewer/nerfed tensor cores compared to what's on the compute cards, but it would make some amount of sense if they were able to.

Which would make more sense given Microsoft's push to get ray tracing into Direct3D.
