
NVIDIA Pascal officially called GP100 and GP104 w/ HBM2

BiG StroOnZ

And AMD is still lagging behind... why can't they keep up with Intel and Nvidia's process shrinks? :(

 

AMD lagging behind Nvidia?

Haha what a laugh.

 

Yeah, for now the GTX 980 and Titan X are better, but for how long? lol.

Honestly, the whole GTX 900 series did not impress me at all.

If you play high-texture games at 1440p or up, with all the filters cranked up, the GTX 980 is only marginally faster than a 290X.


AMD lagging behind Nvidia?

Haha what a laugh.

 

Yeah, for now the GTX 980 and Titan X are better, but for how long? lol.

Do I need to remind you about the GDDR4 debacle? When it comes to graphics, AMD is always one step of innovation behind Nvidia, and Nvidia's innovations get copied the next generation. Mark my words: Arctic Islands will increase the cache sizes per SP dramatically. Nvidia was also the first to offer fully programmable shaders, which ATI copied.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Do I need to remind you about the GDDR4 debacle? When it comes to graphics, AMD is always one step of innovation behind Nvidia, and Nvidia's innovations get copied the next generation. Mark my words: Arctic Islands will increase the cache sizes per SP dramatically. Nvidia was also the first to offer fully programmable shaders, which ATI copied.

How do you know? We don't even have cache info for Tonga as it sits. Nvidia only raised cache density in Maxwell so they could cut back the memory interface to make the card cheaper: a higher-density L2 alleviates the need for memory bandwidth, something AMD is not short of with HBM.
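(Quick sanity check on the numbers, using the usual bus width / 8 × effective data rate formula: the GTX 980 gets by on 256 bit / 8 × 7 Gbps = 224 GB/s but carries a 2 MB L2, while the 290X throws 512 bit / 8 × 5 Gbps = 320 GB/s at the problem with a 1 MB L2. The fatter cache is what lets Maxwell compete on the narrower, cheaper interface.)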


[Hynix presentation slide on the AMD partnership and HBM]

 

 

Based on the strong partnership with AMD, Hynix navigates the best Graphics solutions of each system for the future today.

http://www.amd.com/Documents/TFE2011_006HYN.pdf#search=hbm

 

Why does anyone think AMD had nothing to do with the development of HBM?

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


How do you know? We don't even have cache info for Tonga as it sits. Nvidia only raised cache density in Maxwell so they could cut back the memory interface to make the card cheaper: a higher-density L2 alleviates the need for memory bandwidth, something AMD is not short of with HBM.

Games are not bandwidth-starved, something you'd know if you ever bothered running a memory profiler. Nvidia did that primarily for the power gains: an effort towards maintaining the crown in the HPC space, and a proof-of-concept prep for Pascal. Not to mention Maxwell had originally been planned for 20nm with DP support, but alas, TSMC fell down on the job.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


AMD lagging behind Nvidia?

Haha what a laugh.

LOLOL TOP KEKEKEK M8 WHAT A LAUGH

/s

Your colors are showing.

They are marginally behind Nvidia in certain respects (performance per watt, and overall performance in general). While they do have good GPUs, they are going to be left in the dust if they don't deliver on their next release.

 

Yeah, for now the GTX 980 and Titan X are better, but for how long? lol.

Until AMD delivers on their next Radeon launch, which hopefully will be this year. I still have doubts about them even beating the Titan X.

 

Honestly, the whole GTX 900 series did not impress me at all.

Really? Low TDP with big performance per watt didn't impress you? What if AMD were to pull the same thing? Would you say what you're saying now? I'd bet not - you'd be praising the hell out of them.

 

If you play high-texture games at 1440p or up, with all the filters cranked up, the GTX 980 is only marginally faster than a 290X.

Still beats the 290X regardless of what you consider marginal.



http://www.amd.com/Documents/TFE2011_006HYN.pdf#search=hbm

 

Why does anyone think AMD had nothing to do with the development of HBM?

Because there's no proof AMD did anything beyond developing its mainboard and pre-emptively getting driver preparation going. Hynix has hundreds of patents related to the HBM project, none of which even mention AMD. AMD has no patents related to the endeavor. Beyond being a financial partner and getting exclusive first-usage rights, we have no proof AMD did anything substantial on the development side.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Do I need to remind you about the GDDR4 debacle? When it comes to graphics, AMD is always one step of innovation behind Nvidia, and Nvidia's innovations get copied the next generation. Mark my words: Arctic Islands will increase the cache sizes per SP dramatically. Nvidia was also the first to offer fully programmable shaders, which ATI copied.

Which explains why AMD was the first to push audio through HDMI, use asynchronous shaders (since Tahiti, while Nvidia only adopted the tech with Maxwell), use XDMA for multi-GPU, use HBM, or put hybrid water cooling on a reference card... just to name a few firsts. Spewing nonsense and generalized statements that aren't true (like saying AMD is always one step behind in innovation) is silly. Both Nvidia and AMD are strong innovators, and both piggyback on each other to get ahead.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


Games are not bandwidth-starved, something you'd know if you ever bothered running a memory profiler. Nvidia did that primarily for the power gains: an effort towards maintaining the crown in the HPC space, and a proof-of-concept prep for Pascal. Not to mention Maxwell had originally been planned for 20nm with DP support, but alas, TSMC fell down on the job.

There's no such thing as a game being bandwidth-starved. The memory subsystem feeds the GPU; the game has nothing to do with it. Resources are piped into VRAM, and the registers are constantly poked by both the CPU and the GPU. The game sits on top of what is actually a high-level graphics stack, so it couldn't do much with the GPU even if it wanted to.

The increase in L2 helps alleviate memory bandwidth pressure but also reduces power consumption: they could drop power, cut back the interface, and all that jazz just by doubling the L2. AMD could have already pumped up their caches; we just don't know, because IPv8 hasn't been disclosed at all. We have the GPU but no real information about the architecture inside it. We've even contacted AMD specifically about the L2 density in Tonga, and they never responded.
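For anyone who'd rather measure than argue, here's a minimal CUDA sketch (mine, untested, all names invented) that times a simple streaming kernel with events and prints the DRAM bandwidth it actually sustains. If the achieved figure sits well below the card's theoretical peak, memory bandwidth isn't your bottleneck.

```cuda
// Time a streaming kernel and report achieved DRAM bandwidth.
// Compare against the card's theoretical peak (e.g. ~224 GB/s on a GTX 980).
// Error checking omitted for brevity.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];   // two reads + one write per element
}

int main() {
    const int n = 1 << 26;                    // 64M floats, 256 MiB per array
    const size_t bytes = n * sizeof(float);
    float *x, *y;
    cudaMalloc(&x, bytes);
    cudaMalloc(&y, bytes);
    cudaMemset(x, 0, bytes);
    cudaMemset(y, 0, bytes);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);   // warm-up launch

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // Three float transfers per element: two reads + one write.
    printf("achieved ~%.1f GB/s\n", 3.0 * bytes / (ms * 1e6));
    return 0;
}
```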


Do I need to remind you about the GDDR4 debacle? When it comes to graphics, AMD is always one step of innovation behind Nvidia, and Nvidia's innovations get copied the next generation. Mark my words: Arctic Islands will increase the cache sizes per SP dramatically. Nvidia was also the first to offer fully programmable shaders, which ATI copied.

Is that why Nvidia had no DX 10.1 cards until their 300M series, why the GTX 400 and 500 series were a disaster, why they put 3GB of VRAM on the 700 series, why AMD had multi-monitor support on one card first, or why Nvidia still doesn't have bridgeless SLI? Besides them finally pulling ahead in power consumption with Maxwell, please tell me exactly how Nvidia has always been ahead in graphics innovation.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Trascend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


Which explains why AMD was the first to push audio through HDMI, use asynchronous shaders (since Tahiti, while Nvidia only adopted the tech with Maxwell), use XDMA for multi-GPU, use HBM, or put hybrid water cooling on a reference card... just to name a few firsts. Spewing nonsense and generalized statements that aren't true (like saying AMD is always one step behind in innovation) is silly. Both Nvidia and AMD are strong innovators, and both piggyback on each other to get ahead.

Nvidia knows how to read the market, much the way Intel does. Nvidia plays the business game, releasing features when developers are actually ready for them. XDMA has some major microstutter issues even to this day. Frankly I don't blame Nvidia for keeping away from it through Maxwell.

 

Hybrid water cooling was an admission of engineering failure. It would be one thing to offer it as an option, but it was the only SKU AMD released. Ever seen a Titan Z benchmarked under water? The 295X2 loses. That said, Nvidia leaves the option up to the customer, while AMD gets in the way of enthusiasts; not to mention the cooler on the 295X2 is damn loud.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Is that why Nvidia had no DX 10.1 cards until their 300M series, why the GTX 400 and 500 series were a disaster, why they put 3GB of VRAM on the 700 series, why AMD had multi-monitor support on one card first, or why Nvidia still doesn't have bridgeless SLI? Besides them finally pulling ahead in power consumption with Maxwell, please tell me exactly how Nvidia has always been ahead in graphics innovation.

DX 10.1 adoption was practically non-existent until Nvidia jumped aboard; it's called reading the market momentum. The 400 and 500 series were not disasters, and I currently run off a 570 in my school build. Up until very recently, 3GB was more than enough; then suddenly memory management in games went to pot.

 

XDMA XFire still has pretty prolific microstutter issues. Frankly, it's not a superior solution right now, and Nvidia understands that. There's also the fact that going bridgeless means potentially creating a PCIe bottleneck where there wasn't one before; using a bridge cable mitigates this. Nvidia pulled ahead on power consumption with Kepler, or else IBM wouldn't be using Kepler in their supercomputers; they'd have picked AMD's FirePro chips.

 

Fully programmable shaders, HDMI 2.0, DP 1.2, physics engines, real-time rendering and simulation, GPGPU AI programming, and a whole slew more. Nvidia is still a step ahead, but Nvidia also knows which steps to take when. Some of you may read Nvidia's slowness to move as complacency or arrogance, but I'd pose this question: is it arrogance if you really are the best in the room? It seems most consumers recognize Nvidia's superiority, and I can guarantee you it's not primarily marketing.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Hybrid water cooling was an admission of engineering failure. It would be one thing to offer it as an option, but it was the only SKU AMD released. Ever seen a Titan Z benchmarked under water? The 295X2 loses. That said, Nvidia leaves the option up to the customer, while AMD gets in the way of enthusiasts; not to mention the cooler on the 295X2 is damn loud.

 

So AMD ships a two-slot water-cooled card that is factory overclocked, performs amazingly for the architecture, and runs at low temps. Nvidia then ships a $3k card that is factory UNDERclocked yet still thermal-throttles with its three-slot blower, at very high temps, and AMD is the admission of failure? And the Titan Z is an "it's not a bug, it's a feature" kind of thing? The Titan Z should have been on water too, and might have maybe been a little worth the money if OC'd on water. You cannot criticize AMD for something that Nvidia outright failed at.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


There's no such thing as a game being bandwidth-starved. The memory subsystem feeds the GPU; the game has nothing to do with it. Resources are piped into VRAM, and the registers are constantly poked by both the CPU and the GPU. The game sits on top of what is actually a high-level graphics stack, so it couldn't do much with the GPU even if it wanted to.

The increase in L2 helps alleviate memory bandwidth pressure but also reduces power consumption: they could drop power, cut back the interface, and all that jazz just by doubling the L2. AMD could have already pumped up their caches; we just don't know, because IPv8 hasn't been disclosed at all. We have the GPU but no real information about the architecture inside it. We've even contacted AMD specifically about the L2 density in Tonga, and they never responded.

You do know you can figure that out directly if you have a Linux system, right? KCachegrind, anyone? You can run it on graphics processes and yank out GPU info as well. Seriously, when did this community go amateur hour on me?
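For the record, KCachegrind is the front-end for Valgrind's cachegrind output, which simulates the CPU-side cache behaviour of whatever process you run under it (driver and runtime code included); the GPU's own cache counters need vendor tools. A rough sketch of the workflow, with a file name and flags of my own choosing:

```cuda
// stride_demo.cu - host-only code, so any C++ toolchain works (nvcc included).
// Build unoptimized so the loops survive, then inspect the miss counts:
//   nvcc -O0 -o stride_demo stride_demo.cu
//   valgrind --tool=cachegrind ./stride_demo
//   kcachegrind cachegrind.out.<pid>
#include <cstdio>

const int N = 2048;
static float m[N][N];

int main() {
    float sum = 0.0f;
    // Row-major walk: sequential addresses, low cache miss rate.
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            sum += m[i][j];
    // Column-major walk: 8 KiB stride, cachegrind shows it missing heavily.
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i)
            sum += m[i][j];
    printf("%f\n", sum);
    return 0;
}
```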

 

Also, exactly, which is why HBM won't help much. Bandwidth is not the issue unless you're doing scientific computing, and even then the PCIe 3.0 bus is the bottleneck right now for that use case.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


So AMD ships a two-slot water-cooled card that is factory overclocked, performs amazingly for the architecture, and runs at low temps. Nvidia then ships a $3k card that is factory UNDERclocked yet still thermal-throttles with its three-slot blower, at very high temps, and AMD is the admission of failure? And the Titan Z is an "it's not a bug, it's a feature" kind of thing? The Titan Z should have been on water too, and might have maybe been a little worth the money if OC'd on water. You cannot criticize AMD for something that Nvidia outright failed at.

No, the Titan Z was a fair stock-cooled option, but no one ever leaves the stock cooler on a dual-chip GPU, at least not any enthusiast I know. Create a cheap but functional air-cooled solution and let the enthusiasts have a field day with EKWB, Aquacomputer, or their phase-change/dry-ice/LN2 cooling.

 

No, the Titan Z was exactly what it should have been, though obnoxiously priced.

 

Nvidia didn't fail; Nvidia delivered what the market wanted. AMD delivered an anti-enthusiast design (very anti-modding) that was loud and forced users to choose between their custom-loop solutions and AMD. Bad move, and a failure of engineering and business tactics for this industry.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


You do know you can figure that out directly if you have a Linux system, right? KCachegrind, anyone? You can run it on graphics processes and yank out GPU info as well. Seriously, when did this community go amateur hour on me?

 

Also, exactly, which is why HBM won't help much. Bandwidth is not the issue unless you're doing scientific computing, and even then the PCIe 3.0 bus is the bottleneck right now for that use case.

So go buy an R9 285 and let us know how that works out for you. If I owned one, I would already know, as I would poke the registers myself. There's no "amateur hour" here, as I don't even need third-party software to do it. Unless you feel generous enough to donate an R9 285 to yours truly, we still won't and don't know.

 

PCIe 3.0 isn't a bottleneck at all, and Linus has a few videos on this.

The only place PCIe bandwidth becomes a problem is in supercomputing, which is irrelevant to consumers.
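Easy enough to sanity-check yourself; here's a rough CUDA sketch (untested, sizes my own) that times host-to-device copies. Pinned memory shows what the bus itself tops out at: PCIe 3.0 x16 is ~16 GB/s theoretical and usually lands around 12 GB/s in practice, well above what games actually push per frame.

```cuda
// Time host-to-device copies to see what PCIe actually delivers.
// Error checking omitted for brevity.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

static float copy_gbps(void *dst, const void *src, size_t bytes) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cudaMemcpy(dst, src, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return bytes / (ms * 1e6f);              // GB/s
}

int main() {
    const size_t bytes = 256u << 20;         // 256 MiB test buffer
    void *dev, *pageable, *pinned;
    cudaMalloc(&dev, bytes);
    pageable = malloc(bytes);
    cudaMallocHost(&pinned, bytes);          // page-locked: full bus speed

    printf("pageable: %.1f GB/s\n", copy_gbps(dev, pageable, bytes));
    printf("pinned:   %.1f GB/s\n", copy_gbps(dev, pinned, bytes));

    cudaFreeHost(pinned);
    free(pageable);
    cudaFree(dev);
    return 0;
}
```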


No, the Titan Z was a fair stock-cooled option, but no one ever leaves the stock cooler on a dual-chip GPU, at least not any enthusiast I know. Create a cheap but functional air-cooled solution and let the enthusiasts have a field day with EKWB, Aquacomputer, or their phase-change/dry-ice/LN2 cooling.

 

No, the Titan Z was exactly what it should have been, though obnoxiously priced.

 

Nvidia didn't fail; Nvidia delivered what the market wanted. AMD delivered an anti-enthusiast design (very anti-modding) that was loud and forced users to choose between their custom-loop solutions and AMD. Bad move, and a failure of engineering and business tactics for this industry.

 

What a load of BS.

 

Back in the day, your justification for a Titan Z was that it could be used for computing in schools and smaller companies, etc. NONE of those would EVER buy such a card and start building custom water-cooling loops, or spend even more money on that card. Makes no sense whatsoever.

 

AMD's design was in no way anti-enthusiast. On the contrary: many enthusiasts who are not interested in custom loops could easily buy and install that card and get amazing performance out of Hawaii.

 

Removing the cooler from a 295X2 or a Titan Z is not really any different; any enthusiast wanting a custom loop could do that. The difference is that they would actually be able to afford the custom loop with the AMD part. How many "enthusiasts" bought the Titan Z? The card costs more than most people's dual-GPU X99 computers.

 

Are you honestly saying that an UNDERclocked card that thermal-throttles is not a fail?

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


And AMD is still lagging behind... why can't they keep up with Intel and Nvidia's process shrinks? :(

 

And we couldn't even make it one post in before someone had to bring up AMD...

4K // R5 3600 // RTX2080Ti


Hopefully with all these GPU advances we'll get games with view distances beyond what the eye can resolve, life-like detail, and all the nice simulations that will make VR amazing.


Welcome to the internet. :/ 


Will be getting Pascal. Hopefully my 980 gets a bit more time :)

i7 6700K - ASUS Maximus VIII Ranger - Corsair H110i GT CPU Cooler - EVGA GTX 980 Ti ACX2.0+ SC+ - 16GB Corsair Vengeance LPX 3000MHz - Samsung 850 EVO 500GB - AX760i - Corsair 450D - XB270HU G-Sync Monitor

i7 3770K - H110 Corsair CPU Cooler - ASUS P8Z77 V-PRO - GTX 980 Reference - 16GB HyperX Beast 1600MHz - Intel 240GB SSD - HX750i - Corsair 750D - XB270HU G-Sync Monitor

*throws 970 out the window*

*tries to grab it before it lands* 

That's great news; 2016 will be an epic year!


Since my name is Pascal, I'm so going to buy a Pascal card :D

phanteks enthoo pro | intel i5 4690k | noctua nh-d14 | msi z97 gaming 5 | 16gb crucial ballistix tactical | msi gtx970 4G OC  | adata sp900

