GEFORCE GTX 1070 with GDDR5X VRAM

bitsandpieces

https://www.techpowerup.com/250266/nvidia-unveils-geforce-gtx-1070-with-gddr5x-memory

https://www.zotac.com/ro/product/graphics_card/zotac-geforce-gtx-1070-amp-extreme-core-gddr5x

 

Quote

It looks like NVIDIA bought itself a mountain of unsold GDDR5X memory chips, and is now refreshing its own mountain of unsold GP104 inventory, to make products more presentable to consumers in the wake of its RTX 20-series and real-time ray-tracing lure. First, it was the GP104-based GTX 1060 6 GB with GDDR5X memory, and now it's the significantly faster GeForce GTX 1070, which is receiving the faster memory, along with otherwise unchanged specifications. ZOTAC is among the first NVIDIA add-in card partners ready with one such card, the GTX 1070 AMP Extreme Core GDDR5X (model: ZT-P10700Q-10P).

 

Much like the GTX 1060 6 GB GDDR5X, this otherwise factory-overclocked ZOTAC card sticks to a memory clock speed of 8.00 GHz, despite using GDDR5X memory chips that are rated for 10 Gbps. It features 8 GB of it across the chip's full 256-bit memory bus width. The GPU is factory-overclocked by ZOTAC to tick at 1607 MHz, with 1797 MHz GPU Boost, which is below the clock speeds of the GDDR5 AMP Extreme SKU, which has not just a higher 1805 MHz GPU Boost frequency but also overclocked memory at 8.20 GHz. Out of the box, this card's performance shouldn't be distinguishable from the GDDR5 AMP Core's, but the memory alone should serve up significant overclocking headroom.
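For a sense of what that headroom means in raw numbers, here's a quick back-of-the-envelope bandwidth check (a minimal Python sketch; the 256-bit bus, the 8 Gbps shipping speed, and the 10 Gbps chip rating are the only figures taken from the quoted article):

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
def bandwidth_gb_per_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_per_s(256, 8.0))   # 256.0 GB/s -- GTX 1070 as shipped (GDDR5 or GDDR5X at 8 Gbps)
print(bandwidth_gb_per_s(256, 10.0))  # 320.0 GB/s -- if the GDDR5X were pushed to its rated 10 Gbps
```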

 

Good news for people who wanted to buy a 10-series card and not a more expensive RTX card.

 



NVIDIA is confusing the fuck out of me for real.

 

GTX 1060 3GB, GTX 1060 6GB, GTX 1060 5GB. Now there's the GTX 1070 GDDR5, the GTX 1070 Ti, and now this shit.


Gotta move those Pascal chips

I WILL find your ITX build thread, and I WILL recommend the Silverstone Sugo SG13B

 

Primary PC:

i7 8086k (won) - EVGA Z370 Classified K - G.Skill Trident Z RGB - WD SN750 - Jedi Order Titan Xp - Hyper 212 Black (with RGB Riing flair) - EVGA G3 650W - dual booting Windows 10 and Linux - Black and green theme, Razer brainwashed me.

Draws 400 watts under max load, for reference.

 

Linux Proliant ML150 G6:

Dual Xeon X5560 - 24GB ECC DDR3 - GTX 750 TI - old Seagate 1.5TB HDD - dark mode Ubuntu (and Win7, cuz why not)

 

How many watts do I need? Seasonic Focus thread, PSU misconceptions, protections explained, group reg is bad


I don't understand Nvidia. People at my school aren't that smart and the CEO of Nvidia graduated from my high school. I'm sensing something. Is the 1070Ti with GDDR5X gonna roll in? If so... MMM

8086k

aorus pro z390

noctua nh-d15s chromax w black cover

evga 3070 ultra

samsung 128gb, adata swordfish 1tb, wd blue 1tb

seasonic 620w dogballs psu

 

 


10 minutes ago, Taf the Ghost said:

The GP104 really must have yielded a lot of dead SMs for Nvidia. How many non-full die models are they trying to sell now? I count 4.

That's why monolithic dies are a problem.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


2 minutes ago, Dabombinable said:

That's why monolithic dies are a problem.

AMD can't go chiplet for gaming GPUs for a while, but they'll get there. They have the tech, but they need a new baseline architecture to make it happen. Vega on 7nm has all of the IF built in to do it at the connection side for Compute, but Gaming will take a bit longer. Probably 2022 at the earliest.


My head hurts. Let me see if I have got this right...

 

So you have the GP104, which was originally used in the GTX 1080 and GTX 1070.

In the 1080 it is paired with 8 GB of GDDR5X memory and has 2560 CUDA cores.

In the 1070 it gets 8 GB of GDDR5 and a cut-down 1920 CUDA cores.
Then Nvidia released the GP104 in the 1070 Ti, which had 2432 CUDA cores and was essentially a slightly cut-down GTX 1080, but with GDDR5 memory.
Then the GP104 turned up in the 1060 with 6 GB of GDDR5X memory and who knows how many CUDA cores enabled; that's not to be confused with the GTX 1060 3GB, GTX 1060 6GB 8 Gbps, GTX 1060 6GB 9 Gbps, or GTX 1060 5GB, which all use the GP106 die with GDDR5.
Now the GP104 is used again in the GTX 1070, with the same 1920 CUDA cores as the original GTX 1070 but now with GDDR5X memory... yet still run at the same 8 Gbps as the GDDR5 version. (A rough summary is sketched below.)
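To keep all of that straight, here's the same breakdown as a small Python mapping (purely illustrative, restating what's written above; core counts the post doesn't give are left as None):

```python
# Pascal SKU summary, as described in this post.
pascal_skus = {
    "GTX 1080":              {"die": "GP104", "cuda_cores": 2560, "memory": "8 GB GDDR5X"},
    "GTX 1070 Ti":           {"die": "GP104", "cuda_cores": 2432, "memory": "8 GB GDDR5"},
    "GTX 1070":              {"die": "GP104", "cuda_cores": 1920, "memory": "8 GB GDDR5"},
    "GTX 1070 GDDR5X":       {"die": "GP104", "cuda_cores": 1920, "memory": "8 GB GDDR5X @ 8 Gbps"},
    "GTX 1060 6GB GDDR5X":   {"die": "GP104", "cuda_cores": None, "memory": "6 GB GDDR5X"},
    "GTX 1060 3GB/5GB/6GB":  {"die": "GP106", "cuda_cores": None, "memory": "GDDR5 (8 or 9 Gbps)"},
}

for name, spec in pascal_skus.items():
    cores = spec["cuda_cores"] or "?"
    print(f"{name:22s} {spec['die']}  {cores:>5} cores  {spec['memory']}")
```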

 


 

Is this going to replace the GDDR5 SKUs, or are they going to be sold alongside one another, with the GDDR5X version as the more premium product? Because I can't imagine there being any real-world performance benefit over the existing GTX 1070.

Are they just trying to shift towards GDDR5X memory and stop using GDDR5, or does everyone just have a shitload of GDDR5X memory they want to clear?

CPU: Intel i7 6700k  | Motherboard: Gigabyte Z170x Gaming 5 | RAM: 2x16GB 3000MHz Corsair Vengeance LPX | GPU: Gigabyte Aorus GTX 1080ti | PSU: Corsair RM750x (2018) | Case: BeQuiet SilentBase 800 | Cooler: Arctic Freezer 34 eSports | SSD: Samsung 970 Evo 500GB + Samsung 840 500GB + Crucial MX500 2TB | Monitor: Acer Predator XB271HU + Samsung BX2450


Have cards like a 2030 or 2050 been released yet? Couldn't they make those out of these old chips instead?


4 hours ago, fasauceome said:

Gotta move those Pascal chips

Let's put new memory on old chips that have been sitting in a warehouse, so people buy them at a premium instead of at a discount, even though they're in fact old, unsold, excess chips whose prices should just be going down.



Just give us a dual-GPU 1090 with 16 GB of VRAM and sell it at the same price as the 2080 Ti.

 

There

Specs: Motherboard: Asus X470-PLUS TUF Gaming (yes, I know it's poor, but I wasn't informed) | RAM: Corsair Vengeance LPX DDR4 3200 MHz CL16-18-18-36 2x8GB | CPU: Ryzen 9 5900X | Case: Antec P8 | PSU: Corsair RM850x | Cooler: Antec K240 with two Noctua Industrial PPC 3000 PWM | Drives: Samsung 970 EVO Plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 | GPU: EVGA RTX 2080 Ti Black Edition


9 hours ago, Taf the Ghost said:

AMD can't go chiplet for gaming GPUs for a while, but they'll get there. They have the tech, but they need a new baseline architecture to make it happen. Vega on 7nm has all of the IF built in to do it at the connection side for Compute, but Gaming will take a bit longer. Probably 2022 at the earliest.

The problem with GPUs and chiplet design is that, compared to CPUs, there's a massive difference in the ratio between the front-end and the back-end; i.e., the vast majority of the GPU is execution units, and that's not something you want to separate from the front-end, for latency reasons. Especially when recent designs are trying to avoid hitting VRAM as much as possible.

 

The only way I can see a chiplet design working is if they stick to a lower SKU tier and just double or quadruple it up for the higher-end tiers. Which basically means you now have on-board CrossFire.


7 hours ago, M.Yurizaki said:

The problem with GPUs and chiplet design is that, compared to CPUs, there's a massive difference in the ratio between the front-end and the back-end; i.e., the vast majority of the GPU is execution units, and that's not something you want to separate from the front-end, for latency reasons. Especially when recent designs are trying to avoid hitting VRAM as much as possible.

 

The only way I can see a chiplet design working is if they stick to a lower SKU tier and just double or quadruple it up for the higher-end tiers. Which basically means you now have on-board CrossFire.

We'll see a Navi Compute product that's on-PCB CrossFire. Looking at the MI60 release information, they're already doing that; they just haven't done it on a single PCB yet. So they're already there. (AdoredTV mentioned an MI100 that is a dual-GPU setup in a recent video after New Horizons, so it's pretty much a given that it comes out.) The issue is gaming, which is why David Wang, AMD's GPU head, has already shot it down.

 

Threadripper/Epyc 1-style MCM won't work on gaming GPUs, and it'll have some issues in certain GPU compute tasks as well, but we'll see what latency looks like once Rome and Ryzen 3000 launch. The I/O die approach is what will work for Gaming GPUs, and I expect AMD to call them something like "Engines" that they attach to the central I/O die. Unlike CPUs, GPUs don't have a huge amount of comms overhead relative to their stack of FPUs. This would make the I/O die relatively small, given the hierarchical nature of GPUs, and you could then just add Engines to it.

 

Pretty sure these would need to be on an interposer and not a normal package, which also shows why HBM made so much sense, at least until it became clear HBM was never coming down in price. That approach might get saved for high-end/workstation SKUs, which AMD could then produce far more cheaply than 800 mm² monolithic dies. It also explains the rumor going around that AMD is headed back towards something like a Terascale/VLIW approach. GCN is here to stay as an ISA, but not as a design system. AMD would run out near-pure-FPU chiplets and then some sort of compute chiplet to go with them. Professional cards get more compute chiplets, and they can roll out semi-custom compute stacks as well.


3 minutes ago, Taf the Ghost said:

The I/O die approach is what will work for Gaming GPUs, and I expect AMD to call them something like "Engines" that they attach to the central I/O die.

Even then, assuming they went with GCN in design, there's not a lot to lop off. For instance this is a block diagram of the high-end R9 200:

[Image: AMD Hawaii (GCN 2.0) block diagram]

 

You could at best separate the right side and the row of MCs at the bottom. Everything else is a core if you want to find some equivalent in CPU land. It also appears that TeraScale was laid out in a similar fashion as well.

 

Though the problem is that considering how data hungry GPUs are, the number of memory channels would likely increase with the number of GPU chiplets you have. It's terrifying to think we now have consumer grade GPUs that operate at over half a terabyte per second of bandwidth.
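To put a number on that last point, the "over half a terabyte per second" figure checks out with the same bus-width arithmetic used earlier in the thread. The card I'm assuming is being referred to here is the RTX 2080 Ti, with its 352-bit GDDR6 bus at 14 Gbps:

```python
def bandwidth_gb_per_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    # bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_per_s(352, 14.0))  # 616.0 GB/s -- RTX 2080 Ti (assumed example), well past 0.5 TB/s
```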


So Nvidia has decided to disable some parts on the GTX 1080 and then rebadge them as GTX 1060s and GTX 1070s. Someone got hold of one of the GDDR5X GTX 1060s and discovered it's actually built on the GTX 1080's die.

Intel Xeon E5 1650 v3 @ 3.5GHz 6C:12T / CM212 Evo / Asus X99 Deluxe / 16GB (4x4GB) DDR4 3000 Trident-Z / Samsung 850 Pro 256GB / Intel 335 240GB / WD Red 2 & 3TB / Antec 850w / RTX 2070 / Win10 Pro x64

HP Envy X360 15: Intel Core i5 8250U @ 1.6GHz 4C:8T / 8GB DDR4 / Intel UHD620 + Nvidia GeForce MX150 4GB / Intel 120GB SSD / Win10 Pro x64

 

HP Envy x360 BP series Intel 8th gen

AMD ThreadRipper 2!

5820K & 6800K 3-way SLI mobo support list

 


2 minutes ago, M.Yurizaki said:

Even then, assuming they went with GCN in design, there's not a lot to lop off. For instance this is a block diagram of the high-end R9 200:

[Image: AMD Hawaii (GCN 2.0) block diagram]

 

You could at best separate the right side and the row of MCs at the bottom. Everything else is a core if you want to find some equivalent in CPU land. It also appears that TeraScale was laid out in a similar fashion as well.

 

Though the problem is that considering how data hungry GPUs are, the number of memory channels would likely increase with the number of GPU chiplets you have. It's terrifying to think we now have consumer grade GPUs that operate at over half a terabyte per second of bandwidth.

Gotta push that 4K. :)

 

But you've pointed to exactly why AMD would split it up. Purely from a die perspective, splitting the I/O and the four Engines into five dies would gain AMD a huge increase in yield and much better bins. The on-interposer latency penalty is something like 1-2 ns, so it's not really much of a problem in that regard. But you can't just run out 2-4 full GPU dies, MCM them together, and make a gaming GPU. You need the full chiplet approach, along with all of the tech in place for it. That's going to take a lot of work on the front-end to accomplish, i.e. a brand-new architecture, which from AMD is probably 2022-2023 and possibly on a 5nm node.
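The yield argument is easy to illustrate with the simple Poisson yield model, Y = exp(-D·A). This is only a toy sketch: the defect density and die areas below are assumed, illustrative numbers, not AMD figures.

```python
import math

def poisson_yield(defects_per_mm2: float, die_area_mm2: float) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

D = 0.002  # assumed defect density, defects per mm^2

monolithic = poisson_yield(D, 500)  # one big ~500 mm^2 GPU die (assumed size)
chiplet    = poisson_yield(D, 100)  # one of five ~100 mm^2 dies (assumed split)

print(f"monolithic die yield:  {monolithic:.1%}")  # ~36.8%
print(f"single chiplet yield:  {chiplet:.1%}")     # ~81.9%
# The win isn't that five perfect chiplets are more likely than one perfect big die
# (0.819**5 is still ~37%); it's that each defect now scraps ~100 mm^2 instead of
# ~500 mm^2 of silicon, and partially defective chiplets can still be binned down.
```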

 

CPUs went first because the x86 Server Market is 5-7x larger than the dGPU market. AMD will eventually be "Chiplet all the things!" because it makes the most technical & strategic sense for a Fabless company.


Just now, handymanshandle said:

Gee, GPUs sure were less confusing three months ago.

No kidding. LOL


2 hours ago, M.Yurizaki said:

Even then, assuming they went with GCN in design, there's not a lot to lop off. For instance this is a block diagram of the high-end R9 200:

[Image: AMD Hawaii (GCN 2.0) block diagram]

 

You could at best separate the right side and the row of MCs at the bottom. Everything else is a core if you want to find some equivalent in CPU land. It also appears that TeraScale was laid out in a similar fashion as well.

 

Though the problem is that considering how data hungry GPUs are, the number of memory channels would likely increase with the number of GPU chiplets you have. It's terrifying to think we now have consumer grade GPUs that operate at over half a terabyte per second of bandwidth.

 

 

Care to go into a bit more detail about how the processing of stuff differs between a CPU and a GPU? A worked example for each would be handy.

 

I mean, my way of going at this would be to take everything except the L2 cache and the shader engines and shove it onto an I/O die, including the GDDR controllers. Then take each individual shader engine and make it its own chiplet with its own L2, connected to the I/O die via IF, possibly with one or more layers of HBM on top of the chiplet as a large, fast cache to avoid major calls to the memory unit (a sort of mirrored L4 shared across all chiplets). Using HBM2-spec layers, a single layer would be a 256-bit interface at 500 MHz, giving each shader engine 16 GB/s of access to its set with a 1 Gb capacity. HBM3 will supposedly have more storage, higher clocks, and be way cheaper to do.
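For what it's worth, the 16 GB/s figure falls out of the usual bus-width arithmetic, and it depends on how that 500 MHz is read (a quick sketch; the 256-bit and 500 MHz numbers are from the post above, the double-data-rate reading is my addition):

```python
def bandwidth_gb_per_s(bus_width_bits: int, transfer_rate_gtps: float) -> float:
    # bandwidth = bus width (bits) x transfer rate (GT/s) / 8 bits per byte
    return bus_width_bits * transfer_rate_gtps / 8

print(bandwidth_gb_per_s(256, 0.5))  # 16.0 GB/s if 500 MHz is taken as 500 MT/s effective
print(bandwidth_gb_per_s(256, 1.0))  # 32.0 GB/s if 500 MHz is a base clock with two transfers per cycle
```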


59 minutes ago, CarlBar said:

-Snip-

The problem with this setup is that the front-end is now on the I/O chip, and the front-end needs to access the same cache as the execution resources (primarily the instruction cache). Ideally, each execution resource you add should have its own front-end to manage it; otherwise a centralized front-end may spend a lot of time on overhead, just keeping tabs on what's in use.

 

For a better comparison, this is a simplified block diagram of GCN:

[Image: generic block diagram of a GPU (GCN)]

 

And a simplified block diagram of a CCX

[Image: simplified block diagram of a Zen CCX/SoC]

 

All the chiplet design does for Rome is take everything above the Infinity Fabric interface and move it elsewhere. The thing with GPUs is that the massive block of execution units is tightly coupled and hungry; anything that disturbs feeding it will likely cause hiccups. On a side note, the tan part of the CCX diagram is the front-end, and the light green part is the execution area.

 

I mean, I get it, chiplet designs look very promising. But they're not the solution to every problem, at least with what we currently know.


1 hour ago, M.Yurizaki said:

The problem with this setup is that the front-end is now on the I/O chip, and the front-end needs to access the same cache as the execution resources (primarily the instruction cache). Ideally, each execution resource you add should have its own front-end to manage it; otherwise a centralized front-end may spend a lot of time on overhead, just keeping tabs on what's in use.

 

For a better comparison, this is a simplified block diagram of GCN:

[Image: generic block diagram of a GPU (GCN)]

 

And a simplified block diagram of a CCX

[Image: simplified block diagram of a Zen CCX/SoC]

 

All the chiplet design does for Rome is take everything above the Infinity Fabric interface and move it elsewhere. The thing with GPUs is that the massive block of execution units is tightly coupled and hungry; anything that disturbs feeding it will likely cause hiccups. On a side note, the tan part of the CCX diagram is the front-end, and the light green part is the execution area.

 

I mean, I get it, chiplet designs look very promising. But they're not the solution to every problem, at least with what we currently know.

 

I'm having a bit of trouble following. In one sense I get you about the front-end part: on a Zen 2 chiplet each CPU core effectively has its own front end (I think it's called the execution unit, but don't quote me, it's late and I'm tired). But at the same time, in the diagram you linked that I quoted in my previous post, you've got one front-end controlling four back-ends (shader engines). Other than the front-end being on the same chip rather than a separate one, and the shared L2 cache, I'm not sure what the difference is. It doesn't help that I'm not sure what some of the letters on that first image mean, so it's possible the answer is staring me in the face and I'm just not seeing it because my tired brain can't make an accurate guess.

 

Whatever happened to keys on diagrams...


I regret to inform you guys that this was a typo.

 

https://www.pcgamer.com/is-the-geforce-gtx-1070-getting-a-gddr5x-memory-upgrade-not-so-fast/

 

Quote

Zotac recently added another GeForce GTX 1070 graphics card model to its product stack, and in doing so it seemingly revealed that Nvidia was adding a GDDR5X memory option to the 1070, like it did with the GeForce GTX 1060 in October. If you got your hopes up, brace yourself—the GDDR5X memory that initially appeared in the specifications was a "simple mistake" on Zotac's part.

 

This was not immediately evident. When the new SKU appeared online (model ZT-P10700Q-10P), it displayed "GDDR5X" in big, bold letters in the product title, and also the accompanying specifications chart. Zotac pulled the listing offline, though not before Google could cache a snapshot.

 

Naturally, this led to the assumption that Zotac inadvertently announced something that Nvidia was not yet ready to reveal. However, the listing has since reappeared, only this time it shows the usual GDDR5 memory, not GDDR5X. I asked Zotac what was up with that, and the company confirmed it was just a typo.

Sorry, no GDDR5X 1070 is coming.


Quote

Zotac recently added another GeForce GTX 1070 graphics card model to its product stack, and in doing so it seemingly revealed that Nvidia was adding a GDDR5X memory option to the 1070, like it did with the GeForce GTX 1060 in October. If you got your hopes up, brace yourself—the GDDR5X memory that initially appeared in the specifications was a "simple mistake" on Zotac's part.

 

WHY is Zotac adding another 1070 to its lineup!? The GTX 1070 was released two and a half years ago. Zotac, you've had two and a half years to flesh out your lineup, and its replacement, the 2070, has already launched. Why are you releasing new models of old cards now, Zotac? Why!? It was strange enough when we thought it was a new variant, but if it's the same model that has been available for the last two and a half years, it just does not make any sense.

 


 

Just when I thought I was no longer confused, Zotac uses Confuse Ray.

CPU: Intel i7 6700k  | Motherboard: Gigabyte Z170x Gaming 5 | RAM: 2x16GB 3000MHz Corsair Vengeance LPX | GPU: Gigabyte Aorus GTX 1080ti | PSU: Corsair RM750x (2018) | Case: BeQuiet SilentBase 800 | Cooler: Arctic Freezer 34 eSports | SSD: Samsung 970 Evo 500GB + Samsung 840 500GB + Crucial MX500 2TB | Monitor: Acer Predator XB271HU + Samsung BX2450

