
Nvidia Claims 3X Improved Performance In Certain Applications With New Driver Update for Titan Xp (Vega Response?)

Max_Settings
18 minutes ago, Billy_Mays said:

Well if they keep going this way it's not going to end well

How is that? Have you seen Vega and how it can barely compete with last year's tech? Nvidia is fine, as theirs are still the only cards worth buying.

 

No one knows whether this update was purposely held back or whether it's the result of genuine development and they actually have better code.


27 minutes ago, Billy_Mays said:

Well if they keep going this way it's not going to end well

You see, here's the problem with Vega that I don't think a lot of people have thought about. AMD is SO late to the party with Vega. They have a 1080 and 1070 competitor, 15 months later. Nvidia has enjoyed 15 months with zero competition and still has none in the high end against the 1080 Ti and Titan Xp. But here's the issue: Volta is 6-8 months out. So Vega will be relevant for 6-8 months, and then it's right back to complete Nvidia dominance. And do you think AMD is going to come out with a completely new lineup in 6-8 months? No chance. AMD is just trying to grab a little bit of money here at the end of this generation's lifecycle. And with Volta we are expecting almost another doubling in performance, like we saw from Maxwell to Pascal. So Nvidia will have a monopoly again.


12 minutes ago, mynameisjuan said:

How is that? Have you seen Vega and how it can barely compete with last year's tech? Nvidia is fine, as theirs are still the only cards worth buying.

 

No one knows whether this update was purposely held back or whether it's the result of genuine development and they actually have better code.

Vega FE is a lot faster than the Titan Xp in professional applications. This looks like a direct response to that.

 

if you want to annoy me, then join my teamspeak server ts.benja.cc


Just now, The Benjamins said:

Vega FE is a lot faster than the Titan Xp in professional applications. This looks like a direct response to that.

 

We don't know that yet, not 100%. You're all just assuming that Vega is better than or equal to Pascal, based on assumptions and AMD's always-fake benchmarks.


3 minutes ago, Max_Settings said:

You see, here's the problem with Vega that I don't think a lot of people have thought about. AMD is SO late to the party with Vega. They have a 1080 and 1070 competitor, 15 months later. Nvidia has enjoyed 15 months with zero competition and still has none in the high end against the 1080 Ti and Titan Xp. But here's the issue: Volta is 6-8 months out. So Vega will be relevant for 6-8 months, and then it's right back to complete Nvidia dominance. And do you think AMD is going to come out with a completely new lineup in 6-8 months? No chance. AMD is just trying to grab a little bit of money here at the end of this generation's lifecycle. And with Volta we are expecting almost another doubling in performance, like we saw from Maxwell to Pascal. So Nvidia will have a monopoly again.

Yeah, that's the problem. AMD should have made it a lot better than a Titan X, so that Nvidia would start shitting bricks.

I'm mostly on Discord now; you can find me on my profile

 

My Build: Xeon 2630L V, RX 560 2gb, 8gb ddr4 1866, EVGA 450BV 

My Laptop #1: i3-5020U, 8gb of DDR3, Intel HD 5500

 

 


1 minute ago, Max_Settings said:

We don't know that yet, not 100%. You're all just assuming that Vega is better than or equal to Pascal, based on assumptions and AMD's always-fake benchmarks.

What? We had benchmarks weeks ago. 

https://www.pcper.com/reviews/Graphics-Cards/Radeon-Vega-Frontier-Edition-16GB-Air-Cooled-Review/Professional-Testing-SPEC

if you want to annoy me, then join my teamspeak server ts.benja.cc


4 minutes ago, Max_Settings said:

You see, here's the problem with Vega that I don't think a lot of people have thought about. AMD is SO late to the party with Vega. They have a 1080 and 1070 competitor, 15 months later. Nvidia has enjoyed 15 months with zero competition and still has none in the high end against the 1080 Ti and Titan Xp. But here's the issue: Volta is 6-8 months out. So Vega will be relevant for 6-8 months, and then it's right back to complete Nvidia dominance. And do you think AMD is going to come out with a completely new lineup in 6-8 months? No chance. AMD is just trying to grab a little bit of money here at the end of this generation's lifecycle. And with Volta we are expecting almost another doubling in performance, like we saw from Maxwell to Pascal. So Nvidia will have a monopoly again.

I've noticed.

 

It seems like their whole ploy over the past year or so has been to offer comparable tech at a cheaper price, much later.

- ASUS X99 Deluxe - i7 5820k - Nvidia GTX 1080ti SLi - 4x4GB EVGA SSC 2800mhz DDR4 - Samsung SM951 500 - 2x Samsung 850 EVO 512 -

- EK Supremacy EVO CPU Block - EK FC 1080 GPU Blocks - EK XRES 100 DDC - EK Coolstream XE 360 - EK Coolstream XE 240 -


5 minutes ago, The Benjamins said:

Vega FE is a lot faster than the Titan Xp in professional applications. This looks like a direct response to that.

 

BUT is the Titan Xp really the Vega FE's competition?

- ASUS X99 Deluxe - i7 5820k - Nvidia GTX 1080ti SLi - 4x4GB EVGA SSC 2800mhz DDR4 - Samsung SM951 500 - 2x Samsung 850 EVO 512 -

- EK Supremacy EVO CPU Block - EK FC 1080 GPU Blocks - EK XRES 100 DDC - EK Coolstream XE 360 - EK Coolstream XE 240 -


6 minutes ago, Max_Settings said:

We don't know that yet, not 100%. You're all just assuming that Vega is better than or equal to Pascal, based on assumptions and AMD's always-fake benchmarks.

Vega FE's been released for a while lol.

1 minute ago, TidaLWaveZ said:

BUT is the Titan Xp really the Vega FE's competition?

They're in the same price bracket


1 minute ago, TidaLWaveZ said:

BUT is the Titan Xp really the Vega FE's competition?

Yes, both are marketed as in-between cards for gaming and professional use.

if you want to annoy me, then join my teamspeak server ts.benja.cc


12 hours ago, Glenwing said:

Maybe I'm remembering wrong, but I recall when AMD was pushing their Mantle API, NVIDIA responded by announcing their next driver would bring "up to double performance" in games, and it turned out to be 1-5% increase in a handful of games plus one game which got double performance in SLI, because... SLI wasn't working before, and the driver fixed it. So double performance in that game. Woooooh...

 

Seems like they might actually be worried about Vega if they're trying to kill hype with that old trick again.

This is what I'm expecting this time around. The 3x performance gain is going to show up only in extremely limited areas, and I mean a handful of cases, and most likely in places where things were not working as expected to begin with.

CPU: AMD 7800X3D | GPU: AMD 7900XTX


8 minutes ago, TidaLWaveZ said:

I've noticed.

 

It seems like their whole ploy over the past year or so has been to offer comparable tech at a cheaper price, much later.

It's not even at a much cheaper price. We're really fairly certain it's not going to be quite as fast as a 1080, and $499 is not much cheaper than a 1080. You can pick up 1080s for under $550, so the price gap isn't really that big. And especially with the TDP difference, you are going to pay back that saved money quickly in power bills.
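For anyone who wants to sanity-check the power-bill argument, here is a minimal back-of-the-envelope sketch. The board-power gap, daily hours, and electricity price are placeholder assumptions, not official specs; only the $499/$550 street prices come from the post above.

```cpp
#include <cstdio>

int main() {
    // Assumed inputs for illustration only -- not official TDP or pricing data.
    const double extra_watts   = 100.0;          // assumed board-power gap between the cards
    const double hours_per_day = 4.0;            // assumed daily gaming time
    const double usd_per_kwh   = 0.12;           // assumed electricity price
    const double price_gap_usd = 550.0 - 499.0;  // street prices quoted in the post

    const double extra_kwh_per_year  = extra_watts * hours_per_day * 365.0 / 1000.0;
    const double extra_cost_per_year = extra_kwh_per_year * usd_per_kwh;

    std::printf("Extra energy per year: %.0f kWh (about $%.2f)\n",
                extra_kwh_per_year, extra_cost_per_year);
    std::printf("Years to eat the $%.0f price gap: %.1f\n",
                price_gap_usd, price_gap_usd / extra_cost_per_year);
    return 0;
}
```

With these particular numbers the gap takes a few years to close, so how "quick" the payback is depends entirely on usage and local power prices.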


10 minutes ago, The Benjamins said:

Yes, both are marketed as in-between cards for gaming and professional use.

 

11 minutes ago, RagnarokDel said:

Vega FE's been released for a while lol.

They're in the same price bracket

 

That's where I was going. I don't know if it's still an issue or not, but wasn't/isn't the FE particularly lackluster at running games? When I was looking at benches, the FE was only slightly above a 1070, making it kind of a tough choice for a workstation/gaming card when the Titan Xp has been out for a while with lesser pro performance and better gaming performance.

- ASUS X99 Deluxe - i7 5820k - Nvidia GTX 1080ti SLi - 4x4GB EVGA SSC 2800mhz DDR4 - Samsung SM951 500 - 2x Samsung 850 EVO 512 -

- EK Supremacy EVO CPU Block - EK FC 1080 GPU Blocks - EK XRES 100 DDC - EK Coolstream XE 360 - EK Coolstream XE 240 -


Just now, TidaLWaveZ said:

 

 

That's where I was going. I don't know if it's still an issue or not, but isn't the FE particularly lackluster at running games? When I was looking at benches, the FE was only slightly above a 1070, making it kind of a tough choice for a pro/gaming card when the Titan Xp has been out for a while with lesser pro performance and better gaming performance.

True, it's been said the FE will get pro and gaming driver updates when those cards release.

 

But the point of the FE is pro before gaming; it's good for someone who will be working 8 hours a day and playing less than 4 hours a day.

if you want to annoy me, then join my teamspeak server ts.benja.cc


4 minutes ago, The Benjamins said:

True, it's been said the FE will get pro and gaming driver updates when those cards release.

 

But the point of the FE is pro before gaming; it's good for someone who will be working 8 hours a day and playing less than 4 hours a day.

It's Pro's before Ho's.

- ASUS X99 Deluxe - i7 5820k - Nvidia GTX 1080ti SLi - 4x4GB EVGA SSC 2800mhz DDR4 - Samsung SM951 500 - 2x Samsung 850 EVO 512 -

- EK Supremacy EVO CPU Block - EK FC 1080 GPU Blocks - EK XRES 100 DDC - EK Coolstream XE 360 - EK Coolstream XE 240 -


13 hours ago, JurunceNK said:

although all that brings is double the VRAM and actually-certified drivers for professional software, along with incredible reliability and an eye-watering price tag to boot

And ECC on the memory. That's the one thing I wish my 2016 Titan X had; I was almost tempted to buy a Quadro for it.

12 hours ago, Jahramika said:

Next, Nvidia will have Infinity Fabric and HBM memory and full DX12 support and FreeSync support!

They already have full DX12 support on Pascal, HBM memory on the Pascal Teslas, and they use an interposer like Infinity Fabric on a few of their enterprise products, so all they really lack is an adaptive-sync implementation to complete your list.

54 minutes ago, The Benjamins said:

yes both are marketed as in between cards for gaming and professional use.

The Titan Xp definitely wasn't. It was marketed from the start as a deep-learning/neural-net card, just like the 2016 Titan X. Nvidia refused to give samples to people like Jayz on the sole basis of "This is not a gaming card!"


59 minutes ago, TidaLWaveZ said:

 

 

That's where I was going. I don't know if it's still an issue or not, but wasn't/isn't the FE particularly lackluster at running games? When I was looking at benches, the FE was only slightly above a 1070, making it kind of a tough choice for a workstation/gaming card when the Titan Xp has been out for a while with lesser pro performance and better gaming performance.

Some features on the FE are disabled at the moment and will be enabled when RX Vega releases, most notably the tile-based rasterizer.

3 minutes ago, Sniperfox47 said:

They already have full DX12 support on Pascal, HBM memory on the Pascal Teslas, and they use an interposer like Infinity Fabric on a few of their enterprise products, so all they really lack is an adaptive-sync implementation to complete your list.

Nah, they don't support FP16 on any GeForce cards, and a few other features, like async compute, are only partial/emulated.

 

 


So are we going to get a @TaranLMG video testing this update and comparing it to Radeon WX/SSG cards in the near future? LMG is on, or trying to move to, an 8K workflow at the moment, right? It might be a relatively easy thing to work into the day-to-day (lies, it'll be a pain in the ass), and it might be especially interesting after seeing RED and AMD going on about their 8K abilities at SIGGRAPH.


28 minutes ago, Sniperfox47 said:

They already have full DX12 support on Pascal,

They don't ;) Check it. No current manufacturer has the full DX12 feature set in their products. What's more, Nvidia still doesn't have asynchronous compute units and instead has a workaround in Pascal called "Dynamic Load Balancing"; thanks to that workaround, DX12 isn't as terrible on Pascal as it was on Maxwell.

32 minutes ago, Sniperfox47 said:

The Titan Xp definitely wasn't. It was marketed from the start as a deep-learning/neural-net card, just like the 2016 Titan X. Nvidia refused to give samples to people like Jayz on the sole basis of "This is not a gaming card!"

Yeah, that one is funny, because it apparently is "not a gaming card," and yet, despite not being called a "GTX Titan X," it has a goddamn huge GEFORCE logo on the cooler :P One is almost like: "Make up your mind, Nvidia, goddammit!"

Also, if it were a GPU targeted only at professional workloads, it wouldn't be so crippled in FP16 and FP64 performance...

CPU: AMD Ryzen 7 5800X3D GPU: AMD Radeon RX 6900 XT 16GB GDDR6 Motherboard: MSI PRESTIGE X570 CREATION
AIO: Corsair H150i Pro RAM: Corsair Dominator Platinum RGB 32GB 3600MHz DDR4 Case: Lian Li PC-O11 Dynamic PSU: Corsair RM850x White


39 minutes ago, RagnarokDel said:

Nah, they don't support FP16 on any GeForce cards, and a few other features, like async compute, are only partial/emulated.

A) DX12 is an API. They fully support the API. What you're talking about are feature levels, and supporting the 12.0+ feature level is not the same thing as supporting DX12.

 

B) The 2016 Titan X and the Titan Xp both support 16-bit and 10-bit shader precision...

 

C) Nvidia Pascal cards fully support asynchronous compute. Even Maxwell v2 cards technically support it. Performance regresses while using it because manually injecting a potentially unbounded compute function is inherently more intrusive than Nvidia's existing scheduling. Async compute is only even useful on cards, like AMD's and older Nvidia cards, that don't already fully utilize the GPU.

 

3 minutes ago, Morgan MLGman said:

They don't ;) Check it. No current manufacturer has the full DX12 feature set in their products. What's more, Nvidia still doesn't have asynchronous compute units and instead has a workaround in Pascal called "Dynamic Load Balancing"; thanks to that workaround, DX12 isn't as terrible on Pascal as it was on Maxwell.

Again, feature level 12 is not the same as DirectX 12...

 

And again, they have a workaround for async compute crippling their cards, not because they don't support it, but because it's inherently worse than their existing solution. There's no magical "Async Compute Unit" they can add to their cards that will make it "work", since it working *properly* is substantially slower than their current solution.
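For reference, the API-versus-feature-level distinction being argued here is exactly what D3D12 exposes through CheckFeatureSupport: a device is created at the 11_0 baseline, and the optional caps (maximum feature level, minimum shader precision, binding tiers) are queried separately. A rough Windows-only sketch using the standard D3D12 entry points (error handling trimmed; the printed fields are just a sample of the caps struct):

```cpp
// Build with: cl /EHsc dxcaps.cpp d3d12.lib
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // Any DX12-capable GPU must expose at least feature level 11_0.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12-capable adapter found.");
        return 1;
    }

    // Highest feature level (11_0 ... 12_1) the hardware reports.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = _countof(requested);
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                &levels, sizeof(levels));

    // Optional caps layered on top of the base API, including the
    // 16-bit/10-bit minimum shader precision mentioned above.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));

    std::printf("Max feature level:        0x%x\n",
                static_cast<int>(levels.MaxSupportedFeatureLevel));
    std::printf("Min shader precision:     0x%x (0x1 = 10-bit, 0x2 = 16-bit)\n",
                static_cast<int>(opts.MinPrecisionSupport));
    std::printf("Resource binding tier:    %d\n",
                static_cast<int>(opts.ResourceBindingTier));
    std::printf("Conservative raster tier: %d\n",
                static_cast<int>(opts.ConservativeRasterizationTier));
    std::printf("Rasterizer-ordered views: %s\n",
                opts.ROVsSupported ? "yes" : "no");
    return 0;
}
```

A card can report feature level 12_1 and still be missing individual options, which is why "supports DX12" and "supports every DX12 feature" keep getting conflated in threads like this.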


34 minutes ago, MoonSpot said:

So are we going to get a @TaranLMG video testing this update and comparing it to Radeon WX/SSG cards in the near future? LMG is on, or trying to move to, an 8K workflow at the moment, right? It might be a relatively easy thing to work into the day-to-day (lies, it'll be a pain in the ass), and it might be especially interesting after seeing RED and AMD going on about their 8K abilities at SIGGRAPH.

I'm really curious about the SSG Vega card. That seems like one of those niche products that could really push things forward in the video space, if it works as well as it demos.


14 hours ago, Glenwing said:

Maybe I'm remembering wrong, but I recall when AMD was pushing their Mantle API, NVIDIA responded by announcing their next driver would bring "up to double performance" in games, and it turned out to be 1-5% increase in a handful of games plus one game which got double performance in SLI, because... SLI wasn't working before, and the driver fixed it. So double performance in that game. Woooooh...

 

Seems like they might actually be worried about Vega if they're trying to kill hype with that old trick again.

Well, the Vega cards, and the FE in particular, were competing against the Quadro P6000 on performance. The FE was around 5-10% slower depending on the workload, but cost $1,000 vs $5,000+.

It completely slapped around the prosumer Xp, so NVIDIA had to respond somehow.
It's a good response, but it just shows how they've been screwing their prosumers.

 

300% isn't hard to believe considering just how poorly the Xp did before. My guess is it'll do very well, just not so well that it beats the low-end Quadros that are around the same price point.

 

Therefore, they still get people to buy a Quadro for the very best NVIDIA has to offer.

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


16 minutes ago, Sniperfox47 said:

And again, they have a workaround for async compute crippling their cards, not because they don't support it, but because it's inherently worse than their existing solution. There's no magical "Async Compute Unit" they can add to their cards that will make it "work", since it working *properly* is substantially slower than their current solution.

Huh, feature level? Then how can one say that their GPUs "fully" support DX12 if they don't support all of its features?

 

As for how async compute works on Maxwell/Pascal and GCN, here's a paste from Reddit:

Quote

The difference between Maxwell/Pascal and GCN is fundamentally this :

On GCN every compute unit (64 'shaders') can switch between graphics and compute tasks quickly.

Consider this: a task on the graphics queue takes 10 ms to execute on GCN, and of these 10 ms the shaders in the CU are stalled for a contiguous period of 4 ms, between 5 and 9 ms. What happens on GCN is that this task is halted, all relevant data is moved to a cache, and another task is piped in from the compute queue, which completes in 3 ms.

So now instead of just executing the first task in 10ms, it executes both in 10 ms.
This happens within a single unit (CU).
On Maxwell/Pascal it's radically different.
Let's consider a Maxwell GPU with 10 SMs.
Each SM executes a series of warps assigned to it from each task.
Let's say that same graphics task from before also takes 10 ms to execute on a single SM; it will take 1 ms if you spread it over all 10.
Now you also have a compute shader that needs 3ms to execute on a single SM.
So we assign 8 SMs to the graphics task, which now completes in 1.25 ms, and two to the compute shader, which will complete in 1.5 ms.
You have 0.25ms in which the compute shader is running and the graphics shader has finished.
On Maxwell those 0.25ms are wasted on those 8 SMs because it can't repartition the SMs outside of drawcall boundaries, on Pascal this limitation is removed with 'dynamic load balancing'
This is obviously a little simplified, you also have to consider that the tasks best suited at running concurrently (parallelism is a subset of concurrency) are usually those that stress different parts of the GPU.

GCN's implementation is called 'async shaders' and it operates within a single unit.
Maxwell and Pascal do not do this.
On Maxwell this functionality is disabled precisely because extracting additional performance from it entails explicitly scheduling the graphics and compute tasks such that you minimize differences in execution time for all shaders within each drawcall.
The performance deficit you see when you enable async compute in Ashes of the Singularity (for example) is due to DirectX emulating a compute queue and serializing everything by piping these tasks into the graphics queue. It is essentially handling the scheduling of said tasks.
In order for it to work well on Maxwell, NVIDIA would need to develop a very reliable driver-level heuristic which evaluates shader execution time and schedules accordingly. It's a lot of work for very little reward.
Technically Maxwell supports this feature, in practice it is not beneficial to performance outside of very specific cases.

Because of how the hardware in those cards is made (i.e., the lack of async compute engines/units), the software workaround allows Pascal cards not to lose as much performance when transitioning from DX11 to DX12 as Maxwell GPUs did ^_^
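To make the arithmetic in that quote concrete, here is a tiny sketch that reproduces its 8/2 SM-split example. The 10 ms / 3 ms / 10 SM figures come straight from the quote; the linear scaling across SMs is an assumption made purely for illustration.

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Figures from the quoted example: a graphics task needing 10 ms on one SM,
    // a compute task needing 3 ms on one SM, and a GPU with 10 SMs.
    const double graphics_ms_single_sm = 10.0;
    const double compute_ms_single_sm  = 3.0;
    const int    total_sms             = 10;

    // Static partition picked at the drawcall boundary: 8 SMs for graphics,
    // 2 for compute (the split used in the quote).
    const int graphics_sms = 8;
    const int compute_sms  = total_sms - graphics_sms;

    const double graphics_time = graphics_ms_single_sm / graphics_sms; // 1.25 ms
    const double compute_time  = compute_ms_single_sm  / compute_sms;  // 1.50 ms
    const double total_time    = std::max(graphics_time, compute_time);

    // Without dynamic load balancing, the graphics SMs sit idle once their
    // work finishes instead of picking up the leftover compute work.
    const double idle_per_graphics_sm = total_time - graphics_time;    // 0.25 ms

    std::printf("graphics: %.2f ms on %d SMs\n", graphics_time, graphics_sms);
    std::printf("compute : %.2f ms on %d SMs\n", compute_time,  compute_sms);
    std::printf("idle tail per graphics SM: %.2f ms\n", idle_per_graphics_sm);
    return 0;
}
```

That 0.25 ms tail is the waste Pascal's dynamic load balancing reclaims by repartitioning SMs outside drawcall boundaries, and the waste GCN avoids by switching queues inside a single CU.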

 

CPU: AMD Ryzen 7 5800X3D GPU: AMD Radeon RX 6900 XT 16GB GDDR6 Motherboard: MSI PRESTIGE X570 CREATION
AIO: Corsair H150i Pro RAM: Corsair Dominator Platinum RGB 32GB 3600MHz DDR4 Case: Lian Li PC-O11 Dynamic PSU: Corsair RM850x White


If Nvidia is going to release drivers for the Titan Xp to boost performance by 3x, then that's a small response to Vega from Nvidia.

