Nvidia Volta GPU tested

Giantpie12

https://www.game-debate.com/news/23739/first-nvidia-tesla-volta-v100-gpu-benchmarks-reveal-more-than-double-pascal-performance#discussForm

 

According to this article, the next-gen Nvidia GPUs completely blow the Pascal line out of the water. If it's true, we'd be seeing a leap in GPU performance not seen in quite a while, IMO.

Quote

Nvidia Tensor Cores are new to Volta and are specially designed for AI training and inference applications. They’re mixed-precision FP16/FP32 cores specifically designed for deep learning, offering up to 6x the performance of previous generation Pascal P100’s FP16 operations, and 12x the performance in terms of FP32 operations. This is a specialised machine learning device that is several times faster than the current-gen, although this AI optimisation won’t feed through directly to gaming performance. The end result is a GPU that’s incredibly fast for Deep Learning; an AI dream for many research institutions.

 

Quote

As for the Nvidia DGX-1, preliminary benchmarks give it a score of 743,537 in single-core Geekbench 4 compute tests. The second fastest system ever recorded in the benchmark database is the current P100 system, which scores ‘just’ 320,031. A workstation PC with nine Quadro GP100 graphics cards can score 278,706. That’s an astounding improvement, more than doubling the peak benchmark score within a single generation.
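The mixed-precision FP16/FP32 idea from the first quote can be sketched in a few lines. This is purely illustrative, not how the hardware works: `quantize` is a hypothetical stand-in for coarse FP16 storage, while the accumulator stays at full precision, which is what keeps long dot products accurate.

```python
def quantize(x, step=1 / 2048):
    # Hypothetical stand-in for FP16 storage: snap values to a coarse grid.
    # (Real FP16 has an ~11-bit significand; this is just for illustration.)
    return round(x / step) * step

def mixed_precision_dot(a, b):
    """Multiply low-precision operands, accumulate at full precision."""
    acc = 0.0  # FP32-like accumulator
    for x, y in zip(a, b):
        acc += quantize(x) * quantize(y)
    return acc

print(mixed_precision_dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
```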


 

-Thanks @TVwazhere

Geekbench? Really? Please ban this benchmark, it sucks!


I just realized why he never quoted anything. That article is complete garbage. Nothing was actually tested; it's speculation and nothing else. No deep-dive analysis, legit benchmarks, or actual performance numbers... Clickbait title.

And Geekbench?

I guess the last question is: can it run Crysis, or can it mine?


But then you realise the V100 has 5120 CUDA cores, compared to the 3584 of the P100. That's a 30% increase in CUDA cores. Now, comparing FP64/FP32, you'll see that those have increases of 33%. So basically, according to some really dodgy 'benchmark' Volta has minimal IPC improvements.

 

EDIT: Lol I can't do maths.

13 minutes ago, pyrojoe34 said:

Your math is backwards btw. It’s a 42.8% increase in CUDA cores and a 50% increase in FP32/64. What you calculated was the decrease from Volta to Pascal, not the increase from Pascal to Volta. 


It'd be nice if the Volta Titan X were the full chip from the start, like in the Maxwell family... it might be a worthwhile purchase on day-one release... I walked away from buying a 1080 Ti to replace the TITAN XM in this expectation... if by some chance it's a crippled chip again, like what happened with Pascal, then I'll go GTX 2080...


24 minutes ago, PCGuy_5960 said:

Geekbench? Really? Please ban this benchmark, it sucks!

Geekbench is pretty accurate, within 100-200% margin of error... 


1 minute ago, Tedny said:

People, that won't be a consumer card, at least not the first generation... like it was with the 400-700 series.

Don't tell me I can't buy a thousand-dollar GPU to play Minecraft.


1 minute ago, deXxterlab97 said:

Geekbench is pretty accurate, within 100-200% margin of error... 

Seeing how some houses are built... that's an acceptable margin in some sectors :D


Volta will have higher clocks too, hence the 50% increase.

But the point is that these aren't game benchmarks, and it's an open question whether the consumer cards will get the same features, and whether we can use them in our code and games. It would be a big improvement for game AI.


24 minutes ago, goodtofufriday said:

Is this really the same class of GPU die they're comparing? It seems like their super high end, used in machine learning applications. Pretty damn sure it's not gonna be a 2080.

I'm not too sure about the validity of the data, but if it's accurate, the P100 and V100 would be in the same class (professional/Tesla/datacenter). You're correct: super high end, not GeForce.


4 minutes ago, Dan Castellaneta said:

No sources within the articles, no meaningful benchmarks, all speculation. I thought Volta wasn't dropping until later next year anyways.

Seriously, tech news is as bad as, if not worse than, cable news when it comes to this garbage.

There are multiple sources citing the same tests that were run.



 

Sorry, but this is likely the big chip and a synthetic bench. In the real world it will be a bit of a jump, maybe 10 to 20% in performance vs. current Pascal cards, but it won't "revolutionize" shit.

Because even if it did, Nvidia would just keep that big Volta chip in their back pocket and release a cut version. In Jensen's own words, they don't have to do much better when AMD Radeon is just so far behind.


This massive V100 die proves a point I tried to make a while back:  Large dies are the future.  We are maybe 5 years away from the limits of silicon, which most agree is around 5nm.  Eventually, yields on 5nm will be so good that it will be easy to make larger and larger dies.  They might even increase wafer size up to 600 mm or larger.  Today, 815mm² is considered massive and only for super computers, but I think we'll see consumer GPUs that size or larger, eventually.
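The yield claim above can be illustrated with the textbook Poisson die-yield model (a simplification of what fabs actually use; the defect densities below are made-up numbers purely for illustration). The point: at high defect density a big die is punished hard, but as the process matures and defects drop, the gap shrinks.

```python
import math

def poisson_yield(die_area_mm2, defects_per_mm2):
    # Classic Poisson yield model: the fraction of dies with zero defects.
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Hypothetical defect densities (defects per mm^2), early vs. mature process.
for d0 in (0.002, 0.0005):
    small = poisson_yield(471, d0)   # ~GTX 1080 Ti sized die
    large = poisson_yield(815, d0)   # ~V100 sized die
    print(f"D0={d0}: 471 mm2 yield {small:.0%}, 815 mm2 yield {large:.0%}")
```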


>Game debate

 

kek. 


1 hour ago, VegetableStu said:

the future looks to be extremely expensive ._.

Good yields and larger wafers will bring prices down a lot...  It will happen, mark my words!  

 

In 1999, the GeForce GPU was around 80 mm². Today, the GTX 1080 Ti is 471 mm², and even that is not considered really "large". Chips have been getting bigger for a long time, and that trend will accelerate once we hit the 5 nm wall.


2 hours ago, TheRandomness said:

But then you realise the V100 has 5120 CUDA cores, compared to the 3584 of the P100. That's a 30% increase in CUDA cores. Now, comparing FP64/FP32, you'll see that those have increases of 33%. So basically, according to some really dodgy 'benchmark' Volta has minimal IPC improvements.

Your math is backwards btw. It’s a 42.8% increase in CUDA cores and a 50% increase in FP32/64. What you calculated was the decrease from Volta to Pascal, not the increase from Pascal to Volta. 
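Spelled out as arithmetic, the point is that a percentage change has to be taken relative to the starting value:

```python
pascal_cores, volta_cores = 3584, 5120

# Increase from Pascal to Volta: delta over the *old* value.
increase = (volta_cores - pascal_cores) / pascal_cores * 100
print(f"{increase:.1f}%")  # 42.9%

# Dividing by the new value instead gives the decrease going the other
# way (Volta back down to Pascal), which is where the ~30% figure came from.
decrease = (volta_cores - pascal_cores) / volta_cores * 100
print(f"{decrease:.1f}%")  # 30.0%
```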


12 minutes ago, pyrojoe34 said:

Your math is backwards btw. It’s a 42.8% increase in CUDA cores and a 50% increase in FP32/64. What you calculated was the decrease from Volta to Pascal, not the increase from Pascal to Volta. 

Whoops

Yeah, it's been a while since I've messed with percentages and such, so I'll just quote that in my OP.


3 hours ago, CostcoSamples said:

Good yields and larger wafers will bring prices down a lot...  It will happen, mark my words!  

 

In 1999 the GeForce GPU was around 80 mm².  Today, the GTX 1080 ti is 471mm², and even that is not considered to be really "large".  Chips have been getting bigger for a long time.  That trend will accelerate once we hit the 5nm wall.

Eh, smaller chips mean more chips per wafer. We moved away from large sizes and are moving back into them, possibly for the last time or two. We'll then get smaller and more efficient again until we hit a point where node size doesn't shrink for a while and we make bigger chips to compensate. However, I'm guessing in 15 years we'll be at the point where moderate cards are fine for 4K, like they are for 1080p today. This continues until we have cards like that for resolutions past the point where we can tell the difference.
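The chips-per-wafer trade-off can be roughly quantified with the standard back-of-envelope dies-per-wafer estimate (ignoring scribe lines and yield; the die areas are the 1080 Ti and V100 figures mentioned in this thread):

```python
import math

def dies_per_wafer(wafer_d_mm, die_area_mm2):
    # Standard estimate: gross dies (wafer area / die area) minus an
    # edge-loss correction term for partial dies at the wafer rim.
    return int(math.pi * wafer_d_mm**2 / (4 * die_area_mm2)
               - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(300, 471))  # ~1080 Ti sized die on a 300 mm wafer
print(dies_per_wafer(300, 815))  # ~V100 sized die on the same wafer
```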


57 minutes ago, MeDownYou said:

Eh, smaller chips mean more chips per wafer. We moved away from large sizes and are moving back into them, possibly for the last time or two. We'll then get smaller and more efficient again until we hit a point where node size doesn't shrink for a while and we make bigger chips to compensate. However, I'm guessing in 15 years we'll be at the point where moderate cards are fine for 4K, like they are for 1080p today. This continues until we have cards like that for resolutions past the point where we can tell the difference.

I disagree.  AI for games is just around the corner and it will require hardware acceleration.  The need for more GPU power will not diminish, and die size will forever be dictated by manufacturing cost vs consumer demand.  My point is simply that manufacturing cost will diminish.

