NVIDIA GeForce GTX 1080 3DMark Firestrike Benchmark Results Leaked - beats 980ti and Fury X

Mr_Troll
4 minutes ago, ThinkWithPortals said:

2.1GHz? Much speed! Very wow!


Seriously though, 2.1GHz OC? That's mental.

It's bonkers mental.

- ASUS X99 Deluxe - i7 5820k - Nvidia GTX 1080ti SLi - 4x4GB EVGA SSC 2800mhz DDR4 - Samsung SM951 500 - 2x Samsung 850 EVO 512 -

- EK Supremacy EVO CPU Block - EK FC 1080 GPU Blocks - EK XRES 100 DDC - EK Coolstream XE 360 - EK Coolstream XE 240 -


BREAKING NEWS!: The new flagship card is faster than the old one!

 

 


28 minutes ago, zMeul said:

who said there aren't drivers for the GTX 1080?

nvidia-gtx-1080-gpuz-official.jpg

Saw that earlier; they said the memory would run at 2.5 GHz (for memory bandwidth), while it's clearly 1.25 here, so I'm not sure what's going on with these numbers.

The ability to google properly is a skill of its own. 


3 minutes ago, Bouzoo said:

Saw that earlier; they said the memory would run at 2.5 GHz while it's clearly 1.25 here, so I'm not sure what's going on with these numbers.

isn't it double pumped? 1.25 × 2 = 2.5

this is GDDR5X, so I'm guessing it has a different base clock than the GDDR5 we're used to


Just now, zMeul said:

it's double pumped: 1.25 × 2 = 2.5

That crossed my mind, for a total multiplier of 8, but I never figured it might have a problem showing 2.5 GHz.
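Following the arithmetic in the replies above (the 1.25 GHz shown by GPU-Z, doubled to the quoted 2.5 GHz, times GDDR5X's 4 bits per cycle, hence the "total multiplier of 8"), a quick sanity check:

```python
# Sanity check of the clock arithmetic discussed above. The figures come
# from the thread itself; this is illustrative arithmetic, not a model of
# the actual GDDR5X memory interface.
gpuz_clock_ghz = 1.25            # clock GPU-Z reports for the GTX 1080
pumped_ghz = gpuz_clock_ghz * 2  # "double pumped": 1.25 * 2 = 2.5 GHz
per_pin_gbps = pumped_ghz * 4    # 4 bits per cycle -> 10 Gb/s per pin

assert pumped_ghz == 2.5
assert per_pin_gbps == 10.0      # total multiplier of 8 over the GPU-Z clock
```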

The ability to google properly is a skill of its own. 


5 hours ago, MageTank said:

I am seeing a lot of $400 980 Ti's on eBay. If they drop to about $300, I might still buy one over the GTX 1080, simply because that performance at that price can't be beat. The only concern I have is whether or not I can get a GTX 980 Ti to run on a 500W platinum PSU, lol. Stupid SFF power restrictions x.x

You can always start bidding under $200; you might get lucky. Unlikely, though.

The ability to google properly is a skill of its own. 


HOLY SHI888888~!!!!! 65% PERFORMANCE INCREASE FROM A GTX 980 AND A 35% INC FROM A 980TI!!!! AHHHHHHHHHH FUUUUUUUUUUUUUTUUUUUUUUREEE

• FX-8320  GTX 970  M5A97 R2  Corsair H100I GT  500W PSU  RIPJAWS 8GB DDR3  SAMSUNG 840 EVO 120GB  1TB HDD 

 

8 hours ago, Spectrez said:

They said the same thing about Maxwell, but still haven't delivered.

 

I fear that Nvidia didn't aim for async compute, and focused on a GPU architecture that still only supports software async compute :/

Unverified at this point, but yeah, if true it's a huge chunk of an upgrade, that's for sure.

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600Mhz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


19 minutes ago, Zaryab said:

Why do people like you even exist? You moron. It's not just another flagship card... it's an amazing flagship card... it's a 65% improvement over the 980... that DOESN'T HAPPEN EVERY GENERATION OF CARDS. 35% better than a 980 Ti, and it's cheaper??? Yeah, it's a big deal.

...and it has more memory, and faster memory as well... and these are early drivers for Pascal, so it will only get better from here!

...but at stock it's 32% faster than the stock GTX 980 Ti (76% × 1.32 = 100.32), so not 35... yet.

...the stock boost clock for the GTX 980 Ti is 1075MHz... my 980 Ti is running at 1460MHz... therefore I have a 36% overclock in place... now do the math...
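The percentages above check out; here is the same arithmetic spelled out (all numbers taken from the post itself; the scores are relative figures, not actual Firestrike results):

```python
# Relative performance: if a stock 980 Ti lands at 76% of the leaked 1080
# score, then the 1080 is ~32% faster at stock (76 * 1.32 = 100.32).
speedup = 100.32 / 76.0 - 1
assert round(speedup * 100) == 32

# The overclock mentioned: 1460 MHz over a 1075 MHz stock boost clock.
oc_percent = (1460 / 1075 - 1) * 100
assert round(oc_percent) == 36
```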

 

but I have to agree... a 300mm² chip that can do that, at that price, at 180W TDP, with more memory, and better memory... is just fantastic.

Not worth the upgrade for me personally... but great nonetheless.

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


12 hours ago, Thony said:

According to JayzTwoCents, nope. He said we're getting to the point where heat is no longer the limit but other factors are, hence why Nvidia's demo had it running at 67°C.

What else could affect it? The process node? (surely 16nm is enough for 3GHz) Power? (does Nvidia restrict power limits to keep its TDP low?)

AMD Ryzen R7 1700 (3.8ghz) w/ NH-D14, EVGA RTX 2080 XC (stock), 4*4GB DDR4 3000MT/s RAM, Gigabyte AB350-Gaming-3 MB, CX750M PSU, 1.5TB SDD + 7TB HDD, Phanteks enthoo pro case


2 hours ago, zMeul said:

who said there aren't drivers for the GTX 1080?

nvidia-gtx-1080-gpuz-official.jpg

I'm kind of disappointed with the memory bandwidth, though... about the same as a 290X, for a card that should be almost twice as fast. And considering this is first-gen GDDR5X (10 Gb/s), it probably won't overclock very well. I fear there might be a bottleneck.

AMD Ryzen R7 1700 (3.8ghz) w/ NH-D14, EVGA RTX 2080 XC (stock), 4*4GB DDR4 3000MT/s RAM, Gigabyte AB350-Gaming-3 MB, CX750M PSU, 1.5TB SDD + 7TB HDD, Phanteks enthoo pro case


20 minutes ago, Coaxialgamer said:

What else could affect it? The process node? (surely 16nm is enough for 3GHz) Power? (does Nvidia restrict power limits to keep its TDP low?)

Here: 

 

 

 

Connection: 200mbps / 12mbps, 5GHz wifi

My baby: CPU - i7-4790, MB - Z97-A, RAM - Corsair Veng. LP 16gb, GPU - MSI GTX 1060, PSU - CXM 600, Storage - Evo 840 120gb, MX100 256gb, WD Blue 1TB, Cooler - Hyper Evo 212, Case - Corsair Carbide 200R, Monitor - Benq  XL2430T 144Hz, Mouse - FinalMouse, Keyboard -K70 RGB, OS - Win 10, Audio - DT990 Pro, Phone - iPhone SE


2 hours ago, Bouzoo said:

Saw that earlier; they said the memory would run at 2.5 GHz (for memory bandwidth), while it's clearly 1.25 here, so I'm not sure what's going on with these numbers.

The reason why VRAM clock speeds fluctuate so wildly between different tech sites, journalists and software is the fuckery that is ''effective clock speed''. A bullshit made-up term that makes it pretty much impossible for software to correctly report the clock speed 100% of the time.

 

First of all, before I explain: there is no such thing as effective clock speed. 3500MHz GDDR5 does NOT have an effective clock speed of 7000MHz. Just because each cycle transfers two bits does NOT mean it has twice the number of cycles. The 3500 million cycles per second deliver 7000 million bits of data per second.

 

Now the situation got even worse with GDDR5X. Now the stupid assholes are reporting effective clock speeds of 10,000MHz. Not only is the VRAM incorrectly named (GDDR5X is quad data rate; ''Graphics Quad Data Rate'' or ''GQDR'' would be the correct, non-misleading name), but the reported cycle rate is also wrong.

 

The VRAM in the 1080 cycles 2500 million times per second, delivering 4 bits of data per cycle, and it transfers data on 8 chips with 32 lanes each, for a total of 256 lanes. This equals 10Gb of data per lane per second, for a total of 2560Gb per second, or 320GB per second.

 

As of now I am declaring war on the ''effective clock speed'' term. Either you say the correct clock speed, which is 2500MHz on the 1080, or, if you really want to say 10,000, call it Mb/s (megabits per second).
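The poster's bandwidth arithmetic can be reproduced directly (numbers from the post above; the 8-chips × 32-lanes layout is the poster's description, not an official spec sheet):

```python
# Reproducing the GTX 1080 GDDR5X bandwidth arithmetic from the post above.
lanes = 8 * 32                        # 8 chips x 32 lanes = 256 lanes
per_lane_gbps = 2.5 * 4               # 2500 MHz, 4 bits/cycle -> 10 Gb/s
total_gbit_s = lanes * per_lane_gbps  # 2560 Gb/s across the whole bus
total_gbyte_s = total_gbit_s / 8      # 8 bits per byte -> 320 GB/s

assert per_lane_gbps == 10.0
assert total_gbit_s == 2560.0
assert total_gbyte_s == 320.0
```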

Motherboard: Asus X570-E
CPU: 3900x 4.3GHZ

Memory: G.skill Trident GTZR 3200mhz cl14

GPU: AMD RX 570

SSD1: Corsair MP510 1TB

SSD2: Samsung MX500 500GB

PSU: Corsair AX860i Platinum


19 hours ago, MMKing said:

The reason why VRAM clock speeds fluctuate so wildly between different tech sites, journalists and software is the fuckery that is ''effective clock speed''. A bullshit made-up term that makes it pretty much impossible for software to correctly report the clock speed 100% of the time.

 

First of all, before I explain: there is no such thing as effective clock speed. 3500MHz GDDR5 does NOT have an effective clock speed of 7000MHz. Just because each cycle transfers two bits does NOT mean it has twice the number of cycles. The 3500 million cycles per second deliver 7000 million bits of data per second.

 

Now the situation got even worse with GDDR5X. Now the stupid assholes are reporting effective clock speeds of 10,000MHz. Not only is the VRAM incorrectly named (GDDR5X is quad data rate; ''Graphics Quad Data Rate'' or ''GQDR'' would be the correct, non-misleading name), but the reported cycle rate is also wrong.

 

The VRAM in the 1080 cycles 2500 million times per second, delivering 4 bits of data per cycle, and it transfers data on 8 chips with 32 lanes each, for a total of 256 lanes. This equals 10Gb of data per lane per second, for a total of 2560Gb per second, or 320GB per second.

 

As of now I am declaring war on the ''effective clock speed'' term. Either you say the correct clock speed, which is 2500MHz on the 1080, or, if you really want to say 10,000, call it Mb/s (megabits per second).

So TL;DR: is GDDR5X a LOTTT better than GDDR5, or no?


12 hours ago, Zaryabb said:

So TL;DR: is GDDR5X a LOTTT better than GDDR5, or no?

Assuming the bus width is identical, you're looking at about a 40% improvement over the previous generation.

 

Luckily, for easy comparison, the 980 and the 1080 have an identical bus width: the 1080 has a bandwidth of 320GB/s while the 980 has 224GB/s, a 42.8% improvement.

 

As for overclocking: we know GDDR5 easily takes an extra 200-400MHz on the memory, for 7.4-7.8Gb/s per lane. If GDDR5X overclocks very poorly, or worse, not at all, the technology is more or less just an incremental increase in bandwidth. Though 3-4GHz memory is allegedly coming in 2017/2018, so it's possible that the OEM partners release 1080s with higher-clocked memory down the road.

 

But in the end, it doesn't really matter how you look at it: GDDR5X is a positive.
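Since both cards use the same bus width, the comparison above reduces to raw bandwidth (figures from the post):

```python
# The ~42.8% figure quoted above: identical 256-bit buses, so the speedup
# is just the ratio of the two bandwidth numbers from the post.
gtx980_gb_s = 224    # GB/s, GTX 980 (GDDR5)
gtx1080_gb_s = 320   # GB/s, GTX 1080 (GDDR5X)
improvement = (gtx1080_gb_s / gtx980_gb_s - 1) * 100
assert 42.8 < improvement < 42.9   # the ~42.8% quoted above
```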

Motherboard: Asus X570-E
CPU: 3900x 4.3GHZ

Memory: G.skill Trident GTZR 3200mhz cl14

GPU: AMD RX 570

SSD1: Corsair MP510 1TB

SSD2: Samsung MX500 500GB

PSU: Corsair AX860i Platinum


On 5/13/2016 at 2:08 PM, Coaxialgamer said:

I'm kind of disappointed with the memory bandwidth, though... about the same as a 290X, for a card that should be almost twice as fast. And considering this is first-gen GDDR5X (10 Gb/s), it probably won't overclock very well. I fear there might be a bottleneck.

Why? It's already been shown over the past few years that memory bandwidth doesn't make a big difference in most scenarios. 

 

We've already seen it running well over stock specs during the tech demo, and they said it was just a random sample they grabbed.

Stuff:  i7 7700k @ (dat nibba succ) | ASRock Z170M OC Formula | G.Skill TridentZ 3600 c16 | EKWB 1080 @ 2100 mhz  |  Acer X34 Predator | R4 | EVGA 1000 P2 | 1080mm Radiator Custom Loop | HD800 + Audio-GD NFB-11 | 850 Evo 1TB | 840 Pro 256GB | 3TB WD Blue | 2TB Barracuda

Hwbot: http://hwbot.org/user/lays/ 

FireStrike 980 ti @ 1800 Mhz http://hwbot.org/submission/3183338 http://www.3dmark.com/3dm/11574089


8 minutes ago, Lays said:

Why? It's already been shown over the past few years that memory bandwidth doesn't make a big difference in most scenarios. 

 

We've already seen it running well over stock specs during the tech demo, and they said it was just a random sample they grabbed.

It would have been great if they had not gone with Hynix's HBM and had just gone with HMC (speed that train up!); that way we'd have the high bandwidth of HBM2 and the glory of ultra-low latency, which is the issue HBM/HBM2 has. Raw bandwidth? Hell yeah. Latency? Not as great as GDDR5, if I recall, for HBM. Hopefully we get HMC in 2018.


I really hope so. Why would they release a new architecture only to have it beaten by its predecessor?

ROG X570-F Strix AMD R9 5900X | EK Elite 360 | EVGA 3080 FTW3 Ultra | G.Skill Trident Z Neo 64gb | Samsung 980 PRO 
ROG Strix XG349C Corsair 4000 | Bose C5 | ROG Swift PG279Q

Logitech G810 Orion Sennheiser HD 518 |  Logitech 502 Hero

 


Buying 2 of these :)

EOC folding stats - Folding stats - My web folding page stats

 

Summer Glau quote: ''The future is worth fighting for.'' - Serenity

 

My linux setup: CPU: I7 2600K @4.5Ghz, MM: Corsair 16GB vengeance @1600Mhz, GPU: 2 Way Radeon his iceq x2 7970, MB: Asus sabertooth Z77, PSU: Corsair 750 plus Gold modular

 

My gaming setup: CPU: I7 3770K @4.7Ghz, MM: Corsair 32GB vengeance @1600Mhz, GPU: 2 Way Gigabyte RX580 8GB, MB: Asus sabertooth Z77, PSU: Corsair 860i Platinum modular

