nVidia X80, X80ti and X80 Titan rumoured Specs

Paragon_X
2 minutes ago, Valentyn said:

Yup, HBM2 is going to be for Vega, and their engineer said it would cost too much and isn't ready for mainstream users just yet (in relation to Polaris).

[Image: AMD GPU roadmap]

Vega doesn't look too far off from Polaris, doesn't seem like it's that bad.


So, X80 Titanium and X80 Titan? Makes perfect sense

Ryzen 7 5800X     Corsair H115i Platinum     ASUS ROG Crosshair VIII Hero (Wi-Fi)     G.Skill Trident Z 3600CL16 (@3800MHzCL16 and other tweaked timings)     

MSI RTX 3080 Gaming X Trio    Corsair HX850     WD Black SN850 1TB     Samsung 970 EVO Plus 1TB     Samsung 840 EVO 500GB     Acer XB271HU 27" 1440p 165hz G-Sync     ASUS ProArt PA278QV     LG C8 55"     Phanteks Enthoo Evolv X Glass     Logitech G915      Logitech MX Vertical      Steelseries Arctis 7 Wireless 2019      Windows 10 Pro x64


1 hour ago, TrigrH said:

If they seriously call it GeForce X I'm going Polaris.

Seems rational...

 

/s


Just now, Starelementpoke said:

Vega doesn't look too far off from Polaris, doesn't seem like it's that bad.

Polaris is going to be the "mid-range series" to get the ball rolling with 16nm; Vega is their big boys. It's due very late this year to mid next year.

Think of it as NV's mid Pascal vs. big Pascal (the next Titan).

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


Just now, Valentyn said:

Polaris is going to be the "mid-range series" to get the ball rolling with 16nm; Vega is their big boys. It's due very late this year to mid next year.

Think of it as NV's mid Pascal vs. big Pascal (the next Titan).

So it's going from the middleweight title to the heavyweight championship?


9 minutes ago, Starelementpoke said:

So it's going from the middleweight title to the heavyweight championship?

Essentially. They can get the smaller chips out quicker since they're much easier to make, and those make up the majority of sales.

Just like NV's 970 and 980 launched first last time, and some time after that they dropped the big champ, the Titan.

With a new manufacturing process it's even more important to get the mainstream cards rolling out first, especially since they usually slap around the older champs anyway, but at a better price point.
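Rough napkin math on why die size matters so much on a fresh process (the die areas and the 0.2 defects/cm2 figure below are purely illustrative assumptions, not anything AMD or NV have published):

import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    # Classic Poisson yield model: fraction of defect-free dies = exp(-area * defect density)
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * defects_per_cm2)

# Assumed, illustrative numbers: a ~230 mm2 mid-range die vs a ~600 mm2 big die
# on an immature process with 0.2 defects per cm2.
for name, area_mm2 in [("mid-range die", 230), ("big die", 600)]:
    print(f"{name}: {area_mm2} mm2 -> roughly {poisson_yield(area_mm2, 0.2):.0%} defect-free")

# mid-range die: 230 mm2 -> roughly 63% defect-free
# big die: 600 mm2 -> roughly 30% defect-free

Halve the defect-free rate, and fit far fewer candidates per wafer to begin with, and it's obvious why the big chip follows months later.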

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


1 hour ago, CommandMan7 said:

So what I'm getting from this is that only their $1000 flagship will even come close to the performance of the Radeon Pro Duo, but even then fall significantly short? IMHO, if you're going to be spending $1000 or more on a graphics card, you may as well go for the better performance of the Radeon Pro Duo at the higher cost.

There is more to a card than just FLOPS.

 

There are also a lot of other things you have to take into consideration: heat output, power usage, features, single GPU vs multi-GPU (so fewer issues when games do not properly support SLI/Crossfire), the fact that the Pro Duo is 1500 dollars and this card might be less (I sure hope it is less), and a bunch of other things.

 

Personally, I would not recommend anyone get the Pro Duo. I would be very surprised if you can't get a better deal than it when AMD and Nvidia launch their new cards in just a few months. The previous-gen dual-GPU card has never been as good of a buy as the current-gen high-end single-GPU card (from a price-to-performance and user experience standpoint).
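For reference, the TFLOP figures everyone is comparing are just theoretical peak FP32 throughput (shaders x 2 ops per clock x clock speed), which is exactly why they don't tell the whole story. A quick sketch; the 980 Ti line uses its published specs, and the Pro Duo line assumes both Fiji GPUs at roughly reference clocks:

def peak_tflops(shaders, clock_ghz):
    # Theoretical peak FP32: shader count * 2 ops per clock (FMA) * clock speed
    return shaders * 2 * clock_ghz / 1000

print(f"980 Ti (2816 shaders @ ~1.075 GHz): ~{peak_tflops(2816, 1.075):.1f} TFLOPS")
print(f"Pro Duo (2 x 4096 shaders @ ~1.0 GHz): ~{peak_tflops(2 * 4096, 1.0):.1f} TFLOPS")

On paper the dual-GPU card roughly doubles the number, but you only see that when a game actually scales across both GPUs, which is the whole point above.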


2 hours ago, CommandMan7 said:

So what I'm getting from this is that only their $1000 flagship will even come close to the performance of the Radeon Pro Duo, but even then fall significantly short? IMHO, if you're going to be spending $1000 or more on a graphics card, you may as well go for the better performance of the Radeon Pro Duo at the higher cost.

 

I would rather have a single high-powered card.

Main Rig "Rocinante" - Ryzen 9 5900X, EVGA FTW3 RTX 3080 Ultra Gaming, 32GB 3600MHz DDR4


One thing that might be wrong on that first chart is the memory: the X80 will probably be GDDR5X, not GDDR5. And the X80Ti might actually use HBM2, as it will be the same GP100 chip (cut down).

 

Can't wait to see Polaris vs Pascal. Let the battle begin!!! :D

My Systems:

Main - Work + Gaming:


Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:


FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:


SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:


MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:


Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 


I call BS. The GP100 Titan and X80ti should use the same memory, as using GDDR5 on the cut-down GP100 would mean they would have to put two memory controllers in the GP100 chip, which costs more, lowers yields and takes more work to develop.

 

Not to mention a 512-bit, 8000 MHz memory setup would cause memory power and heat to go through the roof.
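Some quick napkin math on why strapping GDDR5 onto a GP100-class chip never adds up; the ~700 GB/s target below is just an assumption for what the HBM2 version is expected to deliver, not a number from the chart:

def gddr5_bus_bits_needed(target_gb_s, gt_per_s):
    # bandwidth (GB/s) = (bus width in bits / 8) * effective transfer rate (GT/s)
    return target_gb_s / gt_per_s * 8

target = 700                  # GB/s, assumed HBM2-class target
for rate in (8, 10):          # GT/s effective, plain GDDR5 territory
    bits = gddr5_bus_bits_needed(target, rate)
    print(f"{rate} GT/s: ~{bits:.0f}-bit bus, ~{bits / 32:.0f} memory chips")

# 8 GT/s: ~700-bit bus, ~22 memory chips
# 10 GT/s: ~560-bit bus, ~18 memory chips

Even the already-extreme 512-bit layouts only manage 512 GB/s at 8 GT/s, and every extra chip and wider PHY adds board power on top.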

 

 

AMD Ryzen R7 1700 (3.8ghz) w/ NH-D14, EVGA RTX 2080 XC (stock), 4*4GB DDR4 3000MT/s RAM, Gigabyte AB350-Gaming-3 MB, CX750M PSU, 1.5TB SSD + 7TB HDD, Phanteks enthoo pro case


2 hours ago, Master Disaster said:

What? The X80 has 1.5 times the TFLOP output of the 980 Ti; it's nowhere even close to "on par" with it. Based on TFLOPs alone the X80 obliterates the 980 Ti.

 

The worry about these cards is the shockingly bad amounts of VRAM Nvidia decided to go with; 6GB on the X80 just isn't enough, it should have been at least 8GB.

The more you post, the less seriously I am able to take you.

 

 


3 hours ago, Samfisher said:

I haven't had a single instance of ever exceeding the VRAM of my 970.  Anything that uses more VRAM than that, the GPU core is the limiting factor anyway.

This is and always will be a fallacy. Textures take up a lot of VRAM but require very little GPU power. Rise of the Tomb Raider requires more than 4GB of VRAM for Very High textures. Doom seems to need that too for its highest setting. So the more the merrier.

 

Rise of the Tomb Raider - Texture Quality Performance

 

Quote

Those with 4GB GPUs are recommended to use High as VRAM stuttering can be observed on Very High, especially when swapping between gameplay, cutscenes and cinematics, and between gameplay zone transitions.

For a smooth experience with those max-quality, 4K x 4K textures, a 6GB GPU is instead recommended. And to crank things up to 4K (3840x2160), with Very High textures and max settings, we'd recommend GeForce GTX TITAN X GPUs with 12GB of VRAM, as usage can near 10GB over prolonged sessions.

 

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


1 minute ago, Notional said:

This is and always will be a fallacy. Textures take up a lot of VRAM but require very little GPU power. Rise of the Tomb Raider requires more than 4GB of VRAM for Very High textures. Doom seems to need that too for its highest setting. So the more the merrier.

 

 

 

 

It was more regarding whether 1080p games ever use more than 4GB of VRAM, and if you ever exceed 4GB it's at a higher resolution that the GPU would struggle with anyway. No mention of textures causing the performance drop, which I know they don't.

QUOTE ME IN A REPLY SO I CAN SEE THE NOTIFICATION!

When there is no danger of failure there is no pleasure in success.


3 hours ago, TrigrH said:

Let's add "nvidia confidential" to the end of our Excel table so it seems more legit...

 

If they seriously call it GeForce X I'm going Polaris.

I was going with Polaris anyways :D


3 hours ago, VagabondWraith said:

If this comes out Q4/Q1 2017, that falls in line perfectly with my upgrade path.

It falls in line with my upgrade path too. Unfortunately my wallet runs on a different, much slower, schedule :(

My posts are in a constant state of editing :)

CPU: i7-4790k @ 4.7GHz MOBO: ASUS ROG Maximus VII Hero  GPU: Asus GTX 780ti Directcu ii SLI RAM: 16GB Corsair Vengeance PSU: Corsair AX860 Case: Corsair 450D Storage: Samsung 840 EVO 250 GB, WD Black 1TB Cooling: Corsair H100i with Noctua fans Monitor: ASUS ROG Swift

laptop

Some ASUS model. Has a GT 550M, i7-2630QM, 4GB of RAM and a WD Black SSD/HDD drive. MacBook Pro 13" base model
Apple stuff from over the years
iPhone 5 64GB, iPad air 128GB, iPod Touch 32GB 3rd Gen and an iPod nano 4GB 3rd Gen. Both the touch and nano are working perfectly as far as I can tell :)

6 minutes ago, Samfisher said:

It was more regarding whether 1080p games ever use more than 4GB of VRAM, and if you ever exceed 4GB it's at a higher resolution that the GPU would struggle with anyway. No mention of textures causing the performance drop, which I know they don't.

Remember that textures are spread out along the Z axis too, and that you can zoom in on them. Even 1080p can benefit from 4K textures quite massively. No matter your screen resolution (or virtual super resolution), you should always go for the highest texture setting your VRAM allows.
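Some napkin math on why texture quality chews through VRAM so fast; this assumes plain uncompressed RGBA8 textures, so real games with compression land lower, but the scaling is the point:

def texture_mib(width, height, bytes_per_texel=4, with_mips=True):
    # Uncompressed footprint of one 2D texture; a full mip chain adds roughly one third.
    base = width * height * bytes_per_texel
    return base * (4 / 3 if with_mips else 1) / (1024 ** 2)

one_4k = texture_mib(4096, 4096)
print(f"one 4K x 4K RGBA8 texture: ~{one_4k:.0f} MiB with mips")
print(f"a 4 GB card fills up after roughly {4096 / one_4k:.0f} of them")

# one 4K x 4K RGBA8 texture: ~85 MiB with mips
# a 4 GB card fills up after roughly 48 of them

And sampling those textures costs the shader cores almost nothing, which is why the extra VRAM pays off even when the GPU itself isn't the bottleneck.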

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


I'll make my mind up when benchmarks come out.

"if nothing is impossible, try slamming a revolving door....." - unknown

my new rig bob https://uk.pcpartpicker.com/b/sGRG3C#cx710255

Kumaresh - "Judging whether something is alive by it's capability to live is one of the most idiotic arguments I've ever seen." - jan 2017


4 hours ago, Master Disaster said:

I think I heard that GDDR5X was in fairly short supply and production wasn't expected to increase significantly until the summer, so although it will certainly be cheaper I don't think it'll be as cheap as we all think it will/should be.

Correct me if I'm wrong, but some Polaris cards will have HBM1. I think it was PcPer that said that.


5 hours ago, TrigrH said:

Let's add "nvidia confidential" to the end of our Excel table so it seems more legit...

 

If they seriously call it GeForce X I'm going Polaris.

At least it's better than GTX 1080...

CPU: I7 3770k @4.8 ghz | GPU: GTX 1080 FE SLI | RAM: 16gb (2x8gb) gskill sniper 1866mhz | Mobo: Asus P8Z77-V LK | PSU: Rosewill Hive 1000W | Case: Corsair 750D | Cooler:Corsair H110| Boot: 2X Kingston v300 120GB RAID 0 | Storage: 1 WD 1tb green | 2 3TB seagate Barracuda|

 


2 hours ago, MEC-777 said:

One thing that might be wrong on that first chart is the memory: the X80 will probably be GDDR5X, not GDDR5. And the X80Ti might actually use HBM2, as it will be the same GP100 chip (cut down).

 

Can't wait to see Polaris vs Pascal. Let the battle begin!!! :D

It says it runs at 8 GT/s, which should be possible with GDDR5. GDDR5X is supposed to enable speeds up in the 14 GT/s range; if you're not pushing that high, it's probably better to just stick with good old GDDR5.
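For anyone curious, the math behind that is simple; the 256-bit bus below is just an assumption for an X80-class card, not something the chart states:

def peak_bandwidth_gb_s(bus_bits, gt_per_s):
    # Peak bandwidth = bytes per transfer (bus width / 8) * effective transfer rate
    return bus_bits / 8 * gt_per_s

for label, rate in [("GDDR5 at 8 GT/s", 8), ("GDDR5X at 10 GT/s", 10), ("GDDR5X at 14 GT/s", 14)]:
    print(f"{label}, 256-bit bus: {peak_bandwidth_gb_s(256, rate):.0f} GB/s")

# GDDR5 at 8 GT/s, 256-bit bus: 256 GB/s
# GDDR5X at 10 GT/s, 256-bit bus: 320 GB/s
# GDDR5X at 14 GT/s, 256-bit bus: 448 GB/s

At 8 GT/s you get the same figure whichever memory type you assume, so GDDR5X only starts to matter once the per-pin rate climbs past what GDDR5 can do.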


5 hours ago, Dabombinable said:

With HBM, 4GB of VRAM is fine. It's not as effective as the on-die cache of a Celeron compared to a Pentium II with twice as much off-die cache (same performance at the same clock speeds despite the Celeron having half the cache), but it's a lot better than having 4GB of GDDR5/GDDR5X.

And there will still be some issues, plus the fact that not all games have Crossfire profiles, and some don't even launch with them (or with profiles that work as they should).

There are plenty of workarounds for missing or broken profiles, both on the consumer's side and AMD's.

With DX12 games, however, multi-adapter is PURELY on the game devs' side...

 

If you think slow CF/SLI fixes were bad before, just you wait and see how long game devs are gonna take to solve it.


9 minutes ago, Sakkura said:

It says it runs at 8 GT/s, which should be possible with GDDR5. GDDR5X is supposed to enable speeds up in the 14 GT/s range; if you're not pushing that high, it's probably better to just stick with good old GDDR5.

Didn't Micron say that 14 GT/s wasn't going to happen before 2017 or 2018?

