
AMD's 3xx series tessellation "not" improved - performance boost comes from the drivers

zMeul

What was the Larrabee betrayal? I know Intel developed a GPU named Larrabee around 2008-2009, but I didn't know Nvidia had anything to do with it.

 

 

This is interesting; I know of the project, but not about "the betrayal" part.

If you examine the 1.5bn USD license deal between Intel and Nvidia, you'll find most of Nvidia's IP is open for use by Intel. What you will find missing are the fundamental pieces which would allow Intel to link them together in a useful manner. JSH withdrew many of those patents from the license after the demo, because even though Larrabee was a disappointment to Intel, it scared the living Hell out of Nvidia that Intel could come so far in such a short time, lots of money spent or not. Nvidia didn't want a third competitor, especially one with Intel's kind of money. That betrayal left a very sour taste, and now Intel wants blood, hence the push for x86-based accelerators which can be programmed natively, which is why Knights Landing is a 6/3 TFLOPS beast that outperforms the Tesla K40 while using less power.
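For what it's worth, the numbers in that claim can be sanity-checked with simple arithmetic. A sketch, with some assumptions of mine that are not from the post: I read "6/3 TFlop" as 6 TFLOPS single precision / 3 TFLOPS double precision, assume a ~215 W TDP for Knights Landing, and use Nvidia's published boost-clock peaks and 235 W TDP for the K40.

```python
# Back-of-envelope check of the Knights Landing vs. Tesla K40 claim.
# Assumptions (mine, not the poster's): "6/3 TFlop" means single/double
# precision; KNL TDP ~215 W; K40 figures are Nvidia's published
# boost-clock peaks with a 235 W TDP.
knl = {"sp": 6.0, "dp": 3.0, "tdp_w": 215}
k40 = {"sp": 4.29, "dp": 1.43, "tdp_w": 235}

for metric in ("sp", "dp"):
    # Raw throughput ratio at each precision
    print(f"KNL/K40 {metric.upper()} ratio: {knl[metric] / k40[metric]:.2f}x")

# Performance per watt underpins the "while using less power" part.
knl_eff = knl["dp"] / knl["tdp_w"] * 1000  # GFLOPS per watt, double precision
k40_eff = k40["dp"] / k40["tdp_w"] * 1000
print(f"DP efficiency: KNL {knl_eff:.1f} vs K40 {k40_eff:.1f} GFLOPS/W")
```

Under those assumptions the advantage works out to roughly 1.4x in single precision and 2x in double precision, at a better perf-per-watt, which is consistent with the post's framing.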

 

Intel wants Nvidia's head now, in addition to its IP. The last time Intel decided it wanted to destroy its competition, IBM barely escaped with its life, whereas MOS Technology died and Texas Instruments gave up and quickly retreated into niche hardware. Nvidia will not be able to stay ahead of Intel forever with only GPUs at its back, which is one reason it's pushing Tegra into cars and military contracts now. Intel's out for blood, and if Nvidia doesn't run, Nvidia will die or be swallowed up (most likely the latter, via merger/buyout).

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


You know how weak mobile processors are compared to desktop processors, right? And you know Nvidia's Tegra processor can't compete against a Pentium.

Actually, at stock clocks for each, the K1 is only 10% behind the G3258.



Actually, at stock clocks for each, the K1 is only 10% behind the G3258.

Like I said, it can't even beat a Pentium, so how is it supposed to beat an i3 or i5? Also, all of their CPUs are ARM, which is much different from x86, so they have no experience with x86 processors.


If you examine the 1.5bn USD license deal between Intel and Nvidia, you'll find most of Nvidia's IP is open for use by Intel. What you will find missing are the fundamental pieces which would allow Intel to link them together in a useful manner. JSH withdrew many of those patents from the license after the demo, because even though Larrabee was a disappointment to Intel, it scared the living Hell out of Nvidia that Intel could come so far in such a short time, lots of money spent or not. Nvidia didn't want a third competitor, especially one with Intel's kind of money. That betrayal left a very sour taste, and now Intel wants blood, hence the push for x86-based accelerators which can be programmed natively, which is why Knights Landing is a 6/3 TFLOPS beast that outperforms the Tesla K40 while using less power.

Intel wants Nvidia's head now, in addition to its IP. The last time Intel decided it wanted to destroy its competition, IBM barely escaped with its life, whereas MOS Technology died and Texas Instruments gave up and quickly retreated into niche hardware. Nvidia will not be able to stay ahead of Intel forever with only GPUs at its back, which is one reason it's pushing Tegra into cars and military contracts now. Intel's out for blood, and if Nvidia doesn't run, Nvidia will die or be swallowed up (most likely the latter, via merger/buyout).

So basically, Nvidia poked the wrong beast. Intel is a very powerful company, but I would hate to see Nvidia laid to waste if that ever came to fruition.

I just couldn't imagine a world without AMD and Nvidia. :(

CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM


Like I said, it can't even beat a Pentium, so how is it supposed to beat an i3 or i5? Also, all of their CPUs are ARM, which is much different from x86, so they have no experience with x86 processors.

You realize the K1 is at 2.7 GHz (boost) vs. 3.2 GHz on the G3258, right? Pardon me, but that would put the superior IPC in Nvidia's chip. Furthermore, Nvidia would get AMD's CPU IP, stuff it could very quickly apply to a new line of x86 processors with the Denver IP added in, not to mention Jim Keller can't stand Intel and would likely hop right into Nvidia's arms to help them out. Nvidia can do a Hell of a lot given access to x86.

 

Lastly, ARM and x86 are not that different anymore, except that ARM still doesn't have virtualization. ARM has very quickly gone the CISC route: it has taken on out-of-order execution, and most of the chips implementing the ARMv8 64-bit instruction set now use microcode too. The difference between it and x86 is now pretty much nomenclature, origins in low power vs. high power, and the niceties required for a virtualized HPC environment.
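The IPC argument above follows from the numbers quoted in this exchange (K1 "only 10% behind" overall, 2.7 GHz boost vs. the Pentium's 3.2 GHz). A quick sketch of the arithmetic, normalizing the Pentium's benchmark score to 1.0:

```python
# Rough per-clock throughput ("IPC") comparison implied by the posts:
# the K1 is said to be ~10% behind the G3258 overall, while running
# 2.7 GHz boost against the Pentium's 3.2 GHz stock clock.
g3258_perf, g3258_ghz = 1.00, 3.2   # normalize the Pentium's score to 1.0
k1_perf, k1_ghz = 0.90, 2.7         # "only 10% behind"

g3258_ipc = g3258_perf / g3258_ghz  # work done per GHz
k1_ipc = k1_perf / k1_ghz

# (0.90 / 2.7) vs. (1.00 / 3.2) works out to ~1.07, i.e. ~7% more work
# per clock for the K1 -- the basis of the "superior IPC" argument.
print(f"K1 IPC relative to G3258: {k1_ipc / g3258_ipc:.2f}x")
```

This is only as good as the single "10% behind" figure it starts from, of course; a different benchmark mix would move the result.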



You realize the K1 is at 2.7 GHz (boost) vs. 3.2 GHz on the G3258, right? Pardon me, but that would put the superior IPC in Nvidia's chip. Furthermore, Nvidia would get AMD's CPU IP, stuff it could very quickly apply to a new line of x86 processors with the Denver IP added in, not to mention Jim Keller can't stand Intel and would likely hop right into Nvidia's arms to help them out. Nvidia can do a Hell of a lot given access to x86.

I would much rather Samsung buy AMD than Nvidia, and you know the Pentium is Intel's weakest line of processors while the K1 is one of Nvidia's fastest.


What I'm saying is no one should expect the drivers for the new cards to be officially supported on the older ones.

 

I also wouldn't assume they're direct rebrands.

 

Some are direct rebrands and some aren't. But based on the facts we now know, the actual GPU chips/dies themselves are unchanged. They are the same. The only differences are that they might be slightly better binned, and the memory chips are also better binned/higher quality.

 

I looked at a detailed review of the HIS IceQx2 R9 390 vs. my HIS IceQx2 R9 290, and the two cards are identical. I can even post side-by-side pics to confirm.

 

MSI, Asus, and Sapphire, on the other hand, have made some significant changes to their PCBs, components, and cooler designs. So it largely depends on which cards you're comparing.

 

No, AMD is selling cards made with a better 28nm process.

If you think the GPU is the same with no improvements, and that you can flash a BIOS and update drivers on an R9 290X and end up with an R9 390X, then boy, are you in for a surprise.

 

No. They are exactly the same Hawaii GPU dies. Some people are reporting successfully flashing 290 vBIOSes to 390 ones and using the new 15.15 drivers, so it is quite possible. Since I'm a 290 user myself, I'm going to be following this and will probably try it myself. (Dual-BIOS cards FTW! ;) )

 

The issue with HairWorks in The Witcher 3 was over-tessellation. AMD users were able to get significantly better performance with no visible loss in image quality by capping it to x8 or x16 in the Catalyst Control Center. That means CD Projekt Red could have boosted everybody's framerates (Nvidia and AMD alike) by optimizing it properly, yet they failed to do so...
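To see why capping the factor helps so much, note that triangle count grows roughly with the square of the tessellation factor (the exact count depends on the partitioning mode, so this is a simplification). A toy model, with a purely hypothetical patch count for illustration:

```python
# Toy model of tessellation cost: a patch at tessellation factor N is
# subdivided into roughly N^2 triangles, so HairWorks' default 64x
# generates ~16x the geometry of a 16x cap for no visible gain.
def approx_triangles(patches: int, tess_factor: int) -> int:
    # Quadratic growth in the factor; 'patches' is hypothetical here.
    return patches * tess_factor ** 2

patches = 10_000  # made-up number of hair patches in a frame
for factor in (8, 16, 32, 64):
    print(f"{factor:>2}x -> ~{approx_triangles(patches, factor):,} triangles")
```

Halving the factor quarters the triangle load, which is why the x8/x16 cap recovers so much performance while looking nearly identical.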

 

Yep. I'm running TW3 with VSR at 1440p downsampled to 1080p, with full AA, the "high" preset, and tessellation set to a 32x override in Catalyst (HairWorks off). It runs mid-40s to 60+ fps. The hair looks the same as with HairWorks turned on, but with almost no performance hit. :)

 

...let's be honest, no one is going to swap to a 300-series card from a 980 or higher. Hell, all the same arguments that exist between the 290X and the 970 exist between the 390X and the 970, so I doubt many of those would switch either.), and that is literally the only improvement they have to show for it.

 

They aren't trying to get people to switch from a 980 to a 390x. They're trying to get potential buyers to buy a 390x instead of a 980. If you already have a card in that performance range, why would you consider switching to another as a side-grade? ;)

My Systems:

Main - Work + Gaming:

Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:

FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

HTPC:

SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:

MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:

Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS


I would much rather Samsung buy AMD than Nvidia, and you know the Pentium is Intel's weakest line of processors while the K1 is one of Nvidia's fastest.

It was also Nvidia's 2nd try vs. Intel's 20-something generation of chips. Last I checked, that's damn impressive. Samsung would never be allowed to buy AMD; the FTC and the DHS would block it without fail. They'd never allow a foreign firm exclusive access to one of the U.S.'s most valuable assets. x86_64 would go to an American firm in the event of AMD's dissolution. It doesn't matter what you'd prefer.



So basically, Nvidia poked the wrong beast. Intel is a very powerful company, but I would hate to see Nvidia laid to waste if that ever came to fruition.

I just couldn't imagine a world without AMD and Nvidia. :(

JSH is nothing if not arrogant, but even he's wising up to it. If AMD splits ATI off and sells itself to Nvidia, then the combined entity has a chance long-term, though at that point Intel will very rapidly start integrating the ATI IP and will be offering top dollar for GPU talent. Could you imagine Intel armed with Raja Koduri and a process node ahead of the rest of the industry? Nvidia could take on the HSA mantle and very rapidly deploy and integrate it, and that would leave Intel in a bit of a bind if its competition suddenly had tighter integration than it does, which would force a paradigm shift and buy Nvidia/AMD time to get the whole portfolio redone and repositioned for long-term competition.

 

The industry and consumers are better off without AMD as a whole entity the way it is now. You'd have stiff competition very quickly and yearly price wars that really only Nvidia and Intel can afford in the first place. 

 

The other nasty bit about Intel getting Nvidia is that TSMC would lose another huge client. And with Altera gone from GloFo, once AMD eventually got buried, GloFo would be gone too, leaving just TSMC and Samsung to fight Intel as foundries, with TSMC on its last legs with only Qualcomm and Apple as customers. If Intel gets Nvidia, the world is screwed nine ways from Sunday. There's just no putting it nicely.



It was also Nvidia's 2nd try vs. Intel's 20-something generation of chips. Last I checked, that's damn impressive. Samsung would never be allowed to buy AMD; the FTC and the DHS would block it without fail. They'd never allow a foreign firm exclusive access to one of the U.S.'s most valuable assets. x86_64 would go to an American firm in the event of AMD's dissolution. It doesn't matter what you'd prefer.

Exactly what I'm saying: Nvidia doesn't have much experience with CPUs.


Exactly what I'm saying: Nvidia doesn't have much experience with CPUs.

Experience is not everything. Intel has just as much experience as Nvidia in GPU production and design. However, it lacks the IP necessary to build to the same quality, because both ATI and Nvidia were huge patent trolls, and Nvidia still doesn't license the fundamental pieces of its IP which Intel would need to build a more complete GPU architecture. Nvidia doesn't need the experience; it needs the IP. The same is true for Intel in regard to GPUs.



Will this driver be available for download later on?

Perhaps, but if it's not WHQL, it might not be.


If you examine the 1.5bn USD license deal between Intel and Nvidia, you'll find most of Nvidia's IP is open for use by Intel. What you will find missing are the fundamental pieces which would allow Intel to link them together in a useful manner. JSH withdrew many of those patents from the license after the demo, because even though Larrabee was a disappointment to Intel, it scared the living Hell out of Nvidia that Intel could come so far in such a short time, lots of money spent or not. Nvidia didn't want a third competitor, especially one with Intel's kind of money. That betrayal left a very sour taste, and now Intel wants blood, hence the push for x86-based accelerators which can be programmed natively, which is why Knights Landing is a 6/3 TFLOPS beast that outperforms the Tesla K40 while using less power.

 

Intel wants Nvidia's head now, in addition to its IP. The last time Intel decided it wanted to destroy its competition, IBM barely escaped with its life, whereas MOS Technology died and Texas Instruments gave up and quickly retreated into niche hardware. Nvidia will not be able to stay ahead of Intel forever with only GPUs at its back, which is one reason it's pushing Tegra into cars and military contracts now. Intel's out for blood, and if Nvidia doesn't run, Nvidia will die or be swallowed up (most likely the latter, via merger/buyout).

This supports what I was saying: Nvidia won't give a rat's ass about bringing Pascal to the desktop market, but will make it a priority to put it into Tesla accelerators.

Intel is after them in the HPC market.

 

Yes, we might see Pascal on desktop, but not sooner than very late 2016 or even 2017.


Thanks guys. You guys (especially @patrickjp93) are giving me a conceptual understanding of modern micro-processing... even though I have to read it at 0.05x speed to try to understand most of it (failing partly, of course).

 

;)


I can't honestly answer them directly, because I don't pay attention to their driver releases.

Then stop talking.

 

I'd assume that the enhancements will be part of the next driver, while this driver remains exclusive. That said, it could also be related to the vBIOS. Who knows?


Thanks guys. You guys (especially @patrickjp93) are giving me a conceptual education regarding micro-processing... even though I have to read it at 0.05x speed to try to understand most of it (failing partly, of course).

 

;)

What @patrickjp93  is giving you is far from an education. He is throwing all of his bias for certain companies into little balls of fact and rumor.

System CPU : Ryzen 9 5950 doing whatever PBO lets it. Motherboard : Asus B550 Wifi II RAM 80GB 3600 CL 18 2x 32GB 2x 8GB GPUs Vega 56 & Tesla M40 Corsair 4000D Storage: many and varied small (512GB-1TB) SSD + 5TB WD Green PSU 1000W EVGA GOLD

 

You can trust me, I'm from the Internet.

 


 

 
 

If you examine the 1.5bn USD license deal between Intel and Nvidia, you'll find most of Nvidia's IP is open for use by Intel. What you will find missing are the fundamental pieces which would allow Intel to link them together in a useful manner. JSH withdrew many of those patents from the license after the demo, because even though Larrabee was a disappointment to Intel, it scared the living Hell out of Nvidia that Intel could come so far in such a short time, lots of money spent or not. Nvidia didn't want a third competitor, especially one with Intel's kind of money. That betrayal left a very sour taste, and now Intel wants blood, hence the push for x86-based accelerators which can be programmed natively, which is why Knights Landing is a 6/3 TFLOPS beast that outperforms the Tesla K40 while using less power.

 

Intel wants Nvidia's head now, in addition to its IP. The last time Intel decided it wanted to destroy its competition, IBM barely escaped with its life, whereas MOS Technology died and Texas Instruments gave up and quickly retreated into niche hardware. Nvidia will not be able to stay ahead of Intel forever with only GPUs at its back, which is one reason it's pushing Tegra into cars and military contracts now. Intel's out for blood, and if Nvidia doesn't run, Nvidia will die or be swallowed up (most likely the latter, via merger/buyout).

 

...

 

Okay, out of curiosity, is this the only name you use on forums?


Then stop talking.

 

I'd assume that the enhancements will be part of the next driver, while this driver remains exclusive. That said, it could also be related to the vBIOS. Who knows?

 

Stop talking? Lol.

 

All I said was speculation on my part. Am I not allowed to speculate about something AMD is doing?


What @patrickjp93 is giving you is far from an education. He is throwing all of his bias for certain companies into little balls of fact and rumor.

 

I get that, but it is his insight that I find intriguing. If, 10-15 years from now, we see Intel/ATI dropping new SoC-like units with Nvidia/AMD on their heels, I for one will think back and say:

 

That motherf*cker Patrick was right!  :D


Stop talking? Lol.

 

All I said was speculation on my part. Am I not allowed to speculate about something AMD is doing?

No, you're not allowed to. :P



I get that, but it is his insight that I find intriguing. If, 10-15 years from now, we see Intel/ATI dropping new SoC-like units with Nvidia/AMD on their heels, I for one will think back and say:

 

That motherf*cker Patrick was right!  :D

The Intel acquisition of ATI should've happened back then, in 2007. Why exactly it didn't is unknown.


Stop talking? Lol.

 

All I said was speculation on my part. Am I not allowed to speculate about something AMD is doing?

Not without backup, no.


The Intel acquisition of ATI should've happened back then, in 2007. Why exactly it didn't is unknown.

 

ATI is Canadian, I believe. I remember seeing their offices around the city a long time ago.

 

They went to AMD for some reason. It shouldn't be too hard to find out some of the reasons.


I get that, but it is his insight that I find intriguing. If, 10-15 years from now, we see Intel/ATI dropping new SoC-like units with Nvidia/AMD on their heels, I for one will think back and say:

 

That motherf*cker Patrick was right!  :D

I will buy everyone here the first Intel desktop card if that happens. According to him, it should happen by 2020 or 2021.


Not without backup, no.

 

It's safe to assume they will keep their new "sauce" exclusive to their new cards so they can sell more of them.

 

They're a business.

