
A New Era Of Computing Is Upon Us.

Eh... that's not the whole story. A CPU can only send so much per burst because of the encoding overhead, and it doesn't have as many parallel PCIe controllers as a GPU. Skylake-E is bringing PCIe 4.0 to address that part of the problem.

Question: Currently, even the most high-end GPUs can't fully saturate PCIe 3.0 at x8 (roughly the bandwidth of PCIe 2.0 at x16). If that's the case, why is PCIe 4.0 even necessary, and what benefit could it bring?

My Systems:

Main - Work + Gaming:

Spoiler

Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:

Spoiler

FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:

Spoiler

SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:

Spoiler

MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:

Spoiler

Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 


Question: Currently, even the most high-end GPUs can't fully saturate PCIe 3.0 at x8 (roughly the bandwidth of PCIe 2.0 at x16). If that's the case, why is PCIe 4.0 even necessary, and what benefit could it bring?

Again, with PCIe 4.0 a CPU core could send out more information in a single burst, making up for its slower burst rate and the fact that it has far fewer PCIe controllers than a GPU.

Also, Nvidia is predicting PCIe 3.0 will be saturated by 2020 and has proposed a new bus connection altogether (NVLink).
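For a rough sense of the numbers behind that, here's a minimal back-of-the-envelope sketch (Python; the per-lane rates and encodings are the published spec values). Note how PCIe 3.0 x8 lands at roughly the same usable bandwidth as PCIe 2.0 x16:

```python
# Rough PCIe bandwidth per direction, per link width.
# Assumes the standard per-lane rates and encoding overheads:
# PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0/4.0: 8/16 GT/s with 128b/130b.
GENERATIONS = {
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
}

def bandwidth_gbps(gen: str, lanes: int) -> float:
    """Return usable bandwidth in GB/s for one direction of a PCIe link."""
    gt_per_s, efficiency = GENERATIONS[gen]
    return gt_per_s * efficiency * lanes / 8  # bits -> bytes

for gen, lanes in [("2.0", 16), ("3.0", 8), ("3.0", 16), ("4.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {bandwidth_gbps(gen, lanes):.2f} GB/s")
```

Doubling the per-lane rate mostly helps devices that are starved for lanes, which is the point being made above about CPUs.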

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Both. I've never had any issue using Gmail, but Outlook continues to confuse and annoy me.

 

I just get tired of having to have "student emails" and "professional emails" and whatnot. Why can't I just use my primary email address for everything?

 

Tried that, but it always gets spammed, because even if you click "don't email me random shit" they still email you random shit. I primarily blame Facebook and Verizon.


*Cough* graphene *cough*

IBM has already built 400 GHz chips with it.

 

The only problem with graphene is manufacturing it at large scale cheaply and cleanly. If that can be done, it could revolutionize not just computers but also battery tech. Imagine a world where everyone has solar panels on their houses, and those panels are so efficient that they produce far more energy than the building uses, so all that excess energy gets stored in super-dense, super-powerful graphene batteries for later. Of course that will never happen, because America is so anti-environmental, thanks in no small part to Big Oil paying the mainstream media to push propagandistic BS to the public. Fuck Big Oil, and fuck America.


My problem with HSA is that AMD doesn't seem to be doing anything with it. Take the FM2+ APUs, which can run in CrossFire with a discrete GPU. While this sounds cool, they only let you CrossFire the onboard graphics with a low-end graphics card. People always say "well, that's for budget builds, blah blah blah," but the point is, I want to be able to use onboard graphics in conjunction with 1, 2, 3, or even 4 R9 290s, so that I can get better performance than that number of GPUs can deliver on their own. Don't limit things: make it work at the cheap end as well as the ridiculously expensive end, and make it so the cards aren't limited to the performance of the onboard graphics. Make performance additive, not subtractive. For instance, running an R9 270 in CrossFire with an R9 290 limits the R9 290 to the performance of the R9 270. Why? Why can't (or don't) they make it so that the R9 290 does as much as it can, and the secondary card simply supplements that to make things even better?

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs


Eh... that's not the whole story. A CPU can only send so much per burst because of the encoding overhead, and it doesn't have as many parallel PCIe controllers as a GPU. Skylake-E is bringing PCIe 4.0 to address that part of the problem.

It's far from saturating PCIe; no need to worry about that.

Anyway, this technology isn't really aimed at dedicated graphics. It's all about SoCs, APUs, and Intel HD, where you can have a very capable GPU built in that handles the floating-point operations and highly parallel workloads instead of leaving them to the CPU. Look at the AMD architectures, look at Kaveri: it has 2 modules, each with 2 integer cores but only 1 floating-point unit. On the integer side it rocks; in the few benchmarks that use only integer loads, the little Kaveri doesn't fear the i5 at all, it's pretty much a tie. However, in a real-world application like rendering or gaming, the lack of proper float processing and dedicated cache kills it. If HSA were applied in such a scenario, all the problems the APU faces would soon be gone: the floats would get crunched away by the integrated GPU, and the CPU would actually do what it was made for.

If this technology succeeds, getting a CPU with integrated graphics plus a dedicated GPU is where it's at. If games get the hang of it, we might start recommending A10 APUs with high-end GPUs for gaming builds later down the road.
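To make the "let the integrated GPU crunch the floats" idea concrete, here is a minimal sketch using PyOpenCL (my choice of API, the post doesn't name one; proper HSA would also avoid the explicit buffer copies shown here by sharing memory between CPU and GPU). It pushes a simple float-heavy, embarrassingly parallel operation (a*x + y) onto whatever OpenCL device is available, such as an APU's integrated GPU:

```python
import numpy as np
import pyopencl as cl

# Pick an OpenCL device (e.g. the APU's integrated GPU) and create a queue.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# A trivially parallel float workload: out[i] = a * x[i] + y[i]
src = """
__kernel void saxpy(const float a, __global const float *x,
                    __global const float *y, __global float *out) {
    int i = get_global_id(0);
    out[i] = a * x[i] + y[i];
}
"""
prog = cl.Program(ctx, src).build()

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=y)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)

# One work-item per element; the GPU handles the float math, not the CPU cores.
prog.saxpy(queue, (n,), None, np.float32(2.0), x_buf, y_buf, out_buf)

out = np.empty_like(x)
cl.enqueue_copy(queue, out, out_buf)
```

This is just an illustration of the kind of work that would move off a weak FPU; the whole point of HSA is that the copy-in/copy-out steps above disappear when CPU and GPU share the same address space.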


The Kaveri APUs still kinda suck on the CPU side. The GPU is strong, don't get me wrong, but it needs more cores on both sides, and it still doesn't compete well with Intel except in LibreOffice Calc running simulations.

It's debatable whether the CPU side really needs to be much stronger (at least not at the moment). It's a proven gaming CPU that can handle high-end GPUs without much bottlenecking (thanks to Mantle, driver updates, and other software optimizations), and it's also a brand-new manufacturing process for AMD, so I'm sure they'll improve the architecture to some reasonable degree with the next iterations.

I get that it's no powerhouse of a CPU, but in terms of HSA, for what it is, it's potentially very capable.

Also, HSA aside, the GPU cores are still used alongside the CPU to improve performance for certain tasks, so the iGPU doesn't just become a waste of silicon when you install a dGPU alongside a Kaveri APU. AMD has also mentioned the possibility of off-loading a portion of the workload (or certain tasks) from the dGPU to the iGPU during gaming to further increase performance.

So there's more to it than just being a CPU with an iGPU, which a lot of people wouldn't normally think of. It's the first HSA-ready APU, the first of its kind, and I just think it deserves more credit than it gets, that's all. Not easy to do when we haven't seen what it's truly capable of just yet, of course.


My problem with HSA is that AMD doesn't seem to be doing anything with it. Take the FM2+ APUs, which can run in CrossFire with a discrete GPU. While this sounds cool, they only let you CrossFire the onboard graphics with a low-end graphics card. People always say "well, that's for budget builds, blah blah blah," but the point is, I want to be able to use onboard graphics in conjunction with 1, 2, 3, or even 4 R9 290s, so that I can get better performance than that number of GPUs can deliver on their own. Don't limit things: make it work at the cheap end as well as the ridiculously expensive end, and make it so the cards aren't limited to the performance of the onboard graphics. Make performance additive, not subtractive. For instance, running an R9 270 in CrossFire with an R9 290 limits the R9 290 to the performance of the R9 270. Why? Why can't (or don't) they make it so that the R9 290 does as much as it can, and the secondary card simply supplements that to make things even better?

I think AMD needs to get developers to work on it; they laid down the architecture, but nobody cares. They could pay EA to make BF5 fully HSA-ready and show at Computex, or any other event, how an A10 APU together with an R9 290X can hands-down defeat an i5 or i7.

Not long ago Intel enabled a technology that allows games to use the HD graphics to boost performance (for example, processing backgrounds and UI elements like minimaps), but nobody adopted it. Today they are pushing Crystal Well and other integrated-graphics perks; they know that using the integrated GPU for processing is where it's at.

If AMD pays game studios to optimize for it, HSA might take off for gaming. It's also a great solution for workstation use; maybe pushing it there would work for them too.


I think AMD needs to get developers to work on it; they laid down the architecture, but nobody cares. They could pay EA to make BF5 fully HSA-ready and show at Computex, or any other event, how an A10 APU together with an R9 290X can hands-down defeat an i5 or i7.

Not long ago Intel enabled a technology that allows games to use the HD graphics to boost performance (for example, processing backgrounds and UI elements like minimaps), but nobody adopted it. Today they are pushing Crystal Well and other integrated-graphics perks; they know that using the integrated GPU for processing is where it's at.

If AMD pays game studios to optimize for it, HSA might take off for gaming. It's also a great solution for workstation use; maybe pushing it there would work for them too.

That's exactly what I was thinking of: use extra hardware like onboard graphics, which we pay for as part of the base system, to do what it can and free up workload for the primary graphics card(s). Why did no one adopt it? I could see it being useful for things like the UI, or even rendering the sky, or any other non-interactive, static graphics workloads: basically things that don't change, or don't change very much, in relation to the player.


That's exactly what I was thinking of: use extra hardware like onboard graphics, which we pay for as part of the base system, to do what it can and free up workload for the primary graphics card(s). Why did no one adopt it? I could see it being useful for things like the UI, or even rendering the sky, or any other non-interactive, static graphics workloads: basically things that don't change, or don't change very much, in relation to the player.

I have no idea why the Intel HD "boost" didn't take off, but I do remember Linus talking about it in an NCIX video; I can't remember the name of the feature.

But I totally agree with you: you can give the easy things to the integrated graphics and increase your average FPS that way.


I have no idea why the Intel HD "boost" didn't take off, but I do remember Linus talking about it in an NCIX video; I can't remember the name of the feature.

But I totally agree with you: you can give the easy things to the integrated graphics and increase your average FPS that way.

I bet it's because consoles don't have that kind of capability, so they didn't implement it, to make consoles look equal to PC. *cough* Ubi *cough*


Took long enough.

Someone told Luke and Linus at CES 2017 to "Unban the legend known as Jerakl" and that's about all I've got going for me. (It didn't work)

 


The only reasons CPU performance doesn't double every year are AMD exiting the desktop mid-to-high-end CPU market, leaving Intel with no competitor, and failed parallelism.

I mean, come on, I know multithreading is hard, but companies are barely pulling their heads out of their asses and writing proper parallel code for CPUs.

We wouldn't need 5 GHz CPUs if we could have cheap 10-core 3 GHz ones instead, just like Xeons. Build a machine with dual 8+ core Xeons and you get zero benefit in third-party apps and games; then run server or custom parallel code on it and your mind is blown by the processing speed. Couple that with HSA and we could probably have 4K games tomorrow.
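The "more slower cores vs. fewer faster cores" trade-off is essentially Amdahl's law. Here's a small illustrative calculation (the clock speeds and core counts are just the hypothetical figures from the post; the "effective GHz" metric is my own rough stand-in for throughput): a cheap 10-core 3 GHz part only beats a 5 GHz quad-core once enough of the code actually runs in parallel.

```python
def speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup over a single core for a given parallel fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

def effective_ghz(base_clock_ghz: float, cores: int, parallel_fraction: float) -> float:
    """Very rough throughput figure: single-core clock times the Amdahl speedup."""
    return base_clock_ghz * speedup(parallel_fraction, cores)

for p in (0.5, 0.9, 0.99):
    ten_core = effective_ghz(3.0, 10, p)   # cheap 10-core 3 GHz part
    quad_core = effective_ghz(5.0, 4, p)   # hypothetical 5 GHz quad-core
    print(f"parallel fraction {p:.2f}: 10x3GHz ~ {ten_core:.1f}, 4x5GHz ~ {quad_core:.1f}")
```

With half-serial code the 5 GHz quad still wins; past roughly 90% parallel the cheap 10-core pulls ahead, which is the crossover the post is gesturing at.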


My problem with HSA is that AMD doesn't seem to be doing anything with it. Take the FM2+ APUs, which can run in CrossFire with a discrete GPU. While this sounds cool, they only let you CrossFire the onboard graphics with a low-end graphics card. People always say "well, that's for budget builds, blah blah blah," but the point is, I want to be able to use onboard graphics in conjunction with 1, 2, 3, or even 4 R9 290s, so that I can get better performance than that number of GPUs can deliver on their own. Don't limit things: make it work at the cheap end as well as the ridiculously expensive end, and make it so the cards aren't limited to the performance of the onboard graphics. Make performance additive, not subtractive. For instance, running an R9 270 in CrossFire with an R9 290 limits the R9 290 to the performance of the R9 270. Why? Why can't (or don't) they make it so that the R9 290 does as much as it can, and the secondary card simply supplements that to make things even better?

 

AMD has mentioned the potential to do exactly what you're saying with these HSA-capable APUs: the ability to off-load a portion of the GPU workload to the iGPU to increase the total GPU capability of your system. So R9 290 + iGPU = total performance greater than that of the R9 290 alone.

 

Currently, the way SLI/CrossFire works, the two GPUs must be from the same architectural family, they must be synced (GPU clocks must match), and resources are shared (the same data is stored in the VRAM of both GPUs, which is why you still have 2GB total when you CrossFire two GPUs with 2GB of VRAM each, not 4GB). It is somewhat inefficient and poorly optimized in many respects, but this is the main reason you can only run certain specific GPUs in CrossFire with the APU's iGPU: it must be from the same GPU family (R7 240/250 with Kaveri). I don't think you can CrossFire an R9 290 with a 270 because they are from different GPU families. You can CrossFire a 270 with a 270X, and the 270X would be "held back" to the clock frequency of the 270 (because the 270 can't be automatically overclocked to match the 270X). Again, this is simply due to the limitations of the way CrossFire currently works.

 

But as I said before, with HSA it could very well be possible to include the iGPU cores as part of the total system GPU count. It wouldn't be the same as CrossFire, as it would only be off-loading certain specific tasks, and CrossFire with two same/similar GPUs would probably have to remain as it is done now (simply because of the need for the two devices to communicate and synchronize properly). It may be possible for the iGPU to somehow aid in that process as well, allowing for better performance scaling.

 

There are many possibilities with this and I think we've only just hit the tip of the iceberg.
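As a side note on the VRAM-mirroring point a couple of paragraphs up, here's a toy model (purely illustrative, not how any driver is actually written) of alternate-frame rendering, the usual CrossFire/SLI mode: whole frames alternate between GPUs, so each GPU has to keep the full asset set in its own VRAM, which is why 2GB + 2GB still behaves like 2GB.

```python
from dataclasses import dataclass, field

@dataclass
class Gpu:
    name: str
    vram: set = field(default_factory=set)   # assets resident on this card

def alternate_frame_rendering(gpus, num_frames, assets):
    # Every GPU mirrors the full asset set, since each one renders complete frames.
    for gpu in gpus:
        gpu.vram |= assets
    # Frames are handed out round-robin, one whole frame per GPU.
    return {frame: gpus[frame % len(gpus)].name for frame in range(num_frames)}

gpus = [Gpu("GPU0"), Gpu("GPU1")]
print(alternate_frame_rendering(gpus, 6, {"terrain", "characters", "ui"}))
print({g.name: sorted(g.vram) for g in gpus})   # identical contents on both cards
```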


AMD has mentioned the potential to do exactly what you're saying with these HSA-capable APUs: the ability to off-load a portion of the GPU workload to the iGPU to increase the total GPU capability of your system. So R9 290 + iGPU = total performance greater than that of the R9 290 alone.

 

Currently, the way SLI/CrossFire works, the two GPUs must be from the same architectural family, they must be synced (GPU clocks must match), and resources are shared (the same data is stored in the VRAM of both GPUs, which is why you still have 2GB total when you CrossFire two GPUs with 2GB of VRAM each, not 4GB). It is somewhat inefficient and poorly optimized in many respects, but this is the main reason you can only run certain specific GPUs in CrossFire with the APU's iGPU: it must be from the same GPU family (R7 240/250 with Kaveri). I don't think you can CrossFire an R9 290 with a 270 because they are from different GPU families. You can CrossFire a 270 with a 270X, and the 270X would be "held back" to the clock frequency of the 270 (because the 270 can't be automatically overclocked to match the 270X). Again, this is simply due to the limitations of the way CrossFire currently works.

 

But as I said before, with HSA it could very well be possible to include the iGPU cores as part of the total system GPU count. It wouldn't be the same as CrossFire, as it would only be off-loading certain specific tasks, and CrossFire with two same/similar GPUs would probably have to remain as it is done now (simply because of the need for the two devices to communicate and synchronize properly). It may be possible for the iGPU to somehow aid in that process as well, allowing for better performance scaling.

 

There are many possibilities with this and I think we've only just hit the tip of the iceberg.

But WHY do they have to share resources and clock speeds? Why not have the primary GPU (in my example the 290) run at max and have the secondary (the 270) handle any "overflow" on the back end? For instance, with a heavily modded Skyrim, have the primary GPU fill its memory with what it needs, textures and shiz, and put anything extra in the secondary card's memory?


But WHY do they have to share resources and clock speeds? Why not have the primary GPU (in my example the 290) run at max and have the secondary (the 270) handle any "overflow" on the back end? For instance, with a heavily modded Skyrim, have the primary GPU fill its memory with what it needs, textures and shiz, and put anything extra in the secondary card's memory?

I make no claim to be a CrossFire expert, but I believe it's because they have to be synchronized to communicate properly, divide/share the workload, and then output the image(s) correctly. This is why frame-time variance has always been a bit of an issue with CrossFire/SLI. VRAM data is duplicated on both GPUs because they are both rendering portions of the same images/scenes at the same time.

In order to implement off-loading a portion of the work, or specific tasks, to the other GPU, I believe more work needs to be done on the software side to tell the GPUs what to do. This is where things get a little beyond my knowledge, but it could possibly be handled by the API.


I make no claim to be a CrossFire expert, but I believe it's because they have to be synchronized to communicate properly, divide/share the workload, and then output the image(s) correctly. This is why frame-time variance has always been a bit of an issue with CrossFire/SLI. VRAM data is duplicated on both GPUs because they are both rendering portions of the same images/scenes at the same time.

In order to implement off-loading a portion of the work, or specific tasks, to the other GPU, I believe more work needs to be done on the software side to tell the GPUs what to do. This is where things get a little beyond my knowledge, but it could possibly be handled by the API.

So basically, each GPU has to know what the other is doing, to prevent them from doing the exact same thing?


The only problem with graphene is manufacturing it at large scale cheaply and cleanly. If that can be done, it could revolutionize not just computers but also battery tech. Imagine a world where everyone has solar panels on their houses, and those panels are so efficient that they produce far more energy than the building uses, so all that excess energy gets stored in super-dense, super-powerful graphene batteries for later. Of course that will never happen, because America is so anti-environmental, thanks in no small part to Big Oil paying the mainstream media to push propagandistic BS to the public. Fuck Big Oil, and fuck America.

America beats the world to almost everything. Screw other countries for being dependent. That said, the graphene problem was fixed a while ago. Google "graphene battery record reverse capacitance." Defect-free graphene in mass production is here.


So basically, each GPU has to know what the other is doing, to prevent them from doing the exact same thing?

Yup, but the CUDA 6 standard on the Maxwell architecture has unified memory with minimal copying. SLI will pull ahead for a while.


So basically, each GPU has to know what the other is doing, to prevent them from doing the exact same thing?

In a nutshell, yes.


Yup, but the CUDA 6 standard on the Maxwell architecture has unified memory with minimal copying. SLI will pull ahead for a while.

In a nutshell, yes.

Interesting. Definitely going to affect my decision on whether to go AMD or Nvidia for my next GPU.


It's far from saturating PCIe; no need to worry about that.

Anyway, this technology isn't really aimed at dedicated graphics. It's all about SoCs, APUs, and Intel HD, where you can have a very capable GPU built in that handles the floating-point operations and highly parallel workloads instead of leaving them to the CPU. Look at the AMD architectures, look at Kaveri: it has 2 modules, each with 2 integer cores but only 1 floating-point unit. On the integer side it rocks; in the few benchmarks that use only integer loads, the little Kaveri doesn't fear the i5 at all, it's pretty much a tie. However, in a real-world application like rendering or gaming, the lack of proper float processing and dedicated cache kills it. If HSA were applied in such a scenario, all the problems the APU faces would soon be gone: the floats would get crunched away by the integrated GPU, and the CPU would actually do what it was made for.

If this technology succeeds, getting a CPU with integrated graphics plus a dedicated GPU is where it's at. If games get the hang of it, we might start recommending A10 APUs with high-end GPUs for gaming builds later down the road.

Nope. Game makers will take too long, and Intel is already putting unified memory on Skylake. AMD's heterogeneous advantage is going out the window as we speak. Also, Intel has the most powerful and efficient FPU in the industry in its cores, and Broadwell shaved 2 cycles off the divide and multiply instructions. That will translate to the GPU cores, and that 96-core Iris Pro 6200 is going to make Carrizo look awful. It's a 2 TFLOP solution against a 1.3 TFLOP solution, assuming AMD gets to 768 GPU cores. The A10 will be just fine for gaming for about 6 years, but after that AMD won't have any solution to compete with Intel. Still, yay heterogeneous chips!
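For reference, the theoretical-TFLOP figures being thrown around come from a simple formula: shader count × FLOPs per clock (2 for a fused multiply-add) × clock speed. A quick sketch with an assumed ~850 MHz GPU clock (my number, chosen only to show where a "768 cores ≈ 1.3 TFLOPS" figure would come from):

```python
def peak_gflops(shaders: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Theoretical single-precision peak: shaders x FLOPs/clock (FMA = 2) x clock."""
    return shaders * flops_per_clock * clock_ghz

# Hypothetical 768-shader APU at an assumed ~0.85 GHz GPU clock.
print(f"{peak_gflops(768, 0.85) / 1000:.2f} TFLOPS")   # ~1.31 TFLOPS
```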


Nope. Game makers will take too long, and Intel is already putting unified memory on Skylake. AMD's heterogeneous advantage is going out the window as we speak. Also, Intel has the most powerful and efficient FPU in the industry in its cores, and Broadwell shaved 2 cycles off the divide and multiply instructions. That will translate to the GPU cores, and that 96-core Iris Pro 6200 is going to make Carrizo look awful. It's a 2 TFLOP solution against a 1.3 TFLOP solution, assuming AMD gets to 768 GPU cores. The A10 will be just fine for gaming for about 6 years, but after that AMD won't have any solution to compete with Intel. Still, yay heterogeneous chips!

Yeah, but Intel is not going to sell Iris Pro for $150. I'm a lot more excited about budget offerings than about the super high end.


Nope. Game makers will take too long, and Intel is already putting unified memory on Skylake. AMD's heterogeneous advantage is going out the window as we speak. Also, Intel has the most powerful and efficient FPU in the industry in its cores, and Broadwell shaved 2 cycles off the divide and multiply instructions. That will translate to the GPU cores, and that 96-core Iris Pro 6200 is going to make Carrizo look awful. It's a 2 TFLOP solution against a 1.3 TFLOP solution, assuming AMD gets to 768 GPU cores. The A10 will be just fine for gaming for about 6 years, but after that AMD won't have any solution to compete with Intel. Still, yay heterogeneous chips!

You're talking about the very expensive ($300+) Iris Pro 6200 and trying to compare it to AMD's current/future APUs, which will still be more budget-oriented. These aren't competing parts. Someone who only has $200 or less for an APU isn't even going to look at a $350+ i7, even if it performs better.

 

If you want to compare properly, you have to look at price point and target usage. Right now, as it stands, Intel has nothing to compete with the strength of AMD's APUs at the 7850K's price point and below. When Intel delivers a sub-$200 CPU with an iGPU strong enough to compete against the A10-7850K or its successor(s), then we'll talk.

