DDR4 3000MHz CL15 from G.Skill, and it's cheaper than DDR3

LukaP

I won't need to upgrade anytime soon, as my 4790K and 16GB of RAM will suit me for quite a while.


Each CL is one cycle; with higher frequency each cycle gets shorter, but you have more cycles to wait. Basically, 2x the frequency needs 2x the CL for the same latency (and if the CL goes up by less than 2x, the RAM performs better across the board).

So DDR3-1600 (an 800MHz clock, since DDR transfers twice per cycle) means one cycle is 1.25ns, which makes CL9 work out to 11.25ns.

At DDR4-3000 (a 1500MHz clock) one cycle is 0.67ns, so CL15 is 10ns.

So this is better than DDR3 1600MHz CL9; it's basically the same as CL8, apart from bandwidth-hungry scenarios (APUs).

I do not know how you calculated that, but by your calculations my RAM that runs at 2133MHz CL9 is way better than DDR4 3000MHz at CL15.


I do not know how you calculated that, but by your calculations my RAM that runs at 2133MHz CL9 is way better than DDR4 3000MHz at CL15.

It is, for normal tasks, which aren't bandwidth dependent.

 

And the calculation:

 

CLx means the latency is x cycles. One cycle is 1/frequency (keeping in mind the actual clock is half the advertised DDR transfer rate), then you just multiply.
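For anyone who wants to check the math themselves, here's a minimal sketch of that calculation (the kit figures are just the ones discussed in this thread):

```cpp
// First-word latency = CAS cycles / actual clock. The advertised "MHz" on
// a DDR kit is the transfer rate; the clock driving CAS cycles is half of
// it, because DDR moves data twice per cycle.
#include <cstdio>

double latency_ns(double transfer_mts, int cas) {
    double clock_mhz = transfer_mts / 2.0;  // DDR: two transfers per cycle
    return cas * 1000.0 / clock_mhz;        // cycles * ns per cycle
}

int main() {
    printf("DDR3-1600 CL9:  %5.2f ns\n", latency_ns(1600, 9));  // 11.25 ns
    printf("DDR3-1600 CL8:  %5.2f ns\n", latency_ns(1600, 8));  // 10.00 ns
    printf("DDR3-2133 CL9:  %5.2f ns\n", latency_ns(2133, 9));  //  8.44 ns
    printf("DDR4-3000 CL15: %5.2f ns\n", latency_ns(3000, 15)); // 10.00 ns
    return 0;
}
```

The CL8 line shows why DDR4-3000 CL15 is "basically the same as CL8" on DDR3-1600: both land at exactly 10ns.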

Each CL is one cycle; with higher frequency each cycle gets shorter, but you have more cycles to wait. Basically, 2x the frequency needs 2x the CL for the same latency (and if the CL goes up by less than 2x, the RAM performs better across the board).

 

So DDR3-1600 (an 800MHz clock, since DDR transfers twice per cycle) means one cycle is 1.25ns, which makes CL9 work out to 11.25ns.

 

At DDR4-3000 (a 1500MHz clock) one cycle is 0.67ns, so CL15 is 10ns.

 

So this is better than DDR3 1600MHz CL9; it's basically the same as CL8, apart from bandwidth-hungry scenarios (APUs).

A million times this!

It's really annoying how people go "baww so high latency" when it's actually pretty damn good for a first gen product. It has almost twice the bandwidth of 1600MHz CL9, and slightly lower latency.

I think 1600MHz CL9 RAM is usually the sweet spot (unless you're getting an APU) so once these DDR4 kits start dropping in price this will be a very attractive config.
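To put rough numbers on "almost twice the bandwidth": peak theoretical bandwidth per module is just the transfer rate times the 8-byte (64-bit) bus width. A quick sketch (real-world figures will be lower):

```cpp
// Peak per-module bandwidth = transfers/s * 8 bytes (64-bit bus).
#include <cstdio>

double bandwidth_gbs(double transfer_mts) {
    return transfer_mts * 8.0 / 1000.0;  // MT/s * bytes -> GB/s
}

int main() {
    double ddr3 = bandwidth_gbs(1600);  // 12.8 GB/s
    double ddr4 = bandwidth_gbs(3000);  // 24.0 GB/s
    printf("DDR3-1600: %.1f GB/s, DDR4-3000: %.1f GB/s (%.2fx)\n",
           ddr3, ddr4, ddr4 / ddr3);    // 1.88x -- "almost twice"
    return 0;
}
```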


Damn. That's fast.


Yeah, 3000MHz on DDR4 is not the same as on DDR3, though. I feel like it'd be better to compare 4300MHz DDR4 and 3000MHz DDR3, without even considering the timings.

Edit: What I mean is that it's not like 3000MHz was easy to get to on DDR3. I've got a feeling that down the road we will see much higher-frequency DDR4 that'll really dig into people's wallets for the performance they're actually after.


A million times this!

It's really annoying how people go "baww so high latency" when it's actually pretty damn good for a first gen product. It has almost twice the bandwidth of 1600MHz CL9, and slightly lower latency.

I think 1600MHz CL9 RAM is usually the sweet spot (unless you're getting an APU) so once these DDR4 kits start dropping in price this will be a very attractive config.

I think people aren't expecting them to hit CL12 at anything above 2800MHz on DDR4 for a long time, though. That, or it'll cost an arm and a leg, or maybe only as much as an E5 Xeon.


AMD is dancing right now, developing their future APUs... :P


I bet this would really boost GPU performance on an APU, kinda wanna see benchmarks lol.

 

Dislike dislike...

 

Hate me, but this is so stupid. Wake up, people, from your APU dreams: you'd buy memory worth $200-400 to boost a crap APU's speed? Buy a freaking dedicated GPU in that price range and it will beat any APU 10x in performance, geez.


Dislike dislike...

 

Hate me, but this is so stupid. Wake up, people, from your APU dreams: you'd buy memory worth $200-400 to boost a crap APU's speed? Buy a freaking dedicated GPU in that price range and it will beat any APU 10x in performance, geez.

 

We're talking about the concept, not the cost. 

 

Of course DDR4 is going to be super-expensive when it's new, and thus probably won't be a cost-effective solution. When it drops in price and becomes the standard, however, it could very well end up being very cost-effective. The fact remains: faster RAM with more bandwidth = better APU performance. That's all we're saying. Calm yourself down. lol


All RAM does for an APU is make it so the PC can actually run games; there's hardly anything on the chip to use. It means that eventually you'll see nothing happen (especially at lower resolutions) from adding more or faster RAM: the crappy GPU cores inside are going to bottleneck you, or the CPU cores will if they don't (which is unlikely, since the GPU cores in AMD's APUs are so weak right now).

 

 

This is why benchmarks suck even with 2133MHz+ on an APU. Sure, it's better than 1600MHz, but that's like comparing an 800MHz card with 1GB of VRAM to an 1100MHz card with 2GB.


All RAM does for an APU is make it so the PC can actually run games; there's hardly anything on the chip to use. It means that eventually you'll see nothing happen (especially at lower resolutions) from adding more or faster RAM: the crappy GPU cores inside are going to bottleneck you, or the CPU cores will if they don't (which is unlikely, since the GPU cores in AMD's APUs are so weak right now).

 

 

This is why benchmarks suck even with 2133MHz+ on an APU. Sure, it's better than 1600MHz, but that's like comparing an 800MHz card with 1GB of VRAM to an 1100MHz card with 2GB.

Benchmarks show Intel's Iris Pro 5200 matching the GTX 650M everywhere until you add in anti-aliasing. Intel is going to more than double the core count on the top end of Broadwell chips. You really think iGPUs aren't going to eclipse the mid-range of dGPUs before the decade closes? Intel isn't just sitting on its ass. It's taking the fight to AMD on the APU side. Skylake implementing unified memory is all the proof you need. Once Intel increases its triangle budget per core, we're going to see something which can compete with a 770 on even footing. Of course, Intel can only add more GPU cores with die shrinks, but do you seriously think Nvidia and AMD can keep scaling dGPUs the way they have been? They're reaching the limits of Amdahl's law of parallelism. We're in the realm of diminishing returns on increasing core counts going forward.
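For what it's worth, the diminishing returns invoked here fall straight out of Amdahl's law: speedup(n) = 1 / ((1 - p) + p/n) for parallel fraction p. A quick sketch (the 95% parallel fraction is just an illustrative assumption):

```cpp
// Amdahl's law: even a small serial fraction caps total speedup.
#include <cstdio>

double speedup(double p, int n) {  // p = parallel fraction, n = cores
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    double p = 0.95;               // assume 95% of the work parallelizes
    for (int n = 2; n <= 8192; n *= 4)
        printf("%5d cores: %5.1fx\n", n, speedup(p, n));
    // tops out near 1/(1-p) = 20x no matter how many cores you add
    return 0;
}
```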

 

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/7 That's 40 cores. Broadwell will bring that up to 96. Skylake will bring GPU updates but likely the same core count, and it will bring unified memory: no more copying. Nvidia's days are numbered, and oddly enough it comes down to the stubbornness of AMD's CEO Ruiz, who wouldn't let himself be #2 in the originally desired AMD-Nvidia merger. That means AMD considered ATI second-tier, but it's going to outlive Nvidia, because Nvidia can't do ARM or x86 well. It can't do SoC design well. How funny.


Because I got my new PC a year ago, DDR4 is not relevant for me at all. The only thing I plan on upgrading within the next 4 years is my GPU. By the time I'm looking at my next PC, I think DDR4 will be on a whole different level, where we will be able to see 4000-5000MHz kits with CAS 17-20 for a fraction of the price.


All RAM does for an APU is make it so the PC can actually run games; there's hardly anything on the chip to use. It means that eventually you'll see nothing happen (especially at lower resolutions) from adding more or faster RAM: the crappy GPU cores inside are going to bottleneck you, or the CPU cores will if they don't (which is unlikely, since the GPU cores in AMD's APUs are so weak right now).

 

 

This is why benchmarks suck even with 2133MHz+ on an APU. Sure, it's better than 1600MHz, but that's like comparing an 800MHz card with 1GB of VRAM to an 1100MHz card with 2GB.

Thank you, Captain Obvious. :huh: Any CPU/GPU/APU needs RAM to do anything, and what do you mean by "there's hardly anything on the chip to use"?

 

The ideal scenario would be to use faster RAM with enough bandwidth that it isn't the bottleneck for the iGPU in an APU (currently, it is). That's why dGPUs have very fast RAM with a lot of bandwidth. DDR4 is only going to improve APU performance in gaming, and you want the RAM to be fast enough that it stops being the limiting factor. The GPU should be the bottleneck, because that means you're using the full capabilities of that GPU.

 

You say "the GPU cores are so weak right now in AMD's APUs", yet they are the most powerful and capable iGPUs to date and blow Intel's current HD Graphics right out of the water. The CPU cores in APUs will never bottleneck the iGPU. Assuming no CPU architectural improvements, you'd have to run the equivalent of an R9 280X/290 or higher to push the A10-7850K's limits, and AMD would never pair a CPU/GPU like this on the same die. By the time APU iGPUs get anywhere near the equivalent of a 290 (assuming they even get that far), CPU advancements will have continued alongside over time, and thus CPU bottlenecking in an APU will never be an issue.

 

You say the "benchmarks suck even with 2133MHz on an APU", but compared to what? Intel's HD iGPU benchmarks are a joke compared to AMD's APUs. Anyone can make statements, but without context, such statements are meaningless and smell of fanboyism. :P


Thank you, Captain Obvious. :huh: Any CPU/GPU/APU needs RAM to do anything, and what do you mean by "there's hardly anything on the chip to use"?

 

The ideal scenario would be to use faster RAM with enough bandwidth that it isn't the bottleneck for the iGPU in an APU (currently, it is). That's why dGPUs have very fast RAM with a lot of bandwidth. DDR4 is only going to improve APU performance in gaming, and you want the RAM to be fast enough that it stops being the limiting factor. The GPU should be the bottleneck, because that means you're using the full capabilities of that GPU.

 

You say "the GPU cores are so weak right now in AMD's APUs", yet they are the most powerful and capable iGPUs to date and blow Intel's current HD Graphics right out of the water. The CPU cores in APUs will never bottleneck the iGPU. Assuming no CPU architectural improvements, you'd have to run the equivalent of an R9 280X/290 or higher to push the A10-7850K's limits, and AMD would never pair a CPU/GPU like this on the same die. By the time APU iGPUs get anywhere near the equivalent of a 290 (assuming they even get that far), CPU advancements will have continued alongside over time, and thus CPU bottlenecking in an APU will never be an issue.

 

You say the "benchmarks suck even with 2133MHz on an APU", but compared to what? Intel's HD iGPU benchmarks are a joke compared to AMD's APUs. Anyone can make statements, but without context, such statements are meaningless and smell of fanboyism. :P

Um, no, let's take a step back, please: http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/8

 

And that's only 40 Intel cores. That count reaches 96 on the top Broadwell SKUs. Iris Pro 5200 beats out the A10-5800 handily. The 6200 is going to make Carrizo look like a joke, especially when Skylake adds unified memory. AMD's APUs are at about 700GFlops. GT4e is rated at 2TFlops. AMD is in big trouble and they know it.


Um, no, let's take a step back, please: http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/8

 

And that's only 40 Intel cores. That count reaches 96 on the top Broadwell SKUs. Iris Pro 5200 beats out the A10-5800 handily. The 6200 is going to make Carrizo look like a joke, especially when Skylake adds unified memory. AMD's APUs are at about 700GFlops. GT4e is rated at 2TFlops. AMD is in big trouble and they know it.

You do know that is still a VLIW-based APU? Kaveri/Carrizo is GCN 1.1, which is way more potent.

You do know that is still a VLIW-based APU? Kaveri/Carrizo is GCN 1.1, which is way more potent.

Except it's not, in terms of compute. In terms of geometry/rendering, yes. In terms of heterogeneous computation power? Oh hell no. GCN loses out 4 to 1 on scaling to Intel on that front.

 

Sorry, I misinterpreted which side you were coming from. Even so, the 7850K is not way ahead of Iris Pro in gameplay. If it were fed correctly, maybe it would be. I don't know. In terms of computational power, though, the 7850K is 856GFlops. A fully outfitted Broadwell crushes it by 5:2, or 2.5 to 1.


Except it's not, in terms of compute. In terms of geometry/rendering, yes. In terms of heterogeneous computation power? Oh hell no. GCN loses out 4 to 1 on scaling to Intel on that front.

And yet we are talking about gaming here, no? There was never any talk about compute. If you want to do real compute, you will never go for an APU anyway...

And yet we are talking about gaming here, no? There was never any talk about compute. If you want to do real compute, you will never go for an APU anyway...

Oh yes you will. Latency is a huge issue when copying everything over to the GPU, and you need a lot of CPU cores to do it efficiently. APUs, thanks to Intel, will grow extremely powerful. AMD pursued HSA not for gaming but to break into the scientific computing world, where Intel has an iron grip, because that's a far bigger market. Intel realized the threat and is moving on it.


Oh yes you will. Latency is a huge issue when copying everything over to the GPU, and you need a lot of CPU cores to do it efficiently. APUs, thanks to Intel, will grow extremely powerful. AMD pursued HSA not for gaming but to break into the scientific computing world, where Intel has an iron grip, because that's a far bigger market. Intel realized the threat and is moving on it.

Ever heard of compute cards? And compute languages? I'll give you a hint: Tesla, Xeon Phi, CUDA, OpenCL. Go research those, then explain to me how handing a task to a compute card and letting it compute with no interaction with the host could induce so much latency that we'd want the host to do the computing. When you do, I will commend you on creating a problem that was never seen until today by Intel/AMD/Nvidia, who have been producing separate massive compute cards for ages instead of just making a powerful compute host and having many nodes of those, with no real host...

 

cya when you realise you are wrong. bye bye :)

I didn't see this before, I'm late, but... Damn, this is good for a release :D

I expect good things for DDR4... in the future :)

Noob, you should've had me subscribed; most of my news is and will be HwE, DDR4, and the new Xeon E7 v3 when they come ;)

Ever heard of compute cards? And compute languages? I'll give you a hint: Tesla, Xeon Phi, CUDA, OpenCL. Go research those, then explain to me how handing a task to a compute card and letting it compute with no interaction with the host could induce so much latency that we'd want the host to do the computing. When you do, I will commend you on creating a problem that was never seen until today by Intel/AMD/Nvidia, who have been producing separate massive compute cards for ages instead of just making a powerful compute host and having many nodes of those, with no real host...

 

cya when you realise you are wrong. bye bye :)

I'm a highly proficient GPGPU programmer, and yes, I know how to use a Tesla compute card via CUDA, my favorite GPGPU language. I know a decent amount of OpenCL and OpenGL. The latency comes from the CPU having to continuously copy every piece of data the program touches over to the GPU. What unified memory on an APU does is let the CPU pass a starting address to the iGPU and then skip to something the CPU can do while the GPU takes off. Equal-citizen scheduling lets any core take on ANY task it can do and lets the others skip to something else. That's gonna be pretty.
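For what it's worth, the copy-versus-unified-memory contrast described above looks roughly like this in CUDA C++ (a minimal sketch; the kernel and sizes are made up for illustration, and APU-style unified memory is approximated here by CUDA's managed memory):

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;

    // Discrete-GPU model: the host owns the data and must copy it both ways.
    float *h = (float *)calloc(n, sizeof(float));
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice); // copy in
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost); // copy out
    cudaFree(d); free(h);

    // Unified memory: one allocation visible to both sides. The CPU just
    // hands the kernel a pointer and moves on; no explicit copies.
    float *u;
    cudaMallocManaged(&u, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(u, 2.0f, n);
    cudaDeviceSynchronize();  // wait before the CPU touches u again
    printf("%f\n", u[0]);
    cudaFree(u);
    return 0;
}
```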

 

Also, look at the compute specs: http://www.nvidia.com/object/tesla-servers.html

Vs. 96 Intel EUs, which is 2TFlops: http://forums.anandtech.com/showthread.php?t=2317353&page=15.

Tesla loses.

 

Also, Intel is putting the Xeon Phi on a CPU die/socket. You can still use one separately, but Knights Landing takes the Phi and puts it on a CPU socket.


I'm a highly proficient GPGPU programmer, and yes, I know how to use a Tesla compute card via CUDA, my favorite GPGPU language. I know a decent amount of OpenCL and OpenGL. The latency comes from the CPU having to continuously copy every piece of data the program touches over to the GPU. What unified memory on an APU does is let the CPU pass a starting address to the iGPU and then skip to something the CPU can do while the GPU takes off. Equal-citizen scheduling lets any core take on ANY task it can do and lets the others skip to something else. That's gonna be pretty.

 

Also, look at the compute specs: http://www.nvidia.com/object/tesla-servers.html

Vs. 96 Intel EUs, which is 2TFlops. Tesla loses.

 

Also, Intel is putting the Xeon Phi on a CPU die/socket. You can still use one separately, but Knights Landing takes the Phi and puts it on a CPU socket.

I don't see how Intel's unreleased part with 2TFlops beats the Tesla with 4.29TFlops. Flops isn't even a good measurement for that; I mean, the Titan Black can do 5.1 TFlops, yet the Tesla will still be far better in a lot of scenarios.


I don't see how Intel's unreleased part with 2TFlops beats the Tesla with 4.29TFlops. Flops isn't even a good measurement for that; I mean, the Titan Black can do 5.1 TFlops, yet the Tesla will still be far better in a lot of scenarios.

I was talking double precision, but yes, in single precision the Tesla wins for now. But look at the clock speeds dropping like a stone as Nvidia puts more cores on, and then there's still all the time needed to load the program onto the card.

 

As for the Titan: yeah, because it's got all the geometry/render-to-display circuitry and the driver in there that the Tesla doesn't. We have no main measure of performance other than Flops.
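For context, the Flops figures being thrown around come from a simple formula: cores × clock × 2, since a fused multiply-add counts as two floating-point ops per cycle. A quick sketch, assuming the 4.29TFlops Tesla mentioned above is the K40 (2880 CUDA cores at a 745MHz base clock):

```cpp
// Peak FLOPS = cores * clock * 2 (a fused multiply-add = 2 flops/cycle).
#include <cstdio>

double peak_tflops(int cores, double clock_ghz) {
    return cores * clock_ghz * 2.0 / 1000.0;
}

int main() {
    double sp = peak_tflops(2880, 0.745);  // Tesla K40 at its base clock
    printf("K40 single precision: %.2f TFlops\n", sp);        // 4.29
    // GK110 runs double precision at 1/3 the single-precision rate.
    printf("K40 double precision: %.2f TFlops\n", sp / 3.0);  // 1.43
    return 0;
}
```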

 

But at some point accelerator cards are going to be useless except in the absolute biggest cases imaginable. If Nvidia survives another 15 years it'll be a miracle. At least AMD and Intel will still be here.

