
DDR4 3000MHz CL15 from G.Skill and it's cheaper than DDR3

LukaP

I was talking about double precision, but yes, in single precision the Tesla wins for now. Look at the clock speeds dropping like a stone as Nvidia puts more cores on, though, and then there's still all the time it takes to load the program onto the card.

Ehh... 2Tflops in double precision? Pretty sure the 40 EU model of iris at 1300MHz does about 800 GFlops in single precision, not double.

 

I would be very surprised if Nvidia dies in 15 years. No matter how much you try to make the latency sound like a big deal, the fact of the matter is that you can still get far higher performance out of a big dedicated GPU. The raw processing power is simply more valuable than the lower latency. Mining is a great example of this: you just send very complex instructions to the GPU, which computes them and sends back the result. How much you can do in parallel for such a task is far more important than the latency, and the same goes for things like tessellation and other graphics tasks. In those areas a dedicated GPU will always trump an integrated GPU, because it can be bigger and use more power.

 

 

As per the Titan, yeah, because it's got all the geometry/rendering-to-display circuitry and driver in there that the Tesla doesn't. We have no main measure of performance other than FLOPS.

How about benchmarks? Again, flops is not the be all and end all of performance metrics (as we can see in the Tesla vs Titan Black example).

We have other measurements as well. BLAS for example.


Um, no, let's take a step back please: http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/8

 

And that's only 40 Intel cores. That count reaches 96 on the top Broadwell SKUs. The Iris 5200 beats out the A10-5800 handily. The 6200 is going to make Carrizo look like a joke, especially when Skylake adds unified memory. AMD's APUs are at about 700 GFLOPS. GT4e is rated at 2 TFLOPS. AMD is in big trouble and they know it.
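For what it's worth, those peak-FLOPS figures fall out of a simple product: EU count x FLOPs per EU per clock x clock speed. A minimal sketch in C, assuming each Gen EU does 16 single-precision FLOPs per clock (two 4-wide FMAs, with an FMA counted as two operations); the 96-EU clock used below is an assumption for illustration only:

```c
/* Back-of-the-envelope peak single-precision GFLOPS for an Intel Gen iGPU.
 * Assumes 16 SP FLOPs per EU per clock; the 96-EU clock is a guess. */
#include <stdio.h>

static double peak_gflops(int eus, double flops_per_eu_per_clock, double ghz) {
    return eus * flops_per_eu_per_clock * ghz;
}

int main(void) {
    /* 40 EUs at 1.30 GHz -> ~832 GFLOPS, the "about 800" figure above. */
    printf("40 EU @ 1.30 GHz: %.0f GFLOPS\n", peak_gflops(40, 16.0, 1.30));
    /* 96 EUs at an assumed 1.15 GHz -> ~1766 GFLOPS, the 2 TFLOPS ballpark. */
    printf("96 EU @ 1.15 GHz: %.0f GFLOPS\n", peak_gflops(96, 16.0, 1.15));
    return 0;
}
```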

The i7-4950HQ (with the Iris 5200) is $650 and only performs a little better in gaming than an old trinity A10-5800k.

 

The A10-7850K is about $180 and (based on the benchmarks in the link you posted) probably trades blows with, if not outperforms, the 5200.

 

So the budget-minded gamer who can only spend ~$200 or less on a CPU and GPU combined isn't going to buy an i7-4950HQ. If they buy an entry-level i5, they get the lackluster HD 4000 series iGPU. Yeah, no thanks. :S

 

When you compare CPUs, APUs or GPUs, you should compare parts that are at least close to the same price point. It doesn't make sense to compare Iris pro to kaveri because it's way too expensive. Those two parts were not meant to compete with each other. It would make far more sense if Intel paired Iris pro with an i3 or i5 and had a price tag that was far more reasonable. 

My Systems:

Main - Work + Gaming:

Spoiler

Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:

Spoiler

FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:

Spoiler

SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:

Spoiler

MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:

Spoiler

Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 


You can get 3000MHz memory with CL12 right now. The only thing DDR4 has going for it right now is higher densities.

And lower voltages. Which is how it's been with every single revision of DDR.


And lower voltages. Which is how it's been with every single revision of DDR.

Forgot about that.



Ehh... 2Tflops in double precision? Pretty sure the 40 EU model of iris at 1300MHz does about 800 GFlops in single precision, not double.

 

I would be very surprised if Nvidia dies in 15 years. No matter how much you try to make the latency sound like a big deal, the fact of the matter is that you can still get far higher performance out of a big dedicated GPU. The raw processing power is simply more valuable than the lower latency. Mining is a great example of this: you just send very complex instructions to the GPU, which computes them and sends back the result. How much you can do in parallel for such a task is far more important than the latency, and the same goes for things like tessellation and other graphics tasks. In those areas a dedicated GPU will always trump an integrated GPU, because it can be bigger and use more power.

 

 

How about benchmarks? Again, flops is not the be all and end all of performance metrics (as we can see in the Tesla vs Titan Black example).

We have other measurements as well. BLAS for example.

Mining is dead outside of ASIC machines. Furthermore, look at the scaling. If Intel dedicated as much die space to its GPU as Nvidia did with the GK110, that's roughly 580 cores on the 22nm process. And the 2 TFLOPS figure is for the 96-core solution slotted for Broadwell, not the 40 we have right now. Are you telling me Nvidia is going to stay ahead of that growth rate? It can't. If it puts two GK110 chips on a single Tesla, it doubles the power while getting 1.7x the performance. Intel's scaling is much better.

 

And those benchmarks are always 3 years behind on available instructions. We've had AVX 2.0 for 3 years, and only this year did AIDA64 and Prime95 update to use it. iGPUs will be the end of the low-end, if not the mid-range, gaming dGPU before the close of the decade. dGPU prices will rise to combat this loss of revenue, with Nvidia losing market share. AMD will have the advantage of iGPU/dGPU Crossfire on improved GPU designs starting with the successor to Carrizo.
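For context, this is the sort of instruction in question. A minimal sketch using the FMA intrinsics that arrived alongside AVX2 on Haswell (standard <immintrin.h> intrinsics; build with something like -mavx2 -mfma on GCC or Clang):

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    /* r = a*b + c across 8 floats in one fused instruction -- the kind of
     * throughput a benchmark only sees once it adds an AVX2/FMA code path. */
    __m256 a = _mm256_set1_ps(2.0f);
    __m256 b = _mm256_set1_ps(3.0f);
    __m256 c = _mm256_set1_ps(1.0f);
    __m256 r = _mm256_fmadd_ps(a, b, c);

    float out[8];
    _mm256_storeu_ps(out, r);
    printf("%f\n", out[0]); /* prints 7.000000 */
    return 0;
}
```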

 

As per Intel's double precision performance, don't forget Intel has the fastest FPU in the industry. There are 5 of them per EU, each 256 bits wide. Suddenly that's not so unbelievable. 

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


The i7-4950HQ (with the Iris 5200) is $650 and only performs a little better in gaming than an old trinity A10-5800k.

 

The A10-7850K is about $180 and (based on the benchmarks in the link you posted) probably trades blows with, if not outperforms, the 5200.

 

So the budget-minded gamer who can only spend ~$200 or less on a CPU and GPU combined isn't going to buy an i7-4950HQ. If they buy an entry-level i5, they get the lackluster HD 4000 series iGPU. Yeah, no thanks. :S

 

When you compare CPUs, APUs or GPUs, you should compare parts that are at least close to the same price point. It doesn't make sense to compare Iris pro to kaveri because it's way too expensive. Those two parts were not meant to compete with each other. It would make far more sense if Intel paired Iris pro with an i3 or i5 and had a price tag that was far more reasonable. 

No...FREAKING REALLY?! Broadwell will bring Iris Pro 6200 to desktop at the same standard $350 price their last flagship I7 chips have launched with. Jesus H Christ I'm not plum stupid!

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


No...FREAKING REALLY?! Broadwell will bring Iris Pro 6200 to desktop at the same standard $350 price their last flagship I7 chips have launched with. Jesus H Christ I'm not plum stupid!

not actually convinced of that. from the start you have been turning this argument around. first you start with "whatever the pre-Kaveri APUs were", then you go straight to beating a Tesla K40 with an integrated GPU. all the while you were switching from gaming to compute whenever it suited you. then you bring a next-gen Iris Pro in, which isn't even in production yet.

 

you just keep digging. now please stop shitposting and derailing my thread about DDR4 thank you :)

 

and actually learn what you are talking about *shotsfired*

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


not actually convinced of that. from the start you have been turning this argument around. first you start with "whatever the pre-Kaveri APUs were", then you go straight to beating a Tesla K40 with an integrated GPU. all the while you were switching from gaming to compute whenever it suited you. then you bring a next-gen Iris Pro in, which isn't even in production yet.

 

you just keep digging. now please stop shitposting and derailing my thread about DDR4 thank you :)

 

and actually learn what you are talking about *shotsfired*

I didn't switch anything. It was always about compute performance, which can in general predict graphics performance. Iris Pro 6200 is already in production as well. Now, if Intel continues its doubling of cores with every die shrink, iGPUs will eclipse Tesla performance by the successor to Cannonlake.

 

The Intel Iris Pro 5200 is about 10% less powerful than Kaveri. The entry-level Broadwell-Y chip went from 20 cores to 24. 24 cores per slice x 4 slices = 96. It's inevitably going to arrive in top-end Broadwell. You people really need to stop underestimating the company which was first to the iGPU and has taken over the world of servers and supercomputing.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I didn't switch anything. It was always about compute performance, which can in general predict graphics performance. Iris Pro 6200 is already in production as well. Now, if Intel continues its doubling of cores with every die shrink, iGPUs will eclipse Tesla performance by the successor to Cannonlake.

 

The Intel Iris Pro 5200 is about 10% less powerful than Kaveri. The entry-level Broadwell-Y chip went from 20 cores to 24. 24 cores per slice x 4 slices = 96. It's inevitably going to arrive in top-end Broadwell. You people really need to stop underestimating the company which was first to the iGPU and has taken over the world of servers and supercomputing.

you're forgetting that by Cannonlake (i reckon at least 2017) Nvidia will be at Pascal (or maybe further) and their Tesla line will be too, so their compute will also go up... as will Intel's, and inevitably AMD's and ARM's too. you're looking at this from too narrow a point of view.

 

i'm not underestimating Intel, i'm just telling you that no matter what, an iGPU will be limited by the fact that it has a CPU by its side, and that their combined TDP can be at most some 80W (Intel's mainstream now).

 

and compute is rarely a measure of gaming performance... ever heard of driver optimisations? 

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


you're forgetting that by Cannonlake (i reckon at least 2017) Nvidia will be at Pascal (or maybe further) and their Tesla line will be too, so their compute will also go up... as will Intel's, and inevitably AMD's and ARM's too. you're looking at this from too narrow a point of view.

 

i'm not underestimating Intel, i'm just telling you that no matter what, an iGPU will be limited by the fact that it has a CPU by its side, and that their combined TDP can be at most some 80W (Intel's mainstream now).

 

and compute is rarely a measure of gaming performance... ever heard of driver optimisations? 

Nvidia won't redouble their core count in that time (which wouldn't even double performance anyway). Neither will AMD. The heat density on their GPU architecture is way too high to pull that off just by moving to 20nm. Pascal is going to be Kepler all over again, not an improved Maxwell.

 

Who needs driver optimizations when your circuitry supports OpenCL directly? iGPU has a huge advantage. In its perfect form (HSA) it never has to be passed a task. It just takes it right from memory. When it doesn't need to be constantly fed, it runs at maximum throughput whereas a Tesla can't.
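To make the zero-copy point concrete, here is a minimal OpenCL host-side sketch in C: one buffer backed by the application's own allocation (the integrated-GPU/HSA-style path) versus an explicit copy across the bus (the typical discrete-GPU path). It assumes an OpenCL 1.x runtime and skips most error handling:

```c
/* Sketch only: contrasts CL_MEM_USE_HOST_PTR (runtime may use the host
 * allocation directly, avoiding a copy on iGPUs that share physical memory)
 * with an explicit clEnqueueWriteBuffer transfer to device memory. */
#define CL_USE_DEPRECATED_OPENCL_1_2_APIS
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    const size_t n = 1 << 20;
    float *host_data = malloc(n * sizeof(float));
    for (size_t i = 0; i < n; ++i) host_data[i] = (float)i;

    /* iGPU-style path: back the buffer with the existing host allocation. */
    cl_mem zero_copy = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR,
                                      n * sizeof(float), host_data, &err);

    /* dGPU-style path: allocate device memory and push the data over the bus
     * before any kernel can touch it. */
    cl_mem copied = clCreateBuffer(ctx, CL_MEM_READ_ONLY,
                                   n * sizeof(float), NULL, &err);
    clEnqueueWriteBuffer(queue, copied, CL_TRUE, 0, n * sizeof(float),
                         host_data, 0, NULL, NULL);

    printf("buffers created: zero-copy %p, copied %p\n",
           (void *)zero_copy, (void *)copied);

    clReleaseMemObject(zero_copy);
    clReleaseMemObject(copied);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    free(host_data);
    return 0;
}
```

Whether CL_MEM_USE_HOST_PTR actually avoids the copy is up to the driver, but on an iGPU sharing physical memory it generally can, which is the advantage being argued here.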

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Nvidia won't redouble their core count in that time. Neither will AMD. The heat density on their GPU architecture is way too high to pull that off just by moving to 20nm.

 

Who needs driver optimizations when your circuitry supports OpenCL directly?

no need to double the core count when you can introduce new in-hardware solutions to certain problems (examples from the past: the tessellation engine, the H.264 encoder, etc.)

 

and since when is oCL a rendering engine? 

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


no need to double the core count when you can introduce new in-hardware solutions to certain problems (examples from the past: the tessellation engine, the H.264 encoder, etc.)

 

and since when is oCL a rendering engine? 

It doesn't matter. If a tessellation engine is all Nvidia's got as an advantage (not far from the truth), then Intel's iGPU will swallow them whole. What other major problems do graphics systems have?

 

As per oCL, are you serious? That's all it was originally designed to be. OpenGL and the programmable shader were a separate matter, but oCL is at its heart the entire rendering engine. Now, anything that can be reduced to matrix algebra can be abstracted as rendering, which is why OpenCL is great for scientific computing, but that's beyond its intended design. oCL is a rendering language. It gets translated to a virtual assembly language by a driver to take advantage of GPU architectures, which is translated again to microcode, but there's not much more to it than that.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


It doesn't matter. If a tessellation engine is all Nvidia's got as an advantage (not far from the truth), then Intel's iGPU will swallow them whole. What other major problems do graphics systems have?

 

As per oCL, are you serious? That's all it was originally designed to be. OpenGL and the programmable shader were a separate matter, but oCL is at its heart the entire rendering engine. Now, anything that can be reduced to matrix algebra can be abstracted as rendering, which is why OpenCL is great for scientific computing, but that's beyond its intended design. oCL is a rendering language. It gets translated to a virtual assembly language by a driver to take advantage of GPU architectures, which is translated again to microcode, but there's not much more to it than that.

it's not all it has as an advantage now, and they will implement more when needed. you seem to be assuming a GK110 is the most NV could do at this moment. man, you are wrong...

 

i do realise that, i meant it more as "since when was oCL ever used as a renderer for a game?" never. you use D3D or oGL, because drivers are already optimised enough to have it working with no problem.

 

try rendering a scene in an oCL renderer, and you will see how taxing that is. oh, and the fact it's called Open COMPUTE Language may indicate what it's meant to be used for...

 

now please, stop saying stupid things, stop quoting me, for i am done. no use trying to correct your "knowledge" since you are so set in your ways. 

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


*smacks table*

 

I want shit to work well the first time, is that too much to complain about? :P

....... looking at you Battlefield 4 :D

AMD FX-8350 // ASUS Radeon R9 280X Matrix // ASUS M5A97 Pro // Corsair Vengance 8GB 1600MHz // Corsair RM850 PSU //  WD Green 2TB // Corsair H60 // Cooler Master Elite 430 // KBParadise V60 MX Blue // Logitech G602 // Sennheiser HD 598 + Focusrrrrite 2i2 + MXL V67 // Samsung SyncMaster 245BW 1920x1200 // #killedmywife  #afterdark  #makebombs #Twerkit      "it touches my junk"   linus 2014


Mining is dead outside of ASIC machines. Furthermore, look at the scaling. If Intel dedicated as much die space to its GPU as Nvidia did with the GK110, that's roughly 580 cores on the 22nm process. And the 2 TFLOPS figure is for the 96-core solution slotted for Broadwell, not the 40 we have right now. Are you telling me Nvidia is going to stay ahead of that growth rate? It can't. If it puts two GK110 chips on a single Tesla, it doubles the power while getting 1.7x the performance. Intel's scaling is much better.

I used mining as an example of a workload where raw performance is far more important than the latency (which barely matters at all). It's just one example out of many.

Yes I think they will stay ahead. They are already far ahead in terms of performance (when looking at performance in programs, not just blindly staring at the flops, where Intel is behind quite a lot as well). I mean, you can't possibly say that the iGPU will be as powerful as a dedicated GPU. It simply won't happen. In the end they are the same, except one has slightly lower latency but far bigger size and power consumption restrictions.

More restrictions won't translate into higher performance. That makes no sense.

 

 

And those benchmarks are always 3 years behind on available instructions. We've had AVX 2.0 for 3 years, and only this year did AIDA64 and Prime95 update to use it. iGPUs will be the end of the low-end, if not the mid-range, gaming dGPU before the close of the decade. dGPU prices will rise to combat this loss of revenue, with Nvidia losing market share. AMD will have the advantage of iGPU/dGPU Crossfire on improved GPU designs starting with the successor to Carrizo.

Sadly I can't prove you wrong because you're just making assumptions. We will just have to wait and see.

Not sure why you're bringing up AVX, because that's an instruction set extension for CPUs, not GPUs. Also, AVX2 was introduced in 2013 (launched with Haswell), so we have had it for slightly more than 1 year, not 3.

 

 

As per Intel's double precision performance, don't forget Intel has the fastest FPU in the industry. There are 5 of them per EU, each 256 bits wide. Suddenly that's not so unbelievable. 

And that means it will suffer in other areas compared to competitors who spend that die area on other types of logic.

 

All this before we even start talking about Xeon Phi. Do you really think Intel will be able to make iGPUs as powerful as their Xeon Phis? That's just ridiculous.

 

 

Who needs driver optimizations when your circuitry supports OpenCL directly? iGPU has a huge advantage. In its perfect form (HSA) it never has to be passed a task. It just takes it right from memory. When it doesn't need to be constantly fed, it runs at maximum throughput whereas a Tesla can't.

Pretty sure Tesla can in fact access main memory directly.


I used mining as an example of a workload where raw performance is far more important than the latency (which barely matters at all). It's just one example out of many.

Yes I think they will stay ahead. They are already far ahead in terms of performance (when looking at performance in programs, not just blindly staring at the flops, where Intel is behind quite a lot as well). I mean, you can't possibly say that the iGPU will be as powerful as a dedicated GPU. It simply won't happen. In the end they are the same, except one has slightly lower latency but far bigger size and power consumption restrictions.

More restrictions won't translate into higher performance. That makes no sense.

 

 

Sadly I can't prove you wrong because you're just making assumptions. We will just have to wait and see.

Not sure why you're bringing up AVX, because that's an instruction set extension for CPUs, not GPUs. Also, AVX2 was introduced in 2013 (launched with Haswell), so we have had it for slightly more than 1 year, not 3.

 

 

And that means it will suffer in other areas compared to competitors who spend that die area on other types of logic.

 

All this before we even start talking about Xeon Phi. Do you really think Intel will be able to make iGPUs as powerful as their Xeon Phis? That's just ridiculous.

 

 

Pretty sure Tesla can in fact access main memory directly.

It can, after passing through two buses to get to it. At that point bandwidth becomes an issue all over again. In server/supercomputer applications, where you have exotic architecture like fabrics and now the Hybrid Memory Cube in 2014 supercomputers, this isn't too bad a solution. That said, I question how well that performance can scale on consumer platforms without exotic memory solutions. There is one reason SoCs can trump dGPUs though: it's much easier to cool one central unit than to have air work on so many different parts of the board, especially once we start working with graphene heatsinks. The proofs of concept have been stunning so far.

 

But Intel's compute model is far more advanced than Nvidia's as well. Nvidia is great for 3D rendering and I love the simplicity of CUDA, but I think the GPU naysayers are going to be burned as much as the original server system builders were. Everyone who has ever said Intel would never be able to compete in some market involving computing has been proven dead wrong a couple of decades later.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


No...FREAKING REALLY?! Broadwell will bring Iris Pro 6200 to desktop at the same standard $350 price their last flagship I7 chips have launched with. Jesus H Christ I'm not plum stupid!

LOL! You're still talking about a $350 part vs a $180 part. i7s and Kaveris are not competing parts and are intended for very different target markets.

 

*facepalm*

My Systems:

Main - Work + Gaming:

Spoiler

Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:

Spoiler

FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:

Spoiler

SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:

Spoiler

MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:

Spoiler

Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 


You had better put REALLY good iGPUs (I don't mean that the current ones are bad ;)) in your future APUs for DDR4...

What AMD needs to do is stack memory on package. They can easily integrate HBM into Carrizo, and it would be cheaper than adding L3. Even with HBM on package, the Carrizo die size will still be smaller than Kaveri's. I wouldn't mind paying $200 for an A10-8850K if it had Excavator, 640 GCN 2.0 cores, and HBM on package (128 GB/s of memory bandwidth). It would destroy a discrete R7 250X.
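For reference, the 128 GB/s figure is what a single first-generation HBM stack works out to. A minimal sketch of the peak-bandwidth arithmetic (HBM1 nominal numbers: a 1024-bit interface at 1 Gb/s per pin), with dual-channel DDR3-2133 shown for comparison:

```c
/* Peak bandwidth = bus width (bits) x per-pin data rate (Gb/s) / 8. */
#include <stdio.h>

static double peak_gb_per_s(int bus_width_bits, double gbit_per_pin) {
    return bus_width_bits * gbit_per_pin / 8.0;
}

int main(void) {
    /* One HBM1 stack: 1024 bits at 1 Gb/s per pin -> 128 GB/s. */
    printf("1x HBM1 stack:          %.1f GB/s\n", peak_gb_per_s(1024, 1.0));
    /* Dual-channel DDR3-2133: 128 bits at 2.133 Gb/s per pin -> ~34 GB/s. */
    printf("dual-channel DDR3-2133: %.1f GB/s\n", peak_gb_per_s(128, 2.133));
    return 0;
}
```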


I can see DDR4 really boosting APU performance and progress. There may soon be very little necessity for low-end dGPUs and it'll be beneficial to have those iGPU cores with HSA. No wasted die space, even with an added dGPU.

 

These are very interesting times in PC hardware development. :)

My Systems:

Main - Work + Gaming:

Spoiler

Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:

Spoiler

FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:

Spoiler

SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:

Spoiler

MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:

Spoiler

Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 


Yeah... but if they don't do it, a crazy way to do it would be quad channel at 3200+ MHz... but that would be expensive xD

They should bring quad channel to their APU platform at the least. Though Carrizo will run on the FM2+ socket, I have my doubts about seeing DDR4 support until later next year.


They should bring quad channel to their APU platform at the least. Though Carrizo will run on the FM2+ socket, I have my doubts about seeing DDR4 support until later next year.

Carrizo also doesn't have a process shrink. It will have the same die area as Kaveri.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


each CL is one cycle; with higher frequency each cycle gets shorter and shorter, but you have more and more cycles to wait. basically 2x the frequency needs 2x the CL for the same performance (and if the CL goes up by less than 2x, the RAM performs better all the time).

 

so 1600 means one cycle is 6.25ns, which makes CL9 be 56.25ns

 

3000 one cycle is 3.33ns, cl15 is then 50ns.

 

so this is better than DDR3 1600MHz CL9; it's basically the same as CL8, apart from bandwidth-hungry scenarios (APUs).

 

it's a new technology. it was the same with ddr3...

 

Your math seems to be off.

 

DDR3-1600 runs at 800 MHz, i.e. 8 x 10^8 cycles/sec

DDR4-3000 runs at 1500 MHz, i.e. 1.5 x 10^9 cycles/sec

 

One clock cycle for DDR3-1600 takes 1 cycle / (8 x 10^8 cycles/sec) = 1.25 x 10^-9 sec = 1.25 nanoseconds

One clock cycle for DDR4-3000 takes 1 cycle / (1.5 x 10^9 cycles/sec) = 6.67 x 10^-10 sec = 0.67 nanoseconds

 

So the CAS latency of DDR3-1600 CL9 is 9 x 1.25 ns = 11.25 ns

And the CAS latency of DDR4-3000 CL15 is 15 x 0.67 ns = 10 ns
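The same arithmetic as a small helper for checking other kits; a minimal sketch (the function name is just illustrative). DDR transfers twice per I/O clock, so one cycle lasts 2000 / (data rate in MT/s) nanoseconds:

```c
#include <stdio.h>

/* CAS latency in nanoseconds from the DDR data rate (MT/s) and the CL. */
static double cas_ns(double data_rate_mtps, int cl) {
    double cycle_ns = 2000.0 / data_rate_mtps; /* I/O clock is half the data rate */
    return cl * cycle_ns;
}

int main(void) {
    printf("DDR3-1600 CL9:  %.2f ns\n", cas_ns(1600.0, 9));  /* 11.25 ns */
    printf("DDR4-3000 CL15: %.2f ns\n", cas_ns(3000.0, 15)); /* 10.00 ns */
    return 0;
}
```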


Your math seems to be off.

 

DDR3-1600 runs at 800 MHz, ie. 8 x 10^8 cycles/sec

DDR4-3000 runs at 1500 MHz, ie. 1.5 x 10^9 cycles/sec

 

One clock cycle for DDR3-1600 takes 1 cycle / (8 x 10^8 cycles/sec) = 1.25 x 10^-9 sec = 1.25 nanoseconds

One clock cycle for DDR4-3000 takes 1 cycle / (1.5 x 10^9 cycles/sec) = 6.67 x 10^-10 sec = 0.67 nanoseconds

 

So the CAS latency of DDR3-1600 CL9 is 9 x 1.25 ns = 11.25 ns

And the CAS latency of DDR4-3000 CL15 is 15 x 0.67 ns = 10 ns

it's the same relatively; 56.25/50 is the same ratio as 11.25/10 ;)

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


cl 15 though.

Intel 3570k 3,4@4,5 1,12v Scythe Mugen 3 gigabyte 770     MSi z77a GD55    corsair vengeance 8 gb  corsair CX600M Bitfenix Outlaw 4 casefans

 


 

each CL is one cycle; with higher frequency each cycle gets shorter and shorter, but you have more and more cycles to wait. basically 2x the frequency needs 2x the CL for the same performance (and if the CL goes up by less than 2x, the RAM performs better all the time).

 

so 1600 means one cycle is 6.25ns, which makes CL9 be 56.25ns

 

3000 one cycle is 3.33ns, cl15 is then 50ns.

 

so this is better than DDR3 1600MHz CL9; it's basically the same as CL8, apart from bandwidth-hungry scenarios (APUs).

 

it's a new technology. it was the same with ddr3...

 

 

cl 15 though.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]

