What I'm looking for is the same kind of benchmarking they do for gaming, applied to machine learning.
So what effect does overclocking the GPU have on training a model? What is the speed-up of training various models on GPU X vs. GPU Y? How much does memory transfer improve if the data is stored on an NVMe drive vs. a SATA SSD? What effect do different chipsets have on the throughput of data transferred to the GPU for processing? What about water-cooling?
These are basically the same questions they ask for gaming to nail down what effect different components have on FPS, except I want to see the effects on training a model: different types of models, different datasets, different input image sizes, etc.
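To make it concrete, here's a minimal sketch of the kind of microbenchmark I mean. It times a float32 matrix multiply (a stand-in for the GEMM kernels that dominate training time) and reports achieved GFLOP/s; a real harness would time the framework's actual GPU kernels the same way, while swapping out one hardware variable at a time. The sizes and repeat counts here are arbitrary choices, not anything standard.

```python
import time
import numpy as np

def matmul_gflops(n, repeats=5):
    """Time an n x n float32 matmul and return achieved GFLOP/s.

    Matmul is a stand-in for the GEMM kernels that dominate model
    training; a GPU benchmark would time the framework's own kernels
    (and synchronize the device before reading the clock).
    """
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up run so one-time setup cost isn't measured
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b
        times.append(time.perf_counter() - t0)
    flops = 2 * n ** 3  # one multiply and one add per inner-product term
    return flops / min(times) / 1e9  # best-of-N filters out scheduler noise

if __name__ == "__main__":
    for n in (256, 512, 1024):
        print(f"n={n:5d}: {matmul_gflops(n):8.1f} GFLOP/s")
```

Run the same script while changing exactly one variable (clock speed, storage, chipset, cooling) and the deltas tell you what that component is actually worth, which is exactly how the gaming benchmark sites isolate FPS effects.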
Nvidia is devoting so much research to their CUDA platform, and they're making huge claims. I want to see whether those claims can be substantiated, and whether they're affected by the other components in the system.