
Intel Stratix 10 Destroys Nvidia Titan XP in Machine Learning, With Beta Libraries

MandelFrac
2 minutes ago, Prysin said:

Not really. This is an FPGA specifically crafted for these types of workloads. A Titan XP does not have specific driver optimizations for most of the types of workloads tested here. Also note that Intel is not mentioning the Quadros and Teslas, nor are they mentioning the Radeon GPUs made for these sorts of workloads. Funny that, eh?

 

This is just clever marketing and cherry-picking of both benchmarks and measuring points. The only place where Stratix 10 is going to crush the opposition is in power usage. And even then I reckon Nvidia will trash them with Volta once they move to the updated 16nm node.

It's not marketing. The testing is done by a fully independent third party. Also, the test isn't done with a Quadro or Tesla since the Titan XP is faster due to higher clocks; the single- and half-precision performance per clock is exactly the same, plus or minus a few percent in corner cases.

 

Also, Nvidia's Quadros and Teslas decimate AMD's performance in datacentres, so there's especially no point in mentioning FirePros. Further, there are 4 benchmarks in that article, using single, half, and quarter precision. The Stratix 10 wins in nearly all of them without having any logic elements involved yet.


2 hours ago, MandelFrac said:

It's not marketing. The testing is done by a fully independent third party. Also, the test isn't done with a Quadro or Tesla since the Titan XP is faster due to higher clocks; the single- and half-precision performance per clock is exactly the same, plus or minus a few percent in corner cases.

 

Also, Nvidia's Quadros and Teslas decimate AMD's performance in datacentres, so there's especially no point in mentioning FirePros. Further, there are 4 benchmarks in that article, using single, half, and quarter precision. The Stratix 10 wins in nearly all of them without having any logic elements involved yet.

Like a moth to a flame.


3 hours ago, MandelFrac said:

Also, Nvidia's Quadros and Teslas decimate AMD's performance in datacentres

Dunno about that; Nvidia is just way better at forming partnerships with vendors and getting them to use its products. At the time, an AMD S9150 was much faster (about 56%) than an Nvidia K20X, but time moves on, and even the S9300 x2 is slow and hugely power hungry now.

 

AMD wasn't really making any big inroads into the server market or high-end gaming, hence Polaris and the now super-aged platforms. Dunno how Vega will pan out, but it's likely the same thing: even if it's faster, it won't get used that much.

 

AMD does have a much better approach to GPU virtualization though: hardware SR-IOV rather than software/driver-based sharing.


11 hours ago, MandelFrac said:

Neither AMD nor Nvidia would give Intel the leverage to sink their own GPU business. Intel's fabs and cell libraries are so well tuned that Intel's 10nm is roughly twice as dense as TSMC's 10nm, and denser than TSMC's proposed 7nm node.

 

Intel fit 30 billion transistors into the Stratix 10 in a die area roughly 2/3 to 5/7 the size of the Knights Landing Xeon Phi. Imagine what Intel could do if it were allowed to have real GPU IP. The market would be flipped on its side within a year.

Who says they need the IP from those companies? They are not the only companies with GPU IP. Intel already makes integrated GPUs, so it already has a bunch of GPU IP. What would stop Intel from using that in a dedicated GPU? Do you think there is some board IP that Intel doesn't have access to?

 

I think you are overestimating how fast Intel would be able to break into the dedicated GPU market. It takes time, experience and money, something Intel isn't interested in putting into what seems like a slowly dying market.

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


But it's an FPGA... 

 

You can literally do anything digital on those chips, while GPUs are essentially ASICs focused on video rendering

 

Your resident osu! player, destroyer of keyboards.


Most people don't know what an FPGA is in the first place. An FPGA is NOT a graphics processor, not even close.

 

If you buy a Stratix (or any other FPGA) development card and just install it in your PC, you will have a super expensive, useless card that can't do anything. An FPGA without firmware is useless: the firmware sets the FPGA's gates into the desired state so the chip can perform a given set of operations, in this case machine learning. Making FPGA firmware is really time-consuming and difficult, and it takes months to do. Don't confuse FPGA firmware with Nvidia or AMD firmware. FPGA firmware can switch the function of the FPGA to something completely different (e.g. the same FPGA chip can be used in your sound card and in an airplane navigation system).

AMD or Nvidia chips, on the other hand, can only be used for graphics and OpenCL/CUDA work. Someone might point out that Altera (now Intel) also uses OpenCL. Yes, but try, for example, connecting SFP+ network inputs to an AMD/Nvidia chip; it is not possible. As mentioned before, GPU chips are ASICs designed for a specific function, while FPGAs are designed for many functions and are far more flexible than any GPU, so it is no surprise that they are beating Nvidia cards in this case.
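To make the "firmware sets the gates" point concrete, here is a minimal sketch (not taken from the article) of the kind of OpenCL C kernel that the Altera/Intel FPGA SDK for OpenCL compiles offline into a bitstream; loading that bitstream is what actually configures the fabric. The kernel name, arguments and the ReLU layer it models are purely illustrative.

// Hypothetical OpenCL C kernel. The Intel FPGA SDK for OpenCL offline
// compiler turns this into a bitstream (a multi-hour build), and that
// bitstream, not the source text, is what ends up configuring the gates.
__kernel void dense_layer(__global const float *restrict weights,
                          __global const float *restrict inputs,
                          __global float *restrict outputs,
                          const int n_inputs)
{
    int neuron = get_global_id(0);      // one work-item per output neuron
    float acc = 0.0f;
    for (int i = 0; i < n_inputs; ++i)  // multiply-accumulate over the inputs
        acc += weights[neuron * n_inputs + i] * inputs[i];
    outputs[neuron] = fmax(acc, 0.0f);  // ReLU activation
}

The same card could just as well be reprogrammed with a completely unrelated design (networking, audio, trading), which is the flexibility being described above.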

 

EDIT: I forgot to address ARM as well. An FPGA is NOT an ARM-based chip. Yes, there are some FPGAs with embedded ARM SoCs, but those chips use the ARM cores to run some sort of OS and the FPGA fabric for I/O processing or similar in embedded applications. It is much easier for companies that design electronic circuits to deal with one chip than with several, and it saves a lot of precious time linking the CPU to the FPGA since it is already integrated; it is not as simple as "just connect the wires".


19 hours ago, Tomsen said:

Who says they need the IP from those companies? They are not the only companies with GPU IP. Intel already makes integrated GPUs, so it already has a bunch of GPU IP. What would stop Intel from using that in a dedicated GPU? Do you think there is some board IP that Intel doesn't have access to?

 

I think you are overestimating how fast Intel would be able to break into the dedicated GPU market. It takes time, experience and money, something Intel isn't interested in putting into what seems like a slowly dying market.

They pretty much ARE. Imagination Technologies and ARM specialize in embedded, low-power, high-pixel-throughput GPU tech, not something that can be applied to the big fields Intel would need to enter to make it worthwhile.

 

Intel lacks fundamental IP for making a discrete card though. That's why they license from Nvidia.

 

Intel has the engineers to get it done in a year, and it's a HUGE market to win when you include desktops, workstations, laptops, servers, and all else.

4 hours ago, Niksa said:

Most people don't know what an FPGA is in the first place. An FPGA is NOT a graphics processor, not even close.

 

If you buy a Stratix (or any other FPGA) development card and just install it in your PC, you will have a super expensive, useless card that can't do anything. An FPGA without firmware is useless: the firmware sets the FPGA's gates into the desired state so the chip can perform a given set of operations, in this case machine learning. Making FPGA firmware is really time-consuming and difficult, and it takes months to do. Don't confuse FPGA firmware with Nvidia or AMD firmware. FPGA firmware can switch the function of the FPGA to something completely different (e.g. the same FPGA chip can be used in your sound card and in an airplane navigation system). AMD or Nvidia chips, on the other hand, can only be used for graphics and OpenCL/CUDA work. Someone might point out that Altera (now Intel) also uses OpenCL. Yes, but try, for example, connecting SFP+ network inputs to an AMD/Nvidia chip; it is not possible. As mentioned before, GPU chips are ASICs designed for a specific function, while FPGAs are designed for many functions and are far more flexible than any GPU, so it is no surprise that they are beating Nvidia cards in this case.

 

EDIT: I forgot to address ARM as well. An FPGA is NOT an ARM-based chip. Yes, there are some FPGAs with embedded ARM SoCs, but those chips use the ARM cores to run some sort of OS and the FPGA fabric for I/O processing or similar in embedded applications. It is much easier for companies that design electronic circuits to deal with one chip than with several, and it saves a lot of precious time linking the CPU to the FPGA since it is already integrated; it is not as simple as "just connect the wires".

Altera and Intel have tooled up OpenCL libraries to make firmware construction much less painful. Xilinx is still stuck on Verilog, but the truth is that on the Altera/Intel side of the market things have been much easier for at least three years now.
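For a rough idea of why the OpenCL flow is less painful than hand-written HDL, here is a minimal, hypothetical host-side sketch: with the Intel FPGA SDK for OpenCL the kernel is compiled offline into an .aocx bitstream, and the host loads it with clCreateProgramWithBinary instead of building from source at run time. The file name and kernel name below are made up for illustration, and error handling is omitted.

/* Minimal host-side sketch for the Intel FPGA SDK for OpenCL flow.
 * The .aocx file is the offline-compiled bitstream; loading it is what
 * configures the FPGA, after which the device is driven like any other
 * OpenCL accelerator. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);

    /* Read the precompiled bitstream ("dense_layer.aocx" is a hypothetical name). */
    FILE *f = fopen("dense_layer.aocx", "rb");
    fseek(f, 0, SEEK_END);
    size_t len = (size_t)ftell(f);
    rewind(f);
    unsigned char *bin = malloc(len);
    fread(bin, 1, len, f);
    fclose(f);

    /* Unlike the GPU flow, the program comes from a binary, not from source. */
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &len,
                                                (const unsigned char **)&bin,
                                                NULL, NULL);
    clBuildProgram(prog, 1, &device, "", NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "dense_layer", NULL);

    /* ...create buffers, set kernel args, enqueue, read back results... */
    printf("FPGA kernel ready: %p\n", (void *)kernel);
    return 0;
}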

 

And actually the SoC on the Stratix 10 exists for hardware remapping on the fly. You CAN certainly use it for high-speed derivatives trading or software-defined networking, such as on the Omni-Path switches I mentioned, but this is a much more powerful design: you can run more complex routines that modify the FPGA as the algorithms finish in succession.
