AMD introduces new data center GPU: the AMD Instinct MI100
12 hours ago, thechinchinsong said: At this point, it doesn't matter how much FP64/FP32 compute AMD can muster anymore. Much of the ML and AI market is already committed to CUDA and the various ways Nvidia has come up with to accelerate workloads through its hardware/software stack. It's a decent attempt if you limit the comparison to FP64/FP32, but in any other workload type Nvidia has them beat, and that's exactly where the ML and AI market is moving anyway.
8 hours ago, tim0901 said: Sounds like they're a bit better if you're using FP32, but not if you're using TF32 or FP16 (which tensor cores also accelerate), both of which are commonly used in mixed-precision training.
I think the bigger problem will be their lack of CUDA. Every major deep learning framework supports CUDA, but many have limited support for OpenCL. AMD's card could be 50% faster in FP32, but that wouldn't matter as it can't run the code that people want it to.
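To see why FP16 matters so much for training throughput (and why mixed precision keeps an FP32 accumulator, as tim0901 alludes to), here is a small stdlib-only sketch. It uses Python's `struct` format character `'e'` to round values through IEEE 754 half precision; the helper name `to_fp16` is just for illustration. Pure FP16 accumulation silently saturates, while a wider accumulator does not:

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float through IEEE 754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Naive FP16 accumulation: FP16 has a 10-bit mantissa, so once the
# running sum reaches 2048, adding 1.0 rounds back to 2048.
fp16_sum = 0.0
for _ in range(4096):
    fp16_sum = to_fp16(fp16_sum + to_fp16(1.0))

# Mixed precision: round the inputs to FP16, but keep the
# accumulator in a wider format (here Python's double).
mixed_sum = 0.0
for _ in range(4096):
    mixed_sum += to_fp16(1.0)

print(fp16_sum)   # 2048.0 — saturated
print(mixed_sum)  # 4096.0
```

This is exactly the trade hardware accelerators exploit: FP16 math is cheap, and the accuracy loss is contained by accumulating in FP32.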
3 hours ago, justpoet said: I'm glad to see them attempting to compete in this space. Even if the card were objectively better (as I'm sure it will be for at least some calculation types), the big elephant in the room isn't the card, but CUDA. They need an answer to that, one that is easy to adopt widely, if they want to compete seriously in the compute space.
They do have a CUDA competitor/equivalent: it's called ROCm, and they are launching v4.0 alongside this GPU. One part of it is specifically meant to ease the transition/port from CUDA.
More details in the Phoronix article: https://www.phoronix.com/scan.php?page=article&item=amd-mi100-rocm4&num=1
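For anyone curious what that porting story looks like in practice: ROCm's HIP layer mirrors the CUDA API almost one-for-one, and the `hipify` tools mostly just rename calls. Below is a hedged sketch of a HIP vector-add; it assumes the ROCm/HIP toolchain (`hipcc`) and compatible hardware, so it is not verified here:

```cpp
// Sketch only — build with: hipcc vector_add.hip.cpp
#include <hip/hip_runtime.h>

// Kernel syntax is identical to CUDA's.
__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // The CUDA equivalents are near-mechanical renames:
    // cudaMalloc -> hipMalloc, cudaMemcpy -> hipMemcpy,
    // cudaFree -> hipFree, cudaDeviceSynchronize -> hipDeviceSynchronize.
    hipMalloc(&a, n * sizeof(float));
    hipMalloc(&b, n * sizeof(float));
    hipMalloc(&c, n * sizeof(float));
    vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);  // same launch syntax as CUDA
    hipDeviceSynchronize();
    hipFree(a); hipFree(b); hipFree(c);
    return 0;
}
```

Whether that's enough to lure existing CUDA codebases over is the real question the thread is circling.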