
AMD faces class action suit over Bulldozer misrepresentation

zMeul

Well...

1) If that's true, then AMD's decision to market it as a "12 compute core processor" makes even less sense. That only further strengthens my point.

 

2) There are actually a lot of applications that take advantage of GPGPU. To mention a few I know and use:

Mozilla Firefox (and other modern browsers)

Open Broadcaster Software

Lots of Adobe programs (Photoshop, After Effects and Premiere)

Sony Vegas

Blender

GIMP

MSI Afterburner

Folding@Home

A certain program I can't name because of rules, but it has to do with DRM and brute forcing.

LibreOffice Calc

HandBrake

Lots of games (such as games that use Havok Physics)

A lot of video players (such as MPC-HC and VLC)

TrueCrypt

A wide variety of emulators (such as Dolphin)

WinZip (but it's still slower than 7-zip... At least according to tests from 2009)

 

I wouldn't say most of these programs are in the professional circles, but they aren't really in the "average Joe circle" either.

 

Side note: He is only correct because he suddenly changed the subject. He went from "Intel doesn't support it!" to "it's not useful for consumers!". GPGPU, by the way, is very important for some of the tasks the average Joe does. Decoding videos (like YouTube videos) on the CPU would probably result in a very poor experience for a lot of people.
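For anyone wondering what GPGPU actually looks like from the software side, here is a minimal sketch using the third-party pyopencl module (my own toy example, not something any of the programs above ship). It runs a trivial kernel on whatever OpenCL device the driver exposes, which on most machines includes the iGPU:

```python
# Minimal GPGPU sketch with the third-party pyopencl module (pip install pyopencl numpy).
# Assumes an OpenCL runtime is installed; Intel, AMD and Nvidia all provide one.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()      # picks an available OpenCL device, often the iGPU
queue = cl.CommandQueue(ctx)

a = np.random.rand(1_000_000).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# A trivial kernel: each work item squares one element of the array.
prg = cl.Program(ctx, """
__kernel void square(__global const float *a, __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] * a[i];
}
""").build()

prg.square(queue, a.shape, None, a_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print(np.allclose(result, a * a))   # True: same numbers the CPU would have produced
```

The point is only that the same OpenCL code path runs on Intel, AMD and Nvidia hardware alike; programs like the ones above do something similar, just with much heavier kernels.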

 

 

 

 

Apple seems to have toned down their patent trolling a lot since Jobs died. If "old Apple" had gotten their hands on AMD, I wouldn't have been surprised if they had tried to go after Intel for making x86 processors.

In order:

1)duh, but you don't need much of an iGPU for that

2)Not even sure what that is, so it can't be that ubiquitous/popular

3)Professional use cases, not really consumer

4)See 3

5)See 3

6)See 3

7)Yeah, but again, exactly how much of a GPU do you need to run this decently?

8)True, but the number of people who actually participate in this is even smaller than the total number of PC enthusiasts

9)See 8 but reword enthusiasts as hackers

10) This is the first really legitimate consumer use case you've mentioned

11) More professional than consumer

12) Define "lots"

13) Most people still don't encrypt, but I'll give you that one.

14) A subset of a small market of gamers.

15) Okay, but what consumer is compressing multi-GB files? A 2600K rips through compression fairly quickly.

 

For video decoding, AVX 256 is pretty damn fast at it. GPGPU for it is just overkill for fast buffering.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I believe people are forgetting about the HSA architecture that AMD introduced. I noticed recently that the Linux 4.4 kernel will have HSA support baked in more thoroughly. Linux is used in both professional and consumer circles. So it stands to reason that if AMD's Zen chips have HBM baked in, we could theoretically have a 12 core APU, most likely 8 CPU cores and 4 GPU cores. Or you could see AMD move the APU to a combined CPU/GPU core, kind of like a GPGPU setup, where the FPU is literally handled by the GPU part of the core. Basically anything that GPUs excel at would be handled by the GPU segment and anything that the CPU excels at would be handled by the traditional CPU segment, making it more like a GPCPU or something. HBM has some pretty powerful potential, especially if the CPU has access to it as well as the GPU in an APU. HSA FTW.


Intel went with simultaneous multithreading, while AMD went with clustered multithreading. Each has its advantages and its disadvantages; AMD's design is 8 cores.

#RAMGate

Competition is green with envy

CPU: Intel i7 5820K @ 4.20 GHz | Motherboard: MSI X99S SLI PLUS | RAM: Corsair LPX 16GB DDR4 @ 2666MHz | GPU: Sapphire R9 Fury (x2 CrossFire)
Storage: Samsung 950Pro 512GB // OCZ Vector150 240GB // Seagate 1TB | PSU: Seasonic 1050 Snow Silent | Case: NZXT H440 | Cooling: Nepton 240M
FireStrike // Extreme // Ultra // 8K // 16K

 



OBS is a program used for live streaming.

 

It supports Nvidia NVENC, not OpenCL/OpenGL acceleration as LawlZ claims.

I've got the OBS menus open as I type this.

It uses my CPU for the recording/streaming.

 

Also, Patrick, let us not forget that in the case of Kaveri and Intel HD 3000 and up, as well as any GPU on the market, they use separate decoders built into the core to do the video encode/decode, because using GPGPU with full GPU acceleration would use massive amounts of power. Thus only a fraction of the GPU is used for normal video playback, to conserve power.



Actually, OBS does support OpenCL as well as AMD-specific codecs.

 

In the "Advanced" section, if you have the current version of OBS, in the text box labelled "Custom x264 Encoder Settings" you can type "opencl=true" and it will share some encoding steps to the graphics card. This sharing is mostly automatically balanced to prevent either your CPU or your GPU from getting overworked. Again, this decreases your total CPU usage.

It is not QuickSync, it's OpenCL. In fact, several people have claimed they felt it increased quality.

 

AMD excels at OpenCL so this is a more AMD-centric feature.
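If you want to try the same x264 OpenCL lookahead outside of OBS, you can pass the flag straight to libx264 through ffmpeg. This is just a rough sketch of mine, assuming an ffmpeg build whose libx264 was compiled with OpenCL support:

```python
# Re-encode a clip with x264's OpenCL-accelerated lookahead enabled.
# Assumes ffmpeg is on PATH and its libx264 was built with OpenCL support.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx264",
    "-preset", "veryfast",
    "-x264-params", "opencl=true",   # same key=value switch as OBS's custom settings box
    "-c:a", "copy",
    "output.mp4",
], check=True)
```

OBS's "Custom x264 Encoder Settings" box passes the same kind of key=value pairs straight to the encoder.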


1)duh, but you don't need much of an iGPU for that

Depends on what you do, actually, and even if you have a low-end iGPU it will still most likely be far better than the CPU for certain tasks. Ever tried playing 1080p or 4K H.264 in software? I doubt the CPUs most people have in their computers could handle it that well. They would probably drop and delay a lot of frames, or sit at something like 80% CPU usage with just a single YouTube tab open... It would be terrible.

WebGL is pretty sweet (warning, very long load time).

 

2)Not even sure what that is, so it can't be that ubiquitous/popular

Extremely popular program used by streamers. It's basically a free and open source version of XSplit.

 

3)Professional use cases, not really consumer

4)See 3

5)See 3

6)See 3

Not necessarily. A lot of people use those programs just for fun. GIMP is not at all a professional-only program.

 

7)Yeah, but again, exactly how much of a GPU do you need to run this decently?

A lot, because the parts that use the GPU are related to video encoding. That is usually not something you want to run on the CPU. I can't even get decent quality with my 4.4GHz 2500K when encoding in real time on the CPU. It's just fast enough to stream Hearthstone properly, but with any more CPU-intensive game than that I have to dial the quality way back (too much for my taste).

 

11) More professional than consumer

Handbrake is not a professional tool. It's more of a hobbyist tool.

 

12) Define "lots"

Any game that uses the GPU for AI, physics or anything other than drawing things on the screen. I wouldn't be surprised if it's the vast majority of AAA games released in the last few years. Sadly I don't have any solid numbers since I don't have access to thousands of games' source code.

 

15) Okay, but what consumer is compressing multi-GB files? A 2600K rips through compression fairly quickly.

"We don't need programs to be faster" is a terrible attitude to have.

 

For video decoding, AVX 256 is pretty damn fast at it. GPGPU for it is just overkill for fast buffering.

You're overestimating the CPU performance of the average Joe, and underestimating the performance needed. Hell, one of the biggest reasons why things like The Scene were so incredibly slow to switch over to H.264 was that people couldn't play it back, for hardware reasons.

 

 

But again, this conversation is kind of irrelevant, because the original discussion wasn't about how widely used it is. The discussion was whether Intel's processors could do GPGPU at all.

 

 

 

 

 

 

Edit:


Can you please quote me where I said it supported OpenCL/OpenGL?

I said it supported GPGPU, which it does.

[Screenshot: OBS Broadcast Settings showing the three "Encoder" options]

 

See the 3 options called "Encoder"? You can enable x264 to use OpenCL if you want, as explained by @KRDucky. This, by the way, is also the exact same encoder used in HandBrake.

The Quick Sync option is also GPGPU. It will use the GPU on Intel chips (Sandy Bridge and newer) to encode the video.

Nvidia NVENC is also GPGPU. It uses an Nvidia GPU to encode the video.

 

 

And no, it does not use a separate encoder to encode. There is a separate decoding block (like you said), but you can't use the decoding logic for encoding. It just does not work. You need specific encoding logic, or you have to run it on the "generic" GPU cores. Having dedicated logic in the GPU for H.264 encoding is very new, but encoding with the GPU is fairly old. In fact, AMD did not include their "Video Codec Engine" until the 7000 series, but if you look at Anandtech's comparison of QuickSync you can see that they used a 6870 for the ATI Stream codepath encoding sample. The 6870 does not have an H.264 encoder (a feature introduced with the 7000 series). It used the regular GPU cores to encode it.

The same goes for the 460 they used for their CUDA encoding sample. It was done on regular CUDA cores, not in a dedicated H.264 encoder.

 

Here is AMD's documentation of it, in case you don't like Anandtech. As you can clearly read in that article, VCE is AMD's fixed-function H.264 encoder, and no GPU before the 7000 series supported it.

 

 

Stop talking about things you do not understand. You are just spreading misinformation.
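For what it's worth, a quick way to see which of these encode paths a given machine actually exposes (fixed-function blocks like NVENC and Quick Sync versus the software/GPGPU-assisted x264 route) is to ask ffmpeg for its encoder list. A throwaway sketch of mine, assuming ffmpeg is installed:

```python
# List the H.264 encoders this ffmpeg build knows about.
# Typical hardware-backed names: h264_nvenc (Nvidia), h264_qsv (Intel Quick Sync),
# h264_vaapi / h264_amf (AMD, depending on platform); libx264 is the software path.
import subprocess

out = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if "264" in line:
        print(line.strip())
```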


Great time to file a class action lawsuit against AMD, isn't it? [/sarcasm]

"My game vs my brains, who gets more fatal errors?" ~ Camper125Lv, GMC Jam #15


Zen will not have a shared FPU between two cores. You have nothing to worry about. 

I'm not worried about shared FPUs for Zen. I'm worried about what different kind of scam they could have prepared now.



So...I've gotten a bit rekt then with my Ati Mobility Radeon HD5650.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL



CPUs come with new transcoding blocks each generation. That Sandy Bridge doesn't have H.264 built in is no surprise. The fact that software-based decoding isn't keeping up smells like bad design or poor compiler options to me.

 

Using GPUs for AI in games is already a massive mistake. GPU-based AI is usually a distributed neural net for very specific use cases where branches can be reduced to mathematical functions. You're better off using AVX for AI, since CPUs actually have extremely accurate branch predictors, which GPUs lack partly due to power/thermal constraints (a predictor attached to every SMM/ACE would be much larger and exponentially more complex than the ones attached to, say, Skylake's CPU cores). You overestimate how complicated game AIs are. They're nothing more than fuzzy Alpha-Beta algorithms most of the time.

 

It's not that software doesn't need to be faster. I'm one of the biggest advocates of improving game programming on here (and I will admit I don't have much experience optimizing graphics code, either in terms of draw calls or in the shader language itself, but I am damn good at the CPU side of it: AI, networking, physics, etc.).
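Since fuzzy Alpha-Beta came up: here is a bare-bones alpha-beta sketch (a toy example of mine, not lifted from any shipping game). It is exactly the kind of branch-heavy, data-dependent recursion a CPU's branch predictor handles well and a GPU's wide SIMD lanes do not:

```python
# Plain alpha-beta pruning over an abstract game tree.
# Branchy, recursive, data-dependent control flow: CPU-friendly, GPU-hostile.

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """children(state) -> list of successor states; evaluate(state) -> heuristic score."""
    succ = children(state)
    if depth == 0 or not succ:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:          # prune: the opponent will never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in succ:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Toy usage: a tree given as nested lists, leaves are scores.
tree = [[3, 5], [6, [9, 1]], [1, 2]]
score = alphabeta(tree, depth=8, alpha=float("-inf"), beta=float("inf"),
                  maximizing=True,
                  children=lambda s: s if isinstance(s, list) else [],
                  evaluate=lambda s: s if isinstance(s, (int, float)) else 0)
print(score)   # best achievable score for the maximizing player (6 here)
```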

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


For those who don't know what an FPU does:

https://en.wikipedia.org/wiki/Floating-point_unit

http://searchwindowsserver.techtarget.com/definition/floating-point-unit-FPU

http://www.webopedia.com/TERM/F/floating_point_number.html

https://www.techopedia.com/definition/2865/floating-point-unit-fpu

 

An FPU handles the more advanced (floating-point) calculations, which come up all the time in modern computing. In the 90s, with the 486 and earlier CPUs, the FPU (or maths co-processor) wasn't needed outside of specific programs.
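To make that concrete, here's a tiny sketch (mine, using numpy) that loads up the FPU with double-precision multiply-adds, the kind of work where Bulldozer's single shared FPU per module comes into play when both cores in a module need it:

```python
# A quick way to load up the FPU: a big double-precision matrix multiply via numpy.
# Every operation here is a floating-point multiply-add, i.e. FPU/SIMD work,
# as opposed to the integer work each core has its own ALUs for.
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)          # float64 by default
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b                         # roughly 2 * n**3 floating-point operations
dt = time.perf_counter() - t0
print(f"{2 * n**3 / dt / 1e9:.1f} GFLOP/s of double-precision work")
```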

Is it weird that my mom doesn't know how to use Google Maps but knows about FPU and how insanely helpful it was back then?



you know, you just proved my point?

 

i said "on the market".... which means "whats actually still produced and or sold as up to date hardware"

 

Now, yes, there are probably one or two retailers in the world that still have a few Thermis or 5000 series cards. However, what IS on the "MARKET" today is Haswell (EOL), Skylake, FX, Kaveri-based APUs, AMD 200 series (EOL), AMD 300 series, AMD Fury series, Nvidia 700 series (EOL), and Nvidia 900 series.

 

that is what's on the market.

All of these products HAVE onboard H.264 decode and encode...


Is it weird that my mom doesn't know how to use Google Maps but knows about FPU and how insanely helpful it was back then?

Well, with older computers you actually had to understand how the computer worked to ensure that you could set it up correctly and have it running reliably and performing the tasks required (I'm really not looking forward to setting up my 386's ISA cards when I rebuild it). With newer computers it's straight up just plug the parts together and they work, no tweaking or further knowledge needed.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


- APUs have "Compute cores" because of the technology behind them. This is why it's labeled with a high core count. While I agree the average customer will be mislead by this and sometimes assume it's a beefy processor for 1/3 the price of something with 1/3 the advertised core count from Intel, an APU is not the same as a CPU. They are not for the same thing. That's like saying I'm gonna go play Crysis 3 at Ultra++ with 16x forced supersampling on a Tegra K1 device. Like what the fuck.

- "Cores" vs "Threads" is interesting; threads are definitely not classified as cores but modules should not be considered cores at all. And yet, at the same time, Bulldozer+ has all shown that there are 8 logical/compute cores that resembles hyper-threading from Intel but absolutely does not equate to that. They are worlds different, even physically.. I disagree with the reasoning that the modules are = cores while the clusters are thrown under the bus as extra threads. They don't operate like hyperthreading, so again, they are not the same as threads.

There's no way it's a case against AMD. Argue all day on whether modules = cores; they are definitely not. We're comparing Land Rovers to Rolls-Royces. The Rover isn't a truck, it's an SUV. The Rolls-Royce is a sedan. Both are still considered cars. The dumbest news all year came in November.


CPUs come with new transcoding blocks each generation. That Sandy Bridge doesn't have H.264 built in is no surprise. The fact that software-based decoding isn't keeping up smells like bad design or poor compiler options to me.

Ehh.... What are you talking about? I think you have misunderstood something completely here. Sandy Bridge supports both encoding and decoding of H.264 in fixed-function hardware.

 

I don't think it was as simple as some compiler options being incorrect. It's the fact that things like a Core 2 Duo will have a hard time decoding high-bitrate 1080p or 4K H.264 in software. It doesn't even have to be a Core 2 Duo: a lot of Atom processors and very low-clocked Core processors (like Core M) might not be up to the task either. I don't have any Atom or Core M devices to test with, but I was very active in the video community around the time H.264 was getting popular, and a huge number of users were reporting abysmal performance.
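If anyone wants to measure that gap themselves, ffmpeg can time a pure software decode against a hardware-assisted one. A rough sketch, assuming ffmpeg is installed and the hwaccel name matches your platform (vaapi, dxva2, d3d11va, videotoolbox, ...); the clip name is just a placeholder:

```python
# Time software H.264 decode vs. a hardware decode path.
# "-benchmark ... -f null -" decodes as fast as possible and discards the frames;
# compare the rtime/utime figures ffmpeg prints at the end of each run.
import subprocess

clip = "sample_1080p_h264.mp4"    # placeholder test clip

for extra in ([], ["-hwaccel", "vaapi"]):   # swap "vaapi" for your platform's hwaccel
    cmd = ["ffmpeg", "-hide_banner", "-benchmark", *extra, "-i", clip, "-f", "null", "-"]
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)
```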

 

 

Using GPUs for AI in games is already a massive mistake. GPU-based AI is usually a distributed neural net for very specific use cases where branches can be reduced to mathematical functions. You're better off using AVX for AI, since CPUs actually have extremely accurate branch predictors, which GPUs lack partly due to power/thermal constraints (a predictor attached to every SMM/ACE would be much larger and exponentially more complex than the ones attached to, say, Skylake's CPU cores). You overestimate how complicated game AIs are. They're nothing more than fuzzy Alpha-Beta algorithms most of the time.

I think you missed the part where I said "physics" as well. Surely even you will agree that running physics calculations on the GPU is a good idea. In fact, Microsoft recently bought Havok (from Intel) which can use the GPU for physics.

 

 

It's not that software doesn't need to be faster. I'm one of the biggest advocates of improving game programming on here (and I will admit I don't have much experience optimizing graphics code, either in terms of draw calls or in the shader language itself, but I am damn good at the CPU side of it: AI, networking, physics, etc.).

Then why are you saying things along the lines of "it's fast enough"? You said it twice in your response. Once for compression algorithms and once for video decoding.

 

 

 

 

 

 

 


What the hell are you talking about? Are you doing this on purpose or are you just delusional and extremely difficult to discuss things with?

You were wrong twice in a row, even while putting words in my mouth, and after some amazing mental gymnastics you honestly believe you are correct?

 

I will summarize what happened in the last 2-3 posts and hopefully you will understand how stupid you are.

1) I claimed that OBS can use GPGPU.

2) You put words in my mouth and said that I claimed that OBS can use OpenCL/OpenGL, then you claimed that it can't.

3) Not only did you put words in my mouth and pretend like it never happened when I called you out on it, it also turns out that you were wrong about OBS not being able to use OpenCL. It's just that you didn't know how to enable it. You didn't do any proper research and yet you claim I am wrong. Wrong about a claim I never made.

4) Then you just ignore all of the things you were wrong about and start arguing semantics instead.

 

Do you see how ridiculous this is? Also, what it does in fixed-function hardware is irrelevant. It is still GPGPU. This is not the first time you have moved the goalposts in this thread when you realized you were wrong. You did the same thing when you claimed Intel's GPU could not do GPGPU at all. You changed from "not at all" to "not doing it properly".

 

 

I am done trying to argue with a massive fanboy like you. You should apply for the Olympics because the logical jumps you make are impossible to follow and understand.

Before I leave I will tell you that you know next to nothing about this subject. You might think you do, but you don't.


- APUs have "Compute cores" because of the technology behind them. This is why it's labeled with a high core count. While I agree the average customer will be mislead by this and sometimes assume it's a beefy processor for 1/3 the price of something with 1/3 the advertised core count from Intel, an APU is not the same as a CPU. They are not for the same thing. That's like saying I'm gonna go play Crysis 3 at Ultra++ with 16x forced supersampling on a Tegra K1 device. Like what the fuck.

What makes AMD's APU qualify as having "compute cores" and why can't Intel call their chips "24 compute cores" or whatever?

I totally agree that consumers will be misled by it. I also don't see any way this makes things clearer for the consumer, so the only purpose the term "compute core" seems to have is to trick consumers. I think that is a far bigger issue than AMD calling their 4-module chips "8 cores". With a module counting as 2 cores there is a lot of gray area, and they did try to explain it to reviewers. They were very open with it, and short of visiting every retailer to make sure they write a page-long explanation every time they refer to a module as 2 cores, AMD did what they could do.

 

With the term "compute core" however, they just straight up made up a new term which sounds very similar to a well established term, then they just add two completely different types of cores together and advertise that. It would be like a car manufacturer with a 100 horsepower car suddenly deciding that their cars will measure power in "stallion power". 1 stallion power is 0.5 horsepower and on top of that, they will count each wheel the car has as another 10 stallion power. All of a sudden their 100 horsepower car is advertised as having 240 stallion power.

It's just a dirty trick to fool people. And don't tell me that consumers should look up what stallion power is and then use a formula to convert it to horsepower if they are going to buy the car. It should be the company's responsibility NOT to deliberately make their specifications harder to understand in an attempt to deceive people.
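To spell the analogy out with numbers (my arithmetic, mirroring the marketing math of counting CPU cores and GPU compute units as one figure):

```python
# The "stallion power" analogy above, as plain arithmetic.
horsepower = 100
stallion_power = horsepower / 0.5 + 4 * 10   # 1 sp = 0.5 hp, plus 10 sp per wheel
print(stallion_power)                        # 240.0 "stallion power" for a 100 hp car

# The marketing math it mirrors: CPU cores + GPU compute units sold as one number,
# e.g. 4 CPU cores + 8 GPU compute units advertised as a "12 compute core" APU.
cpu_cores, gpu_compute_units = 4, 8
print(cpu_cores + gpu_compute_units, "compute cores")
```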



AMD did, however, resort to saying that Bulldozer had cores instead of modules, which is misleading in itself, as every CPU made before Bulldozer by AMD and Intel had its cores defined in the same manner; the average consumer expecting to get 8 fully fledged cores would only get 8 partial cores in the form of 4 modules.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL



I find it amusing how you either fail to read and then assume victory, or simply refuse to read and still assume victory.

 

Anyway, gratz bro, you proved me wrong about OBS. How would I know you needed to type in commands to get OpenCL to work, when in every other program I know that supports it, it is usually automatic and/or selectable with a box you check?

 

But hey, since you are either unable or unwilling to use your intellect to reply without changing your own statements and/or disagreeing with what I say for the sake of disagreeing, I must conclude that this discussion is dead.

 

you aren't willing to communicate, and i arent willing to bother either.

 

/thread


you aren't willing to communicate, and i arent willing to bother either.

Lol, i arent? Don't you mean I'm not?

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


@LAwLz
"What makes AMD's APU qualify as having 'compute cores' and why can't Intel call their chips '24 compute cores' or whatever?"

HSA. Very specifically HSA.

Problem is, HSAIL-compatible software is nowhere to be found.
Too early to advertise 'compute cores' or even HSA to consumers, imo.

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO



HSA seems to be more of a thing for mobile rather than desktop, though. And I can understand why.


HSA seems to be more of a thing for mobile rather than desktop, though. And I can understand why.

HSA brings many benefits, like power efficiency, effective throughput and lower latencies. It can be used everywhere from embedded, semi-custom, HPC and servers to desktop and mobile.

HSA has many mobile players as partners, so it might gather more attention there.

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO

