Jump to content

Samsung Demos In-Memory Processing for HBM2, GDDR6, DDR4, and LPDDR5X

Lightwreather

You people must be a real joy to have a casual conversation with...


1 hour ago, MageTank said:

Currently, memory is dumb: it doesn't know what to do without a controller telling it. If this resides on the DIMM and can alleviate stress on the CPU's memory controller, or circumvent its function entirely, it would be interesting to see whether it results in any performance gains, specifically in operational memory latency, if everything can be processed on the DIMM without board trace topology mattering at all.

If there's some sort of inference being conducted, this might improve throughput and efficiency at the expense of latency.


2 minutes ago, RejZoR said:

You people must be a real joy to have a casual conversation with...

I only participate in competitive conversations. I am currently ranked top 500 in my league, waiting for a team to sponsor me.

 

Just now, StDragon said:

If there's some sort of inference being conducted, this might improve throughput and efficiency at the expense of latency.

Based on the slides, I'd have to agree with you, though I'd imagine it would technically reduce latency if the request can be handled on the DIMM rather than having to come through the IMC. That's a lot of physical latency avoided off the die entirely. I just don't know what exactly this controller will be doing in general, and whether it will work in parallel with a CPU's IMC or is designed to replace that function altogether.
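To make the on-DIMM idea concrete, here's a toy Python model. Everything here is invented for illustration (real PIM exposes fixed-function units, not arbitrary methods): with a plain DIMM the host pulls every word through the IMC, while a PIM DIMM runs the reduction next to the cells and only the final result crosses the bus.

```python
class PlainDimm:
    """Conventional memory: dumb cells, the host does all the work."""

    def __init__(self, data):
        self.data = list(data)

    def read(self, addr):
        # every element crosses the memory bus individually
        return self.data[addr]


class PimDimm(PlainDimm):
    """Hypothetical PIM DIMM with one fixed-function local op."""

    def local_sum(self):
        # the reduction happens next to the cells; only the
        # final result travels back to the host
        return sum(self.data)


def host_sum(dimm):
    # conventional path: the CPU pulls each word through the IMC
    return sum(dimm.read(i) for i in range(len(dimm.data)))


plain = PlainDimm(range(1000))
pim = PimDimm(range(1000))
# same answer, but the PIM path needs one bus transfer, not 1000
assert host_sum(plain) == pim.local_sum() == 499500
```

Obviously a simulation in Python proves nothing about latency; it just shows where the data movement would disappear.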

 


1 hour ago, MageTank said:

I think you are missing the point here. It has nothing to do with you calling it a "memory controller". That part I completely understand. Ignorance is rarely intentional, and I can get past that quite easily, especially on subjects as complicated as memory.

Well, for me, as it relates to the story, extra memory registers or memory buffers aren't similar to these in-memory chip processors. Registers and buffers can't be told to do anything; no computation can be sent to them. These on-chip processors can do something, and we can control them and issue computational work to them.

 

It doesn't really matter what type of work can be done on them or how useful we think it is; it's quite different and an entirely new development for memory chips, just not necessarily fixed-function processing units themselves.

 

Today these might only be able to handle FP16 workloads and specific instruction types, but Samsung is already planning to expand the capabilities, which will open up more use cases to be explored. Then one day it might actually be useful to us with our smartphones, CPUs, GPUs or whatever.

 

Quote

Speaking of HBM3, Samsung says that it will move forward from the FP16 SIMD processing in HBM2 to FP64 in HBM3, meaning the chips will have expanded capabilities. FP16 and FP32 will be reserved for data center usages, while INT8 and INT16 will serve the LPDDR5, DDR5, and GDDR6 segments.

Samsung wouldn't be looking to put it in LPDDR5 if there was no clear usage for consumer devices; the LPDDR5 market is almost entirely consumer devices, from what I understand. It does also seem like automotive uses it, so maybe it could be more for that.
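For anyone wondering what "FP16 SIMD processing" in the quote actually means, here's a rough sketch in plain Python. The `pim_fma` name and the lane-wise multiply-add are my own invention (Samsung's actual PIM instruction set is more limited and not fully public); the point is just that every lane performs the same operation, with results rounded to half precision at each step.

```python
import struct

def to_fp16(x):
    # round-trip a float through IEEE binary16
    # (struct's 'e' format code, available since Python 3.6)
    return struct.unpack('e', struct.pack('e', x))[0]

def pim_fma(a, b, c):
    # SIMD-style lane-wise step: out[i] = a[i] * b[i] + c[i],
    # with every intermediate kept in FP16 precision
    return [to_fp16(to_fp16(x * y) + z) for x, y, z in zip(a, b, c)]

a = [1.5, 2.5, 0.125]
b = [2.0, 4.0, 8.0]
c = [0.5, 1.0, 0.0]
print(pim_fma(a, b, c))   # [3.5, 11.0, 1.0]
```

The inputs above were chosen to be exactly representable in FP16; with arbitrary values you'd see the rounding error that makes FP16 a data-center-training format rather than a general-purpose one.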


39 minutes ago, leadeater said:

It does also seem like automotive uses it, so maybe it could be more for that.

I really hope ECC is inherently part of this new paradigm.

There are many computers that reside on the CAN bus, but having the primary suffer a bit-flip that's involved in self-driving would be a literal *crash*.


3 hours ago, RejZoR said:

It's as wrong as calling that bullshit any kind of Ai.

Good thing Samsung aren't calling it AI then. They are saying it is "in-memory processing for ML accelerators".

It's memory designed for AI accelerators.

 

3 hours ago, RejZoR said:

It's hardcoded rigid logic that we've been using since forever to do cache predictions on CPU's.

No it's not.

Cache prefetching is loading things into memory that it thinks might be needed, before they are needed.

This is doing processing on the data while it is being held in memory. 

 

Loading something and processing something are two very different things.

 

 

3 hours ago, RejZoR said:

If there was ever any actual Ai, that Ryzen should learn what I run regularly and optimize performance on its own by 30% over this time. Yet it's behaving EXACTLY the same it did on day of release. And while it was good performer it's magical Ai did nothing. Just like this Ai does nothing. They just put in place some mechanism that speeds up certain things for data that's commonly shuffled across RAM sticks and instead of shuffling it on and off and flushing it they cache it in some clever way. Who cares how exactly, the point is, there is no magical "Ai". I wish this BS trend of calling everything "Ai" would die already. It's stupid.

Not sure why you've got such a strong vendetta against the word "AI", but I think you have gotten a bit confused. When people say something like "our camera uses AI" or "our camera uses machine learning", they are typically referring to the fact that their program uses an algorithm that was created by an AI (if you're okay with calling machine learning AI, which I think it is).

For example, my phone's camera has a scene optimization feature. When I point the camera at something, it detects what I am pointing it at and tweaks the settings accordingly. That algorithm is rather static. The camera app does not get better the more pictures I take. It sometimes gets updated through a software update, but that's not really machine learning on the device itself. But I think it's rather nitpicky to say my camera app doesn't use AI to optimize the settings. It doesn't train itself, but it does run the algorithm created by an AI, the same way the AI did when it was trained, and without the machine learning the picture would look different. So I guess if we want to be a bit pedantic we could say the camera doesn't use AI and instead it is "enhanced by AI".
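To illustrate the "algorithm created by an AI" point: inference with frozen weights is just fixed arithmetic, run the same way every time. A deliberately tiny sketch (the weights, labels, and feature names are all invented; a real scene detector is a neural network, not two dot products):

```python
# Frozen weights, as if produced offline by training.
# Nothing below ever updates them -- no learning happens on-device.
WEIGHTS = {
    "food":   {"warmth": 2.0, "sky": -1.0},
    "sunset": {"warmth": 1.5, "sky": 2.5},
}

def classify(features):
    # static inference: a weighted sum per class, then pick the max
    scores = {
        label: sum(w * features.get(f, 0.0) for f, w in ws.items())
        for label, ws in WEIGHTS.items()
    }
    return max(scores, key=scores.get)

print(classify({"warmth": 0.9, "sky": 0.1}))  # food
print(classify({"warmth": 0.2, "sky": 1.0}))  # sunset
```

Shipping new weights in a software update changes the behavior, but the device itself only ever replays the trained function.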


have it in the memory so they can sell your data and move what they want to move 😛

"this looks important" *grabs*

 

Also, so great of them locking down their phones too, now disabling repair of parts...


12 hours ago, StDragon said:

I really hope ECC is inherently part of this new paradigm.

There are many computers that reside on the CAN bus, but having the primary suffer a bit-flip that's involved in self-driving would be a literal *crash*.

It's part of their plan for gen 2 of the tech.

 

I'm just going to link the AnandTech live blog because it has all the details of this tech straight from Samsung.

 

https://www.anandtech.com/show/16905/hot-chips-2021-live-blog-new-tech-infineon-edgeq-samsung It's the last presentation of the three, so it's at the bottom.


5 hours ago, AlexGoesHigh said:

It's part of their plan for gen 2 of the tech.

 

I'm just going to link the AnandTech live blog because it has all the details of this tech straight from Samsung.

 

https://www.anandtech.com/show/16905/hot-chips-2021-live-blog-new-tech-infineon-edgeq-samsung It's the last presentation of the three, so it's at the bottom.

Thanks. Because it's embedded in the image, I'll just requote below for everyone else to see.

"For Aquabolt-XL, we disabled system ECC because the HBM device cannot generate system-specific ECC code for PIM-generated data.

For the next generation of PIM-enabled HBM, we expect to deploy on-die ECC. In this architecture, PIM logic can share the ECC encode/decode circuitry, and data can be protected without incurring additional latency or throughput loss."
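For reference, here's what ECC encode/decode circuitry has to compute, sketched as the classic Hamming(7,4) single-error-correcting code. Samsung's actual on-die SECDED scheme isn't public; this is just the textbook version, to show the kind of logic the PIM units would be sharing.

```python
def encode(d):
    # d: list of 4 data bits; compute 3 parity bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    # c: 7-bit codeword, possibly with one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # nonzero -> 1-based error position
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1         # correct the single flipped bit
    return [c[2], c[4], c[5], c[6]]  # recover the data bits

word = [1, 0, 1, 1]
code = encode(word)
code[4] ^= 1                         # simulate a bit flip in the array
assert decode(code) == word          # corrected transparently
```

Real SECDED adds an extra overall parity bit so double-bit errors are at least detected; the gen-2 claim in the quote is that this encode/decode stage sits on-die where the PIM logic can reuse it without an extra latency hop.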

 


14 minutes ago, StDragon said:

Thanks. Because it's embedded in the image, I'll just requote below for everyone else to see.

"For Aquabolt-XL, we disabled system ECC because the HBM device cannot generate system-specific ECC code for PIM-generated data.

For the next generation of PIM-enabled HBM, we expect to deploy on-die ECC. In this architecture, PIM logic can share the ECC encode/decode circuitry, and data can be protected without incurring additional latency or throughput loss."

 

Thanks, I had a quick look and didn't spot this.


16 minutes ago, NemesisPrime_691 said:

So, RAM mining when? 

Technically now. I know people that attempted to cache their spinners for chia mining with their RAM. Wasn't successful, but the fact that they tried it means it's already happening.

 

17 minutes ago, NemesisPrime_691 said:

Also, RAM scalping when?

A couple of decades ago (and as recently as just a few years ago): https://en.wikipedia.org/wiki/DRAM_price_fixing. Every DRAM manufacturer has been caught doing it, too. Scalpers can't compete when the manufacturers beat them to the price gouging, lol.


11 minutes ago, MageTank said:

Technically now. I know people that attempted to cache their spinners for chia mining with their RAM. Wasn't successful, but the fact that they tried it means it's already happening.

 

A couple of decades ago (and as recently as just a few years ago): https://en.wikipedia.org/wiki/DRAM_price_fixing. Every DRAM manufacturer has been caught doing it, too. Scalpers can't compete when the manufacturers beat them to the price gouging, lol.


There are over 11,380 cryptocurrencies. The ones with the most value are those based around scarcity (duh!). And one way of achieving that is setting obscene HW requirements, so those who can invest in HW can leverage getting in early. The ol' adage of "it takes money to make money" applies.

Chia mining is a circle-jerk fad that's barely profitable. Eventually the bottom will fall out and a glut of used storage will hit the market as everyone rushes to exit. Then a new form of crypto will take center stage; it could be ASIC-based, a type of CPU or GPU, RAM storage... whatever. The point is, it's a feedback loop where scarce hardware is seen as leverage for wealth, so a crypto gets based on it, causing further shortages. Lather, rinse, repeat.
 

