
SK Hynix reports record profits despite DDR4 "shortages"

Nocte
On 1/25/2018 at 2:10 PM, leadeater said:

Rambus DRAM users club wooo, PS2 and N64 had it too :).

A shame the latency is said to be atrocious. Many of the complaints I've heard about developing for the PS2 seem to revolve around memory (VRAM and main) in some way or another. The memory also had that odd quirk of requiring a dummy CRIMM if there was a free slot open, or in the case of the N64, the Jumper Pak. Though to be fair, RDRAM achieved comparable bandwidth to DDR over just a 16-bit bus (vs 64-bit for DDR1?), simplifying board designs.
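For a rough sense of that tradeoff, here's a back-of-the-envelope peak-bandwidth comparison; the PC800 RDRAM and DDR-266 figures below are assumed ballpark specs, not numbers from this thread:

```python
# Peak bandwidth = bus width (bytes) * transfer rate (transfers/s).
# PC800 RDRAM: 16-bit channel at 800 MT/s; DDR-266 (PC2100): 64-bit at 266 MT/s.
rdram_bw = (16 / 8) * 800e6    # 1.6e9 bytes/s
ddr266_bw = (64 / 8) * 266e6   # ~2.13e9 bytes/s

print(f"PC800 RDRAM: {rdram_bw / 1e9:.2f} GB/s over a 16-bit bus")
print(f"DDR-266:     {ddr266_bw / 1e9:.2f} GB/s over a 64-bit bus")
```

So a single narrow RDRAM channel lands in the same ballpark as early 64-bit DDR, which is why the narrower traces simplified board routing.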

 

The PS3 also had Rambus XDR DRAM, the successor to RDRAM. XDR2 DRAM was also developed, though I'm not aware of anything that ever made use of it.

My eyes see the past…

My camera lens sees the present…


3 hours ago, Bit_Guardian said:

The phones use more dies than ever. The larger sticks use more dies too. Ever seen a 128GB DIMM and compared it to an 8GB DIMM of the same generation? Those 128 and now 256GB DIMMs are going into all those new cloud instances. It's how they can configure a uniform datacenter into clusters on the fly.

Well that's not exactly new; all DIMMs of the largest capacity of their time used the maximum number of memory chips.

 

Here's a whopping 256MB xD

[image: 256MB Compaq ProLiant DIMM]

Double sided btw.

 

Which goes in one of these

[image: memory expansion board]

16 slots

 

Which goes in one of these

[image: Compaq ProLiant server]

That flap in the top left is for the hot-swap PCI-X slots; each slot had a button you pressed to power it off so you could swap cards out while the system was running. The ML570 G1 was great, freakin' heavy though.

 


35 minutes ago, leadeater said:

Well that's not exactly new; all DIMMs of the largest capacity of their time used the maximum number of memory chips.

 

Here's a whopping 256MB xD

[image: 256MB Compaq ProLiant DIMM]

Double sided btw.

That must've been quite expensive to produce. 

 

Phones now tend to come with as much RAM as entry level laptops, if not more, and seem to experience more frequent jumps in RAM capacity (at least, on the Android side). I'm not surprised this contributes to increased demand, though is probably not the only reason.


1 hour ago, leadeater said:

Well that's not exactly new; all DIMMs of the largest capacity of their time used the maximum number of memory chips.

 

Here's a whopping 256MB xD

 

Double sided btw.

 

Which goes in one of these

 

16 slots

 

Which goes in one of these

 

That flap in the top left is for the hot-swap PCI-X slots; each slot had a button you pressed to power it off so you could swap cards out while the system was running. The ML570 G1 was great, freakin' heavy though.

 

Not remotely representative anymore, especially with new TSV DIMMs in production with 2-4x the number of chips stacked up along the walls in the same places.

 

http://www.samsung.com/semiconductor/dram/module/M386AAK40B40-CWD/

 

And no, depending on your config, you can fit 2 EPYC or even 4 Xeon Scalable CPUs with 32-48 DIMMs alongside 2 Tesla V100s or 16 NICs in a 1U rack. The demand in volume of chips has easily quadrupled thanks to Samsung's and Micron's innovations in this space (both offering 128GB 4-Hi stacked DDR4 DIMMs). I'm sorry, but you're grasping at straws here. Sure, overall demand for DIMMs may have only grown 14% in the last 5 years, but demand for chips has quadrupled. The shortage is far from artificial if we take that into account. It's not like Samsung, Hynix, and Micron have quadrupled their wafer processing in that same time span.
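If both of those figures hold, the growth factors multiply rather than add; a quick sketch (the 14% and 4x numbers are taken from the claim above, not independent data):

```python
# Chip demand = (DIMM units) x (chips per DIMM), so growth factors compound.
dimm_unit_growth = 1.14        # claimed DIMM-unit growth over 5 years
chips_per_dimm_growth = 4.0    # claimed growth in chips per DIMM (TSV stacking)

chip_demand_growth = dimm_unit_growth * chips_per_dimm_growth
print(f"Implied chip demand growth: {chip_demand_growth:.2f}x")  # 4.56x
```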

 

1 hour ago, Zodiark1593 said:

That must've been quite expensive to produce. 

 

Phones now tend to come with as much RAM as entry level laptops, if not more, and seem to experience more frequent jumps in RAM capacity (at least, on the Android side). I'm not surprised this contributes to increased demand, though is probably not the only reason.

Far less expensive than today's 128GB DIMMs (referenced in the link above).


4 hours ago, Bit_Guardian said:

Marketshare is only measured on the open market. I don't think you remember how many of its own products Intel sells using its own memory, a figure no one bothers to check against the market.

The only products with DRAM that Intel sells a lot of are their SSDs, and they just buy that DRAM, they don't make it themselves. For example, their recently launched 760p uses Micron DRAM. Unsurprising, given their cooperation on NAND manufacturing.


9 minutes ago, Sakkura said:

The only products with DRAM that Intel sells a lot of are their SSDs, and they just buy that DRAM, they don't make it themselves. For example, their recently launched 760p uses Micron DRAM. Unsurprising, given their cooperation on NAND manufacturing.

In the case of NAND, the controllers have always come from Micron since the flash and controllers come part & parcel. Intel still makes all of its own for the Stratix products, Curie, Edison, etc.. The MCDRAM for Xeon Phi is in fact an Intel-custom HMC design built in-house. Their 3DXP is also in-house along with the DRAM for its controller, which is why Micron hasn't yet launched its own. The two have also broken off their partnership moving forward.


1 minute ago, Bit_Guardian said:

In the case of NAND, the controllers have always come from Micron since the flash and controllers come part & parcel. Intel still makes all of its own for the Stratix products, Curie, Edison, etc.. The MCDRAM for Xeon Phi is in fact an Intel-custom HMC design built in-house. Their 3DXP is also in-house along with the DRAM for its controller, which is why Micron hasn't yet launched its own. The two have also broken off their partnership moving forward.

HMC is a fairly niche product, and 3DXP is not DRAM at all (plus Optane is still a niche product as well).


4 minutes ago, Sakkura said:

HMC is a fairly niche product, and 3DXP is not DRAM at all (plus Optane is still a niche product as well).

Not really. It's at the heart of Z13 and Z14 mainframes, as well as Oracle/Fujitsu mainframes. I think there's close to a Terabyte in each flagship model right now.

 

I didn't say 3DXP is DRAM. It does have a DRAM cache though for the dedicated SSDs. And Optane is definitely not niche. AWS EC2 Gen 5 instances optimised for storage and databases are backed by it at the caching layer.


49 minutes ago, Bit_Guardian said:

Not remotely representative anymore, especially with new TSV DIMMS in production with 2-4x the number of chips stacked up along the walls in the same places.

 

http://www.samsung.com/semiconductor/dram/module/M386AAK40B40-CWD/

That has the same number of chips lol. 18 per side on both or 36 in total.

 

Quote

To get all the chips into the DIMM format Samsung uses TSV interconnects on the DRAMs.  The module’s 36 DRAM packages each contain four 8Gb (1GB) chips, resulting in 144 DRAM chips squeezed into a standard DIMM format.  Each package also includes a data buffer chip, making the stack very closely resemble either the High-Bandwidth Memory (HBM) or the Hybrid Memory Cube (HMC).

Each package is more complex than RAM from yesteryear, but that's not the issue: the same number of packages is being used, and producing those is not wildly different from back then. Package density has gone up, not the number of memory packages per DIMM.
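The quoted numbers can be sanity-checked with simple arithmetic; the 8-of-9 data-to-ECC device ratio below is my assumption for a standard ECC LRDIMM, not something stated in the quote:

```python
packages = 36          # 18 packages per side, per the quote
dies_per_package = 4   # 4-Hi TSV stack
gb_per_die = 1         # each die is 8Gb = 1GB

total_dies = packages * dies_per_package     # 144 DRAM dies, matching the quote
raw_capacity_gb = total_dies * gb_per_die    # 144GB of raw DRAM
usable_gb = raw_capacity_gb * 8 // 9         # assuming 1 of every 9 devices is ECC

print(total_dies, usable_gb)  # 144 128
```

That lines up with a 128GB module built from 36 visible packages, exactly as many packages as older high-capacity DIMMs carried.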

 

Not everything is an attack on what you are saying; the number of packages used on DIMMs has always been high for servers and is not new. The number of DIMMs used per server has gone up, a lot.

 

49 minutes ago, Bit_Guardian said:

Far less expensive than today's 128GB DIMMs (referenced in the link above).

Those old DIMMs were much more expensive; they were around the $2k-$3k mark in the 90s.

 

49 minutes ago, Bit_Guardian said:

And no, depending on your config, you can fit 2 EPYC or even 4 Xeon Scalable in a 1U with 32-48DIMMs alongside 2 Tesla V100s or 16 NICs in a 1U rack. The demand in volume of chips has easily quadrupled thanks to Samsung's and Micron's innovations in this space (both offering 128GB 4-Hi stacked DDR4 DIMMs). I'm sorry but you're grasping at straws here.

The DL760 G1 8-socket Xeon was a thing back then too you know, and it wasn't uncommon to have 4 RAID cards in it with 1 to 4 DIMMs on them for write-back cache, as well as quad-port NICs.

 

I think you're being overly defensive; some of us just appreciate tech and how it's progressed over time, and have actually used it. I actually had that ML570 G1 to play around with, which I got for free through my father's work at the time; it was going in the bin anyway.

 

What's driving up the demand for memory is the increase in the total number of devices: how many cellphones existed in the 90s versus how many exist today. Cars now have ECUs in them and those require memory; our old 1990 Civic had none of that. Computer memory demand by itself at the current production rate would mean we'd be overflowing in DIMMs and burning them for warmth lol. Throw some RAM on the fire, it's getting cold.


1 minute ago, leadeater said:

That has the same number of chips lol. 18 per side on both or 36 in total.

 

Each package is more complex than RAM from yesteryear, but that's not the issue: the same number of packages is being used, and producing those is not wildly different from back then. Package density has gone up, not the number of memory packages per DIMM.

 

-a point we agree on-

 

Those old DIMMs were much more expensive; they were around the $2k-$3k mark in the 90s.

-everything else we basically agree on-

No, 144 in total. There are now 4 chips per visible package on that 128GB DIMM. The packages are each 4 chips stacked now.

 

Yes, it's wildly different thanks to TSV usage, as well as the new load-reducing tech being used vs. the very simple register and buffer tech of yesteryear.

 

They're 4-6 grand now.


2 minutes ago, Bit_Guardian said:

No, 144 in total. There are now 4 chips per visible package on that 128GB DIMM. The packages are each 4 chips stacked now.

Same number of packages, stacked dies, fabricated in a single step. The point is how many packages can be produced and placed onto a DIMM to create a product; 1 package = 1 package, and it was just as hard back then to create the package as it is now. Technology yay.

 

5 minutes ago, Bit_Guardian said:

They're 4-6 grand now.

That's off Google, and you forgot about inflation, didn't you? Our buy price for that would be around $1.6k-2k.


5 minutes ago, leadeater said:

Same number of packages, stacked dies, fabricated in a single step. The point is how many packages can be produced and placed on to a DIMM to create a product, 1 package = 1 package and it was just as hard back then to create the package as it is now. Technology yay.

 

That's off Google, and you forgot about inflation, didn't you? Our buy price for that would be around $1.6k-2k.

Not fabbed in a single step! It's just like HBM/2/3 production. 4 chips get fabbed on at least 1 wafer, a separate process stacks them together, a separate process adheres them to an interposer, and a separate step adheres that to the DIMM PCB. Let alone testing between each step.

 

No, way easier back then. Manufacturing difficulty (not complexity, which always goes up) has gone up, not stayed level.

 

No, you don't have the scale to get that kind of discount. Cray, HPE, and Dell maybe, but not you.


39 minutes ago, Bit_Guardian said:

No, you don't have the scale to get that kind of discount. Cray, HPE, and Dell maybe, but not you.

Yea we do, no one at all pays list price for anything. If you pay list you're a fool. Stop being so arrogant especially when you have no idea at all what we pay and could not possibly know. We buy HPE servers so you just confirmed that I could acquire the product at that indicated price anyway.

 

39 minutes ago, Bit_Guardian said:

Not fabbed in a single step! It's just like HBM/2/3 production. 4 chips get fabbed on at least 1 wafer, and a separate process stacks them together, and a separate process adheres them to an interposer, and a separate step adheres that to the DIMM PCB.

Sorry, that was badly worded; I meant the creation of the package. Stacked dies imply that you have to take multiple from a wafer and stack them; that is the step I was referring to.

 

Process nodes were much bigger for SDRAM, and defect rates were also higher back then, so the package cost was not cheap at all. More steps are involved now and more dies are used per package, but that doesn't make it more difficult than what they had to contend with back then. Equipment, processes, controls, and designs have all gotten better over time, which counteracts the increase in product complexity and keeps the difficulty from rising to unsustainable levels.

 

Computer part prices in the 90s were wildly expensive; everyone knows that, it's common knowledge. It was even worse in the 80s. You're saying the largest possible memory module of today costs more than back then; that cannot be true.


13 minutes ago, leadeater said:

Yea we do, no one at all pays list price for anything. If you pay list you're a fool. Stop being so arrogant especially when you have no idea at all what we pay and could not possibly know. We buy HPE servers so you just confirmed that I could acquire the product at that indicated price anyway.

 

Sorry, that was badly worded; I meant the creation of the package. Stacked dies imply that you have to take multiple from a wafer and stack them; that is the step I was referring to.

 

Process nodes were much bigger for SDRAM, and defect rates were also higher back then, so the package cost was not cheap at all. More steps are involved now and more dies are used per package, but that doesn't make it more difficult than what they had to contend with back then. Equipment, processes, controls, and designs have all gotten better over time, which counteracts the increase in product complexity and keeps the difficulty from rising to unsustainable levels.

 

Computer part prices in the 90s were wildly expensive; everyone knows that, it's common knowledge. It was even worse in the 80s. You're saying the largest possible memory module of today costs more than back then; that cannot be true.

You acquire it with whatever HPE's markup is. I'm not arrogant at all. I've scorched 30% off my company's AWS costs in this last year by overhauling their supply chain economics and scaling. It doesn't take a genius to know that this is an economy of scale cradle to grave. You don't have the scale. You're based in New Zealand for crying out loud.

 

And the chip sizes themselves have remained mostly static. No, defect rates once mass production starts are pretty much in line with the last generation, not miles ahead. No one goes full production scale without a die-killing defect rate lower than 10%.

 

Yes it does, more steps = more points of failure via more sources of error, which compound multiplicatively, not linearly.
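That compounding can be made concrete; the per-step yields below are invented for illustration, not real fab numbers:

```python
# Overall yield is the product of per-step yields, so every added step
# multiplies in another source of loss rather than adding one.
step_yields = {
    "wafer fab": 0.99,
    "die stacking": 0.98,
    "interposer attach": 0.97,
    "DIMM assembly": 0.99,
}

overall = 1.0
for step, y in step_yields.items():
    overall *= y

print(f"Overall yield: {overall:.3f}")  # ~0.932, i.e. ~7% of units lost
```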

 

Nope, my dad built a couple family systems for $1000-1500 in my early years. Components were certainly not wildly expensive even after you account for inflation. Of course, the economies in the Oceania region being the unstable laughing stocks they are may have made your mileage vary.

 

Hah! It is true. Attempt to prove otherwise. I've plenty of material from Samsung proving you're full of it. Then again, let's just look at HBM vs. GDDR5 economics in terms of both density and performance. You're out of your mind.


14 minutes ago, Bit_Guardian said:

You acquire it with whatever HPE's markup is. I'm not arrogant at all. I've scorched 30% off my company's AWS costs in this last year by overhauling their supply chain economics and scaling. It doesn't take a genius to know that this is an economy of scale cradle to grave. You don't have the scale. You're based in New Zealand for crying out loud.

Supply chain understanding fail lol. It doesn't matter what country I'm in, HPE cost is the same then add markup, shipping and local taxes. Customer cloud cost saving has nothing to do with this, cool story?

 

14 minutes ago, Bit_Guardian said:

Nope, my dad built a couple family systems for $1000-1500 in my early years. Components were certainly not wildly expensive even after you account for inflation. Of course, the economies in the Oceania region being the unstable laughing stocks they are may have made your mileage vary.

Why are you using a personal home computer in a server part discussion?

 

[image: historical RAM price chart]

You're welcome to do the math.

1990 or 1995 256MB price vs 2017 256GB price.
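Taking up that invitation with assumed ballpark figures (the ~$30/MB 1995 spot price and the CPI multiplier are my rough assumptions, not values read off the chart):

```python
price_per_mb_1995 = 30.0     # assumed ~US$30/MB for DRAM circa 1995
cpi_1995_to_2017 = 1.6       # rough US CPI multiplier from 1995 to 2017

cost_256mb_1995 = 256 * price_per_mb_1995            # $7,680 in 1995 dollars
cost_2017_dollars = cost_256mb_1995 * cpi_1995_to_2017

print(f"256MB in 1995: ${cost_256mb_1995:,.0f} "
      f"(~${cost_2017_dollars:,.0f} in 2017 dollars)")
```

Under those assumptions, a maxed-out 256MB module in 1995 lands above the list price of a 2017 high-capacity server DIMM even before the inflation adjustment.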

 

You can stop replying to me now, not interested anymore because you have no idea what you are talking about and never touched or used any of the server parts I'm talking about. Massive thread derail and utter discussion point ignoring.


28 minutes ago, Bit_Guardian said:

Of course, the economies in the Oceania region being the unstable laughing stocks they are may have made your mileage vary.

 

Quote

New Zealand’s strong commitment to economic freedom has resulted in a policy framework that encourages impressive economic resilience. Openness to global trade and investment are firmly institutionalized. The financial system has remained stable, and prudent regulations allowed banks to withstand the past global financial turmoil with little disruption.

 

Other institutional strengths of the Kiwi economy include relatively sound management of public finance, a high degree of monetary stability, and strong protection of property rights. The government continues to maintain a tight rein on spending, keeping public debt under control and sustaining overall fiscal health. A transparent and stable business climate makes New Zealand one of the world’s friendliest environments for entrepreneurs.

https://www.heritage.org/index/country/newzealand

 

Quote

We’re ranked by the World Bank as the easiest place in the world to start a business (2015) and the world’s second easiest country to do business in generally.

 

The Heritage Foundation rated New Zealand the world’s third freest economy in its 2015 Index of Economic Freedom, just behind Hong Kong and Singapore.

There are few restrictions on establishing, owning and operating a business here. In fact, by using the government’s online portals the official paperwork to set up a business can be completed in a matter of hours.

 

New Zealand came in third in Forbes’ ‘Best Country for Business’ report, (December 2014) just behind Denmark and Hong Kong.

Forbes commented that “Over the past 20 years the government has transformed New Zealand from an agrarian economy dependent on concessionary British market access to a more industrialized, free market economy that can compete globally. This dynamic growth has boosted real incomes and broadened and deepened the technological capabilities of the industrial sector.”

https://www.newzealandnow.govt.nz/investing-in-nz/economic-overview

 

You know, just to be a bit patriotic and defend my country from unfounded, uneducated assertions.


31 minutes ago, leadeater said:

Supply chain understanding fail lol. It doesn't matter what country I'm in, HPE cost is the same then add markup, shipping and local taxes. Customer cloud cost saving has nothing to do with this, cool story?

 

Why are you using a personal home computer in a server part discussion?

 

 

You're welcome to do the math.

1990 or 1995 256MB price vs 2017 256GB price.

 

You can stop replying to me now, not interested anymore because you have no idea what you are talking about and never touched or used any of the server parts I'm talking about. Massive thread derail and utter discussion point ignoring.

Strawman. Scaling, please. When you take into account the memory requirements of operating systems and apps going back that far, the actual system cost is pretty much level for comparable requirements.

 

And no, HPE prices differently country to country, much the way Apple does, even after accounting for taxes and shipping costs.

 

12 minutes ago, leadeater said:

 

https://www.heritage.org/index/country/newzealand

 

https://www.newzealandnow.govt.nz/investing-in-nz/economic-overview

 

You know just to be a bit patriotic and defend my country from unfounded uneducated assertions.

Utter hogwash. The cost of your food alone is double what it is in Australia. The only thing you have going for you is property prices in urban areas.


47 minutes ago, leadeater said:

[image: historical RAM price chart]

You're welcome to do the math.

1990 or 1995 256MB price vs 2017 256GB price.

Not much change from 2013 till now, though now the cost is worse.

 

Also, increased RAM requirements from the OS are not good; in the case of Android and many OEM PCs/laptops, it's the bloatware that increasingly needs RAM.


10 minutes ago, System Error Message said:

Not much change from 2013 till now, though now the cost is worse.

 

Also, increased RAM requirements from the OS are not good; in the case of Android and many OEM PCs/laptops, it's the bloatware that increasingly needs RAM.

And if you look at system requirements scaling, system cost is still DOWN vs. 2012 after accounting for inflation. However, RAM prices are going up now thanks to the shortage.


1 hour ago, leadeater said:

[image: historical RAM price chart]

You're welcome to do the math.

1990 or 1995 256MB price vs 2017 256GB price.

Wait. It would have cost around $2K originally for the 2GB of RAM (PC133 SDRAM, all sticks matched, with sequential serial numbers) that came with my Abit VP6?


16 minutes ago, Bit_Guardian said:

Strawman. Scaling please. When you take into account memory requirements of operating systems and apps going back that far, the actual system cost is pretty much level for comparative requirements.

The issue with that statement is that even where I live, servers used those memory modules at the time, and it still doesn't change the fact that they existed; the conversation at hand is the cost of the largest possible module between now and then. RAM prices, and server prices in general, are much cheaper now than in the 90s. That's not up for debate, it's fact; there's a wealth of articles over the years saying this and more graphs and statistics than you can shake a stick at that show it.

 

The servers at the time had either 128MB or 256MB modules, with a maximum of 4GB supported per system. You didn't have as many size options back then, and you didn't need them: there was no such thing as dual-channel memory, you just added RAM till you got what you needed, no having to make sure you install in sets of 4, etc.

 

We can even go back a bit further to the Compaq servers that used 256MB EDO memory modules; that was a real thing too.

[image: 256MB EDO memory module]

 

 

I really don't have time to teach you the history of servers, and I shouldn't need to; you can research this for yourself. The difference is I actually got to use these servers after they were decommissioned.

 

And why does it even matter? Why does it matter if these 256GB modules cost more than the best in the 90s? Who the hell cares. It's still not true though.


24 minutes ago, Dabombinable said:

Wait. It would have cost around $2K originally for the 2GB of RAM (PC133 SDRAM, all sticks matched+with sequential serial numbers) that came with my Abit VP6?

This is probably a better graph for that time period; after 1995 RAM prices dropped like a rock. Careful though, it's megabits, not megabytes.
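Since the chart is priced per megabit, converting a reading to dollars per megabyte is just a factor of 8; the $0.90/Mbit sample value below is hypothetical, not taken from the chart:

```python
def price_per_mbyte(price_per_mbit: float) -> float:
    """Convert a $/Mbit chart reading to $/MB (8 bits per byte)."""
    return price_per_mbit * 8

# e.g. a hypothetical chart reading of $0.90/Mbit:
print(price_per_mbyte(0.90))  # 7.2
```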

 

[image: DRAM price-per-megabit index chart]

 

If only there was a zoomed version of this one.

 

[image: storage and memory prices chart, 2013]

 

 


3 hours ago, leadeater said:

[image: historical RAM price chart]

You're welcome to do the math.

1990 or 1995 256MB price vs 2017 256GB price.

I remember having to shell out $6M for a gig stick back in the day... oh wait..

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


1 minute ago, mr moose said:

I remember having to shell out $6M for a gig stick back in the day... oh wait..

Much deeper pockets than my parents ever had ;)


4 minutes ago, leadeater said:

Much deeper pockets than my parents ever had ;)

Well, it's not entirely true,  I mean, I only got $5M a week pocket money and truth be told I spent most of that on lollies.  So I probably never bought memory.

 


