
SSD level cells and DRAM cache

FRD
Solved by mariushm:

SLC stores 1 bit in each memory cell. Think of it like charging a cell with electricity, or filling a cup with water: 0-50% full means the bit is 0, 50-100% means the bit is 1.

MLC stores 2 bits in each memory cell, so there are 4 levels: 0-25% means 00, 25-50% means 01, 50-75% means 10, 75-100% means 11.

TLC stores 3 bits in each memory cell, so now you have 8 levels, or steps of around 12.5%.

QLC stores 4 bits in each memory cell, so now you have 16 levels, or steps of around 6.25%.
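The level counts above follow directly from bits per cell: storing n bits means the cell must distinguish 2^n charge levels. A quick sketch of that arithmetic (Python purely for illustration):

```python
# Charge levels a NAND cell must distinguish for each cell type,
# and the resulting "step" as a percentage of full charge.

def cell_levels(bits_per_cell: int) -> int:
    # A cell storing n bits must distinguish 2**n charge levels.
    return 2 ** bits_per_cell

def step_percent(bits_per_cell: int) -> float:
    # Width of each charge window, as a percentage of full charge.
    return 100.0 / cell_levels(bits_per_cell)

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {cell_levels(bits)} levels, ~{step_percent(bits):g}% per step")
```

For QLC the charge windows are only ~6.25% wide, which is why placing the charge precisely takes so much longer.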

 

The more bits you store in a cell, the longer it takes for the charge inside the cell to settle into the right window.

So it takes very little time for the SSD controller to dump some charge into an SLC cell and get it to, say, above 70%, and that's good enough for 1 bit.

With MLC it takes a bit more care and attention, but it's still fast.

TLC is slower, and QLC is very slow at setting those bits properly.

So in order of write speed, you get SLC, then MLC, then TLC, then QLC.

 

Reading is much faster than writing; it's writing that's slow.

 

In order to make SSDs faster (when writing to them), each SSD controller configures some portion of the memory in pseudo-SLC mode, writing just one bit per memory cell instead of 3 or 4. Writing to this portion of memory is much faster, and this portion also retains its endurance for longer.

 

So for example, let's say there's 200 GB of free space on your SSD, and the SSD has TLC memory where each cell can hold 3 bits.

The SSD controller will silently take 100 GB of that space and mark it as pseudo-SLC, storing only 1 bit instead of 3 in each cell of that reserved area, obtaining around 33 GB of SLC write cache.

If you're copying something onto the SSD, the controller can now write very fast into this SLC area until the transfer is done or the cache fills up. When the SLC area is full, you may see the write speed drop, because the controller switches to writing directly to TLC memory, which is harder and takes longer for the bits to settle.

When the SSD is idle or the transfer is done, the controller will silently, in the background, start moving the data from the SLC area into the TLC (or QLC) memory for long-term storage, freeing up the pseudo-SLC area so you can write new things.

 

If you write 100 GB of those 200 GB of free space, there's no more free TLC memory, so the SSD controller will un-reserve a portion of that SLC area and shrink the pseudo-SLC write cache. For example, it may shrink the SLC cache from 40 GB to 30 GB, and those 10 GB of pseudo-SLC memory are converted back into 30 GB of TLC memory that can store data.
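The cache-sizing arithmetic in the example above can be sketched like this (a rough illustration; real controllers size and shrink the cache according to their own firmware policies):

```python
# Rough arithmetic for the pseudo-SLC cache example above (TLC, 3 bits/cell).
BITS_PER_TLC_CELL = 3

def pslc_cache_gb(reserved_tlc_gb: float) -> float:
    # Storing 1 bit per cell instead of 3 yields a third of the
    # reserved TLC capacity as SLC write cache.
    return reserved_tlc_gb / BITS_PER_TLC_CELL

def tlc_reclaimed_gb(pslc_gb_released: float) -> float:
    # Converting pseudo-SLC space back to TLC triples its capacity.
    return pslc_gb_released * BITS_PER_TLC_CELL

print(pslc_cache_gb(100))    # 100 GB of reserved TLC -> ~33 GB of SLC cache
print(tlc_reclaimed_gb(10))  # releasing 10 GB of pSLC -> 30 GB of TLC
```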

 

Now keep this in mind as it comes into play later.

 

With mechanical drives, the operating system can read any byte from the hard drive platters and send a command to overwrite that byte with other information.

The data is organized as if everything were one huge track containing sectors, where each sector holds a fixed amount of data, usually 512 or 4096 bytes.

So the operating system can tell a mechanical drive something like "go to track 500, sector 32100 and overwrite byte 300 with the letter A", and the hard drive knows track 500 is on the first platter, on the second side, so the 2nd read/write head must be used; then it knows how far to spin the platter to put sector 32100 under the write head and overwrite that byte.

 

Flash memory doesn't work like that; it's built around a compromise. To reduce the amount of silicon used and keep the chips cheap, Flash memory is designed so that you can read memory cells freely, but you can only write an individual cell once, not overwrite it. The erase circuitry cannot erase an individual cell; it can only erase a big chunk of memory cells at once.

 

The data is still arranged in "sectors" of 512 or 4096 bytes for backwards compatibility, but a bunch of these sectors are grouped together into an erase block that's typically 16-32 MB (with QLC maybe even more). You can write any of the 512- or 4096-byte sectors once, but to overwrite them you have to erase the whole 16-32 MB block. Once the block is erased, all the sectors inside it can be written again.
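A toy model of that write-once, erase-in-bulk behavior (the sizes here are tiny purely for illustration; real blocks hold thousands of sectors):

```python
# Toy model of a flash erase block: sectors are write-once until the
# entire block is erased. Sizes are tiny here purely for illustration.

class FlashBlock:
    def __init__(self, sectors: int):
        self.data = [None] * sectors     # None = erased, hence writable
        self.erase_count = 0             # erases are what wear cells out

    def write(self, sector: int, value: bytes) -> bool:
        if self.data[sector] is not None:
            return False                 # in-place overwrite is impossible
        self.data[sector] = value
        return True

    def erase(self) -> None:
        self.data = [None] * len(self.data)  # only the whole block at once
        self.erase_count += 1

block = FlashBlock(sectors=4)
assert block.write(0, b"A")       # first write to a sector succeeds
assert not block.write(0, b"B")   # overwriting the same sector is refused
block.erase()                     # the only way to make sector 0 writable again
assert block.write(0, b"B")
```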

 

This erase process is what damages the Flash memory: it weakens the cells, and each cell supports only a maximum number of erases before it becomes unreliable enough to be taken out of circulation.

SLC supports at least 10,000 erases, MLC used to support around 5,000-8,000, TLC starts from around 2,000 and up, and QLC supports fewer than 1,000 erases.

 

So the SSD controller tries as hard as possible to erase a block of data only as a last resort.

If the operating system sends that same command, "go to track 500, sector 32100 and overwrite byte 300 with the letter A", to the SSD controller, the controller figures out which 512- or 4096-byte sector holds that byte, reads that sector and edits the byte to hold the letter A, but it can't write the sector back in place, because doing so would require erasing the whole 16-32 MB block of memory.

So it finds a writable sector in another block, writes the information there, and makes a note in a special list that says something like: "if the operating system asks for the data at track 500, sector 32100, it's now in memory chip xyz, block 100, sector 50".
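That special list can be sketched as a simple mapping from logical sector to physical location; the chip/block/sector values below are the hypothetical ones from the example above:

```python
# Sketch of the "translation table": logical sector -> physical location.
# The chip/block/sector values are hypothetical, from the example above.

translation = {}   # logical sector number -> (chip, block, sector)

def remap_write(logical_sector, physical_location):
    # Instead of erasing a whole block to overwrite one sector, the
    # controller writes to a free sector elsewhere and records where.
    translation[logical_sector] = physical_location

def lookup(logical_sector):
    return translation.get(logical_sector)

# OS "overwrites" sector 32100; the data actually lands somewhere else:
remap_write(32100, ("chip xyz", 100, 50))
print(lookup(32100))   # ('chip xyz', 100, 50)
```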

 

This same "translation table" is used to note where data is actually stored when the operating system wants to write to an area that's currently reserved for pseudo-SLC use: the SSD controller simply finds writable sectors in various blocks and adds records to the translation table to keep track of the data.

 

There's also a big chunk of flash memory that's hidden from you on purpose from the moment the SSD was sold to you.

Flash memory is manufactured using multiples of 1024, and memory is arranged into 512 or 4096 byte sectors, so everything revolves around powers of two.

So for example, a 2 TB SSD is shown to the operating system as around 1900 GB (2,000,000,000,000 bytes / 1024 / 1024 = 1,907,348 MB), but the flash memory chips actually hold 2048 GB of data.

The SSD controller uses that difference of 100 GB or so of flash memory as a reserve to extend the life of the SSD (when a 16-32 MB chunk has been erased too many times and becomes unreliable, it's replaced with a chunk from this hidden area), and some portion is also used to store the translation table (a few GB).
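The capacity gap in that 2 TB example can be checked with a few lines of arithmetic; note that whether the reserve comes out as "100 GB or so" depends on counting decimal GB versus binary GiB:

```python
# The capacity gap in the 2 TB example: decimal advertised capacity
# versus a power-of-two amount of physical flash.

ADVERTISED_BYTES = 2_000_000_000_000   # what "2 TB" means on the box
RAW_BYTES = 2048 * 1024**3             # 2048 GiB of physical flash

visible_mib = ADVERTISED_BYTES / 1024**2             # what the OS reports
reserve_gib = (RAW_BYTES - ADVERTISED_BYTES) / 1024**3

print(f"OS sees about {visible_mib:,.0f} MiB")       # ~1,907,349 MiB
print(f"~{reserve_gib:.0f} GiB kept in reserve")     # ~185 GiB
```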

 

This translation table can take up a lot of space; it can be more than 1 GB for 1 TB of flash memory.
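That 1 GB-per-1 TB figure is plausible from simple arithmetic: one mapping entry per 4096-byte sector. The 4-byte entry size below is an assumption for illustration; real entry formats vary by controller.

```python
# Why the table gets big: one entry per 4096-byte sector.
# ENTRY_BYTES = 4 is an assumption; real formats vary by controller.

FLASH_BYTES = 1024**4          # 1 TiB of flash
SECTOR_BYTES = 4096
ENTRY_BYTES = 4                # assumed size of one mapping entry

entries = FLASH_BYTES // SECTOR_BYTES   # sectors that need tracking
table_bytes = entries * ENTRY_BYTES

print(entries)                # 268435456 sectors to track
print(table_bytes / 1024**3)  # 1.0 -> about 1 GiB of table per 1 TiB of flash
```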

So this is where "DRAM-less" versus "SSD with DRAM" comes into play.

If the drive has no DRAM, the translation table is physically stored on the SSD itself, in a portion of flash memory hidden from you.

Each time you request some data, the SSD controller has to look up in the translation table where the data is actually located, then go to those memory chips and retrieve it.

If you write something to the SSD, then after every sector written, the SSD also has to make a second write to update the translation table with the new information.

 

If the SSD has a DRAM cache, the controller can copy that 1 GB (or however big) translation table into RAM and read and update it much faster than by reading and writing flash memory. It still has to back the table up to flash, because RAM loses its contents when power is off, but the controller can schedule backups and write batches of changes in one shot: for example, a burst of writes to flash every 3-5 seconds or so, or when the SSD detects power loss (computer shutting down).
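A hypothetical sketch of that batching behavior (the interval and data structures are made up for illustration; real controllers follow their own flush policies):

```python
# Hypothetical sketch of batching table updates in DRAM and flushing
# them to flash periodically, instead of one flash write per update.

import time

class MappingCache:
    def __init__(self, flush_interval: float = 3.0):
        self.dirty = {}        # pending updates, held in fast DRAM
        self.flushed = {}      # stand-in for the copy persisted to flash
        self.flush_interval = flush_interval
        self.last_flush = time.monotonic()

    def update(self, logical, physical):
        self.dirty[logical] = physical   # DRAM-speed, no flash write yet
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()                 # periodic batched write-back

    def flush(self):
        self.flushed.update(self.dirty)  # one burst instead of many writes
        self.dirty.clear()
        self.last_flush = time.monotonic()

cache = MappingCache()
cache.update(32100, ("chip xyz", 100, 50))
cache.flush()   # forced flush, e.g. on idle or a power-loss warning
```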

 

So basically, having DRAM on the SSD means that if you use applications that read from lots of places or write lots of small chunks of data, or use the SSD intensively, there's a performance increase: the controller spends less time looking up where the data is physically located in the flash chips when the translation table is cached in RAM than when the table stays in flash memory.

After locating the data, the controller still has to actually read it from the flash memory chips.

 

There are also NVMe SSDs that can reserve a portion of the computer's RAM for caching this translation table (the Host Memory Buffer feature). For example, an SSD can reserve 64-128 MB of regular RAM and cache portions of the translation table there to speed things up.

 

 

 

I would like to know a bit more about SSD cell levels and also the DRAM cache of a certain SSD.

So SSDs can be SLC, MLC, TLC or QLC. I know what they stand for. These days SLC and MLC drives aren't that common; most common are TLC and QLC, of which TLC is the better one. QLC drives are usually slower since they can only write a certain amount before their speed drops (having no cache left). I hope I understood that right.

As for TLC, I think it's the best and most mainstream type of cell level to get these days.

My first and old SSD was SLC, I believe. My question is: are SLC and MLC drives good? Or are more bits per cell better in this case (triple-level), but not quad-level? I'm not looking to buy an SLC or MLC drive, by the way.

 

Then about the DRAM cache, specifically on the Samsung 980 (standard, non-Pro).
I often see people saying: don't buy it, since it has no DRAM cache. It does, however, have an SLC cache. How does that work then? When will it be full and the drive become slower?

Also, the cell level is eTLC. What is the "e" exactly there?


SLC is probably best for reliability imo.

The SLC cache acts as actual "media" for storing data (images, music, apps, all that good stuff), and the DRAM cache acts as more or less a lookup table for the controller.

 

 



32 minutes ago, FRD said:

It does however it does have a SLC cache. How does this work then? When will it be full and become slower in speed?

The controller writes a single level (one bit) to the entire QLC cell; as the drive fills up, the SLC cache diminishes.



23 minutes ago, mariushm said:

SLC  stores 1 bit in each memory cell. Think of it like charging a cell with electricity,  or filling a cup with water.  0..50% full means bit is set to 0, 50..100% means bit is set to 1.


Wow, I couldn't ask for a better answer. Thanks for the extensive explanation with all the examples as well.
