Let's start at the hardware level. According to my knowledge of SSDs, they have a bunch of flash cells. These flash cells store data by trapping electrons between layers of insulation, which will either allow (empty) or block (full) current going through the cell. Every cell has a read transistor to try to pass current through the cell to take the reading, and a program transistor to connect it to the onboard high-voltage generator that will force the electrons in and program it to zero. Forcing the electrons through the insulator slowly damages the cell and makes a few electrons impossible to remove, reducing the difference between programmed and erased until eventually the controller cannot tell the difference anymore. Each block also has an erase transistor which drains all the cells in the block, resetting them to 1. MLC, TLC, QLC, PLC, etc. differ by having more possible states that each cell can be in. This makes them cheaper because fewer cells are needed, but makes them wear out faster because with more states there is less difference between them, so it takes fewer writes until the controller can no longer tell the difference. SSDs being readable and programmable in pages of 4 KiB is a firmware limitation, with the tradeoff that it increases performance by reducing the number of read and write requests needed, while being erasable in blocks of 64 KiB is a hardware limitation: to save cost there is only one erase transistor per 64 KiB, and it affects the entire block. Is this correct so far? With this knowledge of how SSDs work, I fail to understand two things:

1. Why is it that SSDs with more bits per cell are slower, not faster, than SSDs with fewer bits per cell? You are reading far fewer cells, so the only way this makes sense is if it takes longer to read each cell, and the difference more than offsets the advantage of having to read fewer cells. Why is reading a cell with more bits slower?

2. I understand how you can divide an SLC, MLC, QLC, 8LC, etc. SSD into pages of 4 KiB and blocks of 64 KiB. SLC has 32,768 cells per page and 524,288 cells per block. MLC has 16,384 and 262,144, respectively. QLC has 8,192 and 131,072. However, if we apply the same logic to any xLC where x is not a factor of 32,768, we get fractions of cells per page and block. For example, TLC should have 10,922⅔ cells per page and 174,762⅔ cells per block. This does not make any sense at all. How do TLC, PLC, 6LC, 7LC, 9LC, etc. SSDs deal with this?
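To make my arithmetic concrete, here is a quick Python sketch of the model I'm assuming (every bit of a page mapped straight onto one group of cells; this is purely my own back-of-the-envelope model, not a claim about how real dies are organized):

```python
PAGE_BITS = 4 * 1024 * 8    # 32,768 bits in a 4 KiB page
BLOCK_BITS = 64 * 1024 * 8  # 524,288 bits in a 64 KiB block

for bits_per_cell in range(1, 10):
    page_cells = PAGE_BITS / bits_per_cell
    block_cells = BLOCK_BITS / bits_per_cell
    note = "" if page_cells.is_integer() else "  <- fractional!"
    print(f"{bits_per_cell} bit(s)/cell: {page_cells:>10.2f} cells/page, "
          f"{block_cells:>12.2f} cells/block{note}")
```

For 3, 5, 6, 7 and 9 bits per cell this prints fractional cell counts, which is exactly the part that makes no sense to me.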

Also, I understand that some SSDs have a DRAM cache to improve performance. What is the advantage of putting this on the SSD board rather than using the system DRAM? My knowledge tells me that Linux uses the system DRAM as a cache, both loading stuff that it anticipates being needed and buffering writes. If the RAM cache is on the SSD, it is a small chunk of RAM that can only be used for that purpose, while if we don't bother and just use the system RAM, we can use way more cache than what the SSD can offer, and we can use it for other purposes if we need it. Considering the price differences between DRAMless and DRAM cache SSDs, it seems like it would be better to spend the money on more system RAM instead.

Now, let's go up to the firmware level. When you write to an SSD, it copies the relevant block into working memory, makes the changes requested, writes it to a ready-to-go free block, and then marks the old block as free. The new block is now logically in the same location that the old block was, and the old block will be cleared the next time TRIM happens. This sounds a lot like how copy-on-write works. Should I use a copy-on-write filesystem like btrfs because it won't be fighting the SSD by trying to keep everything together, or should I not use it because it's an additional layer of keeping track of things that's just adding additional writes to do what's already being done? Is there a way to configure ext4 to stop actively trying to prevent fragmentation, or is it even worth doing that? Also, if there's basically no difference internally between sequential and random read/write, why is it that disk benchmark programs show a pretty significant difference for most models?

And finally, do I need LVM in order to use LUKS? What order does it go in? Is it filesystem on top of LVM on top of LUKS on top of hardware? And do I need to enable TRIM on the filesystem level, the LVM level (if used), or the LUKS level? I'm concerned about the filesystem trying to write a block of all ones, but having LVM and/or LUKS turn it into something other than all ones. If my SSD has hardware overprovisioning (240 GB usable of 256 GB hardware), do I even need to bother with TRIM at all?


Also, is it possible for a hard drive to produce an insanely loud, very high-pitched noise for about a minute, continue to work, and show nothing in SMART? The noise was loud enough to cause discomfort, a little higher in pitch than the CRT sound; I couldn't tell where it was coming from, and my teacher couldn't hear it. I was carrying my laptop (which is going to be upgraded to my SSD soon, but is still on an HDD) across a classroom when I started hearing that noise. A friend also heard it, approached me, and asked me if it was my computer. It went away after about a minute. Can hard drives do that, or was that something else?


22 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Also, I understand that some SSDs have a DRAM cache to improve performance. What is the advantage of putting this on the SSD board rather than using the system DRAM?

Local DRAM on an SSD is much faster and lower latency than system RAM, because the SATA or PCIe bus is slow.

 

22 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

My knowledge tells me that Linux uses the system DRAM as a cache, both loading stuff that it anticipates being needed and buffering writes. If the RAM cache is on the SSD, it is a small chunk of RAM that can only be used for that purpose, while if we don't bother and just use the system RAM, we can use way more cache than what the SSD can offer, and we can use it for other purposes if we need it. Considering the price differences between DRAMless and DRAM cache SSDs, it seems like it would be better to spend the money on more system RAM instead.

The RAM in an SSD is mostly for the FTL (flash translation layer), not a write or read cache.

 

23 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Now, let's go up to the firmware level. When you write to an SSD, it copies the relevant block into working memory, makes the changes requested, writes it to a ready-to-go free block, and then marks the old block as free. The new block is now logically in the same location that the old block was, and the old block will be cleared the next time TRIM happens. This sounds a lot like how copy-on-write works. Should I use a copy-on-write filesystem like btrfs because it won't be fighting the SSD by trying to keep everything together, or should I not use it because it's an additional layer of keeping track of things that's just adding additional writes to do what's already being done? Is there a way to configure ext4 to stop actively trying to prevent fragmentation, or is it even worth doing that? Also, if there's basically no difference internally between sequential and random read/write, why is it that disk benchmark programs show a pretty significant difference for most models?

CoW works fine on SSDs.

 

SSDs will change where a logical block physically lives so that writes are spread out, because some files are written much more than others.
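Something like this toy model, if it helps (a deliberately simplified sketch of the remapping idea; a real FTL is far more complex):

```python
# Toy flash translation layer: logical block addresses stay fixed
# for the OS while the physical location moves on every write.
class ToyFTL:
    def __init__(self, num_physical_blocks):
        self.mapping = {}                       # logical -> physical
        self.free = list(range(num_physical_blocks))
        self.dirty = []                         # stale blocks awaiting erase

    def write(self, logical_block, data):
        new_phys = self.free.pop(0)             # grab a ready-erased block
        old_phys = self.mapping.get(logical_block)
        if old_phys is not None:
            self.dirty.append(old_phys)         # old copy becomes stale
        self.mapping[logical_block] = new_phys  # same logical address, new home
        # (real hardware would program `data` into new_phys here)

ftl = ToyFTL(num_physical_blocks=8)
ftl.write(0, b"v1")
ftl.write(0, b"v2")   # same logical block lands in a different physical one
print(ftl.mapping)    # {0: 1}
print(ftl.dirty)      # [0] -- stale block waiting to be erased by TRIM/GC
```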

 

24 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

And finally, do I need LVM in order to use LUKS? What order does it go in? Is it filesystem on top of LVM on top of LUKS on top of hardware? And do I need to enable TRIM on the filesystem level, the LVM level (if used), or the LUKS level? I'm concerned about the filesystem trying to write a block of all ones, but having LVM and/or LUKS turn it into something other than all ones. If my SSD has hardware overprovisioning (240 GB usable of 256 GB hardware), do I even need to bother with TRIM at all?

Depends on your exact setup. Normally let your distro do it for you.

 

LUKS only allows one partition, so LVM is normally used to have multiple partitions inside LUKS, and it gives you things like easy expansion and snapshots.

 

25 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

If my SSD has hardware overprovisioning

They all have overprovisioning these days. A 240/250/256 GB SSD normally has 256 GiB of flash, so even a 256 GB SSD still has around 19 GB of extra space.
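The arithmetic, roughly (assuming exactly 256 GiB of raw NAND behind the decimal marketing capacity):

```python
# Marketing GB are decimal; the flash inside is binary GiB.
flash_bytes = 256 * 1024**3                  # 256 GiB of raw NAND
for advertised_gb in (240, 250, 256):
    usable_bytes = advertised_gb * 1000**3   # advertised decimal capacity
    spare = flash_bytes - usable_bytes
    print(f"{advertised_gb} GB model: ~{spare / 1000**3:.1f} GB spare "
          f"({spare / flash_bytes:.1%} of the flash)")
```

That prints about 34.9 GB spare for a 240 GB model, 24.9 GB for 250 GB, and 18.9 GB even for a 256 GB model.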

 

Might as well use TRIM if you can.

 

26 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Also, is it possible for a hard drive to produce an insanely loud, very high-pitched noise for about a minute, continue to work, and show nothing in SMART?

Depends on the cause. Was it coil whine? Seems like it.

 

An HDD will often fail with no SMART warning.

 

 

Link to post
Share on other sites

22 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

1. Why is it that SSDs with more bits per cell are slower, not faster, than SSDs with fewer bits per cell? You are reading far fewer cells, so the only way this makes sense is if it takes longer to read each cell, and the difference more than offsets the advantage of having to read fewer cells. Why is reading a cell with more bits slower?

With SLC, each cell has only two voltage levels -- high and low -- to indicate what data it contains, i.e. one or zero. With all the others, there are more than two voltage levels, and the more voltage levels there are, the smaller the difference between them and therefore the more precise any readings and writings must be. This is why it is all slower.
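To put rough numbers on it, here is a simplified sketch (my own illustrative model: a fixed usable voltage window sliced into 2^n states):

```python
# How the gap between adjacent charge states shrinks as bits per cell grow.
names = {1: "SLC", 2: "MLC", 3: "TLC", 4: "QLC"}
for bits in range(1, 5):
    levels = 2 ** bits          # distinct charge states per cell
    thresholds = levels - 1     # read thresholds separating the states
    print(f"{names[bits]}: {levels:>2} levels, {thresholds:>2} thresholds, "
          f"~{1 / thresholds:.0%} of the SLC level gap between states")
```

So QLC has to resolve sixteen states packed into the window SLC uses for two, with roughly 7% of the margin per state.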

 

27 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Also, I understand that some SSDs have a DRAM cache to improve performance. What is the advantage of putting this on the SSD board rather than using the system DRAM? My knowledge tells me that Linux uses the system DRAM as a cache, both loading stuff that it anticipates being needed and buffering writes.

All modern OSes use RAM as cache, but that doesn't make an SSD's own built-in cache useless. The OS can just write stuff to the SSD and go do something else, and the SSD will completely independently work with the stuff in its cache, all without needing to bother the OS. On the other hand, with cacheless SSDs, the OS has to keep constantly polling the SSD to determine whether it has yet finished writing, so the OS can send it more stuff to write, which slows the OS down.

 

30 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Should I use a copy-on-write filesystem like btrfs because it won't be fighting the SSD by trying to keep everything together, or should I not use it because it's an additional layer of keeping track of things that's just adding additional writes to do what's already being done?

No, you'll just use whatever filesystem you like. As long as the filesystem supports TRIM, there's no difference. The thing is, the SSD doesn't know or care what filesystem you are using; it only deals with raw data blocks, and the filesystem has zero control over the SSD controller's algorithms.



6 minutes ago, Electronics Wizardy said:

The RAM in an SSD is mostly for the FTL (flash translation layer), not a write or read cache.

Then what does the FTL use it for?

7 minutes ago, Electronics Wizardy said:

SSDs will change where a logical block physically lives so that writes are spread out, because some files are written much more than others.

I know they do that, but does that mean that I should configure filesystems not to try to avoid fragmentation?

 

8 minutes ago, Electronics Wizardy said:

Depends on your exact setup. Normally let your distro do it for you.

Well, I'm going to be using Arch, so I think I'm going to be doing it myself. I want an encrypted / partition, with /boot and the LUKS header on external storage.

 

10 minutes ago, Electronics Wizardy said:

Depends on the cause. Was it coil whine? Seems like it.

I really don't think it was coil whine. My laptop has never had coil whine, and as far as I know, the amount of coil whine that a device produces remains constant for its entire life. This suddenly started happening, lasted for about a minute, stopped completely, came back twice but less loudly and for less time, and then returned to being completely silent. Also, as far as I know, coil whine isn't particularly loud. This was loud enough that it made my ears uncomfortable and someone ten feet away also complained about it being so loud.

 

13 minutes ago, Electronics Wizardy said:

An HDD will often fail with no SMART warning.

It has not failed yet. A few months ago, it was making clicks every few hours during operation, but that went away.

 

9 minutes ago, WereCatf said:

the more precise any readings and writings must be. This is why it is all slower.

Why is more precise reading and writing so much slower?

 

11 minutes ago, WereCatf said:

you'll just use whatever filesystem you like

Well, then, I guess ext4 it is.


5 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Why is more precise reading and writing so much slower?

Voltage levels are real-world, analog quantities, so any reading you take and convert to a digital value ends up being an approximation. There's always some amount of jitter in the signal, and since we are talking about very small voltage differences here, the smaller the difference, the more time the ADC stage has to spend on getting a precise approximation.

 

Imagine having a lamp that has three stages: off, mid-brightness, and fully on. The lamp is a little wonky: even when you have it set at mid-brightness, it occasionally dips down to off for, say, 100 ms, and occasionally goes to full brightness for 100 ms, but most of the time it remains at mid-brightness. If you look at the lamp's status at the wrong moment, you might think you've set it to off or to full brightness, so you need to spend a little more time looking at it and then determine that, on average, it is actually at mid-brightness, even with those dips and pops.

 

I'm not going to explain in detail how, exactly, all the different kinds of ADCs function, but they do something similar to the lamp example: they look at the source voltage, i.e. the lamp, for a certain amount of time, approximate the voltage over that time, and output a digital value for the approximation. The longer you let the ADC spend on this approximation stage, the more representative of the input the output will be. If, on the other hand, you spend too little time, you may end up giving those jittery dips and pops too much weight and end up with incorrect output. Since we are talking about very small voltage differences in SSD flash, it's not such an easy thing to get precise readings at fast speeds.
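If you want to play with the idea, here's a hypothetical simulation of that wonky lamp: the longer you "look" (the more samples you average), the closer the reading gets to the true mid level:

```python
import random

TRUE_LEVEL = 0.5              # the lamp's real mid-brightness

def read(samples):
    """Average several glitchy samples, like an ADC integrating over time."""
    total = 0.0
    for _ in range(samples):
        r = random.random()
        if r < 0.05:
            total += 0.0      # occasional dip to "off"
        elif r < 0.10:
            total += 1.0      # occasional pop to "full brightness"
        else:
            total += TRUE_LEVEL
    return total / samples

random.seed(42)
for n in (1, 4, 16, 256):
    print(f"{n:>3} samples -> estimate {read(n):.3f}")
```

With one sample you sometimes read 0.0 or 1.0 outright; with 256 samples the estimate settles very close to 0.5.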



1 hour ago, WereCatf said:

Voltage levels are real-world, analog quantities, so any reading you take and convert to a digital value ends up being an approximation. There's always some amount of jitter in the signal, and since we are talking about very small voltage differences here, the smaller the difference, the more time the ADC stage has to spend on getting a precise approximation.

 

Imagine having a lamp that has three stages: off, mid-brightness, and fully on. The lamp is a little wonky: even when you have it set at mid-brightness, it occasionally dips down to off for, say, 100 ms, and occasionally goes to full brightness for 100 ms, but most of the time it remains at mid-brightness. If you look at the lamp's status at the wrong moment, you might think you've set it to off or to full brightness, so you need to spend a little more time looking at it and then determine that, on average, it is actually at mid-brightness, even with those dips and pops.

 

I'm not going to explain in detail how, exactly, all the different kinds of ADCs function, but they do something similar to the lamp example: they look at the source voltage, i.e. the lamp, for a certain amount of time, approximate the voltage over that time, and output a digital value for the approximation. The longer you let the ADC spend on this approximation stage, the more representative of the input the output will be. If, on the other hand, you spend too little time, you may end up giving those jittery dips and pops too much weight and end up with incorrect output. Since we are talking about very small voltage differences in SSD flash, it's not such an easy thing to get precise readings at fast speeds.

Got it, thanks.


2 hours ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Then what does the FTL use it for?

The FTL stores where the logical blocks are on the flash chips. It's basically the directory or filesystem of the SSD.

 

2 hours ago, Zm1TDkSnQkY4KEqskCARSBpk said:

I know they do that, but does that mean that I should configure filesystems not to try to avoid fragmentation?

 

Don't worry about fragmentation; SSDs don't really care.

 

2 hours ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Well, I'm going to be using Arch, so I think I'm going to be doing it myself. I want an encrypted / partition, with /boot and the LUKS header on external storage.

 

Then you might as well use LVM; no reason not to. Then make a swap volume, and a few more for other directories.

 

2 hours ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Well, then, I guess ext4 it is.

I'd do btrfs or ZFS personally, for snapshots and compression. Makes restoring files much easier.


42 minutes ago, Electronics Wizardy said:

The FTL stores where the logical blocks are on the flash chips. It's basically the directory or filesystem of the SSD.

Then why does it need its own big chunk of RAM? Don't we want that information to persist across shutdowns? Or was I correct in my original assumption that it's a cache?

 

43 minutes ago, Electronics Wizardy said:

Then you might as well use LVM; no reason not to. Then make a swap volume, and a few more for other directories.

Wouldn't there be less overhead if I put everything on one filesystem rather than a bunch of different ones? Also, is swap really necessary? I don't intend to hibernate, and I don't want to waste my write cycles so I'd rather run the low memory killer when I'm low on RAM. There seem to be some stability concerns with btrfs. However, zfs looks promising. I'm not sure if I'll ever actually use snapshots.


6 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Then why does it need its own big chunk of RAM? Don't we want that information to persist across shutdowns? Or was I correct in my original assumption that it's a cache?

It's about 1 GB per TB. It's not a cache, but the FTL. It is recreated on reboot.
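That rule of thumb falls out of the map itself. As a sketch, assuming the common design of roughly one 4-byte entry per 4 KiB logical page (details vary by controller):

```python
# Why ~1 GB of DRAM per 1 TB of flash: one map entry per 4 KiB page.
drive_bytes = 1 * 1000**4    # a 1 TB drive
page_bytes = 4 * 1024        # 4 KiB mapping granularity
entry_bytes = 4              # ~4 bytes to store one physical address

entries = drive_bytes // page_bytes
dram = entries * entry_bytes
print(f"{entries:,} map entries -> ~{dram / 1000**3:.2f} GB of DRAM")
# 244,140,625 map entries -> ~0.98 GB of DRAM
```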

 

7 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Wouldn't there be less overhead if I put everything on one filesystem rather than a bunch of different ones?

Depends on your goals. Sometimes people like /home to be separate so reinstalls keep home files. Some people like /tmp separate so log files don't stop the whole system.

 

8 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

There seem to be some stability concerns with btrfs.

Not really for single disk, and backups are there if something really goes wrong. I have used it a lot on many systems.

 

8 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

I don't want to waste my write cycles

Don't worry about write cycles; it won't be an issue.

 

8 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Also, is swap really necessary?

I would. It helps keep the system faster by swapping out stuff that isn't needed so the RAM can be used as disk cache, and it's better than hitting the out-of-memory killer.

 

9 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

I'm not sure if I'll ever actually use snapshots.

Might as well; there are no real downsides, and I will almost guarantee you will do something to a file that you will want to undo.


2 minutes ago, Electronics Wizardy said:

It's about 1 GB per TB. It's not a cache, but the FTL. It is recreated on reboot.

So the DRAM on the SSD is a logical-to-physical address map, and it gets backed up to the controller or somewhere on the main flash every so often, and when the SSD is told that a shutdown is expected, so it can be recalled on power-up?

 

4 minutes ago, Electronics Wizardy said:

Sometimes people like /home to be separate so reinstalls keep home files.

Can LVM give me "partitions" that automatically resize based on how much space is used? If that's the case, I think I'll put /home on a different one than / for that purpose.

 

8 minutes ago, Electronics Wizardy said:

Some people like /tmp separate so log files don't stop the whole system.

What is meant by log files stopping the system? I don't think I've had that problem before.

 

12 minutes ago, Electronics Wizardy said:

Not really for single disk, and backups are there if something really goes wrong. I have used it a lot on many systems.

OK, I guess I'll use btrfs. I back up to my school's infinite Google Drive using deja-dup. It's fun to imagine the admin scratching his head at what my folder with ~250 GB of 50 MB chunks of gibberish is.

 

9 minutes ago, Electronics Wizardy said:

I would. It helps keep the system faster by swapping out stuff that isn't needed so the RAM can be used as disk cache, and it's better than hitting the out-of-memory killer.

In my experience, switching to swapped-out stuff is painfully slow, often comparable to opening the thing from scratch, but that's on my mechanical hard drive. Is it better on an SSD? And how do I make sure that swapping doesn't burn through all of my writes in the first week?

 

10 minutes ago, Electronics Wizardy said:

Might as well; there are no real downsides, and I will almost guarantee you will do something to a file that you will want to undo.

How much space do snapshots take, and can I put them in a folder on the main filesystem or do they need to go on a separate logical volume? And if I use btrfs+lvm, should I use the filesystem's snapshots or lvm's snapshots?


12 hours ago, Zm1TDkSnQkY4KEqskCARSBpk said:

How much space do snapshots take, and can I put them in a folder on the main filesystem or do they need to go on a separate logical volume? And if I use btrfs+lvm, should I use the filesystem's snapshots or lvm's snapshots?

It's basically the space taken up by the changes. If you make a snapshot and then only add files or don't change anything, the snapshot will take no space. But if you delete a 1 GB file, the snapshot will now be 1 GB bigger.
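As a toy model of the accounting (hypothetical file names and sizes):

```python
# Toy CoW snapshot accounting: the snapshot only "owns" data
# that the live filesystem no longer references.
live = {"photo.jpg": 1.0, "notes.txt": 0.1}   # file -> size in GB
snapshot = dict(live)                          # snapshot taken here

live["newfile.iso"] = 2.0    # adding a file: nothing unique to the snapshot
del live["photo.jpg"]        # deleting 1 GB: the snapshot now owns that data

snapshot_only = {f: gb for f, gb in snapshot.items() if f not in live}
print(f"space charged to the snapshot: {sum(snapshot_only.values())} GB")
# -> space charged to the snapshot: 1.0 GB
```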

 

12 hours ago, Zm1TDkSnQkY4KEqskCARSBpk said:

In my experience, switching to swapped-out stuff is painfully slow, often comparable to opening the thing from scratch, but that's on my mechanical hard drive. Is it better on an SSD? And how do I make sure that swapping doesn't burn through all of my writes in the first week?

Swap on an SSD is pretty fast, normally. With desktop Linux you really don't notice swap, and it doesn't really cause any issues.

 

Don't worry about writes; it won't cause an issue. The SSD can handle way more writes than you will ever put on your drive.

 

12 hours ago, Zm1TDkSnQkY4KEqskCARSBpk said:

What is meant by log files stopping the system? I don't think I've had that problem before.

If you have too many log files (rare, but it happens) you can end up in a situation where there is no free disk space, and stuff just stops working when there is no free disk space.

 

12 hours ago, Zm1TDkSnQkY4KEqskCARSBpk said:

Can LVM give me "partitions" that automatically resize based on how much space is used? If that's the case, I think I'll put /home on a different one than / for that purpose.

LVM can't auto-resize on its own.

 

12 hours ago, Zm1TDkSnQkY4KEqskCARSBpk said:

So the DRAM on the SSD is a logical-to-physical address map, and it gets backed up to the controller or somewhere on the main flash every so often, and when the SSD is told that a shutdown is expected, so it can be recalled on power-up?

Yup basically.


10 hours ago, Electronics Wizardy said:

If you make a snapshot and then only add files or don't change anything, the snapshot will take no space. But if you delete a 1 GB file, the snapshot will now be 1 GB bigger.

So why doesn't adding a file take up space on a snapshot? And what about where they need to go and which snapshots I should use?

 

10 hours ago, Electronics Wizardy said:

If you have too many log files (rare, but it happens) you can end up in a situation where there is no free disk space, and stuff just stops working when there is no free disk space.

That has never been a problem for me, but I'll keep it in mind the next time I run out of disk space.

 

10 hours ago, Electronics Wizardy said:

LVM can't auto-resize on its own.

Can I at least resize without rebooting my system?


On 12/19/2019 at 7:09 PM, Zm1TDkSnQkY4KEqskCARSBpk said:

Can I at least resize without rebooting my system?

Depends on the filesystem, but LVM can do hot resize. Some filesystems can only expand online, or can only resize while unmounted.

 

On 12/19/2019 at 7:09 PM, Zm1TDkSnQkY4KEqskCARSBpk said:

So why doesn't adding a file take up space on a snapshot? And what about where they need to go and which snapshots I should use?

Because it only stores changes. Adding a file doesn't change anything that was in the old version.

 

Snapshots aren't a file stored somewhere; they're part of the filesystem, and the extra files are hidden from you unless you mount one.


13 minutes ago, Electronics Wizardy said:

Depends on the filesystem, but LVM can do hot resize. Some filesystems can only expand online, or can only resize while unmounted.

That's good to know, so I did some research, but right now it looks too hacky to be something I want to use. I think I'll stick with one / partition, but still use LVM just in case I want more partitions later.

 

19 minutes ago, Electronics Wizardy said:

Because it only stores changes. Adding a file doesn't change anything that was in the old version.

 

Snapshots aren't a file stored somewhere; they're part of the filesystem, and the extra files are hidden from you unless you mount one.

I think I understand how they work now. But if I'm using LVM and a filesystem with its own snapshots, which should I use?


30 minutes ago, Zm1TDkSnQkY4KEqskCARSBpk said:

I think I understand how they work now. But if I'm using LVM and a filesystem with its own snapshots, which should I use?

I'd normally prefer filesystem snapshots if your FS supports them, but use LVM if it doesn't.

 

 

