NewMaxx

Member
  • Content Count: 126
  • Joined
  • Last visited

Awards


This user doesn't have any awards

1 Follower

About NewMaxx

  • Title: Member


  1. CrystalDiskInfo and Hard Disk Sentinel to start, in order to check health. Although what you experienced is not uncommon for a dying drive.
  2. Could be a heat issue, or might be a form of SLC caching (yes, TLC-based USB drives can use this).
  3. It is a good drive. I have it myself; you can find more information in my post here. (Ignore the low benchmark results - they're due to an unresolved X570 bug with certain drives.)
  4. I'll only talk about block size. There are two lines of thinking. First, you want to go with the typical 4KB block size, because this matches the typical cluster size and also how SSD mapping is arranged. That is to say, 32-bit (4B) addressing per 4KB cluster, which works out to roughly 1GB of mapping overhead per 1TB of flash (rough math in the sketch after this list). Yes, a lot of the time the sector size is 512B ("512e"), but generally you're looking at 4KB ("4Kn") for optimal performance. On the other hand, with many HDD systems you would tend to stripe at 128KB for cold storage. Of course, in reality your TLC SSD probably writes 16KB pages (16 x 3 = 48KB requests in TLC). So anywhere in the 4KB-128KB range is fine, although for best performance you would want to match the cluster size (4KB). Your SSD does its own mapping and the OS does filesystem-level caching, so this is more about how you want to balance the performance.
  5. Yeah, update the BIOS to see if it can manage NVMe booting via UEFI.
  6. With reads: because there are more reference voltages. With just one bit you need just one, with two you need two, with three you need four, and with four you need eight. Higher latency, and latency is directly related to bandwidth. With writes it's because of voltage sensitivity. The first (least significant) bit is easy to write with large voltage pulses; each subsequent bit requires finer voltage pulses/correction, which takes longer. When writing pages you are writing only one bit of a cell, but the bits are selected such that the effective speed is the average across the four bits, therefore four bits per cell is slower on average. Beyond this, reading the values requires higher sensitivity over time (due to voltage drift, read disturb, etc.), which means soft-sensing error correction that can increase read latency. Also, SLC mode takes four blocks of QLC per block of SLC, which means you have to convert sooner, and data in transition - that is, being folded or compressed back into QLC - suffers a read latency penalty (and the additional reliance on background management, which also folds, means more latency both in reads when buffering the pages and again when writing the new block). Latency is directly related to random performance, of course. Also, layers in the sense of 3D flash - as opposed to levels (e.g. triple-level cell = TLC) - are a different subject. But assuming you mean levels, each additional level improves capacity less while endurance and performance are reduced more. A rough illustration of the reference-voltage scaling is sketched after this list. For more information, please refer to pgs. 14-18 of this document and also pg. 3 of this document.
  7. Much better, thanks for the update. It's still likely due to the SLC caching. And some of my other drives have issues on X570 when running over the chipset (vs. CPU lanes).
  8. If you're on Windows you can use Storage Spaces. This is tiering (not the same as caching), whereby a heat map is created and the most-accessed files are put on the faster SSD tier while less-accessed files go to cold storage (the HDD tier). A toy illustration of the idea is sketched after this list.
  9. Edit: I see these are S8-based, so they're SATA drives, and the GT72 says 4x RAID up to 1600 MB/s, my bad. Although that really just makes it a worse idea. Also want to point out that the MS-17822 adapter DOES have an M.2 connector, but the MS-17812 does not.
  10. I don't see any chips on the adapter's PCB, which means it's just a dumb adapter; it looks to support quad (four) drives. RAID would be software-based (so there's overhead) with some diminishing returns as always; the main benefits would be sequentials (esp. with queue depth) and 4K/IOPS at high queue depth (uncommon). Haven't seen the card before, but searching MS-17812 gives you the details. Not sure how it connects - looks like a PCIe-LP socket/slot perhaps; if so it would likely be x4 or x8 PCIe 2.0 if it's over the HM87 chipset. The shown drives are, I believe, A1000 or Phison E8-based, which are x2, hence the B+M keying, so x8 would make sense - it would bifurcate to x2 per drive (rough bandwidth math sketched after this list).
  11. SLC cache. The E16 drives (MP600) have full-drive SLC caching, which means the entire drive of TLC is capable of SLC mode. SLC mode takes up three times the capacity of TLC. Therefore, the SLC eventually has to be emptied and converted back to TLC via folding, which is a much slower process. You can see that here for example. Note that this is one reason I caution people against saying the E16 drives are faster than the 970 EVO Plus. In any case, the drive should naturally empty the SLC in the background when idle and return to its faster state, although I've known drives with large SLC caches to get wonky sometimes, enough to require a secure erase. Keep in mind that as such drives get fuller they're more likely to exhibit this behavior, as the cache is smaller and there are fewer blocks to work with (rough cache-size math sketched after this list).
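
A rough Python sketch of the mapping-overhead math from post 4: a flat logical-to-physical map with one 4-byte entry per 4KB cluster. The 1TB capacity is an assumed example, not any particular drive's spec, and real controllers use more elaborate (often compressed or partially cached) tables.

```python
# Sketch for post 4: flat L2P map, 32-bit (4-byte) entries, one per 4KB cluster.

def mapping_overhead_bytes(capacity_bytes: int,
                           cluster_bytes: int = 4096,
                           entry_bytes: int = 4) -> int:
    """Size of a flat logical-to-physical map: one entry per cluster."""
    clusters = capacity_bytes // cluster_bytes
    return clusters * entry_bytes

one_tb = 10**12  # 1 TB as marketed (decimal), an assumed example capacity
overhead = mapping_overhead_bytes(one_tb)
print(f"Map size for 1 TB: {overhead / 10**9:.2f} GB")  # ~0.98 GB
print(f"Overhead ratio:    {overhead / one_tb:.3%}")    # ~0.098%
```

This is the usual source of the "about 1GB of DRAM per 1TB of NAND" rule of thumb for DRAM-equipped SSDs.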
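To go with post 6, a small Python illustration of how the number of voltage states and reference levels grows with bits per cell. The total reference-level count (states minus one) is standard; how many of those a single page read actually senses depends on the page and the coding, which is what the 1/2/4/8 figures in the post refer to.

```python
# Illustration for post 6: states and reference levels per cell type.
# No latency numbers here - the point is only that more bits per cell
# means more levels to distinguish, hence more sensing work per read.

CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

for name, bits in CELL_TYPES.items():
    states = 2 ** bits          # distinct charge levels a cell must hold
    references = states - 1     # thresholds needed to separate them
    print(f"{name}: {bits} bit(s)/cell, {states} states, "
          f"{references} reference levels")
```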
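For post 8, a toy Python sketch of heat-map tiering. This is not how Storage Spaces is implemented internally, just the general idea of promoting the most-accessed files to the SSD tier until it fills; the file names, sizes, and access counts are made up for illustration.

```python
# Toy tiering for post 8: hottest files go to the SSD tier while it has
# room, everything else lands on the HDD tier.

from typing import Dict, List, Tuple

def place_files(access_counts: Dict[str, int],
                file_sizes_gb: Dict[str, float],
                ssd_capacity_gb: float) -> Tuple[List[str], List[str]]:
    ssd_tier, hdd_tier = [], []
    used = 0.0
    # Consider the most-accessed ("hottest") files first.
    for name in sorted(access_counts, key=access_counts.get, reverse=True):
        size = file_sizes_gb[name]
        if used + size <= ssd_capacity_gb:
            ssd_tier.append(name)
            used += size
        else:
            hdd_tier.append(name)
    return ssd_tier, hdd_tier

counts = {"game.bin": 500, "photos.zip": 3, "project.db": 120, "backup.img": 1}
sizes = {"game.bin": 60.0, "photos.zip": 40.0, "project.db": 10.0, "backup.img": 200.0}
ssd, hdd = place_files(counts, sizes, ssd_capacity_gb=100.0)
print("SSD tier:", ssd)  # hottest files that fit
print("HDD tier:", hdd)  # colder / oversized files
```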
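For post 10, a back-of-envelope Python check of the bandwidth if the slot really is PCIe 2.0 x8 bifurcated to x2 per drive - that link width is an assumption based on the keying and platform, not a confirmed spec for the MS-17812.

```python
# Back-of-envelope for post 10: PCIe 2.0 is ~500 MB/s per lane per
# direction after 8b/10b encoding. x8 split across 4 drives leaves
# x2 (about 1 GB/s) per drive.

PCIE2_PER_LANE_GB_S = 0.5   # ~500 MB/s usable per PCIe 2.0 lane

def link_bandwidth_gb_s(lanes: int) -> float:
    return lanes * PCIE2_PER_LANE_GB_S

total_lanes, drives = 8, 4
lanes_per_drive = total_lanes // drives   # x2 per drive when bifurcated
print(f"Whole slot: ~{link_bandwidth_gb_s(total_lanes):.1f} GB/s")
print(f"Per drive:  ~{link_bandwidth_gb_s(lanes_per_drive):.1f} GB/s")
```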
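For post 11, a rough Python sketch of why the SLC cache shrinks as the drive fills: blocks used in SLC mode hold one bit per cell instead of three, so caching X GB of writes consumes roughly 3X GB of free TLC capacity. The 1TB (1000 GB) capacity is an assumed example, not a measured MP600 figure.

```python
# Sketch for post 11: approximate SLC cache available vs. drive fill level
# for a full-drive SLC cache on TLC.

def max_slc_cache_gb(free_space_gb: float, bits_per_cell: int = 3) -> float:
    """Free blocks used at 1 bit/cell instead of `bits_per_cell` bits/cell."""
    return free_space_gb / bits_per_cell

for used_gb in (0, 500, 900):
    free_gb = 1000 - used_gb          # assume a 1 TB (1000 GB) drive
    cache = max_slc_cache_gb(free_gb)
    print(f"{used_gb:4d} GB used -> ~{cache:.0f} GB SLC cache available")
```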