Set it to 0% when it's 25 degrees C or less. Have it ramp to 20% fan speed at 35 degrees, 40% at 55 degrees, 80% at 80 degrees, and 100% at 85 degrees.
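That curve is easy to express in code. Here's a minimal sketch (my own, purely illustrative) that linearly interpolates between the points listed above; the function name and the idea of interpolating between points, rather than stepping, are my assumptions:

```python
# Fan curve points from the post: (temperature in C, fan duty cycle in %).
FAN_CURVE = [(25, 0), (35, 20), (55, 40), (80, 80), (85, 100)]

def fan_speed(temp_c: float) -> float:
    """Return the fan duty cycle (%) for a given temperature in degrees C."""
    # Clamp below the first point and above the last point.
    if temp_c <= FAN_CURVE[0][0]:
        return FAN_CURVE[0][1]
    if temp_c >= FAN_CURVE[-1][0]:
        return FAN_CURVE[-1][1]
    # Find the two surrounding points and interpolate linearly between them.
    for (t0, s0), (t1, s1) in zip(FAN_CURVE, FAN_CURVE[1:]):
        if t0 <= temp_c <= t1:
            return s0 + (s1 - s0) * (temp_c - t0) / (t1 - t0)
```

For example, 45 degrees lands halfway between the 35-degree and 55-degree points, so the fan would run at 30%.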
Really interesting information!
Although I would wager that a couple of other variables factored into this success too. Of course, with Luke and Luke's mother also appearing on the streams (as I understand it), there is already a 'grasping point' for people not familiar with PlantyTimes' stream.
The raids are a good introduction, and the cameos from well-known people (well-known if you watch Linus, anyway) are a good reason to tune into more streams.
But of course, it being enjoyable content makes people stick around too (I presume; I don't really watch live streams, and post-WAN Show raids aren't viable in my timezone).
10/22/2020 Update:
The RAID has slowed down again, in exactly the same way as before. I'm guessing this means my previous "fix" was never a fix, but a band-aid caused by how the drives were reformatted after their stripe sizes were changed... or something. I have since learned that AMD RAID arrays don't support TRIM, which might have something to do with the slow-down in the first place. At this point, I believe there is a systemic issue with AMD-based RAID, and I will be moving this RAID onto a dedicated NVMe RAID card from this point forward.
I'm making this post purely as informational material that might wind up in various search results down the line. I've spent the better part of 2-3 weeks trying to fix this issue, and have now resolved it. So let's get into it.
In November of 2017, I swapped my system over from an Intel i7 3960X to an AMD Threadripper 1950X. I did this for a few reasons, and one of those was the capability to create a bootable NVMe RAID. The idea of a boot drive that pulled 3500MBps reads and 2000MBps writes really appealed to me, since I wanted a stupidly fast drive to run my video editing software from. And yes, I know IOPS matter more than raw throughput for this use case, but NVMe delivers on that too.
However, some time after getting the RAID-1 up and running, I noticed something very odd. While I was getting 3500MBps reads, I was only getting about 200-400MBps writes. For a while, I chalked it up to the RAID controller on my ASUS Zenith Extreme motherboard being bad or something, even though my other 2 RAID-1's could pull their rated write speeds: one around 300MBps, and the other around 500MBps (which was faster than what my NVMe RAID was doing at the time). After a little bit of tinkering, I gave up on it, and just ran my system with this flaw.
A screenshot taken days before I fixed the issue. Getting above 300MBps w/ CrystalDiskMark was rare.
Fast forward a couple of years. At random times between then and now, I had tried to fix the issue, but was limited because I didn't want to be re-formatting my main OS drive every couple of days. Last week, though, I finally got tired of the issue after finishing a networking upgrade to my house, and I wanted this last little annoyance taken care of. I tried everything I could think of, and everything I had read online. I tried running the drives in every configuration of ports this motherboard offers: both in the DIMM.2 slot; one in the DIMM.2 slot and the other in the slot behind the heatsink on the bottom right of the motherboard; one in an ASUS Hyper M.2 riser and the other in the DIMM.2 slot or the motherboard's slot. The write speeds remained around 200-400MBps, usually in the 300s. I also ran an IOPS test before fixing this problem, and it came in around 70,000, well short of the 370,000 this Samsung 970 Pro NVMe SSD is capable of. The day before fixing this issue, I also speed-tested the drives individually, outside of the RAID, and got a solid 3500/2000 on both drives.
I noticed that whenever I did a file transfer within Windows, it WOULD report the drive doing 1200MBps or so; however, Task Manager showed the data actually being written at only about 100-200MBps. I tried all manner of different stress tests. What clued me into this problem being fixable was that even though these drives are in a motherboard RAID-1, where Samsung's Magician software does NOT have direct access to the drives to update the firmware (and I checked while diagnosing; they are on the latest firmware), that software was still able to push 1,200MBps writes to the drive. I had also tried enabling and disabling both the Read Ahead and Write-Back caches, and neither made any difference. Sooooo okay. If ATTO and CrystalDiskMark can't push beyond 400MBps to the drive, but Samsung's benchmark can, what gives?!
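The gap between what the copy dialog reported and what Task Manager showed is classic write-cache behavior: numbers look great while data is landing in the OS cache, and only drop to reality once something forces it to the device. As a rough illustration (my own sketch, not one of the tools mentioned above; the path, sizes, and function name are all placeholders), a write-throughput measurement only means much if it flushes:

```python
import os
import time

def timed_write(path: str, size_mb: int = 256, flush: bool = True) -> float:
    """Write size_mb MiB to path and return throughput in MB/s."""
    block = os.urandom(1024 * 1024)  # one 1 MiB buffer, reused for every write
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
        if flush:
            f.flush()
            os.fsync(f.fileno())  # force data to the device, not just the page cache
    return size_mb / (time.perf_counter() - start)
```

With `flush=False` you tend to measure the cache; with `flush=True` you get something closer to sustained drive speed, which is roughly the difference between the two numbers I was seeing.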
I'm actually running 3 SSD-based RAID-1's in my system: one for my OS (the NVMe one), one for my Programs / Games, and one to capture videos to for my day-to-day work (since a LOT of data is written to it, I don't want to add unnecessary wear to my OS or Programs drive). The other day, at like 3am, while basically giving up and accepting that my NVMe drives were just going to run slowly, I did one last thing to the RAID when I rebuilt it for the last time: I set the stripe size of the NVMe RAID to 64KB, which matches the other 2 RAIDs in my system. I had always run a higher stripe size because I read SOMEWHERE that it helped with performance. While that may be true if it's the ONLY RAID on the controller, or if all RAIDs have matching stripe sizes, this is what FIXED my problem.
By setting the stripe size of my NVMe RAID-1 to match my other 2 RAID-1's, which in my case is 64KB, the write speeds of the NVMe RAID shot from 200-400MBps all the way up to 1800+MBps, with Windows often reporting 2000MBps.
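To make the stripe-size knob concrete: the controller breaks each I/O request into chunks at stripe boundaries, so the stripe size directly controls how many pieces a given request becomes. The sketch below is my own illustration of that chunking arithmetic (not the author's analysis, and not how AMD's firmware is actually implemented):

```python
def split_into_stripes(offset: int, length: int, stripe_kb: int):
    """Return (stripe_index, byte_count) chunks a single request is split into."""
    stripe = stripe_kb * 1024
    chunks = []
    while length > 0:
        # A chunk can only run to the end of the current stripe unit.
        in_stripe = min(length, stripe - (offset % stripe))
        chunks.append((offset // stripe, in_stripe))
        offset += in_stripe
        length -= in_stripe
    return chunks
```

A 1 MiB write becomes 16 chunks at a 64KB stripe size but only 8 at 128KB, so per-request overhead shifts with the setting. That alone doesn't explain why mismatched stripe sizes across arrays on one controller would tank writes, which is why I suspect a controller-level quirk rather than the stripe math itself.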
Again, I made this thread JUST in case someone is experiencing a similar issue, googles for a solution, and stumbles across this. I obviously can't guarantee my fix will work for you, but if you're running multiple RAIDs on a single controller with differing stripe sizes, and you're experiencing these sorts of speed issues, give this solution a try. Hope it works! And if you're curious, I am presently running the drives in an ASUS Hyper M.2 riser card so each drive gets a proper PCIe 3.0 x4 link. Some configurations of the DIMM.2 slot + the motherboard's connector always caused one of the links to run at x2. Plus, without a heatsink, these drives thermal throttle QUITE fast, fast enough to show up in the ATTO benchmark.
A screenshot I just took. CrystalDiskMark never seems to be able to pull the full speed of the drive. Still, a LOT better than before.
ATTO Disk Benchmark today w/ the drives in RAID-1, w/ matching stripe sizes to the other 2 RAID's I have.
I increased the file size since that tended to provide more consistent results.