Search the Community
Showing results for tags 'parity'.
-
I don't know if this belongs here, but I have a question about unRAID and how it handles redundancy. I understand how RAID 5 works: the remaining disks can combine their data to rebuild a failed disk. But that data is striped across the drives. I also understand that unRAID is fundamentally different and does not stripe data (it fills disks individually), but I want to know how some scenarios would work out. Imagine a setup like this on unRAID: the parity disk can be reconstructed since the data is still on the other disks, and disks A, B and C could easily fit on the parity disk. But what if disk D fails? Where is the parity for that one?
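A toy sketch may help answer this. Single parity (in unRAID and RAID 4/5 alike) is not a copy of any one disk: the parity disk holds the bitwise XOR of *every* data disk at each position, so any single failed disk, including disk D, can be rebuilt by XOR-ing the parity with the survivors. The disk names below are just illustrative byte strings, not real devices:

```python
# Toy model of unRAID-style single parity: the parity disk holds the
# bytewise XOR of all data disks at each offset. It is not a copy of
# any one disk, so it protects disk D exactly as much as A, B and C.
def xor_bytes(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

disk_a = b"AAAA"
disk_b = b"BBBB"
disk_c = b"CCCC"
disk_d = b"DDDD"

parity = xor_bytes([disk_a, disk_b, disk_c, disk_d])

# Disk D dies: XOR the parity with the surviving data disks.
rebuilt_d = xor_bytes([parity, disk_a, disk_b, disk_c])
assert rebuilt_d == disk_d
```

(In the real thing, smaller disks are effectively zero-padded, which is why the parity disk must be at least as large as the largest data disk.)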
- 6 replies
- Tagged with: raid, redundancy (and 1 more)
-
Hi, I've just purchased 2 x 10TB drives to replace my parity drives. The existing ones will then be wiped and used to replace existing data drives. I've completed the parity sync twice now with no success: it completes, but then the drive shows hundreds of errors and Unraid disables it. Will post specs shortly... Any ideas would be very much appreciated. Many thanks.
-
Okay, I'm really sorry to bug y'all, but I'm finding really spotty info on this topic. I'm running Windows 10, and (separate from my boot drive) I've got a 2TB SSD and 5 x 3TB HDDs. I'm trying to run the HDDs in Parity, with the SSD as a cache. I've run Storage Spaces before on just the HDDs with Parity, and I've seen several posts around the web discussing Tiered Storage with Storage Spaces as well. However, I've only seen folks talk about striping or mirroring with tiered storage, and every time I see someone ask the same question I have, it's just people dunking on Parity without being much help. Look, I'm just a gamer with a habit of data hoarding. I'd like to maximize my storage with a little redundancy, but still have whatever game is my addiction of the week load relatively snappily. I appreciate any advice y'all can give, and hope everyone reading this has a lovely day!
-
I have two Unraid servers in my homelab, one with a parity drive and one without. I'm attempting to transfer a measly 18GB folder to my backup server (the one with parity) from my main server (no parity), but I'm getting 2MB/s transfer speeds, even if I compress the folder first. I'm fairly new to using parity, so I'm just looking to see whether I did something wrong or if there's a trick of the trade I don't know yet. Also, I have gigabit networking and turbo write enabled on both.
-
In traditional RAID setups, data is spread across all drives in the array. If you were running RAID 5 with six drives, for example, you would have five drives' worth of storage space, with the missing space used for parity data. If you lose one drive, you haven't yet lost any data: throw in a new drive and rebuild the array (and pray you don't hit an unrecoverable read error or silent corruption, because then you can kiss your data goodbye). This has obvious advantages: it is space-efficient for the amount of redundancy it provides, and it can increase read/write performance with good hardware, since there are multiple drives to run I/O on. However, losing more drives than you have parity will kill all of your data.

What if you could choose how much overhead to spend on parity? Or what if you wanted a drive loss to not completely kill all of your data? Here's an approach.

Picture a single giant parity RAID setup, where each color represents pieces of data belonging to a single chunk (e.g. all the red blocks represent one chunk, spread out over all the drives). That's how traditional RAID works. The proposed "betterRAID" method instead uses a fixed parity ratio: if I want N parity drives' worth of space for every M data drives' worth, my overhead is N/(N+M) (for N = 1 and M = 4, that's a five-drive RAID 5, which is pretty common). However, let me use any number of drives with this setup, writing a given chunk of data across 5 of those drives, then the next chunk across the next 5 drives, and so on. Here, the red data is written across five drives (so twice as much of it lands on each individual drive) instead of across all ten. The orange data goes to the next five drives, then the green, etc.
Notice that if I kill any two drives, I am guaranteed to have 50% of my data survive in the worst case, and 100% in the best case. To gain this advantage over traditional RAID 5, I sacrificed one additional drive's worth of space (one parity drive for every five drives, so two of the ten are reserved for parity). Obviously, very large files that span tons of chunks will become corrupted. Smaller files (which fit inside a single chunk) survive if their chunk survives, and are therefore recoverable.

Here is a slightly more complicated example, where black lines indicate dead drives. In this case, we write chunks of data across five drives (with 20% of that space used for single parity) and have 18 drives total in our array. Here, we can kill two drives, and in the worst case we have lost only 25% of our data.

To clarify: a "chunk" is not a complete file. A chunk is just a fixed-size piece of data (say, 512KB). A 10KB file would fit within one chunk, and the next file I wrote might fit within that same chunk; when the chunk is completely full, the next chunk starts filling with new incoming data. A multi-gigabyte file would span thousands of chunks.

There are obvious upsides to this, most notably that losing more drives than you have parity will not destroy all your data, though anything spanning many chunks would likely be corrupted. This also makes disaster recovery a little better, ensuring that a failure will not necessarily kill absolutely everything, and with dual parity we could make data even harder to kill. The downside is that managing the data for an individual file is now harder, since you have to track which drives its chunks live on. This doesn't provide the same level of protection that dual- or triple-parity RAID does; it provides a measure of disaster recovery in case an array fails completely.
I think it'd be really cool for a software RAID solution like ZFS to implement something like this for RAID Z1, Z2 and Z3.
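The layout described above is easy to model. This is a minimal sketch (not anyone's shipping implementation): drives are grouped into fixed stripes of `width` drives with single parity inside each group, and a chunk survives as long as its group lost at most one drive. The function name and group-assignment scheme are my own illustration:

```python
# Sketch of the proposed "betterRAID" layout: chunks are striped across
# fixed groups of `width` drives (single parity within each group).
# A group's chunks survive as long as the group lost at most one drive.
def surviving_fraction(total_drives, width, dead):
    # Assumes total_drives is a multiple of width for simplicity.
    groups = [list(range(g, g + width))
              for g in range(0, total_drives, width)]
    ok = sum(1 for grp in groups
             if sum(1 for d in grp if d in dead) <= 1)
    return ok / len(groups)

# 10 drives, stripe width 5 (4 data + 1 parity per group), two failures:
print(surviving_fraction(10, 5, dead={0, 1}))  # both in one group -> 0.5
print(surviving_fraction(10, 5, dead={0, 5}))  # one per group -> 1.0
```

This reproduces the claim above: with ten drives and two failures, the worst case keeps 50% of the data and the best case keeps 100%.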
-
I'm looking for some guidance on how I should go about configuring my first DIY NAS. I'm planning to build a very large media server to host BD rips and lossless audio. I just ordered the Fractal Design Define 7 XL for all the HDD bay space, and I'm going to use my wife's old gaming rig as the basis for the system. I'm wondering how I could set up the storage drives with redundancy and parity while still being able to expand in the future as I add drives. I'm very much a novice in this area, so I was planning on sticking with the already-installed Windows. Any help and "simple" solutions would be greatly appreciated. Thanks!
-
Hi, if any of you know how Unraid works, I have a question: what is parity for, and how large should the parity drive be for 4 virtual machines? Another question: will Ryzen work for VMs?
-
I think I royally fckd up. A drive failed in my Unraid NAS and I got a new one to replace it, but I also added another empty drive too, since I planned to upgrade anyway and had the case open already... Unfortunately, I wasn't aware that adding both at the same time and then rebuilding makes the rebuild pretty hard. Any ideas how to save my data?
-
I have a server running Windows Server 2012 R2 with a Storage Spaces pool consisting of 4 Hitachi 3TB disks (1 hot spare); the pool is configured with parity. It has worked fine until now, a bit slow, but still working. Now I can open the volume and see the files, but I can't play any of the media. When trying to copy something from the volume there is a short burst of 15-20 MB/s reads, but then it dips to 0; once every couple of minutes it transfers a couple of KB. Storage Spaces says all disks are healthy, but HWiNFO says two of the disks have SMART errors. The weird thing is that writing to the volume is completely fine: I'm getting 80+ MB/s writes. What could be the issue here? Are there settings that could be tweaked, or is the data on the disks already corrupted and unsalvageable?
- 4 replies
- Tagged with: windows server, parity (and 4 more)
-
If I do a RAID 5 softraid (parity mode), can I add another drive later on? I plan on starting with 3 drives and adding more later. I will be using Hard Disk Sentinel to monitor the drives and send notifications if something is about to fail or has failed. If you know a better way than doing it through Windows, like other software, feel free to suggest it.
-
I am trying to create either a RAID 5 or RAID 6 array on my new R730XD running Windows Server 2019, but I am running into a problem. I can create a simple (RAID 0) and a mirrored (RAID 1) array just fine, but when I try to use any kind of parity I receive an odd error: "The storage pool does not have sufficient eligible resources for the creation of the specified virtual disk." I have 8 x 14TB Seagate IronWolf Pro drives in the machine, and I am trying to create the array with four of them, using a fifth as a hot spare. Please help! I am testing this out now and must get it working before I deploy it to a client.
- 12 replies
- Tagged with: server 2019, windows server (and 4 more)
-
Hello, I had just finished building a NAS and had about 30 minutes left on the parity sync, at which point I went and had dinner. When I came back, the web UI had stopped working: neither the IP it was running on nor the new named URL I had set worked. Typing at the login prompt on the attached monitor did nothing, so I hard reset it by holding down the power button, as I could not think of anything else to do. Why did this problem occur? It is now saying it needs to do the whole parity sync again and that it will take 4 hours. What is going on? Thanks. FYI, I have 3 drives (3TB Toshiba P300s), one running as parity, with a Ryzen 5 1500X on a B450 motherboard.
-
I attached 3 x 10TB WD Red HDDs and configured them as parity; since I have fewer than 7 drives, it was single parity. In 2 weeks I will add 4 x 8TB WD Red HDDs, bringing the total to 7, so at that point I am able to use dual parity... correct? Can I reconfigure my system to use dual parity instead of single parity without data loss, or am I forced to back up my data and rebuild my storage pool from scratch? Thanks.
- 3 replies
- Tagged with: storage spaces, windows 2016 server (and 1 more)
-
I've been running my unRAID server for about 6 months now with 2 x 2TB drives (1 green and 1 red) and a 6TB green drive. I recently got notified that my green 2TB drive is failing and has multiple SMART errors and bad sectors. Fortunately, I found a 2TB purple drive lying around and decided to replace the green drive with it. I have no parity set up on the server yet. Is there a way to move all the data from the green drive (sdb) to the new drive (sdd) without having to set up a parity disk? The failing drive and the new drive are both 2TB, and the new 2TB purple drive has been precleared and is awaiting mounting.
-
Source: http://www.gamespot.com/articles/batman-arkham-knight-aiming-for-graphics-parity-ac/1100-6423481/
- 6 replies
- Tagged with: rocksteady, parity (and 4 more)
-
Enterprise HDDs have something called TLER, or Time Limited Error Recovery, enabled on them. This basically tells the HDD: "If you try to read a sector and you get an error, keep trying for X seconds, but give up after that." This is useful because if the drive hits a bad sector in a RAID, it can effectively say "that data is corrupt, but look at all this other data that's perfectly fine." Consumer HDDs don't have that: they will keep retrying the sector for an extended period of time (I'm not sure on the details). This creates a major problem with some RAID setups, because a drive not responding to a read request for more than about 8 seconds (which is what continuously retrying that bad sector looks like to the RAID) basically means that drive has failed, according to the RAID. The drive is fine and would work fine on its own, but to the RAID, not responding means failed. So a drive that is perfectly fine aside from a few bad sectors can still cause a RAID 5 array to fail, which basically means you lose all the data on it when only a tiny section is bad.

The problem is exacerbated by how big drives and the data on them are getting. Consumer SATA drives have an URE (Unrecoverable Read Error) rate of one error per 10^14 bits, or 100 trillion bits; that's about 12.5 terabytes. That might not sound like a lot, but remember: that's data read, not written, and not total disk capacity. Imagine how long it would take a media streaming and/or backup server to have read 12.5 terabytes, especially when checking parity and other things along with that. Combine this with a drive failure rate (let's assume a low 3%), and the odds of having a drive fail while another drive hits a sector read error at the same time in a RAID 5 aren't that low. The odds get quite a bit (pun intended) lower as you go to RAID 6 and 7, but I wouldn't even use those personally. As drives get bigger and bigger, this also gets worse.
We're at 4TB drives being the usual backup/storage drives now; it wouldn't take long to read 12TB with those, especially across multiple drives. If you want this information in a cheekier "make a headline" fashion, read this article. I just wanted to mention it so that anyone considering building a NAS or similar RAID environment knows the potential pitfalls of using large consumer drives with parity RAID and how to mitigate them. Things to keep in mind: if you absolutely have to have parity RAID, use enterprise drives that have TLER. Seagate NAS drives and WD Reds and up (SE and RE) have it. I wouldn't trust anything consumer, personally. YMMV.
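The URE math above is easy to sanity-check yourself. This back-of-envelope sketch (my own, assuming each bit read is an independent 1-in-10^14 chance of error, which is how the spec-sheet figure is usually interpreted) estimates the odds of hitting at least one URE while reading an entire array, e.g. during a RAID 5 rebuild:

```python
# Probability of at least one unrecoverable read error while reading
# `terabytes` of data, assuming an independent 1e-14 error chance per
# bit (the consumer SATA spec-sheet figure discussed above).
def p_ure(terabytes, rate=1e-14):
    bits = terabytes * 1e12 * 8
    return 1 - (1 - rate) ** bits

# Rebuilding a RAID 5 of five 4TB drives means reading the four
# surviving drives in full: 16TB.
print(f"{p_ure(16):.0%}")  # prints 72%
```

Under this (simplified, worst-case) model, a rebuild of a large consumer-drive array is more likely than not to hit at least one URE, which is exactly why the post warns against parity RAID on big consumer disks.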