Search the Community
Showing results for tags 'raid6'.
-
I'm investing in a RAID array for my computer, specifically RAID 5 or 6, which after some research I figure would be a safer bet than RAID 1. I want to fully encrypt my entire system with VeraCrypt, including the RAID drives, but that left me wondering: if any of the drives in my RAID 5/6 array fail and I replace them, what happens to the new ones, since the old ones were encrypted by VeraCrypt? Is using RAID arrays with VeraCrypt practical at all?
-
I am very new to RAID. As far as I know, RAID 6 can withstand two disk failures at one time, which is what I want. I've heard that modern RAID controllers store metadata on the HDDs, so one controller can easily be replaced with another. I need a controller that I can swap without losing data in case it dies; I do not want to have to match the exact model, or be the victim of vendor lock-in (this is the main reason I am considering a custom solution instead of an off-the-shelf one). I need 6x 3.5" SATA hot-swappable bays. It needs to have HDD status indicators like a PowerEdge server (e.g. a blinking amber light that says the HDD needs replacement, or alternating amber and green to say the HDD is about to fail), or something like that. The idea is that when I see the indicator, I just replace the HDD and the rest is done automatically (rebuilding the RAID array and so on). The data connection from my PC to the DAS needs to be as fast as possible; I am considering USB 3.1 Gen 2 (10 Gbps) or anything faster. I am using Linux, in case that becomes relevant at any point. So, my questions are: What casing should I use? What RAID controller should I use? I am looking for RAID storage (DAS), not NAS. P.S. I am actually willing to consider an off-the-shelf solution if it meets my requirements (replaceable RAID controller, RAID 6 support, 6 bays, USB 3.1 Gen 2, a failing/failed HDD indicator, and hot-swappable HDDs).
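For the 6-bay DAS above, a quick back-of-envelope sketch may help frame the capacity and link-speed question. All the numbers here are assumptions for illustration (4 TB drives, ~200 MB/s sequential per HDD, ~1.2 GB/s real-world ceiling for USB 3.1 Gen 2), not specs of any particular product:

```python
# Rough sanity check, assumed numbers: usable capacity of a 6-bay RAID 6
# array, and whether USB 3.1 Gen 2 would bottleneck sequential reads.

def raid6_usable_tb(num_drives: int, drive_tb: float) -> float:
    """RAID 6 spends two drives' worth of space on parity: usable = (n - 2) drives."""
    if num_drives < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (num_drives - 2) * drive_tb

usable = raid6_usable_tb(6, 4.0)   # e.g. six 4 TB drives -> 16.0 TB usable

# USB 3.1 Gen 2 is 10 Gbps raw; ~1200 MB/s is a common real-world ceiling.
link_mb_s = 1200
# Assuming ~200 MB/s sequential per 3.5" HDD, RAID 6 sequential reads
# deliver roughly (n - 2) * per-drive speed of user data.
array_read_mb_s = (6 - 2) * 200    # ~800 MB/s
```

Under these assumptions the 10 Gbps link would not be the bottleneck for a 6-drive spinning-disk array.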
-
Budget (including currency): USD, preferred range 1800-2000, hard cap 2500
Country: United States
Games, programs or workloads that it will be used for:
Non-games: Fusion 360; SolidWorks; Blender; Premiere Pro and other Adobe Creative Cloud products; PrusaSlicer, Simplify3D, Cura
Games: RuneScape; SC2; Civ 6; Stellaris | Upcoming: Star Citizen (maybe, lol) and Chronicles of Elyria
Expected build time-frame: late August 2020; potentially waiting slightly longer to allow for new product releases / price drops due to new product releases.
Other details:
Background: I just want to start out by saying I totally understand that gaming and editing machines don't mix well. While I would want to maintain the ability to game comfortably, you'll note from the games I play that it is not the primary goal of this build. I currently use a 4-display setup, with two displays at 1920x1080 and two at 3840x2160; I do not remember the models, as I am currently across the country from my setup. 4K gaming is not very important to me, but it would be nice, as currently I can only go up to 1080p in games even on the 4K monitors due to the weakness of the current system. The primary intent of this machine will be to serve as an editing/production machine for creating video and editing 3D files for content creation and 3D printing. Videos will involve working with 4K 60 FPS RAW footage and exporting primarily in 1080p, and in 4K if possible. I am generally happy with the build below but would like general feedback/constructive criticism, and help with the local storage solution.
To the actual point of the post: I would like to ensure local redundancy with RAID, as I have had drives fail in the past, and while the data was recovered or backed up externally, it was a headache to get up and running again with any kind of speed. I will be getting a NAS in the future (Q1 2021) and have plenty of external storage right now to hold me over until then.
In the list, I intend to use the M.2 drive for boot and main programs, and the HDDs in RAID 6 to provide 4 TB of usable space with two-drive redundancy. As I have found again and again, hardware RAID is recommended over software RAID and is essentially necessary for RAID 6. However, I worry about write speeds with RAID 6, especially since I am having a hard time vetting a RAID controller, as much of the information seems to be dated.
The questions: What RAID controller would those with the knowledge recommend? Is RAID 6 overkill in a local machine? Is it even possible? Does AMD hinder this in any way, and should I look to Intel instead?
PCPartPicker Part List
CPU: AMD Ryzen 7 3700X 3.6 GHz 8-Core Processor ($289.99 @ Amazon)
CPU Cooler: Corsair H115i PRO 55.4 CFM Liquid CPU Cooler ($159.99 @ Best Buy)
Motherboard: MSI MPG X570 GAMING PLUS ATX AM4 Motherboard ($169.99 @ Best Buy)
Memory: G.Skill Ripjaws V 32 GB (2 x 16 GB) DDR4-3600 Memory ($134.99 @ Newegg)
Storage: Patriot VPN100 512 GB M.2-2280 NVME Solid State Drive ($87.99 @ Amazon)
Storage: Seagate Barracuda Compute 2 TB 3.5" 7200RPM Internal Hard Drive ($58.98 @ Newegg)
Storage: Seagate Barracuda Compute 2 TB 3.5" 7200RPM Internal Hard Drive ($58.98 @ Newegg)
Storage: Seagate Barracuda Compute 2 TB 3.5" 7200RPM Internal Hard Drive ($58.98 @ Newegg)
Storage: Seagate Barracuda Compute 2 TB 3.5" 7200RPM Internal Hard Drive ($58.98 @ Newegg)
Video Card: Gigabyte GeForce RTX 2080 SUPER 8 GB GAMING OC Video Card ($693.98 @ Newegg)
Case: Cooler Master HAF XB EVO ATX Desktop Case ($109.99 @ Amazon)
Power Supply: NZXT C 850 W 80+ Gold Certified Fully Modular ATX Power Supply ($129.99 @ B&H)
Operating System: Microsoft Windows 10 Pro OEM 64-bit ($142.88 @ Other World Computing)
Case Fan: Noctua NF-R8 redux-1800 31.37 CFM 80 mm Fan ($9.94 @ Amazon)
Case Fan: Noctua NF-A14 industrialPPC-2000 PWM 107.41 CFM 140 mm Fan ($27.95 @ Amazon)
Case Fan: Noctua NF-A14 industrialPPC-2000 PWM 107.41 CFM 140 mm Fan ($27.95 @ Amazon)
Case Fan: Noctua NF-A20 PWM 86.46 CFM 200 mm Fan ($30.00)
Total: $2251.55
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2020-05-25 15:47 EDT-0400
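As a sanity check on the 4x 2TB plan above, here is a small sketch of the generic RAID 6 capacity math and the usual small-write-penalty figure. This is textbook RAID arithmetic, not a claim about any specific controller:

```python
# Quick check of the 4x 2TB RAID 6 plan (generic RAID 6 math, assumed
# ideal capacities, not specific to any controller).

def raid6_layout(n_drives: int, drive_tb: float):
    """Return (usable TB, space efficiency) for a single RAID 6 group."""
    usable_tb = (n_drives - 2) * drive_tb
    efficiency = (n_drives - 2) / n_drives
    return usable_tb, efficiency

usable, eff = raid6_layout(4, 2.0)   # -> 4.0 TB usable, 50% efficiency

# The RAID 6 small-write penalty: each random host write costs ~6 disk I/Os
# (read old data + both old parities, then write all three back), which is
# why RAID 6 random-write performance is a common concern.
WRITE_PENALTY = 6
```

So the 4 TB usable figure in the post checks out; the 6x write amplification is the main reason hardware controllers with write-back cache are often recommended for RAID 6.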
-
What's the best way you guys can suggest for the RAID array I've mentioned in the title? We are in India and I'm unable to find someone who can advise me on this topic. I need to build a RAID array with a total usable capacity of over 30 TB. I researched a bit and found 4 TB and 6 TB drives that were to my liking, and I would prefer the 4 TB versions, which would mean I lose less capacity to parity in both RAID 5 and RAID 6. But that would mean I have to use multiple RAID controllers, and if I split up my RAID volumes, I will lose more capacity, as each volume would need its own share of parity disks. Is it possible to share a RAID volume over multiple RAID controllers? And is it advisable? I would like your views on how I should tackle this challenge. I am also okay with software RAID, since the only processing load on the system is going to be the storage itself.
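The parity trade-off in the question above can be put into numbers. A minimal sketch, assuming ideal RAID 6 capacities (two parity drives per group, no filesystem overhead):

```python
import math

# Illustrative drive-count math for hitting 30 TB usable in RAID 6,
# comparing drive sizes and the cost of splitting into multiple groups.

def drives_needed_raid6(target_tb: float, drive_tb: float, groups: int = 1) -> int:
    """Total drives needed, paying 2 parity drives per RAID 6 group."""
    per_group_tb = target_tb / groups
    data_per_group = math.ceil(per_group_tb / drive_tb)
    return groups * (data_per_group + 2)

one_group_4tb  = drives_needed_raid6(30, 4)            # 8 data + 2 parity = 10 drives
one_group_6tb  = drives_needed_raid6(30, 6)            # 5 data + 2 parity = 7 drives
two_groups_4tb = drives_needed_raid6(30, 4, groups=2)  # parity paid twice = 12 drives
```

As the last line shows, splitting one volume into two groups costs two extra parity drives, which matches the concern in the post.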
-
Hey all. I'm looking to buy a RAID card that can do RAID 6, but there are so many cards out there and I don't know what I should look for. Right now I use 4x 3TB WD Red drives in software RAID 5, and the performance is OK but not that good. I want to buy 2 more disks so I will have 6x 3TB, and then run them in RAID 6 for better data protection. Should I just buy a cheap used RAID card on eBay or get a new one? And what brand should I look for? I have looked at this https://www.tomshardware.com/reviews/sas-6gb-raid-controller,3028-3.html but that's from 2011. Those cards are really cheap on eBay right now, but I don't know whether a new one would be that much better or not. I hope to buy one below $200 if possible. Is there anyone here who knows a bit more about RAID cards and can help me with this?
-
Hi, I'm looking to set up a NAS at home to replace the many individual USB drives... and of course I don't know what I'm doing. May I please get some advice? I have about 8 drives in external cases, some USB 3, some USB 2. Ideally, I'd like to spend no or very little money to combine each drive into one large volume that can be accessed over my home network by all the Windows PCs in the house. Here's the info I think would be helpful:
• Windows 10 computers will be connecting to it.
• As cheap as possible. I would ideally reuse the external drive cases I'm currently using with each drive.
• If using separate USB drives is not possible, the enclosure I need will have to have at least 8 bays.
• I'd like any computer on my network to access the NAS without a username or password.
• The current hard drives are all different; only two or three are the same model and size.
• Preferably, I would like to be able to add a drive to the NAS and have its data integrated rather than wiped. This is something those expensive Drobo boxes can do.
• RAID 6 appeals to me because two drives can fail and no data would be lost. Each hard drive will be a different age and will sometimes be a different size and from a different manufacturer.
Basically, I'd like an easy Windows-based solution where I can plug in drives that already have data on them, have them processed into an array, and replace drives without losing any data. I appreciate any perspective or advice on this. I think it might be possible but don't know what is realistic and what isn't. Is this even possible? What would your approach be?
-
We're looking at building an 80-100 TB StorageVault that we can back up to with Veeam. We've bounced back and forth between a RAID 6 setup with 24x 4TB drives plus 2 hot spares, and a RAID 10 setup with 40x 4TB drives. The RAID 6 setup would have insane rebuild times, but RAID 10 is a ton of drives, and I'm not sure how well a storage array that big would be handled/recognized. What would you guys do? Any other options we may be missing?
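A back-of-envelope capacity comparison of the two options above, assuming 4 TB drives and that the two hot spares sit outside the RAID 6 set:

```python
# Usable-capacity comparison for the two layouts discussed above
# (ideal capacities, assumed 4 TB drives).

def raid6_usable_tb(n_drives: int, drive_tb: float) -> float:
    """RAID 6: two drives' worth of parity per group."""
    return (n_drives - 2) * drive_tb

def raid10_usable_tb(n_drives: int, drive_tb: float) -> float:
    """RAID 10: mirrored pairs, so half the raw capacity."""
    return (n_drives // 2) * drive_tb

raid6_tb  = raid6_usable_tb(24, 4)    # 88 TB usable, + 2 hot spares on the side
raid10_tb = raid10_usable_tb(40, 4)   # 80 TB usable from 40 drives
```

Under these assumptions, the RAID 6 option lands at the top of the 80-100 TB target with 26 drives total, while the RAID 10 option needs 40 drives to reach the bottom of it.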
-
So right now I'm using Windows 7 with a PERC H700 and eight 4TB drives in a RAID 6 setup (i5 6600K / 8 GB RAM). I'm looking to expand my array by adding 4 more 4TB drives. I've done tons of reading and thought about going with Linux (Ubuntu LTS Server or Antergos) and using ZFS, but since I'm not going to use ECC memory, I'm getting gun-shy about just using an HBA plus my motherboard's SATA connectors to hook all the drives up, because of nightmare scenarios regarding using ZFS without ECC memory. The ONLY reason I want to use ZFS is for faster rebuild times in case of a drive failure. (I wonder what the rebuild time would be on a 4TB drive with ZFS, assuming the drive only has 1.5 TB of data on it, versus the rebuild time with an LSI RAID 6 card with the same amount of data?) Even if my fears about using ZFS without ECC memory are calmed, I wonder what kind of CPU load I'll see during scrubs? I realize my CPU will be busy as I copy 14 TB of data back to this newly created setup, but after that it will just be running Plex and transcoding, with the occasional write as I copy a new TV show over to the array or some pictures/movies. (I'll be bumping my CPU up to an i7 7700K as part of this upgrade.) Also, with ZFS, do things start to slow down over time as your drives fill up?
Next up, the operating system. I refuse to use Windows 10, so that leaves me with Windows 7 (support for which ends in a hair over 2 years) or Linux. I'd rather not switch over to Linux 2 years from now, so when I do my upgrade it will be on Linux (probably Ubuntu Server LTS or Debian 9 stable; which is better when it comes time for full upgrades to the next version?).
I'm a newbie when it comes to Linux, but I've been dual-booting between Mint, Antergos and Debian for roughly a year, and since the only things my server will be running are Plex, Sonarr, NZBGet and Radarr, I'm confident there's enough information out there that if I run into a problem, with some Googling around I'll be able to get things fixed and be back up and running. So if ZFS is a no-go, thoughts/opinions on using Ubuntu Server LTS / Debian 9 with a nice LSI card? (LSI hardware should work just as well with Ubuntu or Debian, right?) Speaking of LSI cards, would anyone care to recommend a RAID 6 card (for under $400, the cheaper the better) that offers great speed in a RAID 6 setup? I'll probably have to couple it with something like an Intel SAS expander (RES2SV240) so I can hook up all 12 drives. The drives I'll be using are plain-Jane HGST 4TB NAS drives. Thanks for reading, and looking forward to your opinions!
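The rebuild-time question in the post above can be roughed out numerically. The key difference is that a hardware RAID controller rebuilds the whole disk block by block, while a ZFS resilver only copies allocated data. The 150 MB/s rate below is an illustrative assumption, not a measured figure; real rebuilds vary a lot with load:

```python
# Rough rebuild-time comparison: full-disk hardware RAID rebuild vs.
# data-only ZFS resilver. 150 MB/s is an assumed sustained rate.

def rebuild_hours(bytes_to_rebuild: float, mb_per_sec: float) -> float:
    """Hours to copy the given number of bytes at a sustained MB/s rate."""
    return bytes_to_rebuild / (mb_per_sec * 1e6) / 3600

TB = 1e12
# Hardware RAID rebuilds every block of the replacement disk:
hw_raid_hours = rebuild_hours(4 * TB, 150)    # whole 4 TB drive
# ZFS resilvers only the allocated data (1.5 TB in the example above):
zfs_hours = rebuild_hours(1.5 * TB, 150)
```

At the same sustained rate, the data-only resilver finishes in proportion to how full the pool is, which is exactly the advantage the post is after.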
-
Hi all, I need to build an 80 TB storage server to back up my videos. It is very important to keep the risk of losing data as small as possible. With RAID 10 I would be giving up too much space, so RAID 6 seems like the best option from what I can find so far. I also looked at Rockstor and FreeNAS, but Rockstor doesn't advise RAID 6 in production, and FreeNAS is based on FreeBSD, with which I don't have much experience. I am really struggling with choosing the best and safest option, so any advice is very welcome. This is the system setup I have in mind:
16x hot-swap Supermicro case
Supermicro X10SLL-F
Intel Xeon E3-1220 V3
16 GB ECC memory
LSI MegaRAID SAS 9260-16i
Intel 10G Ethernet Server Adapter X520-DA2 (10 Gbps, dual-port, PCI-E)
4x HGST Deskstar NAS 4TB HDN724040ALE640
I'm starting with 4x 4TB to keep costs as low as possible; later, when I need more storage, I will add more disks to the system until I reach 80 TB. Thanks a lot for your replies. Greetings
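It may help to sketch how usable RAID 6 capacity grows as disks are added to the 16-bay chassis above, assuming 4 TB drives throughout (illustrative ideal capacities; whether and how a given controller supports online expansion is vendor-specific):

```python
# Usable RAID 6 capacity as the 16-bay chassis fills with 4 TB drives
# (ideal math, single RAID 6 group, two parity drives).

def raid6_usable_tb(n_drives: int, drive_tb: float = 4.0) -> float:
    return (n_drives - 2) * drive_tb

growth = {n: raid6_usable_tb(n) for n in (4, 8, 12, 16)}
# 4 drives -> 8 TB, 8 -> 24 TB, 12 -> 40 TB, 16 -> 56 TB usable
```

Note that under these assumptions a fully populated 16-bay RAID 6 set of 4 TB drives tops out around 56 TB usable, so reaching 80 TB would take larger drives or a second group.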
-
So I'm an assistant to the technology director at a local school where I live. We bought a barebones Intel server off Newegg as a remote backup and installed FreeNAS onto it. All the drives were put into a RAID 6, I believe. Over the summer the maintenance crew came in and stored the computer for around a month up on a shelf with no power. We came back after the summer to get things ready for the coming year, and the computer says something along the lines of (NAS device, cannot boot, no boot drive), and all of our drives are now listed separately. Neither of us fully knows what we are doing in the area of RAID and server computing, so any help would be much appreciated; this backup server really needs to go into use. I'm going to post some pictures for background. If you don't know, please bring in people who can help. Also, I may have hit the little ID button pictured below on the front panel; I have no idea what it does, and that might have also screwed things up.
-
In this video, at 2:38, Linus says that the LSI MegaRAID SAS 9361-8i (http://www.newegg.com/Product/Product.aspx?Item=N82E16816118231&cm_re=9361-8i-_-16-118-231-_-Product) supports "online capacity expansion", a feature which is essential for me, since I want to add drives to the setup as time progresses. However, I do not want to go with that expensive RAID card; instead I want to use the LSI MegaRAID SAS 9260-8i (http://www.newegg.com/Product/Product.aspx?Item=N82E16816118105). Does that RAID card support online capacity expansion? I will be running RAID 6.
-
Which drive is best for a NAS that will initially have 8 drives and later be expanded to 12? The WD Red is said to have a maximum of 8 drives at once; however, in this video Linus has 8 drives in RAID 5 and says that in the future it will be expanded. So is it okay to have more than 8 at once? The WD Red Pro is said to have a limit of 16, which is great, but the added cost is tremendous. Do the Hitachi/HGST Deskstar NAS drives have a limit on the number of drives that can be in one server? If yes, what is that limit? 8? 12? 16? Thanks, Dennis.
-
I've built several PCs but never a RAID server. In fact (quite embarrassingly), I've never used RAID before. Basically I am not sure where to start, since the requirements for a server are so different from a PC's. This server would not be used as a final backup solution, but rather for video footage storage while working, and for short-term storage; final backup would be onto LTO drives. I know I want enterprise-grade drives with a URE rating of 1 in 10^15 or better, but beyond that, not much. So what I don't know about is:
CPU: probably a Xeon.
Motherboard: never shopped for a server mobo before, not sure.
Memory: I'm assuming something with ECC, but not sure how much.
RAID controller: no clue about good brands/types here either.
OS: not sure, maybe FreeNAS? My main consideration is driver compatibility for any interface cards I'd be installing now and in the future, such as Thunderbolt 3 or whatever.
Array: probably going to use RAID 6, but I was thinking about ZFS because I hear good things about it.
Anyone want to point me in the right direction?
-
Hi, I need your opinions on this backup NAS. Here is my planned config:
Server: HP ProLiant DL180 G5
RAID controller: LSI Logic MegaRAID SAS 8888ELP, 8-port, 512 MB
RAID config:
RAID type: RAID 6 (stripe set with double parity)
Number of RAID groups = 1
Number of drives per RAID group = 4 (the minimum for RAID 6)
Total number of drives = 4
Drive capacity (GB) = 8000
Capacity of a single RAID group (GB) = 16000
Total usable storage capacity (GB) = 16000
Space efficiency = 0.5 (50%)
Fault tolerance = 2 disk drives per RAID group
IO penalty (read) = 1/1 (one RAID IO per host IO)
IO penalty (write) = 6/1 (6 RAID IOs per host IO)
Thanks. Alex
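The figures in the config above follow from generic RAID 6 math, which can be reproduced in a few lines (this is textbook arithmetic, not vendor-specific behavior of the 8888ELP):

```python
# Reproducing the RAID 6 calculator figures above with generic RAID math.

def raid6_group(n_drives: int, drive_gb: int) -> dict:
    """Capacity and penalty figures for one RAID 6 group."""
    return {
        "usable_gb": (n_drives - 2) * drive_gb,        # two drives go to parity
        "efficiency": (n_drives - 2) / n_drives,
        "fault_tolerance": 2,                          # survives any 2 drive failures
        "write_penalty": 6,  # read data + both parities, then write all three back
    }

g = raid6_group(4, 8000)
# -> 16000 GB usable, 0.5 efficiency, tolerates 2 failed drives
```

With only 4 drives, RAID 6 gives the same 50% efficiency as a mirror, which is why the group size above is the least space-efficient way to run it; efficiency improves as drives are added.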
-
Hi, I need some help. I'm planning to build a new computer using a GA-Z97N-Gaming 5, a 250 GB Samsung 840 EVO SSD for the boot drive, and 3x WD 4TB Red drives. What I was wondering is what would be the best RAID configuration for the 3x 4TB drive array. It's mostly going to be used as storage for documents, photos and videos, and read/write speeds aren't totally critical for me. I was thinking RAID 5; would I be correct in saying that's my best choice? Thanks
-
OK guys, I have been doing performance testing with a 3-drive WD RE (4 TB/drive) RAID 5 volume and a 4-drive WD Red RAID 5 set. By any chance, has anybody done performance testing with RAID 5 and then RAID 6 with 4-11 drives? All over 1 GbE, or 10 GbE if you have it. I would like to see your results if possible.