Search the Community
Showing results for tags 'raid10'.
-
I guess this was really stupid. I thought I understood well enough how this stuff works to convince myself it WASN'T a really stupid thing to do, but I guess I was wrong. Here's the short version: I have a Marvell 92xx SATA 6G PCIe RAID controller card which hosts a 4-HDD RAID10 array that serves as my personal data storage and archive server. I pulled drives out while the system was still up, but I offlined the array via Disk Management (Win11) before I did so. As soon as I pulled the second drive, I got a concerning notification from the controller itself saying the virtual disk was down. I plugged the two disks I had removed back into the system, expecting the controller to take them back up and bring the array back online. But that didn't happen. Both the controller card's firmware and Disk Management report them as unformatted disks. The controller card flags the virtual disk that is the array with a red X and reports it as "offline", with only 1 of the original drives still associated with it. The other three are labeled "unconfigured" (why THREE are labeled as unconfigured when I only pulled TWO drives is a mystery to me). So, my question is this: How fk'd am I, and how can I go about unfk'ing myself? The drives should still have their mirrored data intact, right? Can I reassociate the drives with the array somehow? I'll never do it again. I swear.
-
I just got myself a Dell T420 server with a PERC H710 RAID card, and I have four 4TB drives that I plan to set up in RAID 10. When I configure them in the RAID card BIOS, I get RAID 10 with two spans of 4TB each. But when I go into Windows Server 2016 and try to set it up, the only storage type I can use is Simple, not Mirror. Is this normal and should I just select Simple and move on, or am I doing something wrong? Cheers
-
Hi. After a restart yesterday my system would not boot and went into Emergency Mode, so I had to clear my /etc/fstab to be able to boot again, as there was a problem mounting the RAID-10 disk. After that I checked the Disks application and couldn't even mount the disk manually. I've been trying to fix this for a good while now, but I'm not tech-savvy enough to find the solution since I'm a Linux n00b. Any help is very much appreciated. I'm adding a screenshot from the Disks application in Pop!_OS and from the terminal where I checked the disk with mdadm. The state looks funky here:
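A typical first step with a broken mdadm array is to ask mdadm what it thinks the state is before changing anything. A minimal sketch, assuming the array is `/dev/md0` and its members are partitions like `/dev/sdb1` (hypothetical names; substitute your own from `lsblk`):

```shell
# Read-only diagnostics first -- none of these modify the array.
cat /proc/mdstat                      # quick overview of all md arrays
sudo mdadm --detail /dev/md0          # array-level state (clean/degraded/inactive)
sudo mdadm --examine /dev/sdb1        # per-member superblock; repeat for each disk

# If the array simply failed to assemble at boot, this often brings it back:
sudo mdadm --assemble --scan
```

If it does come back, adding `nofail` to the array's line in /etc/fstab (e.g. `UUID=<array-uuid>  /mnt/raid  ext4  defaults,nofail  0 2`) stops a degraded array from dropping the whole system into emergency mode on the next boot.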
-
Hey, I am looking to build a smallish array to house some virtual machines as a small homelab. I have a computer with a 12-core CPU and 128GB RAM, but I am not quite sure what would be the best way to build a RAID array. I have a boot NVMe SSD and two 1TB SATA SSDs for some fast storage, but I am going to need more space in order to really utilize all of that CPU and RAM. What drives would you guys recommend for this kind of a build? SAS or SATA is fine. Yes, I know that HDDs are not blazing fast compared to SSDs, but that does not matter to me.
-
So I’m trying to set up a NAS and I keep getting this message (see the image below). What am I doing wrong? I set it up as RAID 10 since I have 4 HDDs, and I’m booting in BIOS mode instead of UEFI mode (for TrueNAS). Any clues as to what it’s doing?
-
Ok, long story short: swapped cases, numbered my drives and SATA cables. Boop, everything works fine, except a missing drive on my array. Apparently that’s “critical” for RAID10 on 8 drives…? X570, whatever. So I did the routine: checked SATA, power and all. Nope, drive’s gone. Whatever. A few days later, now I’ve got trouble: my Z drive is gone. Pull up BIOS immediately and of course 3 of 8 are showing. Fudge. At this point I’m using Hiren’s BootCD to DD each drive individually to my server via a USB3 adapter. They’re all 2.5" so power isn’t a concern. Any suggested next steps, or should I do something else instead? It’s 300GB on the first drive and still 3 hours to go, if not more - but it is going over the gigabit network connection, at 50-ish MB/s.
-
Hey, I have Windows 10 Home 22H2 x64 and I'm trying to combine my hard drives into a RAID 10. I have them all directly connected to my motherboard, and I don't think it has an onboard RAID controller; I also don't have any PCIe slots left. So I wanted to use software RAID in Windows. I have read that it's pretty simple: I just have to set up 2 mirrored volumes and then make a striped volume from those 2 mirrored volumes. The problem is, I don't have the option to make mirrored volumes. Can anyone help me?
-
I am in the process of building a server, and for storage I will be using a 4-drive RAID 10 SSD array. However, I'm still somewhat concerned about drive failure. The server will also have an HDD for backups, as well as an offsite storage VPS to upload backups to. What concerns me more is the interruption to the running of the server if multiple drives were to fail. Since SSDs have a limited number of R/W cycles, and RAID 10 means all 4 drives are under roughly the same workload, they will all be coming to the end of their life expectancy around the same time. So the chance of 2 or more drives failing around the same time is somewhat likely.

Since RAID 10 is a stripe of mirrors, drives 1 & 2 contain the same data, and so do 3 & 4. I am considering buying drives 1 & 3 from one manufacturer and 2 & 4 from another, making sure the two models have different life expectancies. That way the two halves of each mirror are unlikely to wear out at the same time, so failures should come in survivable pairs rather than all 4 drives at once. (RAID 10 can take 2 drive failures as long as the failed drives don't contain the same data; hence the 1 & 3, 2 & 4 arrangement.)

As an added note, I will be purchasing consumer-grade drives, since I don't have a great deal to spend, though I am aware enterprise-grade drives do last substantially longer. The way I see it, that makes complete sense. However, I wanted to ask for opinions from others with more experience than I have, since I could be completely missing something. Would it work, or would it just be a waste of time?
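The intuition above can be sanity-checked with a quick calculation. For a 4-drive RAID 10 with mirrors (1,2) and (3,4), there are C(4,2) = 6 ways two drives can die together, and only 2 of those combinations take out both halves of one mirror. A small shell sketch (the mirror pairing is an assumption matching the layout described in the post):

```shell
# Two simultaneous failures in a 4-drive RAID 10:
# 6 possible failure pairs, only 2 of them fatal (both halves of one mirror).
total=6   # C(4,2) possible two-drive failure pairs
fatal=2   # pairs {1,2} and {3,4}
echo "survival chance: $(( (total - fatal) * 100 / total ))%"
```

So staggering wear within each mirror mainly buys time between failures in the same pair; for truly simultaneous, independent failures the odds stay at roughly two in three either way.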
-
Hello, I want to set up RAID on my desktop, using hard drives as data drives running in RAID 10, and use an SSD as the boot drive. How and where do I go to set up RAID on a Maximus Hero motherboard?
-
- Edit: It works!! I've added two pictures of the working configuration. Yes, I soldered onto my hot-swap PSU because the desktop ones don't deliver the right signals. Yes, it's three power supplies. But it works. Hi guys, this is my first foray into non-consumer hardware; I've been watching the YT channel for like a year, and it gave me the stoke for crazy computers. So I picked up the S5500WB motherboard, 12V-only variant, with two E5620's, OEM coolers, and two 12V-only YM-2541C hot-pluggable PSUs, all for £80, plus 16GB of DDR3 1333 registered ECC for £20; I'll probably get 16GB more if it all works. That's a damn good price, but it only has SATA2. And I still don't have a case, so I can't use those PSUs; I'll be using the 12V rails from two desktop PSUs for now, until I can find one. It has a backplane for 4 drives, so I'm gonna do RAID10, which I think is reasonable for my purposes. I want to boot from a 240GB SSD and have a large HDD for backups; that will utilize all 6 SATA2 ports and keep it cheap. So my questions are as follows. 1. Does anyone know of a board with two LGA1366 sockets and SATA3? This one was so cheap I don't mind replacing it. -- Found one, see below. 2. Can I get this board to boot UEFI? - Edit: Yes I can boot UEFI. -- No I can't, still working on it, see below. 3. Will I run into big problems jumping two desktop PSUs to power this thing? The hot-plug PSUs say 12V 37A each, the desktop ones say 12V 25A each. - Edit: I did run into problems like this: I didn't have a front panel connected, and jumping the power button pins didn't power it on. - Edit: I fixed it by bypassing the power distribution board that comes in the case; there's a detailed pinout in the motherboard manual. Also, any advice you guys have, like something I haven't considered, would be greatly appreciated. I can post pics etc. The guy I got the board from forgot to pack the coolers, so I'm still waiting for them; I expect them here today so I can put it together and post results. 
- Edit: The OEM coolers haven't turned up, so I got some other coolers. Couldn't mount them the normal way because of the huge backplates, so I used M3 screws and rubber washers. Thanks very much. - Edit: I've found the spec for the custom power distribution board that comes with the complete server. Is this the sort of thing I could make? Or buy separately? I get the impression it's not. I'm gonna bug the seller to sort me out with a case. This PDF, page 12 on... http://www.intel.com/content/dam/support/us/en/documents/motherboards/server/s5500wb/sb/sr1695wb_tps_r1_4.pdf - Edit: I've added some photos; you can see there the hot-plug pinout and the 24-pin internal connector. - Update: It's now Feb 16th 2017, and it still works. I've been playing with VMs on Linux server, I put in an old GTX 760, it does a bunch of stuff, it's great. Benchmark scores are not amazing, but I haven't done them all. Still don't have any decent drives; I've revised my plan and am gonna get 6 cheap 32GB SSDs. I've seen some with 450-ish read and 320-ish write; I think they have write caching, just to saturate the interface. Then maybe serve from a PCIe SSD - I don't think I can boot from that, it's gen2 x8 - but I can't find any cheap enough now. And UEFI: it would give me a warning saying this machine probably can't boot from this GPT disk, and when I ignored it, it crashed later during the install. I put in my laptop drive, which is GPT Ubuntu / Windows 7; the EFI boot options would show up but they wouldn't work. I used the EFI shell to navigate to the files and execute them; this way it started GRUB, but GRUB couldn't start Ubuntu. Clover started properly, but also couldn't start the OSes. I had none of these problems with MBR-style installs. So I realize now I'm not going to need large volumes; the new reason I want UEFI is GPU passthrough. I went through the setup properly and everything worked until I created the VM: it couldn't grab the GPU even though it was reserved. Many tutorials explicitly said it needs a UEFI base system. 
And I wanna mess with that stuff, like a Windows VM for game servers or what have you. Anyway, the conclusion is: find a better motherboard, maybe one that'll take a standard PSU. I found the EVGA Classified SR-2, a dual-socket board that supports overclocking, has two SATA3 ports and true x16 four-way SLI and Crossfire. Can't find one anywhere in the EU; I guess if you buy one you use it until it blows up, but oh, to dream. http://www.evga.com/articles/00537/ I modified one of my server PSUs with a lot of solder and electrical tape, so it now powers the CPUs, motherboard and that funny aux connector. I'm still using a desktop PSU for some of the fans, the GPU and the drive. I guess I'll sort out the 5V out from the motherboard for drives; I just haven't done it yet. Got a Cooler Master Cosmos S for £25; the fans it came with all have red LEDs, the board fits, can't complain. It looks like a convection heater, it's horrible, but that's why I like it. So for what, £230 or less, I've got this mad machine that should not be. It's a toy, is what it boils down to - who wants a Mac anyway? That's only like $280; how times have changed since I started this thread. Actually, hardware has suddenly gone up in price here, SSDs and RAM like 10-15%, I wonder why? Anyway, just thought I'd leave this here to conclude. It's been a journey, I've learned a lot, and I recommend you all start messing about with end-of-life server gear if you don't already.
-
Good day everyone, Linus and Anthony showed in one of their last videos how to set up a "side-by-side VM - 32:9 monitor". This is a great idea and something that would fit perfectly in my wish PC. So I want to create a "plan" for my future setup, and because I have fun doing this, I created a diagram with draw.io. I think this makes it easier to understand what I want to build. What I expect from this: - More security with VMs (almost no added lag or greater performance impact?) - Easy to update, upgrade and replace parts and software - Fast I/O for programs, games and temp files on the SSD RAID (only through LAN to the PC) (internal connection possible? PCIe-to-PCIe?) - Semi-fast but very reliable HDD RAID for backups, finished projects and media (20 TB accessible, 40 TB in total) (share within the same Wifi as the router?) - Stream one VM to the TV (toggle between them, or at least Windows only) (Miracast?) - Of course the display (and software to share keyboard and mouse) does all the magic, bringing it together (I just HOPE this is reliable enough for day-to-day work, gaming and fun projects) - A price around 3500-4000€ + display (hence the "plan": buying the PC first, then storage, then server, then display?) Dreams: - Get an X68000 case and fit everything perfectly, a similar modern case, or DIY! So, would this work the way I think it does? Are there any big misconceptions on my side? Or do you think this would be stupid because...? What's important: that it works in the end.
-
I'm planning to make a rendering rig with a fairly large budget. Would I be able to stripe 2 SSDs (the RAID 0 part) for the speed, then have those 2 SSDs mirrored by 2 HDDs (the RAID 1 part)? My thinking is that RAID 1 is all about redundancy, so I shouldn't need the mirror drives to be fast? I have never used SSDs. I have never used RAID. So I wanted to check that I haven't got this wrong before I blow a few hundred quid on storage.
-
In order to implement RAID 10 you need at least 4 drives; if you have an odd number of drives, the extra drive can serve as a spare to replace another in case it fails. With that said, establishing a RAID 10 array is fairly simple. First, make sure that RAID is enabled in your computer's BIOS settings. This does not turn on any form of RAID by itself; it just ensures the BIOS won't stop the array from being created. Once you have done this, make sure that your drives are the same size and model. If they are not, you can still create the storage array, but each drive will only contribute the capacity of the smallest drive, and you may lose some of the performance benefits. After you have verified this, go into the Control Panel (type it into the Windows search bar), go into System and Security, and click on Manage Storage Spaces. Press the "Create a new pool and storage space" button, and if it prompts you to allow changes to your device, click Yes. From the drive selection menu, if you have 4 drives going into the array, make 2 different pools, each with 2 drives in it. For each pool, go to the Resiliency heading and change the resiliency type to Two-way mirror. Do this with the other set of drives as well, so that you have two pools of storage, each with mirroring, which is also known as RAID 1. This effectively gives you two usable drives, but there is still more to do. Now that you have your two pools in RAID 1, it is time to stripe them together (the RAID 0 part). To do this, open Disk Management by typing it into the Windows search bar and clicking on it. Delete each volume on the two mirrored spaces to clear them out; this is needed for RAID 0, since striping splits the data across both volumes to get the performance benefit. 
Once you have done this for each of your mirrors, right-click one of them and select New Striped Volume. In the window that opens, select the other RAID 1 mirror to include, then click Next. Finally, go through the Windows prompts and wait for the volume to finish being built. You may need to restart your computer to get the array working, but other than that, your RAID 10 array should be completely functional.
-
RAID 0 - RAID 0 is for people who want the best performance and are willing to sacrifice data safety to get it. This version of RAID combines your drives into a single volume: instead of having two 1.5 TB drives, you get one 3 TB volume. This sounds like it wouldn't do anything until you realize it works similarly to dual-channel memory: instead of reading and writing everything through a single drive, half of each operation goes to each drive. While this does not always double performance, it is much faster, and for people who just want the fastest storage, it can be worth it. RAID 0 requires at least two drives. The main downside is that if one drive fails, all the data is lost, with no way to recover it, because there is no redundancy. Because of this, it is not advisable to use it on the drive your operating system is on. RAID 1 - This is nearly the opposite of RAID 0. It is for people who need their data at all costs and are willing to sacrifice half of their total storage for it. It does not change read speeds in any dramatic way, but it does make writes somewhat slower. RAID 1 needs at least two drives. It duplicates the data of one drive onto another, so the second drive cannot be used for anything except redundancy. That way, if one drive fails, there is always a copy that the replacement drive can be rebuilt from. RAID 5 - This is the version of RAID most people would benefit from the most. It stripes the data across three or more disks while also distributing parity information across all the drives. This means not much total storage is lost (one drive's worth), and it increases performance, although not as much as RAID 0. RAID 5 is one of the most common versions of RAID because it has a good balance of safety and speed. 
Even if one drive fails, the array can be rebuilt; the only danger is that if another drive breaks while this is happening, all the data is lost. RAID 10 - This is a combination of RAID 1 and RAID 0, hence the name. It mirrors your data across pairs of disks and stripes across the pairs, reading from multiple disks at once. Since it has to write the same data to both halves of each mirror, write performance takes a small hit, and the total usable storage is cut in half due to the mirroring. With that said, it is very fault tolerant and one of the safer forms of RAID. The minimum number of disks needed is 4, so it is going to be fairly expensive, but for some people it is the right option.
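The capacity trade-offs above boil down to simple arithmetic. A quick sketch, assuming n identical drives of size_tb terabytes each (the numbers are just an example):

```shell
# Usable capacity per RAID level for n identical drives of size_tb TB each.
n=4; size_tb=2

raid0=$(( n * size_tb ))         # stripe: full capacity, no redundancy
raid1=$(( size_tb ))             # two-drive mirror: one drive's worth
raid5=$(( (n - 1) * size_tb ))   # one drive's worth lost to parity
raid10=$(( n * size_tb / 2 ))    # half lost to mirroring

echo "RAID0=${raid0}TB RAID1=${raid1}TB RAID5=${raid5}TB RAID10=${raid10}TB"
```

For four 2 TB drives this prints 8, 2, 6 and 4 TB respectively, which is why RAID 5 is often pitched as the capacity/safety middle ground.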
-
We're looking at building an 80-100TB storage vault that we can back up to with Veeam. We've bounced back and forth between RAID6 with 24 4TB drives and 2 hot spares, or a RAID10 setup with 40 4TB drives. The RAID6 setup would have insane rebuild times, but the RAID10 is a ton of drives, and we're not sure how well a storage array that big would behave. What would you guys do? Any other options we may be missing?
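For what it's worth, the two layouts in the post land on the same usable capacity. A quick check, assuming the RAID6 set keeps its 2 hot spares outside the array:

```shell
# Usable capacity of the two proposed layouts, in TB (4 TB drives).
raid6=$(( (24 - 2 - 2) * 4 ))   # 24 drives, minus 2 hot spares, minus 2 parity
raid10=$(( 40 * 4 / 2 ))        # 40 drives, half lost to mirroring
echo "RAID6=${raid6}TB RAID10=${raid10}TB"
```

Both come out to 80 TB, so the decision really is rebuild time and drive count rather than capacity.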
-
I am planning on building a video editing computer and I want to use RAID10. My question is, with RAID 10 using 4 drives, if I wanted to have project and scratch drives, would I need to have 8 drives or could I set up the scratch drives as RAID1 and then have the project drives as RAID10? Or is it impossible to have 2 different RAID configurations in one PC? Please, HELP!!!! :unsure: :wacko: :blink:
-
A few days ago my RAID10 failed (2TB x 4) with "Unknown disk on Controller 255, Port Unknown: Missing". The hard drives show up in the BIOS and even in 'My Computer', but they are inaccessible. If you take a look at the picture, it shows that 2 drives have a normal status and the other two are "missing". I have a ton of important data on these disks that I didn't want to lose (hence the RAID10...). Is there a way to relink these without data loss? What steps should I take? I'm using the Asus X99 Deluxe and the Intel Rapid Storage Technology program for my RAID config. The weird thing is that it fell apart after I installed the latest NVIDIA graphics driver. Basically what happened was: I downloaded the driver and installed it, but never restarted my computer to finish the installation until a few days later. At some point between when I installed it and when my RAID failed, I emptied my downloads folder, forgetting that was where the graphics driver was. I had a little pop-up saying I needed to restart the computer to finish the install, so I did, and that was when it all kind of freaked out. I had to re-download the graphics driver and finish the install then and there, but the RAID is still broken. Meanwhile, I have purchased a 5TB HDD in case I need to copy everything over somehow, but I'm not sure how, or if I can even do that.
-
So I have two Samsung 840 EVOs in the computer I built 6 months ago: a 250GB with the OS on it and a 500GB for games, Adobe software, etc. I also have 4x 2TB HDDs in a RAID10 array. My question is: Samsung's Magician software does not support RAID configurations of any kind, but can my computer work properly with Magician installed, since it's not the SSDs that are in the RAID config? I do a lot of 4K video editing and it just seems like my SSDs aren't behaving like they should, so I'm wondering if I should have installed the Magician software 6 months ago... Any info would be helpful, thanks
-
On my first build topic, http://linustechtips.com/main/topic/386764-first-time-build-some-help-please/, I was talking about a RAID 10 setup with 4 Seagate SSHD 1TBs, but no one completely understood me. Can anyone here tell me if this setup would work for a gaming/workbench desktop? Thank you for your time!
-
So I was casually browsing YouTube on the LinusTechTips channel and watched the following two videos. From what I understand from those two videos, making a RAM drive allows you to use the fast access speed of RAM while the computer sees it as a physical drive. Also, RAID arrays allow you to 'combine' drives to increase speed/performance depending on how you set up your RAID. My question is this: can RAM drives be set up in a RAID array? Answers: - JeffreyEagan: No. - Oshino Shinobu: Possibly, but for booting a computer it's an impractical idea, since the RAM drives would have to load their data before the computer could boot. If this is possible, I'm curious about things such as speed, performance, and heat output, and whether this idea would make RAM drives worth using in Linus' opinion. I'll bet someone has already thought of this and it's probably not possible, but I thought I'd ask here to see what turns up. Thanks for any information in advance! I'll be keeping up to date with this thread for at least an hour from the original post for any discussion that occurs.
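On Linux, at least, this is straightforward to try: the kernel's brd module creates RAM-backed block devices that mdadm will happily stripe. A minimal sketch, assuming a machine with a few spare GiB of RAM (the sizes, mount point and array name are just examples); everything here vanishes on reboot:

```shell
# Create two 1 GiB RAM block devices (/dev/ram0, /dev/ram1).
sudo modprobe brd rd_nr=2 rd_size=1048576   # rd_size is in KiB

# Stripe them into a RAID 0 array and put a filesystem on it.
sudo mdadm --create /dev/md/ramraid --level=0 --raid-devices=2 /dev/ram0 /dev/ram1
sudo mkfs.ext4 /dev/md/ramraid
sudo mkdir -p /mnt/ramraid
sudo mount /dev/md/ramraid /mnt/ramraid
```

In practice a single RAM disk already runs at close to memory speed, so striping two of them mostly adds overhead - which is roughly why the "No" answer above is the practical one. And, as noted, nothing here survives a power cycle, so booting from it is out.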
-
Say I have an existing drive with data already on the disk, am I able to directly create a RAID 1 array with another disk without having the original disk's contents wiped? Similarly, is it possible to then have that RAID 1 array formed into a RAID 10 also without having the original disks' contents wiped?
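With Linux mdadm, the usual answer to the first question is "not directly, but there is a standard workaround": build the mirror degraded around the new disk, copy the data over, then absorb the original. A sketch, assuming the existing data is on /dev/sdb1 and the new empty disk is /dev/sdc1 (hypothetical names - and back up first, since the final step overwrites the original disk):

```shell
# 1. Create a degraded RAID1 from the NEW disk plus a 'missing' slot.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 missing

# 2. Put a filesystem on the array and copy the existing data onto it.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/new
sudo mount /dev/md0 /mnt/new
sudo rsync -a /mnt/old/ /mnt/new/

# 3. Only now add the ORIGINAL disk; mdadm resyncs over its contents.
sudo mdadm --add /dev/md0 /dev/sdb1
```

Growing that RAID 1 into a RAID 10 later is murkier: recent mdadm versions can convert with `--grow --level=10` in some configurations, but the conservative route is to back up, build the RAID 10 fresh, and restore.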
-
Hi, I need some help. I'm planning to build a new computer utilising a GA-Z97N-Gaming 5, a 250GB Samsung 840 EVO SSD for the boot drive, and 3 WD 4TB Red drives. What I was wondering is what would be the best RAID configuration for the 3x 4TB drive array. It's mostly going to be used as storage for documents, photos and videos, and read/write speeds aren't totally critical for me. I was thinking RAID 5; would I be correct in saying that's my best choice? Thanks